2302.14763
Vehicular Behavior-Aware Beamforming Design for Integrated Sensing and Communication Systems
Communication and sensing are two important features of connected and autonomous vehicles (CAVs). In traditional vehicle-mounted devices, communication and sensing modules exist but in an isolated way, resulting in a waste of hardware resources and wireless spectrum. In this paper, to cope with the above inefficiency, we propose a vehicular behavior-aware integrated sensing and communication (VBA-ISAC) beamforming design for the vehicle-mounted transmitter with multiple antennas. In this work, beams are steered based on vehicular behaviors to assist driving and meanwhile provide spectral-efficient uplink data services with the help of a roadside unit (RSU). Specifically, we first predict the area of interest (AoI) to be sensed based on the vehicles' trajectories. Then, we formulate a VBA-ISAC beamforming design problem to sense the AoI while maximizing the spectral efficiency of uplink communications, where a trade-off factor is introduced to balance the communication and sensing performance. A semi-definite relaxation-based beampattern mismatch minimization (SDR-BMM) algorithm is proposed to solve the formulated problem. To reduce the hardware cost and power consumption, we further improve the proposed VBA-ISAC beamforming design by introducing the hybrid analog-digital (HAD) structure. Numerical results verify the effectiveness of the VBA-ISAC scheme and show that the proposed beamforming design outperforms the benchmarks in both spectral efficiency and radar beampattern.
Dingyan Cong, Shuaishuai Guo, Shuping Dang, Haixia Zhang
2023-02-27T02:40:28Z
http://arxiv.org/abs/2302.14763v1
# Vehicular Behavior-Aware Beamforming Design for Integrated Sensing and Communication Systems

###### Abstract

Communication and sensing are two important features of connected and autonomous vehicles (CAVs). In traditional vehicle-mounted devices, communication and sensing modules exist but in an isolated way, resulting in a waste of hardware resources and wireless spectrum. In this paper, to cope with the above inefficiency, we propose a vehicular behavior-aware integrated sensing and communication (VBA-ISAC) beamforming design for the vehicle-mounted transmitter with multiple antennas. In this work, beams are steered based on vehicular behaviors to assist driving and meanwhile provide spectral-efficient uplink data services with the help of a roadside unit (RSU). Specifically, we first predict the area of interest (AoI) to be sensed based on the vehicles' trajectories. Then, we formulate a VBA-ISAC beamforming design problem to sense the AoI while maximizing the spectral efficiency of uplink communications, where a trade-off factor is introduced to balance the communication and sensing performance. A semi-definite relaxation-based beampattern mismatch minimization (SDR-BMM) algorithm is proposed to solve the formulated problem. To reduce the hardware cost and power consumption, we further improve the proposed VBA-ISAC beamforming design by introducing the hybrid analog-digital (HAD) structure. Numerical results verify the effectiveness of the VBA-ISAC scheme and show that the proposed beamforming design outperforms the benchmarks in both spectral efficiency and radar beampattern.

_Index Terms_—Integrated sensing and communication (ISAC), vehicular behavior-aware beamforming design, intelligent transportation systems (ITS), vehicular networks.

## I Introduction

Connected and autonomous vehicles (CAVs) are the next frontier of the automotive revolution and key to innovation in next-generation intelligent transportation systems (ITS) [1]. In future ITS, CAVs will not only be a means of smart transportation but also a service platform similar to a mobile phone, providing passengers with fast and secure data services. To achieve this vision, a vehicle needs two typical capabilities: sensing and communications [2]. On the one hand, to support safe and autonomous driving, the vehicle needs to sense the environment through radars to obtain environmental information, such as the distance to the vehicle ahead and its speed. On the other hand, the vehicle needs to communicate with other vehicles, passengers, and infrastructure through vehicle-mounted transceivers. In existing vehicle-mounted devices, communication and sensing functional modules exist but in an isolated way, resulting in a waste of hardware resources and wireless spectrum [3]. To enable the efficient use of spectrum resources and reduce the hardware cost, the integrated sensing and communication (ISAC) technology was put forward, where the two functions of communication and sensing are integrated into the same device [4]. For vehicle-mounted ISAC devices with multiple antennas, how to design the beamformer is an important and intricate task, since the beamforming design has to meet two kinds of quality-of-service (QoS) requirements and balance the communication and sensing performance. In this paper, we investigate this crucial research problem by taking vehicular behaviors into account.
### _Prior Works_

The investigation of ISAC beamforming design is quickly gaining traction in the communication and signal processing community due to the prospect of integrating the dual functionalities of radar sensing and communications. The myriad of previous ISAC designs can be classified into three main categories: communication-centric ISAC design, sensing-centric ISAC design, and balanced ISAC design.

#### I-A1 Communication-centric ISAC design

In such systems, sensing comes into play with the assistance of communications. Sensing is adopted to mitigate interference and improve spectral efficiency for communications. Biswas _et al_ in [5] proposed integrating a multiple-input multiple-output (MIMO) radar into MIMO communication systems to tackle imperfect channel estimation and hardware impairments while improving the QoS for cellular users. Based on joint phased arrays, Feng and Huang in [6] designed the beamformer by jointly optimizing the interference between communications and radar sensing to improve the received signal-to-noise ratio (SNR). Liu _et al_ in [7] proposed to simultaneously transmit the integrated radar waveform and constellation symbols while ensuring the SNR of each user is above a preset threshold. To reduce the pilot overhead for beam alignment and channel estimation, the authors of [8, 9] introduced a millimeter-wave (mmWave) radar at the base station. Huang _et al_ in [10] proposed a deep-learning-enabled MIMO-radar-assisted channel estimation scheme. They designed a transmission frame structure combining the radar sensing module and the communication module, estimating the angle of departure and the angle of arrival while establishing a stable communication link. Shen _et al_ in [11] proposed to use the orthogonal time frequency space (OTFS) modulation waveform to sense the delay and the Doppler shift of wireless channels. Since the channel state information (CSI) is obtained with low pilot overhead, the spectral efficiency of the communication system is significantly improved. Shaham _et al_ in [12] achieved a similar channel estimation goal by using sensing to assist communications.

#### I-A2 Sensing-centric ISAC design

In this category, communication is an auxiliary function that assists sensing, enhancing sensing performance by improving sensing accuracy or developing sensing functionality in existing communication systems. Barneto _et al_ in [13] and Damith _et al_ in [14] optimized the transmitting and receiving beamformers to enable multi-beam sensing. Keskin _et al_ in [15] studied the time-frequency waveform design of radar and communication systems, focusing on the optimal radar waveform design that minimizes the Cramér-Rao bound on delay-Doppler estimation in the delay-Doppler ambiguity domain, aiming to improve radar sensing accuracy and resolution. Takahara _et al_ in [16] proposed a communication-assisted ultra-wideband radar system to achieve high-precision ranging and positioning. Wymeersch _et al_ in [17] proposed to combine cellular networks with existing vehicle positioning and map systems. Besides, early ISAC research paid more attention to adding sensing to communication systems. The authors of [18, 19] extended WiFi technology to radar systems.
Daniels _et al_ in [18] proposed a method to determine the average normalized channel energy from the frequency-domain channel estimation and modeled it as a simple sinusoid of the target distance so as to estimate the distance to the closest target. Kumari _et al_ in [19] did similar work by applying radar in WiFi systems. They developed single-frame and multi-frame radar receiver algorithms for target detection as well as distance and speed estimation in single-target and multi-target scenarios.

#### I-A3 Balanced ISAC design

Unlike the communication-centric and sensing-centric ISAC designs, in the balanced ISAC design, sensing and communication functions are of equal importance and both play crucial roles. The authors of [20, 21, 22, 23, 24, 25] focused on the flexible performance trade-off between communications and sensing. Specifically, Cheng _et al_ in [20] maximized the communication rate while maintaining good sensing beampattern characteristics under power constraints. Tang _et al_ in [21] used a dual-function MIMO array, which can match a desired transmit beampattern for radar sensing and communicate with multiple users simultaneously. Dokhanchi _et al_ in [22] focused on the beamforming design of Internet of Vehicles (IoV) systems, in which the transmitter communicates with multiple vehicles while the radar detects multiple targets. The beam is designed to maximize the communication rate under a constraint on radar detection performance. Liu _et al_ in [23] considered the full-digital beamforming design for a MIMO dual-function radar and communication system via weighted optimization for a flexible trade-off between radar sensing and communication. [24] and [25] exploited the ISAC beamforming design with a hybrid radio frequency (RF) structure in vehicle-to-everything (V2X) scenarios, minimizing the weighted sum of communication and radar sensing beamforming errors. The aforementioned works have thoroughly investigated the realization of ISAC beamforming designs. In particular, [24, 25] consider the ISAC beamforming design for V2X scenarios. The works [23, 24, 25] bring important insights into beamforming design for ISAC systems, but their designs are not appropriate to apply directly to V2X systems. The most significant problem is that the targets to be sensed are assumed to be fixed. In more detail, their schemes do not take vehicular behavior into account when determining the sensing area. When the vehicle is moving, these schemes cannot obtain the surrounding environment information in advance and cannot accurately set the required pointing angles. Moreover, their beamforming designs cannot achieve the required sensing distance at each pointing angle. Given the limited energy of a vehicle-mounted ISAC device, it is thus hard to accurately cover the area that needs to be sensed. Therefore, such an assumption in ISAC design is not suitable for vehicular networks with high mobility. When CAVs are moving, they need to be aware of surrounding environments in advance. The ISAC beamforming design for vehicle-mounted transmitters should thereby take the mobility and behaviors of vehicles into consideration to perform predictive sensing and simultaneous communications. In this regard, different from previous works, we propose a vehicular behavior-aware ISAC (VBA-ISAC) beamforming design for vehicle-mounted multi-antenna transmitters in this paper.
Specifically, beams are designed to steer based on vehicles' behavior to predictively sense the area of interest (AoI) and meanwhile provide spectrum-efficient uplink data services with the help of a roadside unit (RSU).

### _Contributions_

To be clear, we summarize the technical contributions of this paper in detail as follows:

* We propose a VBA-ISAC beamforming scheme for multi-antenna vehicle-mounted transmitters to simultaneously provide communication and predictive sensing capabilities to vehicles. The proposed scheme is capable of predicting the AoI according to the real-time behavior of the vehicle via in-vehicle sensors, and the optimal radar beamformer is designed to exactly cover the AoI.
* We formulate an optimization problem to maximize the spectral efficiency of communications while minimizing the beampattern mismatch to the optimal radar beamformer, where a trade-off factor is introduced to balance the communication and sensing performance. To solve the formulated optimization problem, we also propose a semi-definite relaxation-based beampattern mismatch minimization (SDR-BMM) algorithm.
* To reduce the power consumption and hardware cost of the multiple radio frequency (RF) chains supporting multi-antenna transmitters, we further investigate the VBA-ISAC beamforming design with a hybrid RF chain structure. We analyze and simulate the sensing and communication performance of the proposed designs, which demonstrates and quantifies clear performance advantages over the benchmarks.

### _Organization_

The remainder of this paper is organized as follows. We present the system model in Section II. In Section III, we formulate the VBA-ISAC beamforming design problem and propose the solution to the formulation. We study the VBA-ISAC beamforming designs with the hybrid structure in Section IV. The numerical results are presented and discussed in Section V, and we finally conclude this paper in Section VI.

### _Notations_

In this paper, the notations are defined and used in the following manner. \(\mathbf{A}\) and \(\mathbf{a}\) stand for a matrix and a column vector, respectively; \(\mathbf{A}_{i,j}\) is the entry on the \(i\)th row and \(j\)th column of matrix \(\mathbf{A}\); \((\cdot)^{*}\), \((\cdot)^{T}\) and \((\cdot)^{H}\) stand for the conjugate, transpose, and conjugate transpose operations of the matrix or vector enclosed; the determinant and Frobenius norm of a matrix are represented by \(\det(\mathbf{A})\) and \(\|\mathbf{A}\|_{F}\), respectively; \(\mathbf{A}^{-1}\) and \(\mathbf{A}^{\dagger}\) are the inverse and Moore-Penrose pseudo-inverse of matrix \(\mathbf{A}\); \(\mathrm{vec}(\cdot)\) indicates vectorization of the matrix enclosed; expectation and the real part of the complex variable enclosed are denoted by \(\mathbb{E}[\cdot]\) and \(\Re[\cdot]\), respectively; Hadamard and Kronecker products between two matrices are represented by \(\circ\) and \(\otimes\), respectively.

## II System Model

In this section, we present the system model of VBA-ISAC by elaborating on the sensing model and the communication model, respectively. Specifically, we first give the kinematic vehicle model based on vehicular behaviors and predict the AoI. Then, the sensing model is presented by specifying the relation between the beampattern design and the AoI. Lastly, we adopt the communication model from the perspective of maximizing the spectral efficiency of the transceiver.
In particular, we consider a scenario where a vehicle-mounted transmitter communicates with an RSU while sensing the AoI for driving safety purposes, as depicted in Fig. 1. In this scenario, the RSU serves as the infrastructure for vehicle-to-infrastructure (V2I) communications. In the depicted figure, the red car is about to turn right. Considering this specific vehicular behavior, it needs to sense the road conditions on the right in advance.

### _Vehicle Kinematic Model_

First, we establish the kinematic model of the vehicle. As shown in Fig. 1, the red vehicle drives on the road and plans to turn right at the intersection. Since the vehicle's height does not affect the following modeling and analysis, the movement of the vehicle can be simplified to the x-y (horizontal) plane. For simplicity, we further suppose that the left and right wheels of the vehicle have the same steering angle and velocity at any time [26]. Overall, for ease of analysis, we model the movement of the vehicle via a bicycle model as demonstrated in Fig. 2. Through in-device sensors, the current state of the vehicle can be obtained. Without loss of generality, we denote the distance between the front and rear wheels, the displacements on the x-axis and y-axis, the velocity, acceleration, driving direction, and steering angle of the vehicle by \(l\), \(d^{x}\), \(d^{y}\), \(v\), \(a\), \(\vartheta\), and \(\phi\), respectively. The motion state of the vehicle can be expressed as \[\mathbf{s}=[d^{x},d^{y},v,\vartheta]^{T}. \tag{1}\] In the model, acceleration \(a\) and steering angle \(\phi\) are the driver's controllable inputs. We consider the influence of mechanical inertia, and therefore, \(a\) and \(\phi\) remain unchanged in an instant. As a consequence, the movement of the vehicle in an instant approximates a circular curve [27]. According to the geometric relation depicted in Fig. 2, the radius can be calculated as \[r=\frac{l}{\tan\phi}. \tag{2}\] Furthermore, the vehicle is assumed to move in the direction of the body in an instant [28]. The moving distance of the vehicle within duration \(\Delta t\) can be determined as \[\Delta s=r\Delta\vartheta. \tag{3}\]

Fig. 1: Application scenario: a vehicle-mounted transmitter communicates with an RSU while sensing the AoI for driving safety enhancement.

Fig. 2: Kinematic model of the vehicle referring to the bicycle model.

According to (2) and (3), the change of driving direction can be calculated as \[\Delta\vartheta=\frac{\tan\phi}{l}\Delta s. \tag{4}\] Dividing both sides of (4) by \(\Delta t\), the rate of change of the driving direction can be expressed as \[\dot{\vartheta}=\frac{\tan\phi}{l}v. \tag{5}\] In addition, we need to consider the displacements of the vehicle on the x-axis and y-axis in an instant. As shown in Fig. 2, they satisfy \[\frac{\Delta d^{x}}{\Delta d^{y}}=\tan\vartheta. \tag{6}\] The velocities on the x-axis and y-axis can be denoted by \(\dot{d}^{x}=\frac{\Delta d^{x}}{\Delta t}\) and \(\dot{d}^{y}=\frac{\Delta d^{y}}{\Delta t}\). According to (6), the relation between them can be established as \[-\dot{d}^{x}\mathrm{cos}\,\vartheta+\dot{d}^{y}\mathrm{sin}\,\vartheta=0. \tag{7}\] At this point, we attain a simplified kinematic vehicle model, summarized as follows \[\begin{bmatrix}\dot{\vartheta}\\ \dot{d}^{x}\\ \dot{d}^{y}\end{bmatrix}=\begin{bmatrix}v\frac{\tan\phi}{l}\\ v\sin\vartheta\\ v\cos\vartheta\end{bmatrix}. \tag{8}\]
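To make the state prediction concrete, the propagation implied by (8) can be sketched in a few lines of Python. This is a minimal sketch assuming simple forward-Euler integration with \(a\) and \(\phi\) held constant over the horizon; the function name and step sizes are illustrative choices, not part of the paper.

```python
import numpy as np

def propagate_state(s, a, phi, l=2.0, dt=0.01, steps=20):
    """Forward-Euler integration of the kinematic bicycle model (8).

    s = [d_x, d_y, v, theta] is the motion state of (1); a and phi are
    the controllable inputs, assumed constant over the horizon.
    """
    d_x, d_y, v, theta = s
    for _ in range(steps):
        theta += v * np.tan(phi) / l * dt  # theta_dot = v * tan(phi) / l, from (5)
        d_x += v * np.sin(theta) * dt      # d_x_dot = v * sin(theta)
        d_y += v * np.cos(theta) * dt      # d_y_dot = v * cos(theta)
        v += a * dt
    return np.array([d_x, d_y, v, theta])

# Example with the Section V parameters: v = 20 m/s, a = 1 m/s^2,
# phi = 30 degrees, initial position (1, 1), horizon 0.2 s.
s_next = propagate_state(np.array([1.0, 1.0, 20.0, 0.0]),
                         a=1.0, phi=np.deg2rad(30.0))
```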
Based on this simplified kinematic vehicle model, given the controllable inputs \(a\) and \(\phi\) at a certain moment, the state information of the vehicle at the next moment can be estimated.

### _Sensing Model_

As shown in Fig. 1 and Fig. 3, the red vehicle needs to generate beams to sense the AoI for driving safety enhancement. Unlike an omnidirectional MIMO radar, the transmitted beam designed in this paper is allowed to be directional. The transmitter needs to point some beams to the right to sense the AoI. For MIMO radar probing purposes, it is desirable to focus the transmit energy on the spatial sections of interest. Hence, the radar beamformer should be designed with good beampattern behavior. The radar beampattern at direction \(\theta\) can be written as [29] \[P(\theta)=\mathbf{a}_{t}^{H}(\theta)\mathbf{R}_{d}\mathbf{a}_{t}(\theta), \tag{9}\] where \(\mathbf{a}_{t}(\theta)\in\mathbb{C}^{N_{t}\times 1}\) is the transmit array response vector; \(N_{t}\) is the number of transmit antennas; and \(\mathbf{R}_{d}\in\mathbb{C}^{N_{t}\times N_{t}}\) is the covariance matrix of the radar beampattern. In this paper, we consider a uniform linear array (ULA) at both the transmitter and the receiver. The array response vector corresponding to \(\theta\) can be expressed as \[\mathbf{a}(\theta)=\left[1,\ e^{j\frac{2\pi}{\lambda}d\sin\theta},\ \cdots,\ e^{j\frac{2\pi}{\lambda}d(N-1)\sin\theta}\right]^{T}, \tag{10}\] where \(N\) denotes the number of antennas; \(\lambda\) stands for the wavelength; and \(d\) stands for the antenna spacing. The covariance matrix of the radar beampattern can be obtained from the radar beamformer matrix. Specifically, covariance matrix \(\mathbf{R}_{d}\in\mathbb{C}^{N_{t}\times N_{t}}\) can be expressed as [24] \[\mathbf{R}_{d}=\mathbf{F}_{rad}\mathbf{F}_{rad}^{H}. \tag{11}\] The radar beamformer matrix \(\mathbf{F}_{rad}\in\mathbb{C}^{N_{t}\times K}\) of non-overlapping subarrays can be expressed as [30] \[\mathbf{F}_{rad}=\left[\begin{array}{cccc}\mathbf{v}_{1}&0&\cdots&0\\ 0&\mathbf{v}_{2}&&0\\ \vdots&&\ddots&\vdots\\ 0&0&\cdots&\mathbf{v}_{K}\end{array}\right]\in\mathbb{C}^{N_{t}\times K}, \tag{12}\] where \(\mathbf{v}_{k}\in\mathbb{C}^{N_{k}\times 1}\) (\(1\leq k\leq K\)) is the \(k\)-th sub-array steering vector and can be expressed as \[\mathbf{v}_{k}=\left[1,\ e^{j\frac{2\pi}{\lambda}d\sin\theta_{k}},\ \cdots,\ e^{j\frac{2\pi}{\lambda}d(N_{k}-1)\sin\theta_{k}}\right]^{T}, \tag{13}\] where \(N_{k}\) denotes the number of antennas at the \(k\)-th pointing angle, with \(\sum_{k=1}^{K}N_{k}=N_{t}\), and \(K\) is the number of radar pointing angles. The radar pointing angle \(\theta_{k}\) is determined by the AoI, and \(N_{k}\) is related to the required sensing distance. In short, the more antennas forming a narrow beam, the greater the beam power toward the pointing angle of interest will be. The radar beamformer is thus adapted by adjusting \(N_{k}\) and \(\theta_{k}\).
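The following sketch shows how the block-diagonal beamformer of (12) translates into the beampattern of (9). The half-wavelength spacing and function names are our illustrative assumptions.

```python
import numpy as np

def steering_vector(theta, n, d_over_lambda=0.5):
    # ULA array response of (10)/(13); theta in radians, spacing d = lambda/2 assumed
    return np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))

def radar_beamformer(thetas, n_per_angle):
    """Block-diagonal F_rad of (12): one sub-array steering vector v_k per
    pointing angle theta_k, with N_k antennas assigned to the k-th angle."""
    n_t, k_angles = sum(n_per_angle), len(thetas)
    f_rad = np.zeros((n_t, k_angles), dtype=complex)
    row = 0
    for k, (theta_k, n_k) in enumerate(zip(thetas, n_per_angle)):
        f_rad[row:row + n_k, k] = steering_vector(theta_k, n_k)
        row += n_k
    return f_rad

def beampattern(f_rad, grid):
    # P(theta) = a_t^H(theta) R_d a_t(theta) with R_d = F_rad F_rad^H, per (9) and (11)
    r_d = f_rad @ f_rad.conj().T
    n_t = f_rad.shape[0]
    return np.array([np.real(steering_vector(t, n_t).conj() @ r_d
                             @ steering_vector(t, n_t)) for t in grid])
```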
### _Communication Model_

In Fig. 1, the vehicle-mounted transmitter provides spectral-efficient uplink data services with the help of the RSU. Without loss of generality, it is assumed that the vehicle-mounted transmitter is equipped with \(N_{t}\) transmit antennas and the RSU with \(N_{r}\) receive antennas. As shown in Fig. 3, \(N_{s}\) data streams are transmitted from the vehicle-mounted transmitter to the RSU. Conditioned on the assumption that both the RSU and the vehicle are equipped with full-digital RF chain structures, the \(N_{s}\) data streams pass through a transmit beamformer \(\mathbf{F}_{D}\in\mathbb{C}^{N_{t}\times N_{s}}\) before being sent by the \(N_{t}\) transmit antennas. Through the wireless channel denoted by \(\mathbf{H}\in\mathbb{C}^{N_{r}\times N_{t}}\), the sent radio waves reach the receiving side and then pass through the receive combiner \(\mathbf{W}_{D}\in\mathbb{C}^{N_{s}\times N_{r}}\). Let \(\mathbf{s}\in\mathbb{C}^{N_{s}\times 1}\) represent the data symbol vector, with \(\mathbb{E}\left[\mathbf{s}\mathbf{s}^{H}\right]=\mathbf{I}_{N_{s}}\). The normalized power constraint can be expressed as \(||\mathbf{F}_{D}||_{F}^{2}=N_{s}\).

Fig. 3: Multiple data streams are transmitted from a vehicle-mounted multi-antenna transmitter to an RSU for simultaneous sensing and communications.

With the above formulations, the transmitted signal \(\mathbf{x}\) can be expressed as \(\mathbf{x}=\mathbf{F}_{D}\mathbf{s}\), and the signal on the receiving side can be expressed as \[\mathbf{y}=\sqrt{p}\mathbf{W}_{D}\mathbf{H}\mathbf{F}_{D}\mathbf{s}+\mathbf{W}_{D}\mathbf{n}, \tag{14}\] where \(p\) stands for the average power of the received signal, and \(\mathbf{n}\in\mathbb{C}^{N_{r}\times 1}\) represents the complex additive white Gaussian noise vector whose elements obey \(\mathcal{CN}(0,\sigma_{n}^{2})\). To meet the high-rate requirements of modern vehicular networks, the mmWave band is adopted for V2I communications. The high-frequency band of mmWave also enables high-resolution sensing. In this paper, the Saleh-Valenzuela model [31] is adopted to characterize the mmWave channel matrix \(\mathbf{H}\) as \[\mathbf{H}=\sum_{l=1}^{L}\alpha_{l}\mathbf{a}_{r}(\theta_{r,l})\mathbf{a}_{t}^{H}(\theta_{t,l}), \tag{15}\] where \(L\) represents the number of paths of the wireless channel; \(\alpha_{l}\) stands for the gain of each path; \(\mathbf{a}_{r}(\theta_{r,l})\) and \(\mathbf{a}_{t}(\theta_{t,l})\) denote the array response vectors at the receiving side and the transmitting side, respectively; and \(\theta_{r,l}\) and \(\theta_{t,l}\) represent the angle of arrival and the angle of departure, respectively. In this paper, we assume that the CSI for communications has been fully acquired by channel estimation, which is a commonly accepted assumption in the literature of mmWave and vehicular communications [23, 24, 25]. Considering that the transmitter and the receiver of vehicular communication systems are distant compared to the wavelength, it is reasonable to assume that the receiving side employs an optimal combiner, denoted by \(\mathbf{W}_{opt}\). By replacing \(\mathbf{W}_{D}\) with \(\mathbf{W}_{opt}\) in (14), the received signal can be rewritten as \[\mathbf{y}=\sqrt{p}\mathbf{W}_{opt}\mathbf{H}\mathbf{F}_{D}\mathbf{s}+\mathbf{W}_{opt}\mathbf{n}. \tag{16}\]
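For reference, the channel of (15) can be sampled as below. This is a hedged sketch: the complex Gaussian path gains, uniform angles, and normalization are assumptions consistent with the simulation setup described later in Section V, not a prescription from the paper.

```python
import numpy as np

def sv_channel(n_r, n_t, n_paths=10, seed=0):
    """Sample the Saleh-Valenzuela mmWave channel of (15) for ULAs with
    half-wavelength spacing; gains and angles follow the Section V setup."""
    rng = np.random.default_rng(seed)
    h = np.zeros((n_r, n_t), dtype=complex)
    for _ in range(n_paths):
        # path gain alpha_l ~ CN(0, 1); AoA / AoD uniform in [-90, 90] degrees
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        theta_r, theta_t = rng.uniform(-np.pi / 2, np.pi / 2, size=2)
        a_r = np.exp(1j * np.pi * np.arange(n_r) * np.sin(theta_r))
        a_t = np.exp(1j * np.pi * np.arange(n_t) * np.sin(theta_t))
        h += alpha * np.outer(a_r, a_t.conj())
    return h * np.sqrt(n_r * n_t / n_paths)  # common normalization (an assumption)
```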
## III VBA-ISAC Beamforming Designs for Vehicle-Mounted Multi-Antenna Transmitter

For the application scenario shown in Fig. 1, we have formulated the vehicle's kinematic model, sensing model, and communication model in the last section, based on which we propose the VBA-ISAC beamforming designs for vehicle-mounted multi-antenna transmitters in this section.

### _AoI Prediction_

Based on the kinematic model and the current vehicular state, the state of the vehicle in the next instant can be predicted. The displacement on the x-axis in the next instant can be expressed as \[d^{x}=\int_{t_{0}}^{t_{0}+\Delta t}\dot{d}^{x}\mathrm{d}t=\int_{t_{0}}^{t_{0}+\Delta t}\sin(\vartheta_{t_{0}}+\dot{\vartheta}t)(v_{t_{0}}+at)\mathrm{d}t, \tag{17}\] where \(t_{0}\) represents the initial instant; \(\vartheta_{t_{0}}\) represents the driving direction at \(t_{0}\); and \(v_{t_{0}}\) represents the velocity at \(t_{0}\). In the same way, the displacement on the y-axis can be expressed as \[d^{y}=\int_{t_{0}}^{t_{0}+\Delta t}\dot{d}^{y}\mathrm{d}t=\int_{t_{0}}^{t_{0}+\Delta t}\cos(\vartheta_{t_{0}}+\dot{\vartheta}t)(v_{t_{0}}+at)\mathrm{d}t. \tag{18}\] According to the above derivations, we can forecast the position of the vehicle in the next instant. Since driving is a continuous process, the driving path of the vehicle within a short period of time is thus predictable. The driving path of the vehicle on the two-dimensional plane can be expressed as \[d^{y}=f_{c}(d^{x}), \tag{19}\] where \(f_{c}(\cdot)\) is the curve function on the two-dimensional axes. To avoid vehicle collisions, we assume a vehicle safety zone, which is a circle with radius \(r_{s}\). Consequently, the AoI to be predicted can be simplified as the area swept by this circle as its center moves along the curve in (19). According to the above description, we can predict the AoI that needs to be sensed according to the real-time behaviors and the state of the vehicle. For clarity, the AoI is illustrated in Fig. 4.

Fig. 4: An illustration of the AoI prediction based on the real-time behaviors and the state of the vehicle.

### _Desired Radar Beamformer Calculation_

From (12), it is known that the desired radar beamformer is determined by \(\theta_{k}\) and \(N_{k}\). To obtain these parameters, we evenly divide the driving process of the vehicle within \(\Delta t\) into \(K\) stages so that there are \(K\) vehicular positions in the AoI. The \(K\) beams are used to sense these \(K\) positions to fully cover the AoI, as shown in Fig. 4. The pointing angle \(\theta_{k}\) of the \(k\)-th sub-array steering vector \(\mathbf{v}_{k}\) can be calculated as \[\theta_{k}=\arctan\left(\frac{d_{k}^{x}}{d_{k}^{y}}\right), \tag{20}\] where \(d_{k}^{x}\) and \(d_{k}^{y}\) stand for the distances on the x-axis and y-axis of the \(k\)-th position relative to the initial position of the vehicle, respectively. For MIMO radar sensing, the effective sensing range is positively correlated with the peak value of the main lobe of the beam. The main lobe of the beam can be made narrower by adjusting the number of antennas to achieve a longer radar sensing range. According to the MIMO radar equation [32], the maximum radar range can be expressed as \[d_{k}^{max}=\left(\frac{P_{k}G^{2}\lambda^{2}S_{\sigma}}{\left(4\pi\right)^{3}P_{min}}\right)^{\frac{1}{4}}=\Omega\left(P_{k}\right)^{\frac{1}{4}}, \tag{21}\] where \(P_{k}\) represents the radar transmit power of the \(k\)-th beam, which is determined by the radar beamformer; \(G\) represents the antenna gain; \(\lambda\) stands for the wavelength; \(S_{\sigma}\) stands for the radar cross section; \(P_{min}\) stands for the minimum detectable signal power; and we denote \(\Omega=\left(\frac{G^{2}\lambda^{2}S_{\sigma}}{(4\pi)^{3}P_{min}}\right)^{\frac{1}{4}}\) for simplicity. Assuming the total power is evenly distributed over all antennas and the transmit power of each antenna is \(P_{0}\), the radar transmit power of the \(k\)-th beam can be expressed as [32] \[P_{k}=N_{k}P_{0}. \tag{22}\]
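The displacement integrals (17) and (18) have no convenient closed form for general inputs, but they are easy to evaluate numerically. A minimal sketch, assuming a midpoint Riemann sum and the kinematic parameters used later in Section V:

```python
import numpy as np

def predicted_displacements(v0, a, theta0, theta_dot, dt, n=1000):
    """Evaluate the displacement integrals (17) and (18) numerically;
    theta_dot follows from (5)."""
    t = np.linspace(0.0, dt, n, endpoint=False) + dt / (2 * n)  # midpoint rule
    speed = v0 + a * t
    dx = np.sum(np.sin(theta0 + theta_dot * t) * speed) * dt / n
    dy = np.sum(np.cos(theta0 + theta_dot * t) * speed) * dt / n
    return dx, dy

# K positions sampled along the horizon give the K vehicular positions used
# for the desired radar beamformer below (Section V parameters assumed).
v0, a, l, phi, dt, K = 20.0, 1.0, 2.0, np.deg2rad(30.0), 0.2, 3
theta_dot = v0 * np.tan(phi) / l  # rate of change of driving direction, (5)
positions = [predicted_displacements(v0, a, 0.0, theta_dot, (k + 1) * dt / K)
             for k in range(K)]
```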
The detection distance required by the vehicle at the pointing angle of interest \(\theta_{k}\) is determined by the AoI. According to the vehicle's kinematic model, the sensing distance at the pointing angle of interest \(\theta_{k}\) can be expressed as \[d_{k}^{\ast}=\sqrt{(d_{k}^{x})^{2}+(d_{k}^{y})^{2}}+r_{s}. \tag{23}\] For safe and autonomous driving, one should ensure \(d_{k}^{max}\geq d_{k}^{\ast}\) by adjusting \(N_{k}\). Through the above analysis, we have established the link between the radar beampatterns and the behavior of the vehicle. Once an AoI is selected, the desired beamformer \(\mathbf{F}_{rad}\) can be obtained. To yield the desired beampattern for a given AoI, the transmit beamformer \(\mathbf{F}_{D}\) needs to be designed to approach \(\mathbf{F}_{rad}\). To simplify the formulated optimization problem, we require the dimensions of \(\mathbf{F}_{D}\) to be equal to those of \(\mathbf{F}_{rad}\), which can be easily satisfied by multiplying \(\mathbf{F}_{rad}\) with a unitary matrix [24]. Mathematically, this requirement can be modeled as \[||\mathbf{F}_{D}-\mathbf{F}_{rad}||_{F}^{2}\leq\varepsilon_{r}, \tag{24}\] where \(\varepsilon_{r}\) is a threshold parameter to control the level of similarity between \(\mathbf{F}_{D}\) and \(\mathbf{F}_{rad}\). With the above analysis, the sensing side of the joint beamformer design problem has been formulated. The covariance matrix of the transmit beamforming can be expressed as \[\mathbf{R}_{d}=\mathbb{E}(\mathbf{F}_{D}\mathbf{ss}^{H}\mathbf{F}_{D}^{H})=\mathbf{F}_{D}\mathbb{E}(\mathbf{ss}^{H})\mathbf{F}_{D}^{H}=\mathbf{F}_{D}\mathbf{F}_{D}^{H}, \tag{25}\] which determines the power distributed in space.

### _Optimal Transmit Beamforming Calculation_

The design of \(\mathbf{F}_{D}\) also directly affects the communication performance. In this work, we utilize spectral efficiency as the communication performance metric. The spectral efficiency of the above system model can be formulated as [33] \[R=\log\left(\det\left(\mathbf{I}_{N_{s}}+\frac{p}{\sigma_{n}^{2}}\mathbf{W}_{opt}\mathbf{H}\mathbf{F}_{D}\mathbf{F}_{D}^{H}\mathbf{H}^{H}\mathbf{W}_{opt}^{H}\right)\right), \tag{26}\] and is upper bounded by \[R_{up}=\log\left(\det\left(\mathbf{I}_{N_{s}}+\frac{p}{\sigma_{n}^{2}}\mathbf{W}_{opt}\mathbf{H}\mathbf{F}_{opt}\mathbf{F}_{opt}^{H}\mathbf{H}^{H}\mathbf{W}_{opt}^{H}\right)\right), \tag{27}\] where \(\mathbf{F}_{opt}\) represents the optimal beamforming matrix. \(\mathbf{W}_{opt}\) and \(\mathbf{F}_{opt}\) can be obtained by performing singular value decomposition on the channel matrix \(\mathbf{H}\) [34]. To maximize the spectral efficiency of VBA-ISAC systems, the beamformer can be designed to minimize the Euclidean distance between \(\mathbf{F}_{D}\) and \(\mathbf{F}_{opt}\). We formulate it as \[||\mathbf{F}_{D}-\mathbf{F}_{opt}||_{F}^{2}\leq\varepsilon_{c}, \tag{28}\] where \(\varepsilon_{c}\) is a threshold parameter to control the level of similarity between \(\mathbf{F}_{D}\) and \(\mathbf{F}_{opt}\).
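A short sketch of how \(\mathbf{W}_{opt}\), \(\mathbf{F}_{opt}\), and the rate (26) can be computed via the SVD of \(\mathbf{H}\). A base-2 logarithm is assumed here; the paper does not specify the base.

```python
import numpy as np

def optimal_beamformers(h, n_s):
    """W_opt / F_opt from the SVD of H, as used in (27): F_opt collects the
    n_s dominant right singular vectors, W_opt the corresponding left ones."""
    u, _, vh = np.linalg.svd(h)
    f_opt = vh.conj().T[:, :n_s]
    w_opt = u[:, :n_s].conj().T
    f_opt *= np.sqrt(n_s) / np.linalg.norm(f_opt, 'fro')  # ||F||_F^2 = N_s
    return f_opt, w_opt

def spectral_efficiency(h, f, w, snr):
    """R of (26): log det(I + (p / sigma_n^2) W H F F^H H^H W^H)."""
    n_s = f.shape[1]
    m = w @ h @ f
    return np.real(np.log2(np.linalg.det(np.eye(n_s) + snr * (m @ m.conj().T))))
```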
### _Problem Formulation for VBA-ISAC_

According to the above communication and sensing models, to optimize the VBA-ISAC system, we need to minimize the Euclidean distances between \(\mathbf{F}_{D}\) and \(\mathbf{F}_{opt}\) as well as between \(\mathbf{F}_{D}\) and \(\mathbf{F}_{rad}\) at the same time. However, the optimization of these two distances is not compatible. Such incompatibility results from the performance trade-off between communications and sensing, which must be carefully balanced. In this paper, we strike this trade-off by introducing a trade-off factor \(\rho\), which ranges from \(0\) to \(1\). The joint communication and sensing optimization problem of the beamformer design can be formulated as \[\begin{array}{l}\min\limits_{\mathbf{F}_{D}}\ \rho||\mathbf{F}_{D}-\mathbf{F}_{opt}||_{F}^{2}+(1-\rho)||\mathbf{F}_{D}-\mathbf{F}_{rad}||_{F}^{2}\\ s.t.\ ||\mathbf{F}_{D}||_{F}^{2}=N_{s}.\end{array} \tag{29}\] From its form, (29) is obviously a quadratically constrained quadratic programming (QCQP) problem. It is non-convex and intricate to solve optimally. Fortunately, the classic semi-definite relaxation (SDR) algorithm, a commonly used tool in the fields of communications and signal processing, can be leveraged to obtain approximate, sub-optimal solutions to QCQP problems [23]. To apply the SDR algorithm to the joint optimization problem formulated in (29), we first rewrite (29) in a more concise form to facilitate the subsequent analysis: \[\begin{array}{l}\min\limits_{\mathbf{F}_{D}}||\mathbf{A}\mathbf{F}_{D}-\mathbf{B}||_{F}^{2}\\ s.t.\ ||\mathbf{F}_{D}||_{F}^{2}=N_{s},\end{array} \tag{30}\] where \(\mathbf{A}=[\sqrt{\rho}\mathbf{I}_{N_{t}}^{T},\sqrt{1-\rho}\mathbf{I}_{N_{t}}^{T}]^{T}\), and \(\mathbf{B}=[\sqrt{\rho}\mathbf{F}_{opt}^{T},\sqrt{1-\rho}\mathbf{F}_{rad}^{T}]^{T}\). Based on the proven theorems and transformations given in [35], we reformulate (30) as a homogeneous QCQP optimization problem \[\begin{array}{l}\min\limits_{\mathbf{X}}\mathrm{Tr}(\mathbf{C}\mathbf{X})\\ s.t.\ \mathrm{Tr}(\mathbf{A}_{1}\mathbf{X})=N_{s}\\ \mathrm{Tr}(\mathbf{A}_{2}\mathbf{X})=1\\ \mathbf{X}\succeq 0,\ \mathrm{rank}(\mathbf{X})=1,\end{array} \tag{31}\] where \(\mathbf{X}=\mathbf{f}_{D}\mathbf{f}_{D}^{H}\) is an \((N_{t}N_{s}+1)\)-dimension complex Hermitian matrix with \(\mathbf{f}_{D}=[\mathrm{vec}(\mathbf{F}_{D})^{T}\ \ t]^{T}\), in which \(t\) is used to judge whether the optimal solution is \(\mathbf{f}_{D}\) or \(-\mathbf{f}_{D}\), i.e., \(t=1\) or \(t=-1\); \(\mathbf{A}_{1}\), \(\mathbf{A}_{2}\), and \(\mathbf{C}\) are given by \[\mathbf{A}_{1}=\left[\begin{array}{cc}\mathbf{I}_{N_{t}N_{s}}&\mathbf{0}\\ \mathbf{0}&0\end{array}\right],\mathbf{A}_{2}=\left[\begin{array}{cc}\mathbf{0}_{N_{t}N_{s}}&\mathbf{0}\\ \mathbf{0}&1\end{array}\right],\] and \[\mathbf{C}=\left[\begin{array}{cc}(\mathbf{I}_{N_{s}}\otimes\mathbf{A})^{H}(\mathbf{I}_{N_{s}}\otimes\mathbf{A})&-(\mathbf{I}_{N_{s}}\otimes\mathbf{A})^{H}\mathrm{vec}(\mathbf{B})\\ -\mathrm{vec}(\mathbf{B})^{H}(\mathbf{I}_{N_{s}}\otimes\mathbf{A})&\mathrm{vec}(\mathbf{B})^{H}\mathrm{vec}(\mathbf{B})\end{array}\right],\] all of dimension \(\mathbb{C}^{(N_{t}N_{s}+1)\times(N_{t}N_{s}+1)}\). However, due to the rank-one constraint, (31) is still non-convex. After removing the rank-one constraint, it becomes a semi-definite programming (SDP) problem \[\begin{split}\min_{\mathbf{X}}&\operatorname{Tr}(\mathbf{CX})\\ s.t.&\operatorname{Tr}(\mathbf{A}_{1}\mathbf{X})=N_{s}\\ &\operatorname{Tr}(\mathbf{A}_{2}\mathbf{X})=1\\ &\mathbf{X}\succeq 0.\end{split} \tag{32}\] Problem (32) is a classic convex optimization problem, which can be solved by conventional convex optimization tools, such as the CVX toolbox. Since the rank-one constraint is relaxed, the globally optimal solution to (32) can be obtained.
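A compact sketch of the SDR-BMM steps (30)-(32), written with the Python cvxpy package rather than the MATLAB CVX toolbox the paper mentions, followed by the rank-one (eigen-decomposition) extraction described next. The constraint encoding and the phase handling of \(t\) are our own simplifications, not the authors' exact implementation.

```python
import numpy as np
import cvxpy as cp

def sdr_bmm(f_opt, f_rad, rho, n_s):
    """Solve the relaxed SDP (32) and extract a rank-one approximation."""
    n_t = f_opt.shape[0]
    a = np.vstack([np.sqrt(rho) * np.eye(n_t), np.sqrt(1 - rho) * np.eye(n_t)])
    b = np.vstack([np.sqrt(rho) * f_opt, np.sqrt(1 - rho) * f_rad])
    ia = np.kron(np.eye(n_s), a)                      # I_{N_s} (Kronecker) A
    vb = b.reshape(-1, 1, order='F')                  # vec(B)
    n = n_t * n_s + 1
    c = np.block([[ia.conj().T @ ia, -ia.conj().T @ vb],
                  [-vb.conj().T @ ia, vb.conj().T @ vb]])   # C of (31)
    x = cp.Variable((n, n), hermitian=True)
    cons = [x >> 0,
            cp.real(cp.trace(x) - x[-1, -1]) == n_s,  # Tr(A_1 X) = N_s
            cp.real(x[-1, -1]) == 1]                  # Tr(A_2 X) = 1
    cp.Problem(cp.Minimize(cp.real(cp.trace(c @ x))), cons).solve()
    # Rank-one approximation: keep the leading eigenvector of the optimal X
    w, v = np.linalg.eigh(x.value)
    f = np.sqrt(max(w[-1], 0.0)) * v[:, -1]
    f_d = (f[:-1] / f[-1]).reshape(n_t, n_s, order='F')  # absorb the phase of t
    return np.sqrt(n_s) * f_d / np.linalg.norm(f_d, 'fro')
```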
However, the global optimum of (32) is not even a feasible solution to (30) when the rank of \(\mathbf{X}\) is higher than one. Utilizing the approach proposed in [35], we apply the eigen-decomposition of \(\mathbf{X}\) to yield an approximate solution to (30). We summarize the approach tailored for our formulated optimization problem in Algorithm 1.

```
Input: \(\mathbf{F}_{opt},\mathbf{F}_{rad},N_{s},\varepsilon_{r,c}>0,0\leqslant\rho\leqslant 1\)
Randomly initialize \(\mathbf{F}_{D}\).
1. Convert (29) into (30). Then reformulate it as a homogeneous QCQP optimization problem as in (31).
2. Relax the rank-one constraint in (31) and turn it into an SDP problem as in (32).
3. Solve the SDP problem by using the convex optimization toolbox CVX.
4. Apply the eigen-decomposition of \(\mathbf{X}\) to yield an approximate and suboptimal solution to the original optimization problem.
Output: \(\mathbf{F}_{D}\)
```
**Algorithm 1** Semi-definite relaxation (SDR) algorithm for VBA-ISAC beamforming design.

### _VBA-ISAC Algorithm and Computational Complexity Analysis_

Through the above analysis, we can design the ISAC beamforming based on the vehicle's behavior and state. We summarize the VBA-ISAC beamforming scheme in Algorithm 2. In the proposed scheme, the AoI is obtained by calculating the value of a one-dimensional function. The computational complexity of this calculation can be omitted in general because the complexity of the entire scheme is dominated by the SDR algorithm. According to the methodology adopted in [35], the worst-case complexity of solving (32) is \(\mathcal{O}\left({{{N_{t}}^{3.5}}{{N_{s}}^{3.5}}}\right)\).

```
Input: Acceleration \(a\) and steering angle \(\phi\)
1. Predict the AoI according to the real-time behavior and state of the vehicle from in-vehicle sensors.
2. Formulate the desired radar beamformer \(\mathbf{F}_{rad}\) based on the predicted AoI.
3. Obtain the optimal transmit beamformer \(\mathbf{F}_{opt}\) by performing singular value decomposition on the channel matrix \(\mathbf{H}\).
4. Formulate the joint optimization problem by introducing a trade-off factor to balance the communication and sensing performance.
5. Solve the formulated joint optimization problem by applying the SDR algorithm.
Output: The desired VBA-ISAC beamforming matrix.
```
**Algorithm 2** VBA-ISAC beamforming scheme.

## IV VBA-ISAC Beamforming Design with the Hybrid Architecture

For multi-antenna ISAC systems, full-digital beamforming demands RF chains, including signal mixers and analog-to-digital converters, comparable in number to the antenna elements. The prohibitive cost and power consumption of RF chains make the full-digital ISAC beamforming design uneconomical and difficult to apply in practical vehicular systems. To reduce the hardware complexity and the associated costs, the hybrid analog-digital (HAD) beamforming structure is more suitable for vehicle-mounted ISAC systems, as it requires much fewer RF chains than full-digital transceivers [36]. HAD beamforming design is therefore an attractive technology for practical vehicular ISAC systems. In this section, we study the VBA-ISAC beamforming design with a HAD architecture.

### _Problem Formulation for VBA-ISAC with a HAD Architecture_

Similarly, we present a VBA-ISAC beamforming design scenario with a HAD architecture, as depicted in Fig. 5.
The main difference introduced by the HAD architecture is that the transmit beamformer is now composed of a digital beamformer and an analog beamformer. We assume the receiving side is equipped with an optimal combiner, denoted by \(\mathbf{W}_{opt}\). In Fig. 5, the vehicle-mounted transmitter is equipped with \(N_{RF}^{t}\) RF chains. Each RF chain is connected to all antennas through phase shifters. The number of RF chains is limited such that \(N_{s}\leqslant N_{RF}^{t}\leqslant N_{t}\). Accordingly, the transmitted signal vector \(\mathbf{x}\) can now be expressed as \(\mathbf{x}=\mathbf{F}_{RF}\mathbf{F}_{BB}\mathbf{s}\). The normalized power constraint can be expressed as \(||\mathbf{F}_{RF}\mathbf{F}_{BB}||_{F}^{2}=N_{s}\). The signal on the receiving side is thus given by \[\mathbf{y}=\sqrt{p}\mathbf{W}_{opt}\mathbf{H}\mathbf{F}_{RF}\mathbf{F}_{BB}\mathbf{s}+\mathbf{W}_{opt}\mathbf{n}, \tag{33}\] where \(\mathbf{F}_{RF}\in\mathbb{C}^{N_{t}\times N_{RF}^{t}}\) is the analog beamformer, and \(\mathbf{F}_{BB}\in\mathbb{C}^{N_{RF}^{t}\times N_{s}}\) is the digital beamformer. As in the full-digital VBA-ISAC system, the spectral efficiency of the above communication model with the HAD architecture can be formulated as \[\begin{split}R=\log\bigg{(}\det\bigg{(}\mathbf{I}_{N_{s}}+\frac{p}{\sigma_{n}^{2}}\mathbf{W}_{opt}\mathbf{H}\mathbf{F}_{RF}\mathbf{F}_{BB}\\ \times\mathbf{F}_{BB}^{H}\mathbf{F}_{RF}^{H}\mathbf{H}^{H}\mathbf{W}_{opt}^{H}\bigg{)}\bigg{)}.\end{split} \tag{34}\] Similarly, to maximize the spectral efficiency of hybrid VBA-ISAC systems, we should design the hybrid beamformer to achieve the smallest Euclidean distance between \(\mathbf{F}_{RF}\mathbf{F}_{BB}\) and \(\mathbf{F}_{opt}\), which can be explicitly expressed as \[||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{opt}||_{F}^{2}\leq\varepsilon_{c}. \tag{35}\] The sensing model with the HAD architecture is similar to that of the full-digital VBA-ISAC system, and the desired radar beampattern is designed based on the AoI. The radar covariance matrix of HAD beamforming can be expressed as \[\begin{split}\mathbf{R}_{d}&=\mathbb{E}(\mathbf{F}_{RF}\mathbf{F}_{BB}\mathbf{s}\mathbf{s}^{H}\mathbf{F}_{BB}^{H}\mathbf{F}_{RF}^{H})\\ &=\mathbf{F}_{RF}\mathbf{F}_{BB}\mathbb{E}(\mathbf{s}\mathbf{s}^{H})\mathbf{F}_{BB}^{H}\mathbf{F}_{RF}^{H}\\ &=\mathbf{F}_{RF}\mathbf{F}_{BB}\mathbf{F}_{BB}^{H}\mathbf{F}_{RF}^{H}.\end{split} \tag{36}\] Similarly, to generate a satisfying beampattern, we need to design the hybrid beamformer to achieve the smallest Euclidean distance between \(\mathbf{F}_{RF}\mathbf{F}_{BB}\) and \(\mathbf{F}_{rad}\), which can be expressed as \[||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{rad}||_{F}^{2}\leq\varepsilon_{r}. \tag{37}\] For simplicity, again, we assume that the dimensions of \(\mathbf{F}_{RF}\mathbf{F}_{BB}\) are equal to those of \(\mathbf{F}_{rad}\). According to the above communication and sensing models with the HAD architecture, the hybrid beamforming problem of VBA-ISAC systems can be written as \[\begin{split}\min_{\mathbf{F}_{RF},\mathbf{F}_{BB}}\rho||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{opt}||_{F}^{2}\\ +(1-\rho)||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{rad}||_{F}^{2}\\ s.t.&|\mathbf{F}_{RF_{i,j}}|=1,\forall i,j\\ &\quad||\mathbf{F}_{RF}\mathbf{F}_{BB}||_{F}^{2}=N_{s}.\end{split} \tag{38}\] Since the phase shifters can only adjust the signal phase, not the signal amplitude, \(\mathbf{F}_{RF}\) has to abide by the unit-modulus constraint.
### _Alternating Minimization for VBA-ISAC with the Hybrid Architecture_

From its form, the joint optimization problem (38) can be regarded as a matrix factorization problem. Since it has two variables to be optimized, alternating minimization is applicable, which optimizes one variable while fixing the other. Specifically, when optimizing the digital beamformer \(\mathbf{F}_{BB}\), we first fix the analog beamformer \(\mathbf{F}_{RF}\). Therefore, (38) can be rewritten as \[\min_{\mathbf{F}_{BB}}\rho||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{opt}||_{F}^{2}+(1-\rho)||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{rad}||_{F}^{2}. \tag{39}\] To facilitate the following analysis, it can be further converted to \[\min_{\mathbf{F}_{BB}}||\mathbf{A}\mathbf{F}_{BB}-\mathbf{B}||_{F}^{2}, \tag{40}\] where \(\mathbf{A}=[\sqrt{\rho}\mathbf{F}_{RF}^{T},\sqrt{1-\rho}\mathbf{F}_{RF}^{T}]^{T}\), and \(\mathbf{B}=[\sqrt{\rho}\mathbf{F}_{opt}^{T},\sqrt{1-\rho}\mathbf{F}_{rad}^{T}]^{T}\). Now, it becomes obvious that (40) is a classic matrix factorization problem. After this transformation, the problem can be solved by the SDR method shown in Section III-D or by the least squares (LS) method proposed in [34] as \[\mathbf{F}_{BB}=\mathbf{A}^{\dagger}\mathbf{B}. \tag{41}\] Regarding the power constraint, we multiply the final optimized result by \(\frac{\sqrt{N_{s}}}{||\mathbf{F}_{RF}\mathbf{F}_{BB}||_{F}}\). Similarly, when optimizing the analog beamformer \(\mathbf{F}_{RF}\), we need to fix the digital beamformer \(\mathbf{F}_{BB}\), and, hence, (38) can be refactored as \[\begin{split}\min_{\mathbf{F}_{RF}}\rho||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{opt}||_{F}^{2}&+(1-\rho)||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{rad}||_{F}^{2}\\ s.t.&|\mathbf{F}_{RF_{i,j}}|=1,\forall i,j.\end{split} \tag{42}\] However, (42), as a non-convex optimization problem, is difficult to tackle. The constraint \(|\mathbf{F}_{RF_{i,j}}|=1,\forall i,j\) is the unit-modulus constraint, which cannot be handled by conventional optimization algorithms. Fortunately, it can be cast onto a typical manifold structure, so that (42) can subsequently be solved as a manifold optimization problem [34, 37, 38]. Specifically, we transform (42) into an optimization problem on a manifold as follows: \[\begin{split}\min_{\mathbf{F}_{RF}\in\mathcal{M}}f_{m}(\mathbf{F}_{RF})=&\rho||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{opt}||_{F}^{2}\\ &+(1-\rho)||\mathbf{F}_{RF}\mathbf{F}_{BB}-\mathbf{F}_{rad}||_{F}^{2},\end{split} \tag{43}\] where \(\mathcal{M}\) stands for the manifold, and \(f_{m}(\cdot)\) is the objective function on the manifold.

Fig. 5: A vehicle-mounted transmitter with HAD architecture communicates with an RSU while sensing the AoI.

Accordingly, inspired by the method utilized in [34, 37], we consider the complex circle manifold of the vector \(\mathbf{p}=\mathrm{vec}(\mathbf{F}_{RF})\), which can be expressed as \[\mathcal{M}_{cc}=\left\{\mathbf{p}\in\mathbb{C}^{N_{t}N_{RF}^{t}}:|\mathbf{p}_{i}|=1,i=1,2,\ldots,N_{t}N_{RF}^{t}\right\}, \tag{44}\] where \(\mathbf{p}\) is a point on the manifold.
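The alternating loop can be sketched as follows. The digital step is the LS update (41); for the analog step we substitute a simple phase-projection heuristic in place of the Manopt conjugate-gradient update used in the paper, so this is an illustrative approximation rather than the authors' exact algorithm.

```python
import numpy as np

def altmin_hybrid(f_opt, f_rad, n_rf, rho, iters=50, seed=0):
    """Alternating minimization sketch for the hybrid problem (38)."""
    n_t, n_s = f_opt.shape
    rng = np.random.default_rng(seed)
    f_rf = np.exp(1j * 2 * np.pi * rng.random((n_t, n_rf)))  # unit-modulus init
    # rho*F_opt + (1-rho)*F_rad is the unconstrained minimizer of the objective
    target = rho * f_opt + (1 - rho) * f_rad
    b = np.vstack([np.sqrt(rho) * f_opt, np.sqrt(1 - rho) * f_rad])
    for _ in range(iters):
        a = np.vstack([np.sqrt(rho) * f_rf, np.sqrt(1 - rho) * f_rf])
        f_bb = np.linalg.pinv(a) @ b                # LS digital update, (41)
        # Heuristic analog update: project the unconstrained LS solution for
        # F_RF onto the unit-modulus set |F_RF(i, j)| = 1 (stand-in for the
        # manifold conjugate-gradient step)
        f_rf = np.exp(1j * np.angle(target @ np.linalg.pinv(f_bb)))
    f_bb *= np.sqrt(n_s) / np.linalg.norm(f_rf @ f_bb, 'fro')  # power constraint
    return f_rf, f_bb
```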
According to the complex circle manifold defined in (44), the optimization problem (43) can be expressed as \[\begin{split}\min_{\mathbf{p}\in\mathcal{M}_{cc}}f_{m}(\mathbf{p})=&\rho||\mathbf{f}_{BB}\mathbf{p}-\mathbf{f}_{opt}||_{F}^{2}\\ &+(1-\rho)||\mathbf{f}_{BB}\mathbf{p}-\mathbf{f}_{rad}||_{F}^{2},\end{split} \tag{45}\] where \(\mathbf{f}_{BB}=\mathbf{F}_{BB}^{*}\otimes\mathbf{I}_{N_{t}}\), \(\mathbf{f}_{opt}=\mathrm{vec}(\mathbf{F}_{opt})\), and \(\mathbf{f}_{rad}=\mathrm{vec}(\mathbf{F}_{rad})\). According to [34, 37], (45) can be solved well by the Manopt toolbox.

### _VBA-ISAC Algorithm with the Hybrid Architecture and Computational Complexity Analysis_

According to the above description, the solution of the VBA-ISAC beamforming design with the hybrid architecture is summarized in Algorithm 3. In Algorithm 3, the most critical step is solving (42) to optimize \(\mathbf{F}_{RF}\) by the manifold optimization method. In the manifold optimization, the computational complexity is dominated by the conjugate gradient descent method [38]. Similar to Euclidean unconstrained optimization, the number of iterations needed for the gradient descent method to drive the manifold gradient norm below a control threshold \(\varepsilon\) can be quantified as \(\mathcal{O}\left(1/\varepsilon^{2}\right)\) in the worst case [39]. For the conjugate gradient descent method on a complex circle manifold, the computational complexity of each iteration is characterized by \(\mathcal{O}\left(N_{t}^{2}N_{RF}^{t}N_{s}\right)\) [37].

```
Input: \(\mathbf{F}_{opt},\mathbf{F}_{rad},N_{s},\varepsilon_{r,c}>0,0\leqslant\rho\leqslant 1,i_{max}>0\)
Randomly initialize \(\mathbf{F}_{RF}^{0}\) and \(\mathbf{F}_{BB}^{0}\), and set \(i=0\)
while \(i\leq i_{max}\) do
1. Fix \(\mathbf{F}_{RF}^{i}\), and optimize \(\mathbf{F}_{BB}^{i+1}\) by (41).
2. Fix \(\mathbf{F}_{BB}^{i+1}\), and optimize \(\mathbf{F}_{RF}^{i+1}\) by the manifold optimization method.
3. \(i\gets i+1\).
4. Judge whether the convergence condition is satisfied, and break the while loop if yes.
endwhile
Output: \(\mathbf{F}_{RF}^{i}\) and \(\mathbf{F}_{BB}^{i}\)
```
**Algorithm 3** Alternating minimization algorithm for VBA-ISAC beamforming design with the hybrid architecture.

### _Energy Efficiency Analysis_

In this subsection, we analyze the energy efficiency of the HAD architecture and the full-digital architecture. According to the description in [34, 40], the energy efficiency at the transmitting side is defined as the ratio of the spectral efficiency to the power consumption, which is explicitly given by \[R_{p}=\frac{R}{P_{sum}}, \tag{46}\] where \(R\) represents the spectral efficiency, and \(P_{sum}\) is the total power consumption of the transmitter. For the VBA-ISAC system with the HAD architecture, \(P_{sum}\) is given by \[P_{sum}=P_{BB}+N_{RF}^{t}P_{RF}+N_{t}P_{PA}+N_{RF}^{t}N_{t}P_{PS}, \tag{47}\] where \(P_{BB}\) represents the power consumption of the digital baseband in the transmitter; \(P_{RF}\) represents the power consumption of each RF chain; \(P_{PA}\) represents the power consumption of each linear amplifier; and \(P_{PS}\) represents the power consumption of each phase shifter. Similarly, for the VBA-ISAC system with the full-digital architecture, \(P_{sum}\) is given by \[P_{sum}=P_{BB}+N_{t}P_{RF}+N_{t}P_{PA}+N_{t}P_{PS}. \tag{48}\] In the full-digital structure, the number of RF chains is equal to the number of antennas \(N_{t}\), whereas the hybrid structure requires only \(N_{RF}^{t}\) RF chains. Since each RF chain in the hybrid structure is connected to all antennas, the number of phase shifters is \(N_{RF}^{t}N_{t}\).
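The power models (47) and (48) reduce to simple arithmetic; the sketch below plugs in the consumption values used later in Section V.

```python
# Power models (47) and (48); values follow the Section V setup.
P_BB, P_RF, P_PA, P_PS = 10.0, 0.3, 0.1, 0.01  # watts

def p_sum_hybrid(n_t, n_rf):
    # (47): N_RF^t RF chains, N_t amplifiers, N_RF^t * N_t phase shifters
    return P_BB + n_rf * P_RF + n_t * P_PA + n_rf * n_t * P_PS

def p_sum_digital(n_t):
    # (48): one RF chain and one phase shifter per antenna
    return P_BB + n_t * P_RF + n_t * P_PA + n_t * P_PS

def energy_efficiency(r, p_sum):
    # (46): R_p = R / P_sum
    return r / p_sum

# With N_t = 81 and N_RF^t = 3: hybrid ~21.4 W versus full-digital ~43.2 W
```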
## V Simulation Results

In this section, we present numerical results to illustrate and discuss the superiority of our proposed VBA-ISAC beamforming design. Specifically, we first show the simulation results of predicting the AoI, which is obtained by calculating the driving path of the vehicle. Then, we show the simulation results of the beampatterns and the communication performance, respectively. In the simulations, the number of transmit antennas of the ISAC system is \(81\), and a ULA is adopted.

### _Sensing Performance_

To carry out the numerical simulations, we first need to set up the parameters of the vehicle's kinematic model. Specifically, the acceleration of the vehicle is set to \(a=1\) m/s\({}^{2}\); the steering angle and initial driving direction of the vehicle are set to \(\phi=30^{\circ}\) and \(\vartheta=0^{\circ}\); the initial position on the two-dimensional plane is set to \((1,1)\); the time slot is set to \(\Delta t=0.2\) s; the initial velocity is set to \(v=20\) m/s; the distance between the front and rear wheels is set to \(l=2\) m; and the radius of the vehicle safety zone is set to \(r_{s}=1\) m. The simulation parameters of the vehicle's kinematic model are summarized in Table I. Based on the simulation setups given above, the simulation results of the driving path of the vehicle on the two-dimensional plane and the predicted AoI are shown in Fig. 6. In Fig. 6, the red line is the driving path of the vehicle, and the circles represent the safety zone. For analytical simplicity, the driving path of the vehicle within \(\Delta t\) is evenly divided into three stages. There are accordingly three vehicle positions, \((1.387,2.581)\), \((2.526,3.861)\), and \((4.085,4.433)\), with the corresponding AoI. The parameters of the sensing model are determined by the AoI. From the simulation results of the AoI of the vehicle, the number of radar pointing angles is set to \(K=3\). According to (20), the pointing angles \(\theta_{k}\) can be calculated as \(\theta_{1}=14.1^{\circ}\), \(\theta_{2}=28.1^{\circ}\), and \(\theta_{3}=41.9^{\circ}\), respectively. According to (23), the sensing distances at the pointing angles \(\theta_{k}\) can be calculated as \(d_{1}^{*}=2.7\) m, \(d_{2}^{*}=4.2\) m, and \(d_{3}^{*}=5.6\) m. The numbers of antennas \(N_{k}\) used to form the narrow beams are allocated as \(N_{1}:N_{2}:N_{3}=2.7^{4}:4.2^{4}:5.6^{4}\approx 4:18:59\).
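The allocation just described can be reproduced numerically; the printed positions and the fourth-power rule of (21)-(22) give values close to the reported split (small differences come from rounding the printed coordinates).

```python
import numpy as np

# Pointing angles (20) and sensing distances (23) for the three predicted
# positions in Fig. 6, with initial position (1, 1) and r_s = 1 m.
positions = np.array([[1.387, 2.581], [2.526, 3.861], [4.085, 4.433]])
rel = positions - np.array([1.0, 1.0])
theta_k = np.degrees(np.arctan2(rel[:, 0], rel[:, 1]))  # approx. 14, 28, 42 deg
d_star = np.linalg.norm(rel, axis=1) + 1.0              # approx. 2.7, 4.2, 5.6 m
weights = d_star ** 4                            # P_k scales as (d_k*)^4, (21)-(22)
n_k = np.round(81 * weights / weights.sum()).astype(int)  # close to 4 : 18 : 59
```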
Based on the simulation setups given above, the simulation results of the beampatterns are shown in Fig. 7. In [23, 24, 25], the radar pointing angles depend on the entire area required to be scanned, and the number of antennas forming the desired beam at each pointing angle is evenly distributed. From Fig. 7, it can be seen that such designs can hardly cover the AoI accurately. Compared with these intuitive benchmarks, setting the radar pointing angles \(\theta_{k}\) and the numbers of antennas \(N_{k}\) based on the AoI covers the AoI more directionally and accurately, as the radar power can be more concentrated on the target area. Besides, we investigate the impact of the trade-off parameter \(\rho\) on the sensing performance. The simulation results of the beampatterns with \(\rho=0\), \(0.5\), and \(1\) are shown in Fig. 8. In the extreme case of \(\rho=0\), the VBA-ISAC system becomes a radar-only system, where the communication performance is not considered. In this case, we obtain an optimal beampattern. It can also be seen that when the trade-off factor \(\rho\) is smaller, the beampattern is closer to the optimal beampattern. That is because the smaller the trade-off parameter is, the greater the weight of sensing in the objective function of (29) will be; therefore, more of the beam's main-lobe power is used to sense the radar pointing angles. On the other hand, when \(\rho\) is close to 1, the beampattern becomes rather poor and can hardly satisfy the sensing demand, since little power is concentrated on the AoI for sensing. In this case, the VBA-ISAC system becomes a communication-only system.

### _Communication Performance_

In the simulations, the parameter setup is as follows. The number of receive antennas at the communication receiver is \(N_{r}=16\), where a ULA is adopted. The number of transmission paths of the channel is \(L=10\), and the channel power gain \(\alpha_{l}\) of each path has variance \(\sigma_{\alpha}^{2}=1\). The departure angle \(\theta_{t,l}\) and arrival angle \(\theta_{r,l}\) are uniformly distributed within \([-90^{\circ},90^{\circ}]\). The number of data streams is set to \(N_{s}=3\). The communication performance of VBA-ISAC in terms of spectral efficiency is demonstrated in Fig. 9, where the trade-off factor is \(\rho=0.5\). It can be seen from Fig. 9 that the spectral efficiency of VBA-ISAC is lower than that of the communication-only system. That is because part of the main beam of the VBA-ISAC system is used to sense the AoI. As a result, the main lobe of the beam focusing on communication is reduced, leading to an inevitable performance loss. Fortunately, the spectral efficiency of VBA-ISAC remains close to the optimal spectral efficiency. Moreover, the spectral efficiency of VBA-ISAC is significantly higher than the spectral efficiency in [23, 24, 25]. This is because more beam power is required in [23, 24, 25] to cover the AoI. As a result, the beam power used for communication in [23, 24, 25] is reduced, resulting in a decrease in spectral efficiency. This phenomenon also validates the superiority of the proposed VBA-ISAC beamforming design scheme.

Fig. 6: Simulation results of the driving path and the predicted AoI.

Fig. 7: Simulation results of the beampatterns by our proposed method and the benchmarks.

Fig. 8: Influence on the sensing performance of different trade-off parameters \(\rho=0\), \(0.5\), and \(1\).

The simulation results of the spectral efficiency with \(\rho=0.2\), \(0.5\), \(0.8\), and \(1\) are shown in Fig. 10. It is observed that as \(\rho\) gradually increases, the spectral efficiency increases. This observed trend is aligned with the expectation because the main lobe of the beam used for communication grows as \(\rho\) increases. Combining the simulation results in Fig. 8 and Fig. 10, one can conclude that the communication and sensing performance of the ISAC system can be adjusted through the trade-off factor \(\rho\). In practical use, \(\rho\) is determined by the functional requirements of the users. If the users wish to achieve better communication performance, a larger \(\rho\) can be set. Conversely, if the users wish to achieve better sensing performance, a smaller \(\rho\) can be set. Thus, the best setting of \(\rho\) depends on the user's quality-of-service requirements. For comprehensiveness, we also compare the performance of VBA-ISAC with the full-digital and HAD structures in terms of energy efficiency.
In the simulations, the power consumption parameters are set as follows: \(P_{BB}=10\) W, \(P_{RF}=300\) mW, \(P_{PA}=100\) mW, and \(P_{PS}=10\) mW. The number of RF chains of the transmitter is set to \(N_{RF}^{t}=3\). The simulation results of the spectral efficiency of the VBA-ISAC system with the full-digital and hybrid RF structures are shown in Fig. 9. It is observed that the spectral efficiency of the hybrid structure is lower than that of the full-digital structure. The simulation results of energy efficiency are shown in Fig. 11, where \(\rho=0.5\). It can be clearly seen from Fig. 11 that the energy efficiency of VBA-ISAC with the hybrid structure is higher than that of the full-digital structure.

### _Performance Over Time-Varying Channels_

In (15), we mainly consider a quasi-static mmWave channel model within a coherence time. To evaluate the impact of time-varying channels on the proposed scheme, we conduct further simulations. Specifically, we split the time-varying channel into a static part and a time-varying part, and write the mmWave channel model in time-varying scenarios as \[\mathbf{H}_{d}=\mathbf{H}+\mathbf{H}_{e},\] where \(\mathbf{H}\) stands for the static channel given in (15), and \(\mathbf{H}_{e}\) represents the time-varying part. For ISAC beamforming designs, \(\mathbf{H}_{e}\) can also be regarded as the channel estimation error caused by the Doppler shift in practical time-varying scenarios. Without loss of generality, we assume each entry of \(\mathbf{H}_{e}\) obeys a complex Gaussian distribution with zero mean and variance \(\sigma_{e}\) [41]. Simulation results are demonstrated in Fig. 12. It is shown that the spectral efficiency of all schemes decreases when the time-varying part is not known. Under the same level of unknown time-varying part, the spectral efficiency of the proposed VBA-ISAC scheme is still higher than that of the benchmarks in [23, 24, 25].

Fig. 9: Simulation results of spectral efficiency for VBA-ISAC and the benchmarks.

Fig. 10: Spectral efficiency versus trade-off parameter \(\rho\).

Fig. 11: Energy efficiency of VBA-ISAC with the full-digital structure and the hybrid structure, given \(\rho=0.5\).

## VI Conclusion

The communication and sensing modules of traditional vehicle-mounted equipment are placed in isolation, resulting in low utilization of wireless spectrum and hardware resources. To address this problem, we proposed a VBA-ISAC beamforming design for vehicle-mounted transmitters. In the proposed design, we predicted the trajectory based on the behavior of the vehicle. By introducing a safety zone, the AoI was determined according to the predicted driving path. After selecting the pointing angles of interest in the AoI, a desired radar beamformer was devised. Simultaneously, the vehicular transmitter was also able to communicate with the RSU. Then, we formulated the VBA-ISAC beamforming design as an optimization problem and introduced a trade-off factor to balance the communication and sensing performance. A tailored SDR algorithm was proposed to solve the formulated optimization problem. To cope with the large power consumption and high cost of the VBA-ISAC system with the full-digital architecture, we proposed and analyzed the VBA-ISAC beamforming design with the hybrid architecture. The numerical results demonstrated that the proposed beamforming design outperforms the benchmarks in both spectral efficiency and radar beampattern.
2310.14572
Unveiling the Multi-Annotation Process: Examining the Influence of Annotation Quantity and Instance Difficulty on Model Performance
The NLP community has long advocated for the construction of multi-annotator datasets to better capture the nuances of language interpretation, subjectivity, and ambiguity. This paper conducts a retrospective study to show how performance scores can vary when a dataset expands from a single annotation per instance to multiple annotations. We propose a novel multi-annotator simulation process to generate datasets with varying annotation budgets. We show that similar datasets with the same annotation budget can lead to varying performance gains. Our findings challenge the popular belief that models trained on multi-annotation examples always lead to better performance than models trained on single or few-annotation examples.
Pritam Kadasi, Mayank Singh
2023-10-23T05:12:41Z
http://arxiv.org/abs/2310.14572v1
Unveiling the Multi-Annotation Process: Examining the Influence of Annotation Quantity and Instance Difficulty on Model Performance ###### Abstract The NLP community has long advocated for the construction of multi-annotator datasets to better capture the nuances of language interpretation, subjectivity, and ambiguity. This paper conducts a retrospective study to show how performance scores can vary when a dataset expands from a single annotation per instance to multiple annotations. We propose a novel multi-annotator simulation process to generate datasets with varying annotation budgets. We show that similar datasets with the same annotation budget can lead to varying performance gains. Our findings challenge the popular belief that models trained on multi-annotation examples always lead to better performance than models trained on single or few-annotation examples.

## 1 Introduction

The process of creating datasets often involves practical constraints such as time, resources, and budget that limit the number of annotators or experts available for collecting annotations (Sheng et al., 2008). As a result, there is a prevalence of single or few labels per instance (depending on the limited number of annotators) in the collected data. However, training models on these datasets poses challenges to their generalization abilities, primarily because the data lacks diversity. With a scarcity of different perspectives and variations in the training data (Basile et al., 2021; Plank, 2022), models may struggle to learn robust representations and fail to generalize effectively (Nie et al., 2020; Meissner et al., 2021). To address these challenges, the NLP community has highlighted the advantages of utilizing multi-annotator datasets (Davani et al., 2022) and also emphasized the importance of releasing multi-annotator datasets and associated information (cultural and demographic, etc.) (Sap et al., 2022; Hershovich et al., 2022). However, this approach introduces its own set of challenges. Collecting data with multiple annotators requires significant time, annotation budget, and annotator expertise to ensure the creation of high-quality datasets with diverse perspectives. Moreover, with a limited annotation budget, it becomes crucial to determine the optimal number of annotators within the given constraints. This not only helps save annotation time and budget but also ensures efficient utilization of available resources. While some research (Wan et al., 2023; Zhang et al., 2021) has provided insights and suggestions on finding the optimal number of annotators, a definitive solution to this problem has yet to be achieved. Another challenge is the restricted number of annotations available per instance, typically not exceeding \(6-10\), even with a large number of recruited annotators (Plank, 2022). This limitation arises from the considerable annotation efforts required for a large volume of instances. As a result, when models are trained on such datasets, they only capture the opinions and information of a small subset of the annotator pool. Additionally, certain datasets have not released annotator-specific labels or established mappings to individual annotators (Nie et al., 2020; Jigsaw, 2018; Davidson et al., 2017). However, the trend is gradually shifting, and there is a growing recognition that annotator-level labels should be made available (Prabhakaran et al., 2021; Basile et al., 2021; Denton et al., 2021).
This study aims to tackle the challenge of lacking annotator-specific labels by simulating a multi-annotation process. Through this study, we provide insights into how the inclusion of more annotators can introduce variations in model performance and identify the factors that influence this variation. Considering that previous research (Swayamdipta et al., 2020) has highlighted the influence of individual instance difficulty on model performance, we examine how the addition of more annotations alters the difficulty level of instances and consequently affects model performance. In summary, our main contributions are: * We propose a novel multi-annotator simulation process to address the issue of missing annotator-specific labels. * We demonstrate that increasing the number of annotations per instance does not necessarily result in significant performance gains. * We also demonstrate that altering the number of annotations per instance has a noticeable impact on the difficulty of instances as perceived by the model and consequently affects the model performance.

## 2 The Multi-annotated Dataset

In practical scenarios, the annotation process begins by hiring one or more annotators who annotate each instance in the dataset. To enhance the representation of the true label distribution, we have the option to extend this process by recruiting additional annotators. We continue this iterative process until either the annotation budget is exceeded or we observe saturation in the model's performance in predicting the true label distribution. As a result, we obtain multiple annotations assigned to each instance in this multi-annotated dataset. A multi-annotator dataset \(\mathcal{D}\) is formally characterized as a triplet \(\mathcal{D}=(X,A,Y)\) in this research paper. The set \(X\) represents \(N\) text instances, denoted as \(x_{1},x_{2},\ldots,x_{N}\). The set \(A\) corresponds to \(M\) annotators, represented as \(a_{1},a_{2},\ldots,a_{M}\). The annotation matrix \(Y\) captures the annotations, with rows indexed by \(X\) and columns indexed by \(A\). Specifically, \(Y=Y[X;A]=Y[x_{1},x_{2},\ldots,x_{N};a_{1},a_{2},\ldots,a_{M}]\). In simpler terms, the entry \(Y[x_{i};a_{j}]\) stores the label \(y_{i,j}\) assigned to instance \(x_{i}\) by annotator \(a_{j}\). Furthermore, an _annotator-set_ \(A_{k}\), which comprises \(k\) annotators where \(1\leq k\leq M\), is defined. Consequently, the subset of \(\mathcal{D}\) restricted to \(A_{k}\) is denoted as \(\mathcal{D}_{k}=(X,A_{k},Y^{\prime})\), where \(Y^{\prime}=Y[X;A_{k}]\). This paper refers to \(\mathcal{D}_{k}\) as the dataset subset with \(k\) annotations per instance. Figure 1 illustrates a toy multi-annotator dataset, showcasing \(M\) annotators and \(N\) instances, along with its subsets comprising 2 and \(k\) annotators.

## 3 Simulating the Multi-annotation Process

Based on our current knowledge, it is worth noting that existing multi-annotator datasets typically do not include annotator-specific labels. Instead, the available information is limited to the label distribution for each instance (Nie et al., 2020; Jigsaw, 2018; Davidson et al., 2017). For instance, in cases with \(M\) annotations per instance and three possible labels, the label distribution is commonly represented by a list \([p,q,r]\), where \(p\), \(q\), and \(r\) are positive integers that sum up to \(M\). To address this constraint, we introduce a simulation process for multi-annotator scenarios that leverages the instance-level label distribution.
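As an illustration of this idea, a minimal Python sketch is given below; the function name is hypothetical, and the precise procedure is the one spelled out step by step in Algorithm 1 next.

```python
import random

def simulate_annotations(label_distribution, k, seed=0):
    """Draw k annotations for one instance from its label distribution.

    label_distribution: per-label counts, e.g. [60, 30, 10] for labels
    0/1/2 of an instance with M = 100 annotations in total.
    """
    # Expand the count vector into an explicit list of annotations.
    annotations = [label for label, count in enumerate(label_distribution)
                   for _ in range(count)]
    # Shuffle with a fixed seed; using the same seed for every instance
    # (and equal-length lists) gives a consistent permutation across
    # instances, then keep the first k annotations.
    random.Random(seed).shuffle(annotations)
    return annotations[:k]

# Example: an ambiguous 3-class instance with 100 annotations.
print(simulate_annotations([60, 30, 10], k=5, seed=42))
```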
Our proposed approach (see Algorithm 1) encompasses the following steps: * Initially, we generate a list of annotations for each instance by considering the actual instance-level label distribution. [Line 1] * Subsequently, we randomize these annotation lists using a consistent random seed across instances. [Lines 5-6] * Next, we select the first \(k\) annotations from each randomized list, creating the dataset subset \(\mathcal{D}_{k}\). [Lines 4-8] By employing this algorithm, we can generate \(k\) annotations per instance, thereby addressing the limitation of annotator-specific labels in existing multi-annotator datasets. By repeating the algorithm with different random seeds or parameters, we can create multiple dataset subsets \(\mathcal{D}_{k}\), each containing \(k\) annotations per instance. This flexibility enables the generation of diverse subsets, expanding the range of multi-annotator scenarios that can be explored and analyzed in our research.

Figure 1: A Toy Multi-Annotator Dataset

## 4 Experiments

### Datasets

We selected the ChaosNLI dataset (Nie et al., 2020) for our study, as it contains the highest number of annotations (=100) per instance among the publicly available datasets (Plank, 2022). ChaosNLI is a Natural Language Inference (NLI) task dataset known for its high ambiguity. Additionally, the ChaosNLI dataset includes sub-datasets, namely ChaosNLI-S and ChaosNLI-M, which are subsets extracted from the development sets of SNLI (Bowman et al., 2015) and MNLI-matched (Williams et al., 2018), respectively. Another sub-dataset, ChaosNLI-\(\alpha\), is created from the entire development set of AbductiveNLI, hereafter referred to as \(\alpha\)-NLI (Bhagavatula et al., 2019). The ChaosNLI dataset consists of 4,645 instances, each annotated with 100 new annotations. Additionally, the dataset already includes 5 old annotations for ChaosNLI-S and ChaosNLI-M, and 1 old annotation for ChaosNLI-\(\alpha\). Subsequently, we create \(\mathcal{D}_{k}\)'s (see §3) utilizing these datasets and then divide these \(\mathcal{D}_{k}\)'s into train, development, and test sets using an 80:10:10 ratio. Table 1 provides detailed statistics of the datasets used in our study.

### Pretrained Language Models (PLMs)

In our study, we utilize all the pretrained language models (PLMs) reported in the ChaosNLI work by Nie et al. (2020). Specifically, we experiment with BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2020), ALBERT (Lan et al., 2020), and DistilBERT (Sanh et al., 2020). It is important to clarify that our objective is not to showcase state-of-the-art (SOTA) performance using these models, but rather to demonstrate the variations in performance as we incrementally add annotations to the dataset.

### Training Strategies

In this section, we describe two variants of training strategies. **Majority Label (ML):** The PLMs are finetuned using the majority label, which is determined by aggregating annotations from the target list of annotations. The training objective aims to minimize the cross-entropy between the output probability distribution and the one-hot encoded majority label. **Label Distribution (LD):** The PLMs are finetuned using the label distribution from the target list of annotations (Meissner et al., 2021). The training objective aims to minimize the cross-entropy between the output probability distribution and the target label distribution.
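To make the LD objective concrete, the following minimal PyTorch-style sketch (an illustrative reimplementation, not the authors' released code) computes the cross-entropy between the model's output distribution and a target label distribution:

```python
import torch
import torch.nn.functional as F

def label_distribution_loss(logits, target_distribution):
    """Cross-entropy between predicted and target label distributions.

    logits: (batch, num_labels) raw model outputs.
    target_distribution: (batch, num_labels) rows summing to 1, e.g. the
    normalized annotation counts [0.6, 0.3, 0.1] of a 3-class instance.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # H(p, q) = -sum_y p(y) log q(y), averaged over the batch.
    return -(target_distribution * log_probs).sum(dim=-1).mean()

# Example: one instance annotated 60/30/10 across three labels.
logits = torch.tensor([[2.0, 0.5, -1.0]])
target = torch.tensor([[0.6, 0.3, 0.1]])
print(label_distribution_loss(logits, target))
```

For the ML variant, the same model is trained with the standard one-hot cross-entropy against the majority label instead.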
### Evaluation

To evaluate the performance of our models, we utilize the classification accuracy computed on the test dataset. In the ML setting, the accuracy is computed by comparing the label associated with the highest softmax probability predicted by the model with the majority label derived from the target annotations. In the LD setting, the accuracy is computed by comparing the label corresponding to the highest softmax probability predicted by the model with the label that has the highest relative frequency in the target label distribution.

### Experimental Settings

Following the approaches described in the studies (Nie et al., 2020; Meissner et al., 2021), we construct base models by finetuning PLMs (described in §4.2) on the combined train sets of SNLI and MNLI for both ChaosNLI-S and ChaosNLI-M. For the ChaosNLI-\(\alpha\) dataset, we construct base models by finetuning on the train set of \(\alpha\)-NLI. We further finetune these base models with increasing numbers of annotators. Specifically, we finetune models for each \(\mathcal{D}_{k}\), where \(k\in[1,100]\). For each \(k\), we report average performance scores over the test sets of 10 \(\mathcal{D}_{k}\)'s (see §3). We choose hyperparameters from the experimental settings of the following works: Nie et al. (2020), Meissner et al. (2021), and Bhagavatula et al. (2019). Our optimization technique involves employing the AdamW optimizer (Loshchilov and Hutter, 2019). More details on hyperparameters can be found in §A.2. To ensure reproducibility, we conduct our experiments using the open-source Hugging Face Transformers2 library (Wolf et al., 2020). Furthermore, all experiments are performed using 2 \(\times\) NVIDIA RTX 2080 Ti GPUs. Footnote 2: [https://huggingface.co/docs/transformers/](https://huggingface.co/docs/transformers/)

\begin{table} \begin{tabular}{l c c c} \hline \hline **Datasets** & **\#Instances** & \begin{tabular}{c} **\#Annotations** \\ **Per Instance** \\ \end{tabular} & \begin{tabular}{c} **\#Class** \\ **Labels** \\ \end{tabular} \\ \hline **SNLI** & 550,152 & 5 & 3 \\ **MNLI** & 392,702 & 5 & 3 \\ \(\alpha\)-NLI & 169,654 & 1 & 2 \\ **ChaosNLI-S** & 1,524 & 100 & 3 \\ **ChaosNLI-M** & 1,599 & 100 & 3 \\ **ChaosNLI-\(\alpha\)** & 1,532 & 100 & 2 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset Statistics

## 5 Results and Discussion

### Is higher performance always guaranteed by increasing the number of annotations?

Figure 2 presents the accuracy scores as the number of annotations increases. Notably, the trends observed in the performance of ChaosNLI-S, ChaosNLI-M, and ChaosNLI-\(\alpha\) challenge the prevailing belief that increased annotations invariably lead to improved performance. Specifically, for ChaosNLI-S and ChaosNLI-M, the accuracy scores exhibit a non-monotonic increasing pattern. In contrast, the trend observed for ChaosNLI-\(\alpha\), particularly with the BERT and DistilBERT models, deviates from this expected behavior. In these cases, the accuracy scores show a decreasing trend as the number of annotations increases. Upon examining the RoBERTa accuracy scores for the LD setting in ChaosNLI-S, it is observed that the performance reaches a saturation point between 20 and 80 annotations. This means that increasing the number of annotations beyond this range does not result in significant improvement in the accuracy scores. Table 2 provides a complementary perspective on the observed trends.
It highlights that the minimum performance is not consistently associated with the dataset having the fewest annotations, and vice versa. In the case of ChaosNLI-\(\alpha\) with BERT and DistilBERT, it is interesting to note that the optimal performance is achieved with just three annotations. This represents an extreme scenario where a minimal number of annotations can lead to the best performance. In general, these findings shed light on the optimization of our annotation budget. Similarly, the performance gain (maximum minus minimum accuracy) across different datasets also varies significantly. The average performance gain for ChaosNLI-M, ChaosNLI-S and ChaosNLI-\(\alpha\) is 0.106, 0.177, and 0.031, respectively. The notable variability in performance gain across different datasets further emphasizes that the impact of increasing annotations on performance improvement is not consistent. It underscores the need to carefully analyze and understand the specific characteristics of each dataset and model combination to ascertain the relationship between annotation quantity and performance.

To provide an explanation for the observed complex behavior, we utilize \(\mathcal{V}\)-information (Ethayarajh et al., 2022). \(\mathcal{V}\)-information is a measure that quantifies the ease with which a model can predict the output based on a given input. The higher the \(\mathcal{V}\)-information, the easier it is for the model to predict the output given the input. Furthermore, \(\mathcal{V}\)-information cannot be negative unless the model overfits, etc. (see §A.1).

Table 2: The performance of various models in both the ML and LD settings is presented in this table. Values indicate accuracy, and values in braces indicate \(k\).
The values highlighted in bold indicate the optimal number of annotators where the performance reaches its peak compared to the maximum annotation budget allocated (100). Conversely, the highlighted values in the minimum accuracy column indicate the lowest performance achieved compared to the minimum budget allocated (1). This information provides insights into the impact of the number of annotators on the model's performance.

Figure 3 provides a visual representation of the \(\mathcal{V}\)-information scores for the three datasets across five different PLMs. As anticipated, the \(\mathcal{V}\)-information scores are higher for the ChaosNLI-S and ChaosNLI-M datasets. Models that exhibit higher \(\mathcal{V}\)-information scores also tend to yield higher accuracy scores in the LD-based performance evaluation. For instance, RoBERTa outperforms other models (except XLNet, for which the performance is similar) in terms of accuracy for the ChaosNLI-S dataset. The saturation of \(\mathcal{V}\)-information scores starting at \(k=20\) for the ChaosNLI-S dataset effectively explains the observed saturation of LD-based accuracy after 20 annotations, as depicted in Figure 2. This phenomenon suggests that the model reaches a point where additional annotations provide diminishing returns in terms of extracting valuable insights from the instances. Therefore, the model's performance ceases to improve significantly beyond this threshold. For the ChaosNLI-\(\alpha\) dataset, except for RoBERTa and XLNet (\(\mathcal{V}\)-information \(\in[0,0.25]\), comparatively low), all models yielded approximately zero \(\mathcal{V}\)-information scores. This implies that adding more annotations to the ChaosNLI-\(\alpha\) dataset does not establish a clear relationship between the input and the output label distribution. This observation suggests that, for this particular variant of the dataset, the model might rely on factors other than the provided annotations to make accurate predictions.

Figure 2: The figure displays accuracy scores for various models across \(k\) for the datasets ChaosNLI-S, ChaosNLI-M and ChaosNLI-\(\alpha\). For every \(k\) on the X-axis, the mean and standard deviation of the accuracy scores of models trained on 10 \(\mathcal{D}_{k}\)'s are displayed. The detailed plots for ChaosNLI-\(\alpha\) BERT and ChaosNLI-\(\alpha\) DistilBERT can be found in Figure 5 in the Appendix.

The aforementioned findings indicate that not all datasets yield similar performance when trained under the same budget, underscoring the importance of selecting the appropriate dataset for a specific task. Furthermore, these findings emphasize the significance of determining the optimal number of annotators, as the model's performance varies with the increase in annotations.

### Does the number of annotations influence the difficulty of instances as perceived by the model?

To investigate this question, we employ the concept of dataset cartography proposed by Swayamdipta et al. (2020), which leverages training dynamics to distinguish instances based on their (1) confidence, measured as the mean probability of the correct label across epochs, and (2) variability, represented by the variance of the aforementioned confidence.
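For illustration, these two statistics can be computed from per-epoch gold-label probabilities as in the minimal sketch below; the data layout is our assumption, and we use the standard deviation as the variability measure, which is the usual choice in dataset-cartography implementations.

```python
import numpy as np

def training_dynamics(gold_label_probs):
    """Dataset-cartography statistics for a single training instance.

    gold_label_probs: array of shape (num_epochs,) holding the model's
    predicted probability of the gold label at the end of each epoch.
    """
    confidence = float(np.mean(gold_label_probs))   # mean gold-label prob
    variability = float(np.std(gold_label_probs))   # spread across epochs
    return confidence, variability

# An easy-to-learn instance: consistently high confidence, low variability.
print(training_dynamics(np.array([0.85, 0.90, 0.92, 0.95])))
# An ambiguous instance: the gold-label probability fluctuates across epochs.
print(training_dynamics(np.array([0.20, 0.70, 0.30, 0.80])))
```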
This analysis generates a dataset map that identifies three distinct regions of difficulty: _easy-to-learn_, _hard-to-learn_, and instances that are _ambiguous_ with respect to the trained model. _Easy-to-learn_ (**e**) instances exhibit consistently high confidence and low variability, indicating that the model can classify them correctly with confidence. _Hard-to-learn_ (**h**) instances, on the other hand, have low confidence and low variability, indicating the model's struggle to consistently classify them correctly over multiple epochs. _Ambiguous_ (**a**) instances display high variability in the predicted probabilities for the true label. We investigate the proportion of transitions between these categories with the incorporation of additional annotations. For example, \(\textbf{e}\rightarrow\textbf{a}\) represents the proportion of transitions from the _easy-to-learn_ to the _ambiguous_ category among all transitions. This provides valuable insights into the underlying factors that contribute to the observed improvements, or lack thereof, in the model's performance.

Figure 3: The figure displays the \(\mathcal{V}\)-information values for various models in the LD setting. A higher value indicates that the data is easier for the respective model \(\mathcal{V}\) to extract information from. These values can be compared across datasets and models.

Figure 4 illustrates an interesting pattern in the ChaosNLI-S and ChaosNLI-M datasets: as the number of annotations increases, a significant proportion of training instances transition from the \(\textbf{a}\) to the \(\textbf{e}\) category. For instance, more than 60% of all transitions between 1 and 10 annotations involve instances moving from \(\textbf{a}\) to \(\textbf{e}\). However, beyond 10 annotations, the proportion of instances transitioning to the \(\textbf{e}\) from the \(\textbf{a}\) category does not show a substantial increase. On the other hand, the reverse transition \(\textbf{e}\rightarrow\textbf{a}\) is the second most common transition, with an average proportion of 20%. The difference in proportions between the \(\textbf{a}\rightarrow\textbf{e}\) and \(\textbf{e}\rightarrow\textbf{a}\) transitions becomes more substantial (at least 29%) as more annotations are added. In the ChaosNLI-M dataset, we observe a higher proportion of instances transitioning from category **a** to category **h** compared to the ChaosNLI-S dataset. Specifically, over 15% of the ambiguous instances in ChaosNLI-M exhibit a shift towards the hard region, which is more than 50% of the similar transitions observed in ChaosNLI-S. We argue that this substantial difference in transition patterns has a direct impact on the performance of models on the ChaosNLI-S dataset compared to ChaosNLI-M. Despite the presence of higher proportions of \(\mathbf{a}\rightarrow\mathbf{e}\) transitions in ChaosNLI-M compared to ChaosNLI-S, the lower proportion of \(\mathbf{a}\rightarrow\mathbf{h}\) transitions consistently leads to better performance on the ChaosNLI-S dataset across all models analyzed. ChaosNLI-\(\alpha\) exhibits distinct trends across various models. Specifically, in the case of BERT and DistilBERT, where accuracy scores decline as the annotations increase (see Figure 2), we witness significant proportions of \(\mathbf{e}\rightarrow\mathbf{a}\) (\(\sim 80\%\)) and \(\mathbf{a}\rightarrow\mathbf{h}\) (\(\sim 43\%\)) transitions, respectively.
These transitions suggest that the models struggle to comprehend the instances and classify them with reduced confidence. For XLNet and ALBERT, the combined proportion of the low-confidence transitions \(\mathbf{e}\rightarrow\mathbf{a}\) and \(\mathbf{a}\rightarrow\mathbf{h}\) either surpasses or remains equal to the proportion of the high-confidence transition \(\mathbf{a}\rightarrow\mathbf{e}\). In the case of RoBERTa, the behavior is the same as for ChaosNLI-S and ChaosNLI-M. These results suggest that adding more annotations does affect the difficulty of instances, thereby affecting the performance of the model.

Figure 4: The figure provides a visual representation of the transition of instances between different categories during training as the number of annotators increases from \(A_{1}\) to \(A_{10},\ldots,A_{100}\). \(\mathbf{e}\rightarrow\mathbf{a}\) indicates the percentage of instances that transitioned from category \(\mathbf{e}\) to \(\mathbf{a}\).

## 6 Related Works

**Human disagreements in annotations.** Traditional approaches like majority voting or averaging can overlook important nuances in subjective NLP tasks, where human disagreements are prevalent. To address this issue, multi-annotator models treat annotators' judgments as separate subtasks, capturing the distribution of human opinions, which challenges the validity of models relying on a majority label with high agreement as ground truth (Davani et al., 2022; Nie et al., 2020). Human variation in labeling, which is often considered noise (Pavlick and Kwiatkowski, 2019), should be acknowledged in order to optimize and maximize machine learning metrics, as it impacts all stages of the ML pipeline (Plank, 2022). Incorporating annotation instructions that account for instruction bias (Parmar et al., 2023), which leads to the over-representation of similar examples, is crucial. This bias can limit model generalizability and performance. Future data collection efforts should focus on evaluating model outputs against the distribution of collective human opinions to address this issue. All of the above works study annotator disagreements and how they affect the performance of models on downstream tasks. In our work, taking the effect of disagreements on model performance into account, we investigate how model performance varies as we increase the number of annotations per instance, i.e., as we vary the annotator disagreement. Overall, we try to answer: does more annotation per instance lead to better performance, or is it the other way around?

**Annotation under a restricted annotation budget.** Prior studies have also investigated how to achieve optimal performance in natural language processing (NLP) models under restricted annotation budgets. One such study by Sheng et al. (2008) examined the impact of repeated labeling on the quality of data and model performance when labeling is imperfect and/or costly. Another study by Bai et al. (2021) framed domain adaptation with a constrained budget as a consumer choice problem and evaluated the utility of different combinations of pretraining and data annotation under varying budget constraints. Another study by Zhang et al. (2021) explored new annotation distribution schemes, assigning multiple labels per example for a small subset of training examples, and proposed a learning algorithm that efficiently combines signals from uneven training data.
Finally, a study by Chen et al. (2022) proposed an approach that reserves a fraction of annotations to explicitly clean up highly probable error samples to optimize the annotation process. All these studies contribute to the understanding of how to maximize the performance of NLP models under restricted annotation budgets. Our study aimed to address a specific question within this context: assuming a fixed annotation budget, which dataset would yield the highest performance? Previous studies have demonstrated that annotation disagreements affect model performance; our study explores how performance varies as we change the level of disagreement. We build on ideas from Zhang et al. (2021), who proposed a learning algorithm that can learn from training examples with different amounts of annotation (5-way, 10-way, 20-way) in a multi-label setting, but we expand the number of annotations from 1-way to 100-way and train our model in a label-distribution setting rather than in a multi-label setting. To investigate the reasons for performance variation as we increase the number of annotations, we incorporate Swayamdipta et al. (2020)'s ideas and Ethayarajh et al. (2022)'s concepts of dataset difficulty. While previous studies focused on building datasets and models and their impact on performance when the annotation budget is restricted, our work answers whether increasing the annotation budget necessarily leads to improved model performance. Overall, our study aims to demonstrate that, even with a smaller annotation budget than the upper bound, it is possible to achieve performance comparable to that at the upper bound, thereby saving annotation budget and time. Our findings provide insights into optimizing annotation budgets.

## 7 Conclusion

In this paper, we introduced a novel approach to handle the absence of annotator-specific labels in a dataset through a multi-annotator simulation process. Additionally, we investigated the impact of varying the number of annotations per instance on the difficulty of instances and its effect on model performance. Our results highlighted that increasing the number of annotations does not always lead to improved performance, emphasizing the need to determine an optimal number of annotators. This has important implications for optimizing annotation budgets and saving time. Our findings provide valuable insights for optimizing annotation strategies and open up new possibilities for future research in this direction.

## Limitations

The current study acknowledges several limitations that deserve attention. Firstly, the experiments were conducted using small-size language models due to resource constraints. It is important to recognize that employing larger language models, such as BLOOM, GPT, and others, could potentially yield different outcomes and should be explored in future research. Furthermore, the scope of the discussion is constrained by the availability of datasets with a large number of labels per instance, leading to the utilization of the ChaosNLI dataset (Nie et al., 2020). Consequently, the generalizability of the findings to other datasets, if they emerge in the future, might be restricted.

## Acknowledgements

We express our gratitude to the anonymous reviewers for their insightful feedback. Our research has received support through the UGC-JRF fellowship from the Ministry of Education, Government of India. Additionally, we would like to extend our thanks to our colleague, Mr. Shrutimoy Das, a Ph.D.
student at IIT Gandhinagar, who provided the initial review of this paper and generously shared GPU resources for essential side experiments during critical phases of our research. We are grateful for this support, which contributed significantly to the success of this study.
2301.02451
FMCW Radar Sensing for Indoor Drones Using Learned Representations
Frequency-modulated continuous-wave (FMCW) radar is a promising sensor technology for indoor drones as it provides range, angular as well as Doppler-velocity information about obstacles in the environment. Recently, deep learning approaches have been proposed for processing FMCW data, outperforming traditional detection techniques on range-Doppler or range-azimuth maps. However, these techniques come at a cost; for each novel task a deep neural network architecture has to be trained on high-dimensional input data, stressing both data bandwidth and processing budget. In this paper, we investigate unsupervised learning techniques that generate low-dimensional representations from FMCW radar data, and evaluate to what extent these representations can be reused for multiple downstream tasks. To this end, we introduce a novel dataset of raw radar ADC data recorded from a radar mounted on a flying drone platform in an indoor environment, together with ground truth detection targets. We show with real radar data that, utilizing our learned representations, we match the performance of conventional radar processing techniques and that our model can be trained on different input modalities such as raw ADC samples of only two consecutively transmitted chirps.
Ali Safa, Tim Verbelen, Ozan Catal, Toon Van de Maele, Matthias Hartmann, Bart Dhoedt, André Bourdoux
2023-01-06T10:20:00Z
http://arxiv.org/abs/2301.02451v1
# FMCW Radar Sensing for Indoor Drones Using Learned Representations ###### Abstract Frequency-modulated continuous-wave (FMCW) radar is a promising sensor technology for indoor drones as it provides range, angular as well as Doppler-velocity information about obstacles in the environment. Recently, deep learning approaches have been proposed for processing FMCW data, outperforming traditional detection techniques on range-Doppler or range-azimuth maps. However, these techniques come at a cost; for each novel task a deep neural network architecture has to be trained on high-dimensional input data, stressing both data bandwidth and processing budget. In this paper, we investigate unsupervised learning techniques that generate low-dimensional representations from FMCW radar data, and evaluate to what extent these representations can be reused for multiple downstream tasks. To this end, we introduce a novel dataset of raw radar ADC data recorded from a radar mounted on a flying drone platform in an indoor environment, together with ground truth detection targets. We show with real radar data that, utilizing our learned representations, we match the performance of conventional radar processing techniques and that our model can be trained on different input modalities such as raw ADC samples of only two consecutively transmitted chirps. ## Supplementary Material We release the dataset used in this work in the link below1. Footnote 1: [https://thesmartrobot.github.io/datasets](https://thesmartrobot.github.io/datasets) ## I Introduction Indoor flying with drones is significantly more difficult than flying outdoors, due to the lack of exact positioning information such as GPS, and the more stringent requirements on obstacle detection and avoidance [1]. Building a coherent view of the world from sensory observations is therefore an important challenge for autonomous indoor drones. An often overlooked sensor, which is useful for this purpose, is the frequency-modulated continuous-wave (FMCW) radar. The ability to get robust range, velocity and angle estimates from a single radar frame provides excellent additional information to the traditionally used accelerometer, gyro and camera sensors. Conventional radar processing approaches process raw radar ADC values through Fourier processing into range, velocity and angular information [2]. However, the resulting data format is still high dimensional and thus also requires high bandwidth connections between the radar sensor and the downstream processing unit. Often, this Fourier processing is followed by Constant False Alarm Rate (CFAR) algorithms that result in a list of detected targets, their distance, velocity and angle of arrival [3]. Recently, deep learning techniques have proven to be well suited for various sensor processing tasks. This is also the case for radar, where their feature learning capabilities often make them better suited than classical algorithms, outperforming techniques based on handcrafted feature extraction [4]. Such deep neural networks (DNNs) typically operate on the Fourier processed data, such as range-azimuth-Doppler maps [5] or micro-Doppler maps [6]. The weakness of these approaches, however, is that retraining of the DNN on high-dimensional preprocessed radar maps is needed for each task at hand. In this paper, we investigate whether _unsupervised_ deep learning approaches can be leveraged to yield a low-dimensional radar representation that can be reused for a wide range of downstream tasks. 
In addition, we also research to what extent such representations can be learned from raw ADC data of _only two chirps_, instead of a complete chirp frame typically used in radar processing [5, 6], potentially saving energy usage and data bandwidth. To this end, we focus on an indoor drone flight scenario, and introduce a novel large-scale dataset of raw FMCW radar samples as well as ground truth information for a number of downstream target detection tasks. Our results show that low-dimensional, unsupervised learned representations from range-Doppler and range-azimuth maps can successfully embed range, velocity and angular information of targets in the environment. Also, representations trained on data from only two chirps contain relevant range, velocity and angular information. To summarize, we make the following contributions: * We present a novel, large-scale dataset of RGBD and raw FMCW radar data captured on an indoor flying drone 1. Footnote 1: [https://thesmartrobot.github.io/datasets](https://thesmartrobot.github.io/datasets) * We propose a generic unsupervised learning method for acquiring low-dimensional representations of radar data given different input formats. * We evaluate the effectiveness of these representations on a set of downstream tasks, and compare our results against traditional radar processing techniques. * We show that our method can perform successfully even with raw ADC samples of only two consecutive chirps. This paper is organized as follows. Related work is covered in Section II. Our dataset acquisition is explained in Section III. Our various neural network models are presented in Section IV. Experimental results are shown in Section V. Conclusions are provided in Section VI. ## II Related work FMCW radar has been extensively used for range, velocity and angle-of-arrival detection [2], especially in the context of automotive applications. Recently, there has been increasing interest in utilizing deep neural networks to improve radar processing, e.g. focusing on radar antenna design, radar signal recognition, automatic target recognition based on high range resolution profiles, and clutter recognition and suppression [7]. Most related work operates in the application area of autonomous driving. For example, Major et al. [5] used a convolutional neural network that processes range-azimuth-Doppler tensors for vehicle detection. Patel et al. [8] similarly utilize a convolutional neural network for object classification in a supervised manner, while Wheeler et al. [4] proposed a VAE architecture fine-tuned to range-azimuth maps of autonomous driving data to learn an accurate radar sensor model. In the context of indoor flying drones, radars have been used as a sensor for obstacle detection [3] and odometry estimation [9]. Deep learning techniques have been further proposed for localization and activity classification of drones using micro-Doppler signatures [6]. The main drawback of current deep learning approaches for radar is that they operate on the high-dimensional radar maps, and have to be retrained for each novel task. In contrast, we propose to learn generic low-dimensional radar representations that can be reused for a number of downstream tasks. ## III Dataset We collected a novel indoor drone flying dataset using an NXP hovergames drone in a warehouse lab setting. This dataset is the first of its kind since it contains both RGBD camera data as well as FMCW radar.
In addition, we log the raw ADC radar samples, in contrast to the more common post-processed radar maps. Our drone is shown in Fig. 2, and carries both a TI IWR 1443 mmWave radar and an Intel Realsense D435 camera. The specific sensor configuration parameters used are given in Table I. In addition to RGBD and FMCW data, we also store the internal Extended Kalman Filter (EKF) state of the PX4 flight controller, as well as the actuator commands (yaw-pitch-roll-thrust) sent by the radio remote control. All data is recorded on a Jetson Nano onboard computer, and is synchronized to the radar frame rate of 5 FPS. We also record a 6-DOF pose using a Qualisys marker-based motion capture system (MOCAP), providing a ground truth signal for a subset of the downstream regression tasks defined in Section V-A. Data is collected in three recording scenarios in which the drone is manually flown through the lab, as shown in Fig. 3. In the first scenario, the drone flies throughout the lab, in between narrow aisles mimicking a warehouse layout. In the second scenario, the drone flies from wall to wall in the open space, tracked by the MOCAP system. In the third scenario, a corner reflector, i.e. an aluminium foil pyramidal reflector (also depicted in Fig. 1e), is placed at the center of the open space (i.e. at the origin of the MOCAP reference frame), and the drone hovers in front of the corner reflector. For scenarios 1 to 3, we collected 11274, 5005 and 2798 frames, respectively, totaling 38GB of data. In addition to the raw ADC samples, we also include post-processed range-Doppler and range-azimuth maps in the dataset. The range-Doppler maps are generated by 2D Fourier transforming the raw ADC samples over the fast- and slow-time, and then averaging the amplitude over all antennas. For the range-azimuth map, we use Capon beamforming [10], with 128 angle bins, resulting in range-azimuth maps of identical size to the range-Doppler maps. This allows us to re-use the same neural network architecture for both modalities as described later. Fig. 1 shows examples of the collected dataset, with the camera and depth data, as well as corresponding range-Doppler and range-azimuth maps of the radar data. ## IV Models As we are interested in generating relevant intermediate representations suitable for a wide array of downstream tasks, we propose three different generative models for radar processing, all based on the well-established variational autoencoder (VAE) framework [11, 12]. The high-level data-flow is shown in Fig. 4. Similarly to existing related work on object detection in automotive scenarios, we build a VAE model that operates on range-azimuth (RA) and range-Doppler (RD) maps. Finally, we also propose a novel approach to radar processing which leverages a generative modelling approach to generate RA and RD maps from raw ADC samples directly. ### _Variational Autoencoders_ All our proposed architectures are based on the Variational AutoEncoder (VAE) framework for generative modelling [11, 12]. A VAE amortizes the variational Bayesian inference process using two neural networks: an encoder and decoder network.
These are jointly optimized by maximizing the evidence lower bound (ELBO), or equivalently by minimizing the following loss function:

\[\mathcal{L}=-E_{q}\big[\log p_{\theta}(x|z)\big]+D_{KL}\big[q_{\phi}(z|x)\,\|\,p(z)\big], \tag{1}\]

where \(p(z)\) represents a (fixed) prior distribution, \(p_{\theta}(x|z)\) the learned likelihood (decoder) distribution, and \(q_{\phi}(z|x)\) the learned variational posterior (encoder) distribution, with \(x\) and \(z\) the observed and latent variables respectively. Concretely, the encoder maps high-dimensional sensor inputs to a low-dimensional state space, which is parameterized as a multivariate, isotropic Gaussian by outputting distribution means and standard deviations from the encoder model. As a prior distribution, a standard Normal distribution is typically chosen, \(p(z)=\mathcal{N}(0,1)\).

\begin{table} \begin{tabular}{|l c|} \hline **TI IWR 1443 mmWave radar** & \\ \hline number of chirps & 128 \\ number of transmit antennas & 2 \\ number of receive antennas & 4 \\ number of samples per chirp & 256 \\ start frequency & 77GHz \\ frequency slope & 50e12 \\ sample rate & 6.24e6 \\ ADC start time & 11 \(\mu\)s \\ ramp end time & 68 \(\mu\)s \\ idle time & 40 \(\mu\)s \\ \hline **Intel Realsense D435 camera** & \\ \hline color resolution (W x H) & 640 \(\times\) 480 \\ depth resolution (W x H) & 640 \(\times\) 480 \\ \hline \end{tabular} \end{table} TABLE I: Sensors configuration.

Fig. 1: Example of camera, depth, range-Doppler and range-azimuth views from scenario 1 (a,b,c,d), flying through the aisles, and scenario 3 (e,f,g,h), hovering in front of a corner reflector.

Fig. 2: We use an NXP drone equipped with TI 79GHz mmWave radar and Intel Realsense D435 RGBD camera. The drone reference frame is tracked by a Qualisys MOCAP system when in view of the cameras.

Fig. 3: We record data in three scenarios: (a) scenario 1: flying trajectories through the aisles, (b) scenario 2: flying from one wall to another, and (c) scenario 3: hovering in front of a corner reflector.

### _Range-Doppler VAE_

The first model (Fig. 4a) we investigate uses the range-Doppler maps as input and target output. The input RD-map is generated out of the raw radar ADC samples through range-Doppler Fourier processing [3]. As likelihood loss the mean squared error (MSE) is used. The model is parameterized using convolutional neural nets with 32 latent dimensions, following the architecture described in Table II.

### _Range-azimuth VAE_

Similarly, we extract the RA-map from the raw ADC samples by using an FFT along the fast-time, after which we use the Capon method for beamforming [10], and treat the resulting maps as inputs and targets of the RA-model (Fig. 4b). Thanks to the similar shape of the input data, the RA-model can share the same architecture as the RD-model.

### _Chirp VAE_

The third model we propose is one that takes the first two raw radar chirps and encodes them into a latent space as in Sections IV-B and IV-C. In this case, however, two different decoders are used to decode the latent value either into an RA-map or an RD-map (Fig. 4c). This way, the model can learn by itself the relevant features necessary to create these maps instead of depending upon predefined Fourier features. This model reuses the same decoder architecture as both the RA and RD models; however, it needs a different encoder, the parameters of which can be found in Table II.
On the same architecture we also experimented with transforming the raw ADC samples through the range-FFT transform and then taking the amplitude and phase values of the resulting signal; we call this model Chirp VAE + FFT further in the discussion.

### _Training_

We train all models using the Adam optimizer [13] with an initial learning rate of 1e-4, and use the GECO [14] optimization technique, which puts more weight on the reconstruction term at the start of training to avoid posterior collapse. As a pre-processing step, the range-Doppler and range-azimuth maps are rescaled and squashed using a Sigmoid function to ensure inputs between 0 and 1 as neural network input. We train the models for 10k epochs using the scenario 1 dataset only.

## V Experimental results

In order to validate the effectiveness of our learned representations, we define four downstream tasks for which we obtained ground truth information. These tasks can be addressed using traditional radar processing techniques and focus on estimating range information (i.e. distance to wall or target), velocity information (i.e. forward moving velocity) and angular information (i.e. angle of arrival w.r.t. a target). To evaluate the different approaches at test time on the downstream tasks, we fix the weights of the encoder models and in addition train a fully connected neural network with two layers of 128 hidden neurons, in order to regress the task target values. We then report the test set performance and compare to the hand-crafted radar processing baselines.

\begin{table} \begin{tabular}{c|c|c|c} & Layer & Neurons/Filters & activation function \\ \hline \hline \multirow{5}{*}{Encoder (RD/RA)} & Convolutional & 16 & Leaky ReLU \\ & Convolutional & 16 & Leaky ReLU \\ & Convolutional & 32 & Leaky ReLU \\ & Linear & 1024 & Leaky ReLU \\ & Linear & 64 & None (mean) and Softplus (stdev) \\ \hline \multirow{5}{*}{Encoder (chirp)} & Convolutional & 512 & Leaky ReLU \\ & Convolutional & 512 & Leaky ReLU \\ & Convolutional & 32 & Leaky ReLU \\ & Linear & 512 & Leaky ReLU \\ & Linear & 64 & None (mean) and Softplus (stdev) \\ \hline \multirow{7}{*}{Likelihood (decoder)} & Linear & 1024 & Leaky ReLU \\ & Linear & 4096 & Leaky ReLU \\ & Convolutional & 32 & Leaky ReLU \\ & Convolutional & 32 & Leaky ReLU \\ & Convolutional & 16 & Leaky ReLU \\ & Convolutional & 16 & Leaky ReLU \\ & Convolutional & 1 & None (RD) or Sigmoid (RA) \\ \hline \end{tabular} \end{table} TABLE II: Neural network architectures of the various models. All convolutional layers have a 3x3 kernel. The convolutional layers in the Likelihood model have a stride and padding of 1 to ensure that they preserve the input shape. Upsampling is done by nearest neighbour interpolation. The encoder output represents a 32-dimensional isotropic Gaussian with 32 means and 32 standard deviations.

Fig. 4: The different proposed sensor models. The range-Doppler (RD) VAE (figure a) takes a range-Doppler map as input. The range-azimuth (RA) VAE (figure b) takes a range-azimuth map as input. The chirp VAE (figure c) takes the ADC samples of two consecutively transmitted chirps as input and reconstructs both the range-Doppler and range-azimuth maps corresponding to that observation.

In what follows, we first introduce the different tasks, how we acquired ground truth and performed baseline processing, after which we discuss our results.
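To make the evaluation protocol concrete, a minimal PyTorch-style sketch of the frozen-encoder regression head is shown below; the encoder interface returning a (mean, stdev) pair is our assumption based on Table II, and the layer names are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DownstreamRegressor(nn.Module):
    """Two-layer MLP (128 hidden units each) regressing a scalar task
    target from a frozen pretrained radar encoder, as described above."""

    def __init__(self, encoder, latent_dim=32, hidden=128):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # freeze the VAE encoder
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),             # scalar regression output
        )

    def forward(self, radar_input):
        with torch.no_grad():
            # Assumed encoder interface: returns (mean, stdev) of the
            # 32-dimensional isotropic Gaussian; we regress from the mean.
            mean, _ = self.encoder(radar_input)
        return self.head(mean)
```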
### _Downstream tasks_

#### V-A1 Distance to wall

For the first task we use the data collected in scenario 2, in which the drone flies straight ahead towards a wall, makes a U-turn and flies towards the other side. The task is to regress the distance towards the wall, as depicted in Fig. 5(a). To obtain ground truth data, we use pose information of the MOCAP system to calculate the distance to the wall. As radar processing baseline, we detect the highest peak in the range-azimuth map, and calculate the distance based on the range bin and the radar range resolution.

#### V-A2 Forward velocity

In addition to estimating the distance to the wall, we also use the data from scenario 2 to estimate the forward velocity of the drone flying towards the wall. The ground truth is established by averaging the distance covered between the previous and next time step, divided by the time interval provided by the timestamps. As traditional radar processing baseline, we now detect the highest peak in the range-Doppler map, and estimate the velocity from the Doppler bin.

#### V-A3 Distance to corner reflector target

For target detection we use the data collected in scenario 3, in which a corner reflector was put in the middle of the space, and the drone hovers at various ranges and angles w.r.t. the reflector. As the corner reflector was mounted at the origin of the MOCAP reference frame, the ground truth distance to target is recovered from the drone pose. As traditional radar processing baseline, we track the highest peak in the range-azimuth map (only considering ranges between 0 and 5m).

#### V-A4 Angle to corner reflector target

In addition to the target distance, we also recover the angle of arrival of the corner reflector, as shown in Fig. 5(b). In this case, the traditional radar processing baseline tracks the angle bin of maximal amplitude of the reflector peak in the range-azimuth map.

### _Results_

For all tasks, we adopt a 5/6 to 1/6 train-test split ratio, and train a separate downstream-task neural network for each target task for 500 epochs. We report the test set RMSE against the ground truth, together with the median error as well as the inter-quartile difference. Table III summarizes the results. The best performing model for the distance and velocity tasks is the RDVAE, which is trained on range-Doppler maps. Note that in general the radar processing methods provide a strong baseline, but result in a worse RMSE. This is due to a number of outliers, i.e. when a faulty peak is detected in the spectrum, and is reflected by the lower inter-quartile difference. Also, for the distance estimation task, the radar processing baseline suffers from a bias w.r.t. the ground truth signal, as it overestimates the distance due to the drone not being perfectly perpendicularly aligned to the opposing wall when moving forward. This is reflected in the significant median error. Fig. 6 compares the results of the traditional radar processing baseline, the RDVAE and the CVAE against the ground truth. This illustrates how radar processing slightly overestimates larger distances, and suffers from some outliers in the distance predictions. For angular estimation the RAVAE model trained on range-azimuth maps provides the lowest RMSE. Fig. 7 compares the ground truth angle over time with CVAE, RAVAE and radar processing. Although radar processing closely matches the ground truth signal most of the time, occasional outliers again yield a large RMSE.
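The reported error statistics can be computed as in the short sketch below; taking the median of the absolute errors and the inter-quartile range of the raw errors is our reading of "median error" and "inter-quartile difference", not a definition given in the paper.

```python
import numpy as np

def error_statistics(predictions, ground_truth):
    """RMSE, median absolute error, and inter-quartile range of errors."""
    errors = np.asarray(predictions) - np.asarray(ground_truth)
    rmse = np.sqrt(np.mean(errors ** 2))
    median_error = np.median(np.abs(errors))
    q1, q3 = np.percentile(errors, [25, 75])
    return rmse, median_error, q3 - q1

# Example: a few wall-distance predictions (metres) against ground truth.
print(error_statistics([2.1, 3.9, 6.2], [2.0, 4.0, 5.0]))
```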
Although the CVAE models perform worse, they do capture some of the characteristics of the distance, velocity and angular information (see Fig. 6 and 7). Given that the CVAEs only require the radar to send and receive two chirps and adopt a less complex encoder model, they could provide a lower-latency and lower-power alternative to full-resolution range-azimuth-Doppler maps, at the cost of less accurate predictions.

### _Discussion_

It is important to note that the results of Table III are obtained despite training the unsupervised models on the scenario 1 data, which has different characteristics to the data recorded in scenarios 2 and 3. The flights between the aisles contain more clutter and reflections compared to the flights in the open space. This restriction was due to the fact that we required ground-truth positioning information for the downstream tasks in order to fairly conduct our evaluation. It is interesting to note that despite this discrepancy, the learned representations still capture the necessary information to address the downstream tasks. An important point of future work is to record data in a more diverse set of environments in order to further evaluate to what extent our learned representations can generalize. In addition, one could think of more complex scenarios and downstream tasks, such as detecting moving obstacles or detecting loop closures in a simultaneous localization and mapping (SLAM) setting.

Fig. 5: We consider four downstream tasks: (a) estimate distance \(d\) (i) to the wall and forward drone velocity \(v\) (ii), and (b) estimate distance \(d\) (iii) and angle \(\alpha\) to corner reflector (iv).

Fig. 6: Ground truth distance to wall compared with radar processing, CVAE and RDVAE.

Despite the inferior performance of the models acting on raw ADC data, these might still be an interesting avenue for future research, given that they need 64 times fewer chirps and fewer pre-processing steps. For example, different chirp encoder architectures using e.g. recurrent layers could be investigated for refining the radar representations. In doing so, an adaptive power-versus-accuracy trade-off could be provided during radar processing, by only increasing the number of chirps to improve the accuracy of the system when needed.

## VI Conclusion

In this paper, we have proposed a method to learn low-dimensional representations of FMCW radar data. We have experimented with a number of encoder-decoder architectures respectively operating on range-Doppler maps, range-azimuth maps or raw chirp data. To benchmark the effectiveness of the resulting representations, we have compared neural network regressors against traditional hand-crafted radar processing for a number of downstream tasks. For this purpose, we have presented a novel dataset recording both camera RGBD and FMCW radar data from a drone flying in an indoor warehouse environment. This paper has provided what is, to the best of our knowledge, one of the first works that address the issue of learning robust radar representations by developing novel VAE-based neural architectures for processing radar data, and evaluating these on a set of distinct downstream tasks. As future work, we will record data in a more diverse set of environments, in order to also address to what extent such representations can generalize to multiple environments, and experiment with other chirp encoder architectures.
2302.04570
NeuKron: Constant-Size Lossy Compression of Sparse Reorderable Matrices and Tensors
Many real-world data are naturally represented as a sparse reorderable matrix, whose rows and columns can be arbitrarily ordered (e.g., the adjacency matrix of a bipartite graph). Storing a sparse matrix in conventional ways requires an amount of space linear in the number of non-zeros, and lossy compression of sparse matrices (e.g., Truncated SVD) typically requires an amount of space linear in the number of rows and columns. In this work, we propose NeuKron for compressing a sparse reorderable matrix into a constant-size space. NeuKron generalizes Kronecker products using a recurrent neural network with a constant number of parameters. NeuKron updates the parameters so that a given matrix is approximated by the product and reorders the rows and columns of the matrix to facilitate the approximation. The updates take time linear in the number of non-zeros in the input matrix, and the approximation of each entry can be retrieved in logarithmic time. We also extend NeuKron to compress sparse reorderable tensors (e.g. multi-layer graphs), which generalize matrices. Through experiments on ten real-world datasets, we show that NeuKron is (a) Compact: requiring up to five orders of magnitude less space than its best competitor with similar approximation errors, (b) Accurate: giving up to 10x smaller approximation error than its best competitors with similar size outputs, and (c) Scalable: successfully compressing a matrix with over 230 million non-zero entries.
Taehyung Kwon, Jihoon Ko, Jinhong Jung, Kijung Shin
2023-02-09T11:17:34Z
http://arxiv.org/abs/2302.04570v2
# NeuKron: Constant-Size Lossy Compression of Sparse Reorderable Matrices and Tensors

###### Abstract.

Many real-world data are naturally represented as a sparse reorderable matrix, whose rows and columns can be arbitrarily ordered (e.g., the adjacency matrix of a bipartite graph). Storing a sparse matrix in conventional ways requires an amount of space linear in the number of non-zeros, and lossy compression of sparse matrices (e.g., Truncated SVD) typically requires an amount of space linear in the number of rows and columns. In this work, we propose NeuKron for compressing a sparse reorderable matrix into a constant-size space. NeuKron generalizes Kronecker products using a recurrent neural network with a constant number of parameters. NeuKron updates the parameters so that a given matrix is approximated by the product and reorders the rows and columns of the matrix to facilitate the approximation. The updates take time linear in the number of non-zeros in the input matrix, and the approximation of each entry can be retrieved in logarithmic time. We also extend NeuKron to compress sparse reorderable tensors (e.g., multi-layer graphs), which generalize matrices. Through experiments on ten real-world datasets, we show that NeuKron is **(a) Compact**: requiring up to five orders of magnitude less space than its best competitor with similar approximation errors, **(b) Accurate**: giving up to \(10\times\) smaller approximation error than its best competitors with similar size outputs, and **(c) Scalable**: successfully compressing a matrix with over 230 million non-zero entries.

Data Compression, Sparse Matrix, Sparse Tensor

+ Footnote †: Both authors contributed equally to this research.
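As context for how a Kronecker-product model admits logarithmic-time entry retrieval, here is a minimal sketch of the plain (non-neural) Kronecker power of a fixed seed matrix; NeuKron replaces the fixed seed with an RNN-parameterized generalization, which this sketch does not implement. The seed values and function name are illustrative.

```python
import numpy as np

# Hypothetical 2x2 seed matrix; NeuKron replaces this fixed seed with a
# recurrent network that emits values, but the lookup pattern is the same.
S = np.array([[0.9, 0.5],
              [0.5, 0.1]])

def kron_power_entry(i: int, j: int, k: int, seed: np.ndarray = S) -> float:
    """Return entry (i, j) of the k-th Kronecker power of `seed`.

    The entry is a product of k seed entries, one per digit of the base-n
    expansions of i and j, so lookup costs O(k) = O(log N) for an
    N x N approximated matrix (N = n**k)."""
    n = seed.shape[0]
    value = 1.0
    for _ in range(k):
        value *= seed[i % n, j % n]
        i //= n
        j //= n
    return value

# Sanity check against an explicit Kronecker power for k = 3 (an 8x8 matrix).
K3 = np.kron(np.kron(S, S), S)
assert np.isclose(kron_power_entry(5, 2, 3), K3[5, 2])
```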
2304.00498
Adversary-Aware Partial label learning with Label distillation
To ensure that the data collected from human subjects is entrusted with a secret, rival labels are introduced to conceal the information provided by the participants on purpose. The corresponding learning task can be formulated as a noisy partial-label learning problem. However, conventional partial-label learning (PLL) methods are still vulnerable to a high ratio of noisy partial labels, especially in a large labelling space. To learn a more robust model, we present Adversary-Aware Partial Label Learning and introduce the $\textit{rival}$, a set of noisy labels, to the collection of candidate labels for each instance. By introducing the rival label, the predictive distribution of PLL is factorised such that a handy predictive label is achieved with less uncertainty coming from the transition matrix, assuming the rival generation process is known. Nonetheless, the predictive accuracy is still insufficient to produce a sufficiently accurate positive sample set to leverage the clustering effect of the contrastive loss function. Moreover, the inclusion of rivals also brings an inconsistency issue for the classifier and risk function due to the intractability of the transition matrix. Consequently, an adversarial teacher within momentum (ATM) disambiguation algorithm is proposed to cope with the situation, allowing us to obtain a provably consistent classifier and risk function. In addition, our method has shown high resiliency to the choice of the label noise transition matrix. Extensive experiments demonstrate that our method achieves promising results on the CIFAR10, CIFAR100 and CUB200 datasets.
Cheng Chen, Yueming Lyu, Ivor W. Tsang
2023-04-02T10:18:30Z
http://arxiv.org/abs/2304.00498v1
# Adversary-Aware Partial Label Learning with Label Distillation

###### Abstract

To ensure that the data collected from human subjects is entrusted with a secret, rival labels are introduced to conceal the information provided by the participants on purpose. The corresponding learning task can be formulated as a noisy partial-label learning problem. However, conventional partial-label learning (PLL) methods are still vulnerable to a high ratio of noisy partial labels, especially in a large labelling space. To learn a more robust model, we present Adversary-Aware Partial Label Learning and introduce the _rival_, a set of noisy labels, to the collection of candidate labels for each instance. By introducing the rival label, the predictive distribution of PLL is factorised such that a handy predictive label is achieved with less uncertainty coming from the transition matrix, assuming the rival generation process is known. Nonetheless, the predictive accuracy is still insufficient to produce a sufficiently accurate positive sample set to leverage the clustering effect of the contrastive loss function. Moreover, the inclusion of rivals also brings an inconsistency issue for the classifier and risk function due to the intractability of the transition matrix. Consequently, an adversarial teacher within momentum (ATM) disambiguation algorithm is proposed to cope with the situation, allowing us to obtain a provably consistent classifier and risk function. In addition, our method has shown high resiliency to the choice of the label noise transition matrix. Extensive experiments demonstrate that our method achieves promising results on the CIFAR10, CIFAR100 and CUB200 datasets.

## 1 Introduction

Deep learning algorithms depend heavily on large-scale, accurately annotated training datasets. Nonetheless, the costs of annotating a large volume of true labels are exorbitant, not to mention the time invested in the labelling procedure. As a result, weakly supervised labels such as partial labels, which substitute for true labels in learning, have proliferated and gained massive popularity in recent years. Partial-label learning (PLL) is a special weakly-supervised learning problem in which each instance is associated with a set of candidate labels \(\vec{Y}\), among which only one true latent label \(y\) exists. Nonetheless, without an appropriately designed learning algorithm, the limitations of partial labels are evident, since deep neural networks remain vulnerable to the ambiguity rooted in the partial label problem due to noisy labels Zhou (2018); Patrini et al. (2017); Han et al. (2018). As a result, many partial-label learning (PLL) works Cour et al. (2011); Hullermeier and Beringer (2006); Feng and An (2019); Feng et al. (2020) have successfully addressed the ambiguity problem where there is a set of candidate labels for each instance and only one true label exists. Apart from the general partial label, a variety of partial-label generation settings has also evolved, simulating different real-life scenarios. Independent and uniform drawing is the setting seen most often Lv et al. (2020); Feng and An (2019). Other problem settings include instance-dependent partial label learning, where each partial label set is generated depending on the instance as well as the true label Xu et al. (2021). Furthermore, Lv et al.
(2020) introduced label-specific partial label learning, where the uniform flipping probability of similar instances differs from that of dissimilar instance groups. Overall, the learning objective of the previous works is all about disambiguation. More specifically, the goal is to design a classifier trained with partial labels that correctly labels the testing dataset, hoping the classification performance will be as close as possible to that of fully supervised learning. On the contrary, previous works rarely shed light on data privacy-enhancing techniques in general partial label learning. The privacy risk is inescapable; thus, privacy-preserving techniques need to be urgently addressed. Recently, we have seen surging data breach cases worldwide. The potential risks posed by attackers are often overlooked and pose a detrimental threat to society. For instance, an adversary is likely to learn from stolen or leaked partially labelled data for illegal conduct using previously proposed partial-label learning methods. Consequently, this has become an inherent privacy concern in conventional partial label learning. In this paper, Adversary-Aware partial label learning is proposed to address and mitigate the ramifications of a data breach. In a nutshell, we propose an affordable and practical approach that manually corrupts the collected dataset to prevent the adversary from obtaining high-quality confidential information, while ensuring the trustee has full access to the useful information. However, we have observed that adversary-aware partial label learning possesses some intrinsic learnability issues. Firstly, intractability arises from the transition matrix. Secondly, a classifier and risk inconsistency problem is raised. Hence, we propose the adversarial teacher within momentum (ATM) (Section 2.1), the adversary-aware loss function (equation 19), and a new ambiguity condition (equation 1) to counter these issues. Under the adversary-aware partial label problem setting, the rival is added to the candidate set of labels. To achieve this, we extend the original partial label generation (equation 2) by factorisation to add the rival \(Y^{\prime}\). Subsequently, we establish the adversary-aware partial label generation as equation 3. Then, we decompose the second equation of equation 3 into the rival-embedded intractable transition matrix term \(Q^{*}\) and the class instance-dependent transition matrix \(T_{y,y^{\prime}}\), which is \(\mathrm{P}(Y^{\prime}=y^{\prime}\mid Y=y,X=x)\). In our problem setting, the class instance-independent transition matrix \(\bar{T}_{y,y^{\prime}}\) is utilised, defined as \(\mathrm{P}(Y^{\prime}=y^{\prime}\mid Y=y)\), under the assumption that the rival is generated depending only on \(Y\) and not on the instance \(X\). Under this assumption, the class instance-independent transition matrix is simplified and mathematically identifiable. Since all instances share the same class instance-independent transition matrix in practice, such encryption is more affordable to implement. The rival variable serves as controllable randomness to enhance privacy against potential adversaries and information leakage. In contrast, previous methods cannot guarantee this privacy protection property. However, a fundamental problem arises: the inclusion of the rival implies an inconsistent classifier according to the adversary-aware label generation of equation 3.
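To make the generation process above concrete, the following is a minimal sketch of producing one adversary-aware candidate set under the stated assumptions: the rival is drawn from the class instance-independent matrix \(\bar{T}\) (zero diagonal, rows summing to one), the true label is always included (\(P_{y}=1\)), and other false positives enter independently with a uniform flipping probability. The variable names and the flipping probability are illustrative, not the paper's exact generation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def adversary_aware_partial_label(y: int, T_bar: np.ndarray, q: float) -> np.ndarray:
    """Return a binary candidate-label indicator vector for true label y.

    T_bar is the class instance-independent rival matrix P(Y'=y' | Y=y):
    zero diagonal, each row summing to one. q is a uniform flipping
    probability for the remaining false-positive labels (an assumption)."""
    c = T_bar.shape[0]
    y_rival = rng.choice(c, p=T_bar[y])   # rival drawn from P(Y' | Y=y)
    candidates = rng.random(c) < q        # independent uniform false positives
    candidates[y] = True                  # the true label is always included (P_y = 1)
    candidates[y_rival] = True            # the rival encrypts the candidate set
    return candidates.astype(np.int8)
```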
Learning a consistent partial label classifier is vital, but in our problem setting a consistent classifier may not be obtainable due to the intractability of \(Q^{*}\) (details are described in Section 1.2). As a consequence, the adversarial teacher within momentum (ATM) is proposed, which is designed to identify the term \(\mathrm{P}(\vec{Y}\mid Y,Y^{\prime},X)\), denoted as \(Q^{*}\). The MoCo-style dictionary technique He et al. (2020) and Wang et al. (2022) inspired us to exploit the soft label from instance embeddings, leveraging \(\bar{T}_{y,y^{\prime}}\) to identify or reduce the uncertainty of \(Q^{*}\), owing to its informational preservation and tractability. Therefore, a consistent partial label learner is obtained if the uncertainty arising from the transition matrix is greatly reduced. Specifically, we transform the inference of label generation in Adversary-Aware PLL into an approximation of the transition matrix \(Q^{*}\). Ultimately, a tractable solution to the unbiased estimate of \(\mathrm{P}(\vec{Y}\mid Y,Y^{\prime},X)\) can be derived. Lastly, we rigorously prove that a consistent Adversary-Aware PLL classifier can be obtained if \(\mathrm{P}(\vec{Y}\mid Y,Y^{\prime},X)\) and \(\mathrm{P}(Y^{\prime}\mid Y)\) are approximated accurately according to equation 3. In this work, we mainly focus on identifying the transition matrix term \(\mathrm{P}(\vec{Y}\mid Y,Y^{\prime},X)\). The rival is generated manually for privacy enhancement; thus \(\mathrm{P}(Y^{\prime}\mid Y)\) is given by design. Overall, our proposed method not only solves the ambiguity problem in Adversary-Aware PLL but also addresses the potential risks of a data breach by using the rival as encryption. Our proposed label generation bears some resemblance to local differential privacy Kairouz et al. (2014); Warner (1965), which aims to randomise responses. A potential application is randomised survey responses, a survey technique for improving the reliability of responses to confidential interviews or private questions. Depending on the sophistication of the adversary, our method offers a dynamic mechanism for privacy encryption that is more resilient and flexible in the face of potential adversaries or privacy risks. By learning from previous attacks, we can design different levels of protection by adjusting the \(\bar{T}\) term. The **main contributions** of this work are summarized as follows:

* We propose a novel problem setting named adversary-aware partial label learning.
* We propose a novel adversary-aware loss function and the adversarial teacher within momentum (ATM) disambiguation algorithm. Our proposed paradigm and loss function can be applied universally to other related partial label learning methods to enhance privacy protection.
* A new ambiguity condition (equation 1) for Adversary-Aware Partial Label Learning is derived. Theoretically, we prove that the method is a classifier-consistent risk estimator.

### 1.1 Related work

**Partial Label Learning (PLL)** trains on instances associated with candidate sets of labels in which the true label is included. Many frameworks have been designed to solve the label ambiguity issue in partial label learning. Probabilistic graphical model-based methods Zhang et al. (2016); Wang and Isola (2020); Xu et al. (2019); Lyu et al.
(2019), as well as clustering-based or unsupervised approaches Liu and Dietterich (2012), leverage the graph structure and prior information of the feature space to perform label disambiguation. Average-based methods Hullermeier and Beringer (2006); Cour et al. (2011); Zhang et al. (2016) are designed under the assumption of treating all candidates uniformly; however, they are vulnerable to false positive labels, leading to misleading predictions. Identification-based methods Jin and Ghahramani (2002) tackle disambiguation by treating the true label as a latent variable. A representative approach uses the maximum margin method Nguyen and Caruana (2008); Wang et al. (2020, 2022) to perform label disambiguation. Most recently, self-training methods Feng and An (2019); Wen et al. (2021); Feng et al. (2020) have emerged and shown promising performance. In **Contrastive Learning** He et al. (2020); Oord et al. (2018), the model learns from augmented views of unlabeled data. The learning objective is to differentiate the similar and dissimilar parts of the input and, in turn, maximise the learning of high-quality representations. CL has been studied in an unsupervised representation learning fashion Chen et al. (2020); He et al. (2020), which treats the same classes as the positive set to boost performance. Weakly supervised learning has also borrowed concepts from CL to tackle the partial label problem Wang et al. (2022). CL has also been applied to semi-supervised learning Li et al. (2020).

### 1.2 Adversary-Aware Partial Label Problem Setting

Given the input space \(\mathcal{X}\in\mathbb{R}^{d}\), the label space is defined as \(\mathcal{Y}=[c]=\{1,\cdots,c\}\) with \(c>2\) classes. Under adversary-aware partial labels, each instance \(X\in\mathcal{X}\) has a candidate set of adversary-aware partial labels \(\vec{Y}\in\vec{\mathcal{Y}}\). The adversary-aware partial label set has the space \(\vec{\mathcal{Y}}:=\{\vec{y}\mid\vec{y}\subset\mathcal{Y}\}=2^{[c]}\), in which there are \(2^{c}\) possible subsets of \([c]\). The objective is to learn a classifier from \(n\) adversary-aware partially labelled samples drawn i.i.d. from \(\vec{\mathcal{D}}=\{(X_{1},\vec{Y}_{1}),\ldots,(X_{n},\vec{Y}_{n})\}\), such that it can assign the true labels to the testing dataset. Given an instance and its adversary-aware partial label \(\vec{Y}\), the adversary-aware partial label dataset distribution \(\vec{\mathcal{D}}\) is defined over \((X,\vec{Y})\in\mathcal{X}\times\vec{\mathcal{Y}}\). The class instance-independent transition matrix \(P(Y^{\prime}\mid Y)\) is denoted as \(\bar{T}\in\mathbb{R}^{c\times c}\), with \(\bar{T}_{y,y^{\prime}}=P(Y^{\prime}=y^{\prime}\mid Y=y)\) and \(\bar{T}_{y,y}=0,\forall y\in[c]\). Adversary-aware means that the designed paradigm can prevent the adversary from efficiently and reliably inferring certain information from the database without \(\bar{T}\), even if the data was leaked. The rival is the controllable randomness added to the partial label set to enhance privacy.

#### 1.2.1 Assertion Conditions in Label Generation Set

The following conditions describe the learning condition for the adversary-aware partial label. According to Cour et al. (2011), a certain degree of ambiguity is required for partial label learning.
Lemma 1 gives the new ERM learnability condition, which is proposed as follows:

\[P_{y^{\prime},\bar{y}}:=\mathrm{P}(y^{\prime},\bar{y}\in\vec{Y}\mid Y^{\prime}=y^{\prime},\bar{Y}=\bar{y},X=x). \tag{1}\]

Here \(y^{\prime}\) is the rival and \(\bar{y}\) is the false positive label that exists in the partial label set. With \(y^{\prime}\neq y\) and \(\bar{y}\neq y\), this condition ensures the ERM learnability Liu & Dietterich (2014) of the adversary-aware PLL problem under a small ambiguity degree, which in our case means \(P_{y^{\prime},\bar{y}}<1\). The \(y\) is the true label corresponding to each instance \(x\), and \(P_{y}:=\mathrm{P}(y\in\vec{Y}\mid Y=y,X=x)\), where \(P_{y}=1\) ensures that the ground truth label is in the partial label set for each instance.

#### 1.2.2 Label Generation

In previous works, the partial label generation procedure produced only a candidate set of partial labels, as follows. **The Standard Partial Label Generation:**

\[\begin{split}\sum_{y\in Y}\mathrm{P}(\vec{Y}=\vec{y},Y=y\mid X=x)&=\sum_{y\in Y}\mathrm{P}(\vec{Y}=\vec{y}\mid Y=y,X=x)\mathrm{P}(Y=y\mid X=x)\\ &=\sum_{y\in Y}\mathrm{P}(\vec{Y}=\vec{y}\mid Y=y)\mathrm{P}(Y=y\mid X=x),\end{split} \tag{2}\]

where \(\mathrm{P}(\vec{Y}=\vec{y}\mid Y=y,X=x)\) is the label generation for the class instance-dependent partial label and \(\mathrm{P}(\vec{Y}=\vec{y}\mid Y=y)\) is the standard partial label learning framework. We now present the difference between general partial labels and the adversary-aware partial label. **The Adversary-Aware Partial Label Generation:**

\[\begin{split}\sum_{y\in Y}\mathrm{P}(\vec{Y}=\vec{y}\mid X=x)&=\sum_{y\in Y}\sum_{y^{\prime}\in Y^{\prime}}\mathrm{P}(\vec{Y}=\vec{y},Y=y,Y^{\prime}=y^{\prime}\mid X=x)\\ &=\sum_{y\in Y}\sum_{y^{\prime}\in Y^{\prime}}\underbrace{\mathrm{P}(\vec{Y}=\vec{y}\mid Y=y,Y^{\prime}=y^{\prime},X=x)}_{\textbf{Adversary-Aware transition matrix}}\bar{T}_{y,y^{\prime}}\mathrm{P}(Y=y\mid X=x).\end{split} \tag{3}\]

In the adversary-aware partial label problem setting, the transition matrix of the adversary-aware partial label is defined as \(\mathrm{P}(\vec{Y}\mid Y,Y^{\prime},X)\) and denoted as \(Q^{*}\in\mathbb{R}^{c\times(2^{c}-2)}\). The partial label transition matrix \(\mathrm{P}(\vec{Y}\mid Y)\) is denoted as \(\bar{Q}\in\mathbb{R}^{c\times(2^{c}-2)}\). The true label \(Y\) of the vector \(\vec{Y}\) is unknown given an instance \(X\), where \(\vec{y}\in\vec{Y}\) and there are \(2^{c}-2\) candidate label sets. The \(\epsilon_{x}\) is the instance-dependent rival label noise for each instance, where \(\epsilon_{x}\in\mathbb{R}^{1\times c}\). The entries of the adversary-aware transition matrix for each instance are defined as follows:

\[\sum_{j=1}^{2^{c}-2}Q^{*}[:,j]=\sum_{j=1}^{2^{c}-2}([\bar{Q}[:,j]^{T}+\epsilon_{x}]\bar{T})^{T}=\sum_{j=1}^{2^{c}-2}(A[:,j]^{T}\bar{T})^{T}, \tag{4}\]

where \(A[:,j]^{T}=\bar{Q}[:,j]^{T}+\epsilon_{x}\). Formulating the rival in this way gives \(\mathbf{Q}^{*}=\mathrm{P}(\vec{Y}\mid Y^{\prime},Y,X)=\min\{1,A\bar{T}\}\) with \(Q^{*}_{i,j}\in[0,1]\) for all entries, and we now have the adversary-aware partial label. The conditional distribution of the adversary-aware partial label set \(\vec{Y}\), based on Wen et al.
(2021), is derived as below:

\[\mathrm{P}(\vec{Y}=\vec{y}\mid Y=y,Y^{\prime}=y^{\prime},X=x)=\prod_{b^{\prime}\in\vec{y},b^{\prime}\neq y}p_{b^{\prime}}\cdot\prod_{t^{\prime}\notin\vec{y}}\left(1-p_{t^{\prime}}\right), \tag{5}\]

where \(p_{t^{\prime}}\) and \(p_{b^{\prime}}\) are defined as

\[p_{t^{\prime}}:=\mathrm{P}(t\in\vec{Y}\mid Y=y,Y^{\prime}=y^{\prime},X=x)<1,\quad p_{b^{\prime}}:=\mathrm{P}(b\in\vec{Y}\mid Y=y,Y^{\prime}=y^{\prime},X=x)<1. \tag{6}\]

We summarize equation 3 in matrix form in equation 7. The inverse problem is to identify a sparse approximation matrix \(\mathbf{A}\) so that equation 8 can be used to estimate the true posterior probability:

\[\underbrace{P(\vec{Y}\mid X=x)}_{\text{Adversary-aware PLL}}=\mathbf{Q}^{*}\underbrace{P(Y\mid X=x)}_{\text{True posterior probability}},\qquad\mathbf{Q}^{*^{-1}}\underbrace{P(\vec{Y}\mid X=x)}_{\text{Adversary-aware PLL}}=\underbrace{P(Y\mid X=x)}_{\text{True posterior probability}}, \tag{7}\]

\[\bar{\mathbf{T}}^{-1}\mathbf{A}^{-1}\underbrace{P(\vec{Y}\mid X=x)}_{\text{Adversary-aware PLL}}\approx\underbrace{P(Y\mid X=x)}_{\text{True posterior probability}}. \tag{8}\]

In reality, due to the computational complexity of the transition matrix, it would be a huge burden to estimate \(Q^{*}\) accurately for each instance: \(2^{c}-2\) is an extremely large figure and increases exponentially with the label space. Therefore, we are no longer required to estimate the true transition matrix \(\mathrm{P}(\vec{Y}\mid Y,Y^{\prime},X)\). Instead, we resort to using instance embeddings in the form of soft labels to identify the adversary-aware partial label transition matrix \(Q^{*}\). Specifically, we propose to use a soft pseudo label from the instance embedding (prototype) to approximate the adversary-aware transition matrix for each instance. The reason is that we cannot obtain the true transition matrix \(Q^{*}\) directly due to the nature of the practical partial label problem; therefore, we use self-attention prototype learning to approximate it. The details are described in Section 2.1. Since the adversary-aware partial label is influenced by the rival label noise, it is challenging to accurately estimate both the class instance-independent transition matrix \(\bar{\mathbf{T}}\) and the sparse matrix \(\mathbf{A}\) simultaneously when estimating the true posterior. Considering that \(\bar{\mathbf{T}}\) is private and given, it is easier for us to approximate \(\mathbf{A}\) and estimate the posterior probability than it is for the adversary. Equation 8 is implemented as the loss function in equation 17.

### 1.3 Positive Sample Set

The construction of a positive sample set is used in contrastive learning to identify the transition matrix \(P(\vec{Y}\mid Y^{\prime},Y,X)\) via label disambiguation. Nonetheless, the performance of contrastive learning erodes drastically due to the introduced rival, which manifests in a poorly constructed positive sample set and results in degenerated classification performance (see Figure 2). Consequently, the adversary-aware loss function is proposed in conjunction with contrastive learning to prevent this degeneration. To start with, we define the \(L_{2}\)-normalised embeddings \(u\) and \(k\) as the query and key latent features from the feature extraction network \(\mathbf{f}_{\Theta}\) and the key network \(f^{\prime}_{\Theta}\), respectively.
Correspondingly, we have the output \(\mathbf{u}\in\mathbb{R}^{1\times d}\), where \(\mathbf{u}_{i}=f_{\Theta}(\mathrm{Aug}_{q}(x))\), and \(\mathbf{z}\in\mathbb{R}^{1\times d}\), where \(\mathbf{z}_{i}=f^{\prime}_{\Theta}(\mathrm{Aug}_{k}(\mathbf{x}_{i}))\). The construction of the positive sample set is as follows. In each mini-batch, we have \(\vec{D}_{b}\), where \(\vec{D}_{b}\in\vec{D}\). The \(f(x_{i})\) is a neural network with a projection head of feature dimensionality 128. The outputs of \(D_{q}\) and \(D_{k}\) are defined as

\[D_{q}=\{\mathbf{u}_{i}=f\left(\mathrm{Aug}_{q}\left(\mathbf{x}_{i}\right)\right)\mid\mathbf{x}_{i}\in\vec{D_{b}}\}, \tag{9}\]

\[D_{k}=\{\mathbf{z}_{i}=f^{\prime}\left(\mathrm{Aug}_{k}\left(\mathbf{x}_{i}\right)\right)\mid\mathbf{x}_{i}\in\vec{D_{b}}\}, \tag{10}\]

where \(\bar{S}(\mathbf{x})\) is the sample set excluding the query set \(q\), defined as \(\bar{S}(\mathbf{x})=\bar{\mathcal{C}}\backslash\{\mathbf{q}\}\), in which \(\bar{\mathcal{C}}=D_{q}\cup D_{k}\cup\text{queue}\). The \(D_{q}\) and \(D_{k}\) are the vectorial embeddings of the query and key views in the current mini-batch. The queue size is determined depending on the input. The instances from the current mini-batch in \(\bar{S}(x)\) whose predicted label \(\bar{y}^{\prime}\) equals \((\hat{y}_{i}=c)\) are chosen as the positive sample set. Ultimately, \(N_{+}(\mathbf{x})\) is acquired, denoted as

\[N_{+}(\mathbf{x}_{i})=\left\{\mathbf{x}^{\prime}\mid\mathbf{z}^{\prime}\in\bar{S}\left(\mathbf{x}_{i}\right),\bar{y}^{\prime}=(\hat{y}_{i}=c)\right\}. \tag{11}\]

The \(N_{+}(x)\) is the positive sample set. The construction of a sufficiently accurate positive sample set \(N_{+}(x)\) is vital, as it underpins the clustering effect of the latent embedding in the contrastive learning procedure. The quality of the clustering effect relies on the precision of the prototypes \(v_{j}\) corresponding to each class \(j\in\{1,...,C\}\). Our method helps maintain the precision of the prototypes using \(\bar{T}\), rendering better label-disambiguation performance for contrastive learning when the rival is introduced. In the contrastive term, the query embedding \(u\) is multiplied by the key embedding \(z\) and normalised over the remaining pool \(\bar{\mathcal{C}}\). Overall, \(N_{+}(x)\) is used to facilitate the representation learning of contrastive learning and the self-attention prototype learning for label disambiguation, i.e., a more accurate pseudo-labelling procedure. Our proposed loss ensures that prototype and contrastive learning work systematically and benefit mutually when the rival is introduced. The pseudo label generation follows equation 16. We have followed Wang et al. (2022) for the positive sample selection.

## 2 Methodology

The main task of partial label learning is label disambiguation, which targets identifying the true label among the candidate label sets. Thus, we present an adversarial teacher within momentum (ATM). Equation 17 is developed to debias the prediction of \(f(x)\) given the adversary-aware partial label via the transition-plus-identity matrix \(\bar{T}+I\). The unbiased prediction induces the identification of a more accurate positive sample set, which allows equation 18 to leverage the high-quality representation power of the positive sample set to improve classification performance.

### 2.1 Pseudo Label Learners via Adversarial Teacher within Momentum (ATM)

Unlike Wang et al.
(2022), we present an adversarial teacher strategy with momentum update (ATM) to guide the learning of pseudo labels using equation 17. Like a tough teacher who uses hard material to test students' understanding of a subject, the rival is purposely generated by us, while equation 17 checks the understanding of the student (the classifier) given the scope of the testing content, namely \(\bar{T}\). Specifically, the spherical margin between prototype vector \(\mathbf{v}_{i}\in\mathbb{S}^{d-1}\) and prototype vector \(\mathbf{v}_{j}\in\mathbb{S}^{d-1}\) is defined as

\[m_{ij}=\exp{(-\mathbf{v}_{i}^{\top}\mathbf{v}_{j})}. \tag{12}\]

For prototype \(\mathbf{v}_{i}\), we define the normalized margin between \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\) as

\[\bar{m}_{ij}=\frac{\exp{(-\mathbf{v}_{i}^{\top}\mathbf{v}_{j})}}{\sum_{j\neq i}\exp{(-\mathbf{v}_{i}^{\top}\mathbf{v}_{j})}}. \tag{13}\]

Figure 1: An overview of the proposed method. A general partial label can be disclosed to the adversary. The initial training concerns positive sample selection, and \(\bar{T}\) is assumed to be given.

For each \(\mathbf{v}_{i},i\in\{1,\cdots,K\}\), we perform momentum updating with the normalized margin between \(\mathbf{v}_{j}\) and \(\mathbf{v}_{i}\) for all \(j\neq i\) as a regularization. The resulting update rule is

\[\mathbf{v}_{i}^{t+1}=\sqrt{1-\alpha^{2}}\mathbf{v}_{i}^{t}+\alpha\frac{\mathbf{g}}{\|\mathbf{g}\|_{2}}, \tag{14}\]

where the gradient \(\mathbf{g}\) is given as

\[\mathbf{g}=\mathbf{u}-\beta\sum_{j\neq i}\tilde{m}_{ij}^{t}\mathbf{v}_{j}^{t}, \tag{15}\]

where \(\mathbf{u}\) is the query embedding whose prediction is class \(i\), and \(\tilde{m}_{ij}^{t}\) is the normalized margin between prototype vectors at step \(t\) (i.e., \(\mathbf{v}_{j}^{t},j\neq i\)). The \(v_{c}\) is the prototype indicator corresponding to each class:

\[\mathbf{\bar{q}}=\phi\mathbf{\bar{q}}+(1-\phi)\mathbf{v},\quad v_{c}=\begin{cases}1&\text{if }c=\arg\max_{j\in Y}\mathbf{u}^{\top}\mathbf{v}\\ 0&\text{otherwise,}\end{cases} \tag{16}\]

where \(\bar{q}\) is the target prediction, subsequently used in equation 17. It is initialised as the uniform probability vector \(\mathbf{\bar{q}}=[\frac{1}{c}]^{c}\) and updated according to equation 16. The \(\phi\) is the hyper-parameter controlling the update of \(\mathbf{\bar{q}}\).

### 2.2 Adversary-Aware Loss Function

The goal is to build a risk-consistent loss function, hoping it can achieve the same generalization error as the supervised classification risk \(R(f)\) with the same classifier \(f\). To train the classifier, we minimize the following modified loss estimator by leveraging the updated pseudo label from the adversarial teacher within momentum (ATM) distillation method and the transition-plus-identity matrix \(\mathbf{\bar{T}}+\mathbf{I}\), where \(\mathbf{I}\in\{0,1\}^{c\times c}\) with \(I_{i,i}=1\) for all \(i\in[c]\) and \(I_{i,j}=0\) for all \(i\neq j\), and \(f\left(\mathbf{X}\right)\in\mathbb{R}^{c}\):

\[\bar{\mathcal{L}}(f(X),\vec{Y})=-\sum_{i=1}^{c}(\bar{q}_{i})\log\left(((\mathbf{\bar{T}}+\mathbf{I})f(X))_{i}\right). \tag{17}\]

The proof for the modified loss function is shown in Appendix Lemma 4. In our case, a sufficiently accurate positive sample set from contrastive learning is incorporated with equation 17 to identify the transition matrix of the adversary-aware partial label.
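As a sketch of how equation 17 can be implemented for a mini-batch, assuming \(f(X)\) holds per-class probabilities (e.g., softmax outputs) and \(\bar{q}\) holds the pseudo targets from equation 16; the tensor names and the clamping constant are our own, not the paper's code.

```python
import torch

def adversary_aware_loss(f_x: torch.Tensor, q_bar: torch.Tensor,
                         T_bar: torch.Tensor) -> torch.Tensor:
    """Equation 17 for a batch: -sum_i q_bar_i * log(((T_bar + I) f(X))_i).

    f_x:   (B, c) per-class probabilities for the batch
    q_bar: (B, c) pseudo targets from the ATM teacher
    T_bar: (c, c) class instance-independent transition matrix
    """
    c = T_bar.shape[0]
    # ((T_bar + I) f(X)) applied row-wise to the batch of predictions.
    corrected = f_x @ (T_bar + torch.eye(c, device=f_x.device)).T
    # Clamp before log purely for numerical stability (an assumption).
    return -(q_bar * corrected.clamp_min(1e-12).log()).sum(dim=1).mean()
```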
The contrastive loss is defined as follows:

\[\mathcal{L}(f(x),\tau,C)=\frac{1}{|D_{q}|}\sum_{\mathbf{u}\in D_{q}}\left\{-\frac{1}{|N_{+}(x)|}\sum_{\mathbf{z}_{+}\in N_{+}(x)}\log\frac{\exp(\mathbf{u}^{\top}\mathbf{z}_{+}/\tau)}{\sum_{\mathbf{z}^{\prime}\in\bar{\mathcal{C}}(\mathbf{x})}\exp(\mathbf{u}^{\top}\mathbf{z}^{\prime}/\tau)}\right\}. \tag{18}\]

Finally, we have the adversary-aware loss expressed as

\[\textbf{Adversary-Aware Loss}=\lambda\mathcal{L}(f(x_{i}),\tau,C)+\bar{\mathcal{L}}(f(X),\vec{Y}). \tag{19}\]

There are two terms in the proposed loss function (equation 19), namely equation 17 and equation 18. Equation 17 is developed to lessen prediction errors from \(f(x)\) given the adversary-aware partial label. The debiasing is achieved via the matrix \(\bar{T}+I\) by down-weighting false predictions. The unbiased prediction induces the identification of a more accurate positive sample set. Equation 18 is the contrastive loss; it leverages the high-quality representation power of the positive sample set to further improve classification performance.

## 3 Theoretical Analysis

This section introduces the concepts of classifier consistency and risk consistency Xia et al. (2019); Zhang (2004), which are crucial in weakly supervised learning. Risk consistency is achieved if the risk function of weakly supervised learning is the same as the risk of fully supervised learning with the same hypothesis. Risk consistency implies classifier consistency, meaning that the classifier trained with partial labels converges to the optimal classifier of fully supervised learning.

**Classifier-Consistent Risk Estimator: Learning with True Labels.** Let us denote \(f(X)=(g_{1}(x),\ldots,g_{K}(x))\) as the classifier, in which \(g_{c}(x)\) is the classifier for label \(c\in[K]\). The prediction of the classifier \(f_{c}(x)\) is \(P(Y=c\mid x)\). We want to obtain a classifier \(f(X)=\operatorname*{arg\,max}_{i\in[K]}g_{i}(x)\). The loss function measures the loss given classifier \(f(X)\). To this end, the true risk can be denoted as

\[R(f)=\mathbb{E}_{(X,Y)}[\mathcal{L}\left(f\left(X\right),Y\right)]. \tag{20}\]

The ultimate goal is to learn the optimal classifier \(f^{*}=\operatorname*{arg\,min}_{f\in\mathcal{F}}R(f)\), for instance by enabling the empirical risk \(\hat{R}_{pn}(f)\) to converge to the true risk \(R(f)\). To obtain the optimal classifier, we need to prove that the modified loss function is risk consistent, i.e., that it converges to the true loss function.

**Learning with Adversary-Aware Partial Labels.** An input \(X\in\mathcal{X}\) has a candidate set \(\vec{Y}\in\vec{\mathcal{Y}}\) but only one true label \(Y\in\vec{\mathcal{Y}}\). Given the adversary-aware partial label \(\vec{Y}\in\vec{\mathcal{Y}}\) and instance \(X\in\mathcal{X}\), the objective of the loss function is denoted as

\[\hat{R}(f)=\mathbb{E}_{(X,\vec{Y})}\vec{\mathcal{L}}\left(f\left(X\right),\vec{Y}\right). \tag{21}\]

Since the true adversary-aware partial label distribution \(\vec{\mathcal{D}}\) is unknown, our goal is to approximate the optimal classifier on the sample distribution \(\hat{D}_{pn}\) by minimising the empirical risk function, namely

\[\hat{R}_{pn}(f)=\frac{1}{n}\sum_{i=1}^{n}\vec{\mathcal{L}}\left(f\left(\boldsymbol{x}_{i}\right),\vec{y}_{i}\right). \tag{22}\]

**Assumption 1**.: According to Yu et al.
(2018), minimization of the expected risk \(R(f)\) over the clean true population implies that the optimal classifier performs the mapping \(f_{i}^{*}(X)=P(Y=i\mid X)\), \(\forall i\in[c]\). Under Assumption 1, we can conclude that \(\hat{f}^{*}=f^{*}\) by applying the following theorem.

**Theorem 1**.: _Assume that the adversary-aware matrix \(\bar{T}_{y,y^{\prime}}\) is full rank and Assumption 1 is met; then the minimizer \(\hat{f}^{*}\) of \(\hat{R}(f)\) converges to the minimizer \(f^{*}\) of \(R(f)\), meaning \(\hat{f}^{*}=f^{*}\)._

**Remark.** If \(Q^{*}\) and \(\bar{T}_{y,y^{\prime}}\) are estimated correctly, the empirical risk of the designed algorithm trained with adversary-aware partial labels will converge to the expected risk of the optimal classifier trained with true labels. If the number of samples grows infinitely large, then given the adversary-aware partial labels, \(\hat{f}_{n}\) converges to \(\hat{f}^{*}\) theoretically. Subsequently, \(\hat{f}_{n}\) converges to the optimal classifier \(f^{*}\), as claimed in Theorem 1. With the new generation procedure, the risk consistency theorems for the loss function are introduced.

**Theorem 2**.: _The proposed adversary-aware loss function is a risk-consistent estimator if it asymptotically converges to the expected risk given sufficiently good approximations of \(\bar{Q}\) and the adversary-aware matrix. The proof is in Appendix Lemma 4._

\[\mathcal{L}(y,f(x))=\sum_{\vec{y}\in\vec{\mathcal{Y}}}\sum_{y=1}^{C}\sum_{y^{\prime}\in Y^{\prime}}\mathrm{P}(Y=y\mid X=x)\prod_{b^{\prime}\in\vec{y}}p_{b^{\prime}}\cdot\prod_{t^{\prime}\notin\vec{y}}\left(1-p_{t^{\prime}}\right)\bar{T}_{y,y^{\prime}}\vec{\mathcal{L}}(\vec{y},f(x))=\vec{\mathcal{L}}(\vec{y},f(x)). \tag{23}\]

### 3.1 Generalisation Error

_Define \(\hat{R}\) and \(\hat{R}_{pn}\) as the true risk and the empirical risk, respectively, given the adversary-aware partial label dataset. The empirical loss classifier is obtained as \(\hat{f}_{pn}=\operatorname*{arg\,min}_{f\in\mathcal{F}}\hat{R}_{pn}(f)\). Suppose a set of real hypotheses \(\mathcal{F}_{\vec{y}_{k}}\) with \(f_{i}(X)\in\mathcal{F},\forall i\in[c]\). Also, assume its loss function \(\vec{\mathcal{L}}(\boldsymbol{f}(X),\vec{Y})\) is \(L\)-Lipschitz continuous with respect to \(f(X)\) for all \(\vec{y}_{k}\in\vec{\mathcal{Y}}\) and upper-bounded by \(M\), i.e., \(M=\sup_{x\in\mathcal{X},f\in\mathcal{F},\vec{y}_{k}\in\vec{\mathcal{Y}}}\vec{\mathcal{L}}\left(f(x),\vec{y}_{k}\right)\). The expected Rademacher complexity of \(\mathcal{F}_{k}\) is denoted as \(\Re_{n}(\mathcal{F}_{\vec{y}_{k}})\) Bartlett & Mendelson (2002)._

**Theorem 3**.: _For any \(\delta>0\), with probability at least \(1-\delta\),_

\[\hat{R}\left(\hat{f}_{pn}\right)-\hat{R}\left(\hat{f}^{\star}\right)\leq 4\sqrt{2}L\sum_{k=1}^{c}\Re_{n}\left(\mathcal{F}_{\vec{y}_{k}}\right)+M\sqrt{\frac{\log\frac{2}{\delta}}{2n}}. \tag{24}\]

As the number of samples reaches infinity, \(n\rightarrow\infty\), \(\Re_{n}\left(\mathcal{F}_{\vec{y}_{k}}\right)\to 0\) with a bounded norm. Subsequently, \(\bar{R}(\hat{f})\rightarrow\bar{R}\left(\hat{f}^{\star}\right)\) as the amount of training data grows infinitely large. The proof is given in Appendix Theorem 3.
## 4 Experiments

Table 1: Benchmark datasets for accuracy comparisons. Superior results are indicated in bold. Our proposed methods show results comparable to fully supervised learning and outperform previous methods in the more challenging learning scenarios, such as a partial rate of 0.5 (CIFAR10) and 0.1 (CIFAR100, CUB200). The hyper-parameter \(\alpha\) is set to 0.1 for our method. (The symbol \(\ast\) indicates the adversary-aware partial label dataset.)

| Dataset | Method | \(q=0.01\) | \(q=0.05\) | \(q=0.1\) |
| --- | --- | --- | --- | --- |
| CIFAR100 | ATM (Without T) (Ours) | **73.43** ± 0.11 | 72.63 ± 0.27 | **72.35** ± 0.22 |
| CIFAR100 | PiCO | 73.28 ± 0.24 | **72.90** ± 0.27 | 71.77 ± 0.14 |
| CIFAR100 | LWS | 65.78 ± 0.02 | 59.56 ± 0.33 | 53.53 ± 0.08 |
| CIFAR100 | PRODEN | 62.60 ± 0.02 | 60.73 ± 0.03 | 56.80 ± 0.29 |
| CIFAR100 | Full Supervised | 73.56 ± 0.10 | | |

| Dataset | Method | \(q^{\ast}=0.03\pm 0.02\) | \(q^{\ast}=0.05\pm 0.02\) | \(q^{\ast}=0.1\pm 0.02\) |
| --- | --- | --- | --- | --- |
| CIFAR100 | ATM (Ours)\(^{\ast}\) | 73.36 ± 0.32 | 72.76 ± 0.14 | **54.09** ± 1.88 |
| CIFAR100 | PiCO\(^{\ast}\) | 72.87 ± 0.26 | 72.53 ± 0.37 | 48.03 ± 3.32 |
| CIFAR100 | LWS\(^{\ast}\) | 46.8 ± 0.06 | 24.82 ± 0.17 | 4.53 ± 0.47 |
| CIFAR100 | PRODEN\(^{\ast}\) | 59.33 ± 0.48 | 41.20 ± 0.27 | 13.44 ± 0.41 |

| Dataset | Method | \(q=0.01\) | \(q=0.05\) | \(q=0.1\) |
| --- | --- | --- | --- | --- |
| CUB200 | ATM (Without T) (Ours) | **74.43** ± 0.876 | **72.30** ± 0.521 | **66.87** ± 0.98 |
| CUB200 | PiCO | 74.11 ± 0.37 | 71.75 ± 0.56 | 66.12 ± 0.99 |
| CUB200 | LWS | 73.74 ± 0.23 | 39.74 ± 0.47 | 12.30 ± 0.77 |
| CUB200 | PRODEN | 72.34 ± 0.04 | 62.56 ± 0.10 | 35.89 ± 0.05 |
| CUB200 | Full Supervised | 76.02 ± 0.19 | | |

| Dataset | Method | \(q^{\ast}=0.03\pm 0.02\) | \(q^{\ast}=0.05\pm 0.02\) | \(q^{\ast}=0.1\pm 0.02\) |
| --- | --- | --- | --- | --- |
| CUB200 | ATM (Ours)\(^{\ast}\) | **72.22** ± 1.36 | **72.43** ± 0.86 | **56.26** ± 0.70 |
| CUB200 | PiCO\(^{\ast}\) | 71.85 ± 0.53 | 71.15 ± 0.41 | 50.31 ± 1.01 |
| CUB200 | LWS\(^{\ast}\) | 9.60 ± 0.62 | 4.02 ± 0.03 | 1.44 ± 0.06 |
| CUB200 | PRODEN\(^{\ast}\) | 18.71 ± 0.45 | 17.63 ± 0.89 | 17.99 ± 0.62 |

| Dataset | Method | \(q=0.1\) | \(q=0.3\) | \(q=0.5\) |
| --- | --- | --- | --- | --- |
| CIFAR10 | ATM (Without T) (Ours) | 93.57 ± 0.16 | 93.17 ± 0.09 | 92.22 ± 0.40 |
| CIFAR10 | PiCO | **93.74** ± 0.24 | **93.25** ± 0.32 | **92.46** ± 0.38 |
| CIFAR10 | LWS | 90.30 ± 0.60 | 88.99 ± 1.43 | 86.16 ± 0.85 |
| CIFAR10 | PRODEN | 90.24 ± 0.32 | 89.38 ± 0.31 | 87.78 ± 0.07 |
| CIFAR10 | Full Supervised | 94.91 ± 0.07 | | |

| Dataset | Method | \(q^{\ast}=0.1\pm 0.02\) | \(q^{\ast}=0.3\pm 0.02\) | \(q^{\ast}=0.5\pm 0.02\) |
| --- | --- | --- | --- | --- |
| CIFAR10 | ATM (Ours)\(^{\ast}\) | 93.52 ± 0.11 | **92.98** ± 0.51 | **89.62** ± 0.79 |
| CIFAR10 | PiCO\(^{\ast}\) | **93.64** ± 0.24 | 92.85 ± 0.43 | 81.45 ± 0.57 |
| CIFAR10 | LWS\(^{\ast}\) | 87.34 ± 0.87 | 39.9 ± 0.72 | 9.89 ± 0.55 |
| CIFAR10 | PRODEN\(^{\ast}\) | 88.80 ± 0.14 | 81.88 ± 0.51 | 20.32 ± 3.43 |
**Datasets.** We evaluate the proposed method on three benchmarks, CIFAR10, CIFAR100 Krizhevsky et al. (2009), and the fine-grained CUB200 Wah et al. (2011), with both general partial label and adversary-aware partial label datasets.

**Main Empirical Results for CIFAR10.** All classification accuracies are shown in Table 1. We compare classification results on CIFAR-10 with previous works Wang et al. (2022); Lv et al. (2020); Wen et al. (2021) using the adversarial teacher within momentum (ATM). The method shows consistently superior results in the learning scenarios where \(q=\{0.3,0.5\}\) for adversary-aware partial label learning. More specifically, the proposed method achieves **8.17%** higher classification performance at a 0.5 partial rate than the previous state-of-the-art work Wang et al. (2022). Moreover, our proposed method achieves comparable results at 0.1 and 0.3 partial rates. The experiments for CIFAR-10 were repeated four times with four random seeds.

**Main Empirical Results for CUB200 and CIFAR100.** The proposed method shows superior results for the adversary-aware partial label, especially in the more challenging learning tasks such as the 0.1 partial rate on the CUB200 and CIFAR100 datasets. On the CUB200 dataset, we show a **5.95%** improvement at partial rate 0.1, and improvements of 1.281% and 0.37% at partial rates 0.05 and 0.03. On the CIFAR100 dataset, the method shows **6.06%**, 0.4181% and 0.5414% higher classification margins at partial rates 0.1, 0.05 and 0.03. The experiments were repeated five times with five random seeds.

### 4.1 Ablation Study

Figure 2 shows the experimental result comparisons on CUB200 between the adversary-aware loss function and the previous loss function, before and after the momentum updating. Given equation 17, the uncertainty of the transition matrix \(\bar{Q}\) is reduced, leading to a good initialisation for the positive set selection, which serves as a warm start and plays a vital role in improving the performance of contrastive learning. Once we have a good set of positive samples, the prototype accuracy is enhanced. Subsequently, the clustering effect and the high-quality representation power of the positive sample set in the contrastive loss function are leveraged to improve classification performance.

## 5 Conclusion and Future Works

This paper introduces a novel adversary-aware partial label learning problem. The new problem setting takes local data privacy protection into account. Specifically, we add the rival to the partial label candidate set as encryption for the dataset. Nonetheless, the generation process makes the intractable transition matrix even more complicated, leading to an inconsistency issue. Therefore, the novel adversary-aware loss function and the self-attention prototype are proposed. The method is proven to yield a consistent classifier and shows superior performance. Future work will use variational inference methods to approximate the intractable transition matrix.

Figure 2: The Top-1 and prototype accuracy of the proposed method and the method in Wang et al. (2022) on CUB200, adversary-aware loss comparison.

## References

* Bartlett and Mendelson (2002) Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. _Journal of Machine Learning Research_, 3(Nov):463-482, 2002. * Chen et al. (2020) Brian Chen, Bo Wu, Alireza Zareian, Hanwang Zhang, and Shih-Fu Chang.
2301.12642
Exploring the Constructicon: Linguistic Analysis of a Computational CxG
Recent work has formulated the task for computational construction grammar as producing a constructicon given a corpus of usage. Previous work has evaluated these unsupervised grammars using both internal metrics (for example, Minimum Description Length) and external metrics (for example, performance on a dialectology task). This paper instead takes a linguistic approach to evaluation, first learning a constructicon and then analyzing its contents from a linguistic perspective. This analysis shows that a learned constructicon can be divided into nine major types of constructions, of which Verbal and Nominal are the most common. The paper also shows that both the token and type frequency of constructions can be used to model variation across registers and dialects.
Jonathan Dunn
2023-01-30T03:51:08Z
http://arxiv.org/abs/2301.12642v1
# Exploring the Constructicon: Linguistic Analysis of a Computational CxG
###### Abstract
Recent work has formulated the task for computational construction grammar as producing a constructicon given a corpus of usage. Previous work has evaluated these unsupervised grammars using both internal metrics (for example, Minimum Description Length) and external metrics (for example, performance on a dialectology task). This paper instead takes a linguistic approach to evaluation, first learning a constructicon and then analyzing its contents from a linguistic perspective. This analysis shows that a learned constructicon can be divided into nine major types of constructions, of which _Verbal_ and _Nominal_ are the most common. The paper also shows that both the token and type frequency of constructions can be used to model variation across registers and dialects.
## 1 Introduction
Construction Grammar (CxG) is a usage-based approach to language which views grammatical structure as a set of form-meaning mappings called a _constructicon_ Langacker (2008). From this usage-based perspective, a _construction_ could belong in the grammar either (i) because it is sufficiently entrenched (i.e., frequent) that it is stored and processed as a unique item or (ii) because it is sufficiently irregular (i.e., idiomatic) that it requires a unique grammatical description Goldberg (2006). The advantage of CxG from this perspective is that it focuses on explaining the creativity, the flexibility, and the idiosyncrasy of actual language use in real-world settings Goldberg (2019). Given this focus of CxG as a linguistic theory, the ideal computational implementation must be data-driven and unsupervised. For example, approaches which rely on manual annotations derived from individual introspection Steels (2017) fail to capture the usage-based foundations of CxG, in addition to being unreproducible and difficult to scale. For this reason, most recent work on computational CxG has taken an unsupervised learning approach to forming constructicons Dunn (2017, 2022). Such an unsupervised approach has its own challenges, however, especially the challenge of evaluation. Grammars from other syntactic paradigms can be evaluated by annotating a gold-standard corpus and then measuring the ability of both supervised and unsupervised models to predict those same sets of annotations (cf. Zeman et al. 2017, 2018). Given its usage-based foundations, this approach to evaluation is simply not feasible for computational CxG because the standard for what counts as a construction depends to some degree on the corpus or the community of speaker-hearers that is being observed. For this reason, recent work on computational CxG has undertaken both internal and external evaluations for determining which one of a set of posited constructicons is better. An internal metric measures the fit between a grammar and a given corpus to determine which alternative constructicon offers a better description Dunn (2018, 2019). This work has drawn on Minimum Description Length Goldsmith (2001, 2006) as an evaluation metric because it combines both descriptive adequacy (i.e., the fit between the grammar and the test set) and model complexity (i.e., the number and the type of constructions in the grammar). An external metric evaluates and compares constructicons using their performance when applied to a specific prediction task.
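The MDL trade-off can be illustrated with a toy sketch; the cost functions, helper names, and numbers below are assumptions for illustration, not the metric of the cited work.

```python
# Toy MDL score (illustrative assumptions, not the cited implementation):
# total cost = bits to encode the grammar + bits to encode the corpus given it.
import math

def grammar_cost(grammar, bits_per_slot=8.0):
    # More constructions, and longer constructions, make the grammar costlier.
    return sum(bits_per_slot * len(construction) for construction in grammar)

def corpus_cost(parse_probs):
    # Each parsed unit costs -log2(p) bits under the grammar's probabilities.
    return -sum(math.log2(p) for p in parse_probs)

def mdl_score(grammar, parse_probs):
    return grammar_cost(grammar) + corpus_cost(parse_probs)

# A grammar that fits the corpus better lowers corpus_cost but may raise
# grammar_cost; MDL prefers the constructicon minimizing the sum.
example_grammar = [("aux", "being", "VERB"), ("PRON", "were", "VERB", "ADP")]
print(mdl_score(example_grammar, parse_probs=[0.01, 0.2, 0.05]))
```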
Recent work has focused on the use of computational CxG for modelling individual differences Dunn and Nini (2021), register variation Dunn and Tayyar Madabushi (2021), and population-based dialectal differences Dunn (2018, 2019); Dunn and Wong (2022). Because CxG is a usage-based paradigm, the definition of a construction that is referenced above depends on both entrenchment and idiomaticity. Both of these are properties of a corpus of usage rather than properties of a language as a whole. In other words, it is only meaningful to describe _entrenchment_ relative to a particular individual, dialect community, or context of production. These external tasks have therefore focused on the degree to which computational CxG can in fact account for differences in usage across these dimensions. The contribution of this paper is to undertake a detailed qualitative and quantitative evaluation of a learned grammar. While it is not possible to start with gold-standard linguistic annotations of constructions, it is possible to apply a linguistic analysis to the output of an unsupervised, usage-based framework. We start by describing the model and the data which are used to learn the constructicon (Section 2) before presenting examples of types of constructions that it contains (Section 3). We then proceed to a quantitative analysis of the grammar (Section 4). Finally, we end with a discussion of the challenge of parsing a nested and hierarchical grammar which contains representations at different levels of abstraction (Section 5).
## 2 Methods and Data
Computational CxG is a theory in the form of a grammar induction algorithm that provides a reproducible constructicon given a corpus of exposure Dunn (2017, 2022). The theory is divided into three components, each of which models a particular aspect of the emergence of constructicons given exposure to a corpus of usage. First, a psychologically-plausible measure of association, the \(\Delta P\), is used to measure the entrenchment of potential constructions Ellis (2007); Dunn (2018). These potential constructions are sequences of lexical, syntactic, and semantic slot-constraints. The problem of _category formation_ is to define the inventory of fillers that are used for slot-constraints. In this implementation, lexical constraints are based on word-forms, without lemmatization. Syntactic constraints are formulated using the universal part-of-speech tagset Petrov et al. (2012) and implemented using the Ripple Down Rules algorithm Nguyen et al. (2016). Semantic constraints are based on distributional semantics, with k-means clustering used to discretize fastText embeddings Grave et al. (2018). The semantic constraints in the examples in this paper are formulated using the index of the corresponding clusters, a simple notational convention. Second, an association-based beam search is used to identify constructions of arbitrary length by finding the most entrenched representations in reference to a matrix of \(\Delta P\) values Dunn (2019). The beam search parsing strategy allows the grammar to avoid relying on heuristic frames and templates for producing potential constructions. Third, a measure of fit based on the Minimum Description Length paradigm is used to balance the increased storage of item-specific constructions against the increased computation of more generalized constructions Dunn (2018). The point is that any construction could become entrenched but more idiomatic constructions come at a higher cost.
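Since the \(\Delta P\) is directional, it is computed separately in each direction. The following is a minimal sketch of the measure as computed from a 2x2 contingency table; the counts are hypothetical and this is not the c2xg implementation itself.

```python
# Minimal sketch of the directional Delta-P association measure (Ellis 2007):
# P(outcome | cue) - P(outcome | no cue), from a 2x2 contingency table.
def delta_p(a, b, c, d):
    """a: cue & outcome; b: cue & no outcome; c: no cue & outcome; d: neither."""
    return a / (a + b) - c / (c + d)

# Hypothetical counts for the pair ("the", "best") in a 100k-token corpus:
lr = delta_p(a=120, b=880, c=400, d=98600)   # left-to-right: P(best|the) - P(best|~the)
rl = delta_p(a=120, b=400, c=880, d=98600)   # right-to-left: P(the|best) - P(the|~best)
print(round(lr, 4), round(rl, 4))
```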
The contribution of this paper is to evaluate this existing model of CxG Dunn (2022) rather than to alter its overall method of learning a constructicon. We therefore apply the model without further discussion of its implementation and focus instead on a linguistic analysis of the resulting constructicon. The data used to learn grammars is collected from three sets of corpora: social media (Twitter), non-fiction articles (Wikipedia), and web pages (from the Common Crawl) drawn from the _Corpus of Global Language Use_ Dunn (2020). This training corpus contains 2 million words per register for a total of 6 million words. From a usage-based perspective, exposure to language continues after the grammar has been acquired and such exposure might change the entrenchment of particular constructions. The model thus undertakes a second pruning stage which updates the constructicon given an additional 2 million words of exposure Dunn (2022). The model observes sub-corpora from each of the three registers in increments of 100k words. Each construction in the grammar receives an activation weight with an initial value of 1. For each sub-corpus in which a construction is not observed, its weight decays by 0.25. For each sub-corpus in which a construction is observed, its weight is returned to 1. When a construction's weight falls below 0, it is forgotten and removed from the grammar. This is a simple model of the way in which continued exposure leads to the forgetting of previously entrenched constructions. While somewhat arbitrary, the decay rate (0.25) is chosen to ensure that a construction is not forgotten simply because it occurs primarily in a specific register: this decay rate means that a construction must be absent from four successive sub-corpora, thus ensuring that each of the three registers has been observed. Thus, this pruning method removes unproductive constructions given additional exposure while ensuring that all three registers remain represented; a short sketch of this bookkeeping is given at the end of this section. A package for reproducing this grammar induction algorithm is available1 as well as the specific grammars used in this study.2 Footnote 1: [https://www.github.com/jonathandunn/c2xg](https://www.github.com/jonathandunn/c2xg) Footnote 2: [https://doi.org/10.18710/CES0L8](https://doi.org/10.18710/CES0L8) This method produces a constructicon that contains 12,856 constructions. The analysis in this paper is based on using this constructicon to annotate samples of 1 million words from 12 independent corpora: Project Gutenberg (Rae et al., 2019), Wikipedia (Ortman, 2018), European Parliament proceedings (Tiedemann, 2012), news article comments (Kesarwani, 2018), product reviews (Zhang et al., 2015), blogs (Schler et al., 2006), and tweets from six countries (with 1 million words representing each country; Dunn 2020). This range of corpora allows us to consider both register (different contexts of production) and dialect (different populations using the same register) when measuring the frequency and the productivity of individual constructions in the grammar.
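To make the pruning stage concrete, here is a minimal sketch of the activation-and-decay bookkeeping described above; the function names and data layout are hypothetical rather than the c2xg API.

```python
# Minimal sketch of the exposure-based pruning stage (hypothetical names).
DECAY = 0.25  # weight lost per sub-corpus in which a construction is absent

def prune(grammar, subcorpora, observed_in):
    """grammar: iterable of constructions; observed_in(c, sub) -> bool."""
    weights = {c: 1.0 for c in grammar}   # initial activation weight of 1
    for sub in subcorpora:                # 100k-word increments, mixed registers
        for c in list(weights):
            if observed_in(c, sub):
                weights[c] = 1.0          # observation restores full activation
            else:
                weights[c] -= DECAY       # absence decays the weight
                if weights[c] < 0:
                    del weights[c]        # the construction is forgotten
    return set(weights)
```

With a decay of 0.25, a construction must be absent from at least four successive sub-corpora before it can be forgotten, matching the register-balancing rationale given above.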
## 3 Categorizing Constructions
In this section we categorize the learned constructions to aid our quantitative analysis of the contents of the constructicon. We annotate a random sample of 20% of the constructions using the categorization described below, thus allowing an estimate of the overall composition of the grammar. The primary categories are _Verbal_, _Nominal_, _Adjectival_, _Adpositional_, _Transitional_, _Clausal_, _Adverbial_, _Sentential_, and _Fixed Idioms_. These categories are defined and exemplified in this section. The first category consists of verbal constructions. As shown in (1), we notate the construction using its slot-constraints, with each slot separated by dashes. Lexical constraints are shown in italics; syntactic constraints are shown in small caps; and semantic constraints are shown using the index of their distributional cluster (e.g., <521>). Using this notation, the construction in (1) is a simple passive verb phrase in a continuous aspect, defined using primarily syntactic constraints. (1) [ aux - _being_ - verb ] (1a) were being proposed (1b) was being spread (1c) is being invaded (1d) am being kept The verbal construction in (2) now contains a semantic constraint (<521>). This domain contains lexical items like _house_ and _carriage_, all locations that can be moved into or out of. The construction thus captures a meaning-based pattern of movement in relation to some area. (2) [ verb - adp - det - <521> ] (2a) come to this house (2b) leaped into a carriage (2c) seated at that window (2d) hurried across the room (2e) lying on the floor A lexical constraint for the main verb is shown in the construction in (3). This leads to an idiomatic usage of _play_, a set of utterances whose behaviour differs from the basic transitive verb phrase. The construction in (4) shows the influence of a lexical constraint in a different position, here _time_ as a noun introducing the verb phrase. This again results in idiomatic utterances with behaviour more specific than a construction with only syntactic constraints. Finally, the lexical constraint in (5) defines a particle verb, again with idiomatic semantics resulting for the utterances in (5a) through (5e). This series of examples shows how a lexical constraint in different locations within a verb phrase leads to different types of idiomatic verbal constructions. (3) [ _play_ - det - noun ] (3a) play the game (3b) play the part (3c) play the coquette (3d) play the king (4) [ _time_ - _to_ - verb ] (4a) time to plead (4b) time to write (4c) time to tell (4d) time to consider (4e) time to worry (5) [ _to_ - verb - _down_ ] (5a) to sit down (5b) to put down (5c) to settle down (5d) to bring down (5e) to strike down While these examples are relatively simple verbal constructions, a more complex example is shown in (6). This construction contains a main verb with an infinitive complement followed by an argument that takes the form of a noun phrase. The entrenchment of these more complex constructions shows the flexibility of computational CxG as well as the infeasibility of relying on the introspection of individual linguists. (6) [ verb - _to_ - _be_ - <830> - adp - det - noun ] (6a) seem to be unaware of the fact (6b) came to be known as the _Newcastle_ (6c) have to be supplied from that source (6d) is to be found in the world (6e) expect to be ushered into the temple Moving to nominal constructions, the first examples show the influence that a semantic constraint in one slot exerts across the entire construction. We focus here on complex nominal constructions, with both of these first examples containing a subordinate adpositional phrase within the noun phrase. In each case, the noun in the adpositional phrase is constrained to a specific semantic domain. In (7), this leads to lexical items like _empire_ and _palace_ and, in (8), like _ground_ and _road_.
Not all examples of a construction are perfect matches; an example of this is shown in (8f), marked with an asterisk, in which the first word is actually a mistagged verb rather than a noun. (7) [ noun - _of_ - det - <587> ] (7a) part of the empire (7b) inmates of the palace (7c) guardianship of the wanderer (7d) pursuit of a chimera (7e) circuit of the citadel (8) [ noun - adp - _the_ - <484> ] (8a) feet on the ground (8b) side of the road (8c) law of the land (8d) entrance of the path (8e) journey through the forest (8f) *wanders around the forest (9) [ _one_ - adp - _the best_ - noun ] (9a) one of the best paintings (9b) one of the best apologies (9c) one of the best examples (9d) one of the best books More idiomatic noun phrases, with lexical constraints, are shown in (9) and (10). In the first, an adpositional phrase _one of the best_ functions as a single adjective. In the second, a superlative adjective frames the core noun phrase. In both cases, these constructions provide additional flexibility to describe unique nominal phrases, made into constructions by their entrenchment and their idiosyncrasy in this set of usage. (10) [ _the_ - _most_ - adj - noun ] (10a) the most amusing instance (10b) the most violent writhings (10c) the most astounding instances (10d) the most important generalizations (10e) the most unfavourable circumstances A single example of an adjectival construction is shown in (11). While the previous nominal constructions included adjectival material within them, this construction as a whole provides a modifier for a noun phrase. For example, (11e) as an abstract adjective could be combined with a variety of nouns like _immigrants_, _the elderly_, or _house sparrows_ to form a larger nominal construction. (11) [ _huge_ - noun - _of_ ] (11a) huge pair of (11b) huge influx of (11c) huge clumps of (11d) huge piece of (11e) huge population of The next category is adpositional constructions, as shown in (12) through (14). As before, a semantic constraint leads to a meaning-based group of utterances, as with the terms specific to legal language in (12). In other words, this adpositional construction is specific to the category of nouns contained within it. A potentially problematic case is shown in (12e), here with what is likely a fixed idiom, where _case_ is not used in the legal sense. A lexical constraint for the head noun in (13) leads to idiosyncratic adpositional phrases with _beginning_. Other adpositional constructions are more syntactically complex. For example, the phrase in (14) transitions from a noun into a relative clause which describes that noun. (12) [ adp - det - <959> ] (12a) in the case (12b) of the provisions (12c) as a rule (12d) from the petitioners (12e)? in which case (13) [ adp - _the_ - _beginning_ ] (13a) towards the beginning (13b) at the beginning (13c) from the beginning (13d) in the beginning (13e) for the beginning (14) [ adp - _the_ - noun - _where_ ] (14a) in the world where (14b) at the spot where (14c) from the point where (14d) near the ceiling where The example of an adpositional phrase that transitions into a relative clause in (14) introduces another category of constructions, those which capture transitional material connecting other types of constructions. In particular, the constructions in this category capture different types of transitions without containing the substance of the involved structures themselves. For example, in (15) there is the introduction of a new main clause with a first-person verb phrase.
In (16) there is the introduction of a subordinate clause. In (17) there is a comparison between two nominal constructions. The final example in (17e) represents a problematic parse: the phrase is likely _at least_ rather than _least_ alone. These examples show how this category serves to link other constructions together. (15) [ _but_ - _i_ - verb ] (15a) but i think (15b) but i knew (15c) but i regret (15d) but i noticed (16) [ sconj - verb - _to_ ] (16a) without seeming to (16b) because according to (16c) as opposed to (16d) while listening to (16e) in resorting to (17) [ adv - <917> - _than_ ] (17a) far deeper than (17b) considerably better than (17c) now more than (17d) much smaller than (17e) *least better than While transitional constructions focus mainly on the connecting element, clausal constructions are those which contain a significant portion of a subordinate clause. For instance, (18) is an example of a relative clause embedded within a larger noun phrase and (19) of a relative clause in which the subject is defined by the preceding element. A problematic example is shown in (19e), where the phrase _a lot_ is treated as two separate slots. The complex subordinate clause in (20) consists of a gerund within an adpositional phrase, where the verb is further defined by a semantic constraint. Finally, a reduced relative clause is captured by (21), again with a semantic constraint on the verb. This series of examples shows the way in which subordinate clauses are captured in the grammar. (18) [ noun - adp - _those_ - _who_ ] (18a) hearts of those who (18b) arguments of those who (18c) side of those who (18d) minds of those who (18e) tactics of those who (19) [ _which_ - verb - _a_ - noun ] (19a) which formed a snare (19b) which occasioned a detour (19c) which presented a problem (19d) which contained a letter (19e)? which looked a lot (20) [ sconj - <113> - det - noun - _of_ ] (20a) by taking the life of (20b) in sacrificing the rights of (20c) after collecting the remains of (20d) by applying a drop of (20e) in neglecting the cultivation of (21) [ det - noun - _he_ - <830> ] (21a) the loan he solicited (21b) the temple he discovered (21c) the words he used (21d) the life he led (21e) the flask he carried While these clausal constructions are connected into the main clause itself, the category of adverbial constructions contains clauses which are more independent of the structure of the main clause. For example, in (22) there is a gerund clause within an adpositional phrase, now with a semantic constraint. In (23) there is an adposition introducing a finite verb. And in (24), with a lexical constraint, there is a similar construction again with a finite verb. While similar to the clausal category, this class of constructions is less integrated with the main clause structure. (22) [ sconj - verb - adp - det - <512> ] (22a) in dealing with that section (22b) after referring to the matter (22c) as bearing on the question (22d) without glancing within the volume (22e) by bringing up the subject (23) [ sconj - pron - aux - verb - _to_ ] (23a) that it would come to (23b) if he had lived to (23c) as they were trying to (24) [ _when_ - det - noun - _is_ ] (24a) when the end is (24b) when a man is (24c) when the heart is (24d) when the patient is (24e) when the temperature is (25) [ pron - _were_ - verb - adp ] (25a) we were accosted by (25b) they were employed by (25c) these were succeeded by (25d) they were drilled by (25e)?
who were barred from

Sentential constructions contain the structure of the main clause. This category overlaps to some degree with verbal constructions; the key difference is that the sentential constructions contain the subject while verbal constructions do not. A simple passive clause is shown in (25), together with an adpositional argument. In many examples, this adpositional argument specifies the agent, but the example in (25e) differs in specifying a location. An active clause introducing an indirect speech clause is shown in (26), constrained to the subject _he_. Finally, a sequence of main verb and infinitive is shown in (27), with the final verb defined using a semantic constraint. (26) [ _he_ - verb - _that_ ] (26a) he remembered that (26b) he said that (26c) he realised that (26d) he discovered that (26e) he promised that (27) [ _they_ - verb - part - <583> ] (27a) they began to draw (27b) they threatened to destroy (27c) they chose to assert (27d) they wanted to persuade (27e) they began to look A more complex passive construction is shown in (28), containing both a semantic constraint on the main verb as well as an adpositional argument. Finally, a main clause with an existential _there_ as subject is shown in (29). As with the clausal constructions, these sentential constructions overlap with verbal constructions, thus illustrating the problem of parsing as clipping (cf. Section 5). (28) [ noun - _are_ - adv - <830> - adp ] (28a) villages are thickly scattered about (28b) recruits are never measured for (28c) substances are universally regarded as (28d) lines are then drawn from
Figure 1: Distribution of Construction Types in the Grammar
(29) [ _there_ - verb - _a_ - noun - adp ] (29a) there was a kind of (29b) there is a habit of (29c) there were a number of (29d) there were a couple of (29e) there came a sort of The final category of constructions is fixed idioms, which here are mainly lexical constructions. These have a very limited number of types for each construction because the constraints are lexical: _in favor of_, _seems to be_, _all the best_, or _no matter_ adv. Taken together, the categories illustrated in this section describe the contents of the learned constructicon. A quantitative analysis of the distribution of construction types and their properties follows in the next section.
### Marginal Examples of Categories
Not all constructions that are classified as belonging to a given category are equally good examples of that category. This section provides a few examples of such marginal tokens in order to provide a more transparent picture of the grammar as a whole. Starting with a construction categorized as adjectival in (30), we could also see this being categorized as a nominal construction. The reason behind this annotation decision is that the overall unit is used to describe a part of some piece of writing. (30) [ _beginning_ - adp - det - noun ] (30a) beginning of this note (30b) beginning of the article A marginal example of a nominal construction is shown in (31). Here, this sequence of noun and adpositional phrase, when taken in context, is quite likely to be two separate arguments of a double object verb phrase: for example, "They [ran [this country] [with the help...]]. However, the construction itself only includes the two arguments on their own. At the same time, (31) would clip together nicely with a verbal construction (cf. Section 5).
(31) [ _this_ - noun - adp - _the_ - noun ] (31a) this country with the help (31b) this morning to the surprise (32) [ verb - _by_ - det - <88> ] (32a) occupied by a foreign (32b) used by the american A final marginal example is shown in (32), here within the verbal category. This example is a passive verb together with a prepositional phrase that expresses the agent. The issue here is that only part of the noun phrase specifying the agent is explicitly defined, and the slot constraint is semantic. From the perspective of clipping constructions, many noun phrases could be merged here but would not experience the same emergent relationships between slot-constraints. In other words, the impact of the semantic constraint would not transcend the construction boundary. These examples are meant to show some weaknesses of both the categorization scheme and the constructions themselves.
## 4 Distribution of Construction Types
The first step in quantifying the contents of the constructicon is to estimate the relative distribution across these nine categories. This is shown in Figure 1 using annotations of 20% of the grammar to estimate the overall distribution. The y-axis contains a bar chart for each category of construction and the x-axis shows the percent of the constructicon which falls into that category. Thus, for example, the most frequent type of construction is _verbal_ at 33.7% of the grammar, followed by _nominal_ at 21.7% and _sentential_ at 18.3%. This distribution is not surprising given that verbs and nouns are the most common open-class lexical items and that sentential clauses form the basic structure of sentences. The next step is to measure the frequency of each construction and the number of its unique types, thus capturing its productivity. These measures of frequency and productivity are corpus-specific in the sense that different constructions are more likely to be used in specific contexts or by specific populations. We thus consider 12 distinct corpora of 1 million words each, six representing distinct registers and six representing distinct populations within the same register.

\begin{table} \begin{tabular}{r|r r|r r|r r|r r|r r|r r} \hline & \multicolumn{2}{c|}{**Blogs**} & \multicolumn{2}{c|}{**Comments**} & \multicolumn{2}{c|}{**Parliament**} & \multicolumn{2}{c|}{**Gutenberg**} & \multicolumn{2}{c|}{**Reviews**} & \multicolumn{2}{c}{**Wikipedia**} \\ & _Freq_ & _Type_ & _Freq_ & _Type_ & _Freq_ & _Type_ & _Freq_ & _Type_ & _Freq_ & _Type_ & _Freq_ & _Type_ \\ \hline _Adjectival_ & 57 & 36 & 69 & 43 & 66 & 40 & 79 & 59 & 80 & 45 & 73 & 43 \\ _Adpositional_ & 207 & 141 & 222 & 150 & 433 & 215 & 401 & 272 & 221 & 145 & 327 & 181 \\ _Adverbial_ & 118 & 87 & 107 & 80 & 117 & 79 & 95 & 80 & 127 & 88 & 56 & 45 \\ _Idiom_ & 32 & 3 & 33 & 2 & 54 & 13 & 12 & 4 & 27 & 3 & 13 & 2 \\ _Nominal_ & 95 & 82 & 128 & 109 & 261 & 184 & 189 & 163 & 123 & 101 & 179 & 138 \\ _Sentential_ & 199 & 115 & 144 & 103 & 176 & 107 & 144 & 110 & 195 & 111 & 109 & 77 \\ _Clausal_ & 156 & 99 & 157 & 112 & 182 & 117 & 154 & 112 & 152 & 97 & 70 & 58 \\ _Transitional_ & 102 & 75 & 96 & 77 & 103 & 72 & 107 & 89 & 108 & 82 & 49 & 43 \\ _Verbal_ & 137 & 104 & 143 & 116 & 188 & 142 & 139 & 122 & 144 & 108 & 116 & 86 \\ \hline \end{tabular} \end{table} Table 1: Mean Frequency and Productivity of Constructions by Category and Register

Starting with a comparison across registers, Table 1 shows the mean frequency of tokens and the mean number of types for each class of constructions in each register-specific corpus.
For example, the Project Gutenberg corpus has significantly more types per adpositional construction than the corpus of blogs. While some categories of construction are more common in the grammar, the measures in Table 1 take the average for each category. While there are more verbal constructions in the grammar, for example, adpositional and sentential constructions have more tokens per construction. The frequency of each category of construction (i.e., the mean number of tokens) also provides a view of the grammatical differences between these six registers. For instance, blogs contain fewer adpositional constructions than other registers while published books and speeches in parliament contain approximately twice as many overall. Wikipedia articles contain many fewer cases of clausal and transitional constructions, indicating a register with fewer embedded clauses. Further, blogs have nearly twice as many sentential constructions (i.e., base main clauses) as Wikipedia, but many fewer adpositional phrases. This would indicate that information can be packaged in short sentences or in additional adpositional constructions, depending on the register. Note that another set of Wikipedia corpora was available during the grammar learning process, so that the reduced frequencies of these types are not simply a matter of under-fitting the register. The next question is whether the differences in frequency of individual constructions across corpora are random or whether they reveal underlying relationships between the corpora themselves. In other words, given the frequencies of each construction in the grammar, we would expect a meaningful grammar to create meaningful relationships between conditions. A _condition_ in this case refers to the register or the population represented by the corpus. This is shown in Figure 2 using Burrows' Delta to calculate the distances between corpora and then hierarchical clustering to visualize relationships based on these distances.
Figure 2: Clustering of Corpora Using Burrows' Delta, Register (Above) and Dialect (Below)
The figure shows relationships between registers on the top. The two core clusters are modern formal documents (eu and wikipedia) and digital crowd-sourced documents (comments and blogs and reviews). The books from Project Gutenberg, from a different historical period, are an outlier. On the bottom the figure shows relationships between different dialects within the same register (tweets). The core pairs are the countries which are closest in geographic terms: Ireland and the UK together with Australia and New Zealand, with Canada and the US as a distant pair. In both cases, we see that the frequencies of constructions in the grammar provide meaningful relationships between both registers and dialects. This is important because it shows that the differing frequencies of constructions are not simply arbitrary patterns from this particular model but also reproduce two sets of real-world relationships.
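As a concrete illustration of this clustering step, the following is a minimal sketch, not the paper's own code, of computing Burrows' Delta over per-construction frequencies and clustering the corpora; the frequency matrix here is placeholder random data.

```python
# Burrows' Delta + hierarchical clustering sketch (placeholder data).
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

labels = ["blogs", "comments", "parliament", "gutenberg", "reviews", "wikipedia"]
freqs = np.random.rand(len(labels), 12856)  # rows: corpora; cols: construction frequencies

z = (freqs - freqs.mean(axis=0)) / freqs.std(axis=0)  # z-score each construction
delta = pdist(z, metric="cityblock") / z.shape[1]     # mean |z-difference| = Delta

dendrogram(linkage(delta, method="average"), labels=labels)
plt.show()
```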
## 5 Clipping: The Problem of Parsing
The analysis in this paper has categorized and described the kinds of constructions that are contained in a learned constructicon, has quantified the frequency and productivity of each kind, and has shown that the usage of these constructions can reconstruct meaningful relationships between corpora. The analysis of construction types in Section 3, however, reveals a major challenge in this approach to computational CxG: the unification or _clipping together_ of these constructions into complete utterances during parsing (Jackendoff, 2013). The idea in CxG is that word-forms are not the basic building blocks of grammar. Rather, the types of constructions analyzed in this paper form the basic units, themselves built out of slot-constraints that depend on basic category formation processes. With the exception of short utterances, however, no single construction provides a complete description of a linguistic form. These constructions must be clipped together: a sentential construction, for example, joined with a verbal construction and then a nominal construction. CxG posits a continuum between the lexicon and the grammar, so that the constructicon contains basic units at different levels of abstraction. We must distinguish, however, between **first-order constructions** of the type discussed in this paper and **second-order constructions** which are formed by clipping together these lower constructions. A complete constructicon would thus also contain emergent structures formed from multiple first-order constructions. As a desideratum for future developments, we can conceptualize two types of second-order constructions: First, slot-recursion would allow a higher-order construction to contain first-order constructions as slot-fillers. For example, the set of sentential constructions could be expanded by allowing verbal constructions to fill verbal slots. Second, slot-clipping would allow two overlapping constructions to be merged, for instance connecting a transitional construction with a verbal construction. An overlapping shared slot-constraint would license such slot-clipping unifications.
## 6 Conclusions
The main contribution of this paper has been to provide a qualitative linguistic analysis of a learned construction grammar, providing a new perspective on grammars which have previously been evaluated from a quantitative perspective. We presented a division of construction types into nine categories such as _Verbal_ and _Nominal_, with those two open-class categories the most common. The discussion of examples shows both the range and the robustness of computational construction grammar. This linguistic analysis does point to two current weaknesses: First, not all constructions fit nicely into the categories used for annotation (cf. Section 3.1). A truly usage-based grammar does not necessarily align with introspection-based analysis, especially in regards to boundaries between constructions. Introspection often focuses on constructions which are complete or self-contained units, while the computational constructions place common pivot points at boundaries. Second, these constructions do not generally describe entire utterances, so that we must consider a form of clipping to provide complete parses (cf. Section 5). From a quantitative perspective, the analysis of register and dialectal differences shows that the productivity of these constructions also reproduces expected relationships between corpora. This is important for providing an external evaluation of the grammar: the differences between registers, for example, show how functions which are salient in a given communicative situation ultimately drive constructional frequencies. In other words, the frequencies of different types of constructions reflect meaningful patterns in real-world usage.
2301.03997
A Q-operator for open spin chains II: boundary factorization
One of the features of Baxter's Q-operators for many closed spin chain models is that all transfer matrices arise as products of two Q-operators with shifts in the spectral parameter. In the representation-theoretical approach to Q-operators, underlying this is a factorization formula for L-operators (solutions of the Yang-Baxter equation associated to particular infinite-dimensional representations). To have such a formalism to open spin chains, one needs a factorization identity for solutions of the reflection equation (boundary Yang-Baxter equation) associated to these representations. In the case of quantum affine $\mathfrak{sl}_2$ and diagonal K-matrices, we derive such an identity using the recently formulated theory of universal K-matrices for quantum affine algebras.
Alec Cooper, Bart Vlaar, Robert Weston
2023-01-10T14:42:39Z
http://arxiv.org/abs/2301.03997v3
# A Q-operator for open spin chains II: boundary factorization
###### Abstract.
In the algebraic approach to Baxter's Q-operators for the closed Heisenberg XXZ spin chain, certain infinite-dimensional 'prefundamental' representations of the q-deformed Borel subalgebra play a central role. To extend this formalism to open spin chains, one needs a factorization identity for particular solutions of the reflection equation associated to these representations. In the case of quantum affine \(\mathfrak{sl}_{2}\), we derive such an identity using the recent theory of universal K-matrices for quantum affine algebras.
###### Contents
* 1 Introduction
* 2 Quantum affine \(\mathfrak{sl}_{2}\) and its universal R-matrix
* 3 The augmented q-Onsager algebra, its twist and its universal K-matrix
* 4 Borel representations in terms of the q-oscillator algebra
* 5 L-operators and R-operators
* 6 K-matrices
* 7 Fusion intertwiners revisited
* 8 Boundary factorization identity
* 9 Discussion
* A Deformed Pochhammer symbols and exponentials
* B Explicit expressions for R-operators
* C An alternative proof of the main theorem

## 1. Introduction
### Background and overview
Baxter first introduced his Q-operator in [1, 2] as an auxiliary tool in the derivation of Bethe Equations for the eigenvalues of the 8-vertex model transfer matrix. The key characters in the story are the transfer matrix \(\mathcal{T}(z)\) and the Q-operator \(\mathcal{Q}(z)\). A detailed description of the essential properties of \(\mathcal{T}(z)\) and \(\mathcal{Q}(z)\) can be found in [1] (also see [21] and references therein); the key relation that they satisfy that leads directly to the Bethe equations is of the form \[\mathcal{T}(z)\mathcal{Q}(z)=\alpha_{+}(z)\mathcal{Q}(qz)+\alpha_{-}(z)\mathcal{Q}(q^{-1}z), \tag{1.1}\] where \(\alpha_{\pm}(z)\) are meromorphic functions and \(q\in\mathbb{C}^{\times}\) is not a root of unity. In the original papers of Baxter, the operator \(\mathcal{Q}(z)\) was constructed by a brilliant but ad hoc argument; the representation-theoretic construction of \(\mathcal{Q}(z)\) had to wait more than 20 years until the work of Bazhanov, Lukyanov and Zamolodchikov [1, 1, 2]. The main idea of the latter approach is to construct both \(\mathcal{T}(z)\) and \(\mathcal{Q}(z)\) as partial traces over different representations of the universal R-matrix \(\mathcal{R}\) of \(U_{q}(\widehat{\mathfrak{sl}}_{2})\). The operator \(\mathcal{T}(z)\) is a twisted trace over a two-dimensional \(U_{q}(\widehat{\mathfrak{sl}}_{2})\)-representation \(\Pi_{z}\), and \(\mathcal{Q}(z)\) is a similarly twisted trace over an infinite-dimensional \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-representation \(\rho_{z}\), where \(\widehat{\mathfrak{b}}^{+}\) is the upper Borel subalgebra of \(\widehat{\mathfrak{sl}}_{2}\) (the relevant representations are defined in Section 4.4 of the current paper). The relation (1.1) for closed spin chains then follows immediately by considering a short exact sequence (SES) of \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-representations with \(\Pi_{z}\otimes\rho_{z}\) as its 'middle' object (cf. [15, Lem. 2 (2)]). The extension of this approach to Q-operators for the open XXZ chain was carried out in [16] and details and references can be found therein.
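To recall how (1.1) yields the Bethe equations: on a joint eigenvector, write \(q(z)\) for the eigenvalue of \(\mathcal{Q}(z)\) and assume, as is standard in this setting, that \(q(z)\) vanishes at the Bethe roots \(z_{1},\dots,z_{M}\). Setting \(z=z_{j}\) in (1.1) kills the left-hand side, giving \[0=\alpha_{+}(z_{j})\,q(qz_{j})+\alpha_{-}(z_{j})\,q(q^{-1}z_{j}),\qquad\text{i.e.}\qquad\frac{\alpha_{+}(z_{j})}{\alpha_{-}(z_{j})}=-\frac{q(q^{-1}z_{j})}{q(qz_{j})},\qquad j=1,\dots,M.\]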
For an arbitrary untwisted affine Lie algebra \(\widehat{\mathfrak{g}}\) with upper Borel subalgebra \(\widehat{\mathfrak{b}}^{+}\), the level-0 representation theory of \(U_{q}(\widehat{\mathfrak{b}}^{+})\) was studied in [11]; for the general connection with the theory of Baxter's Q-operators see [10]. As well as this direct SES route to equation (1.1), there is an alternative strategy which we refer to as the 'factorization approach'; for closed chains see [14, 15, 16, 17, 18, 19, 20]. In fact, this approach was the one taken by Bazhanov, Lukyanov and Zamolodchikov. The work that developed this formalism in language most similar to the current paper, in particular the formulation of the intertwining property of the operator \(\mathcal{O}\) (defined in Section 4.5 of the current paper), is [10]. In this approach, a second operator \(\overline{\mathcal{Q}}(z)\) with similar properties to \(\mathcal{Q}(z)\) is introduced as a trace of \(\mathcal{R}\) over another infinite-dimensional representation \(\bar{\varrho}_{z}\) of \(U_{q}(\widehat{\mathfrak{b}}^{+})\). The affinized version \(\upsilon_{z}\) of the \(U_{q}(\mathfrak{sl}_{2})\)-Verma module is also considered, as well as another infinite-dimensional filtered \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-module \(\phi_{z}\); these two representations depend on a complex parameter \(\mu\). The key connection between all representations is given by Theorem 4.4, which expresses the fact that particular pairwise tensor products are isomorphic as \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-modules by means of an explicit intertwiner \(\mathcal{O}\). At the level of the L-operators this implies \[\mathcal{O}_{12}\mathcal{L}_{\varrho}(q^{\mu}z)_{13}\mathcal{L}_{\bar{\varrho}}(q^{-\mu}z)_{23}=\mathcal{L}_{\upsilon}(z)_{13}\mathcal{L}_{\phi}(z)_{23}\mathcal{O}_{12}, \tag{1.2}\] (see Theorem 5.2 of the current paper), which is referred to as _factorization_ of the Verma module L-operator \(\mathcal{L}_{\upsilon}(z)\) in terms of the L-operators \(\mathcal{L}_{\varrho}(z)\) and \(\mathcal{L}_{\bar{\varrho}}(z)\) which are used to define \(\mathcal{Q}(z)\), \(\overline{\mathcal{Q}}(z)\) (the transfer matrix corresponding to the additional operator \(\mathcal{L}_{\phi}(z)\) is trivial). Defining \(\mathcal{T}_{\mu}(z)\) to be the transfer matrix that is the trace over the \(\mu\)-dependent representation \(\upsilon_{z}\) of \(\mathcal{R}\) in the first space, Theorem 5.2 yields a relation of the following form: \[\mathcal{T}_{\mu}(z)\varpropto\mathcal{Q}(zq^{-\mu/2})\overline{\mathcal{Q}}(zq^{\mu/2}). \tag{1.3}\] The SES associated with \(\upsilon_{z}\) in the case that \(\mu\) is an integer then leads to the key relation (1.1).
### Present work
The main result of the current paper is the following boundary analogue of Theorem 5.2, which we call the _boundary factorization identity_: \[\mathcal{K}_{\upsilon}(z)_{1}\mathcal{R}_{\upsilon\phi}(z^{2})\mathcal{K}_{\phi}(z)_{2}\,\mathcal{O}=\mathcal{O}\mathcal{K}_{\varrho}(q^{\mu}z)_{1}\mathcal{R}_{\varrho\bar{\varrho}}(z^{2})\mathcal{K}_{\bar{\varrho}}(q^{-\mu}z)_{2} \tag{1.4}\] where \(z\) is a formal parameter (which can be specialized to generic complex numbers). The precise statement is given in Theorem 8.1. This formula involves the actions of the universal R-matrix of \(U_{q}(\widehat{\mathfrak{sl}}_{2})\) in tensor products of the various infinite-dimensional representations introduced. In addition, the various K-operators are diagonal solutions of reflection equations (boundary Yang-Baxter equations) [14, 15].
They arise as actions of the universal K-matrix associated to the augmented q-Onsager algebra, a particular coideal subalgebra of \(U_{q}(\widehat{\mathfrak{sl}}_{2})\), which featured also in e.g. [1, 16, 17, 18]. More precisely, diagonal solutions of the reflection equation with a free parameter, considered by Sklyanin in his 2-boundary version of the algebraic Bethe ansatz in [15], are intertwiners for this algebra. Equation (1.4) has a natural diagrammatic formulation; see Section 8. In a subsequent paper the authors will explain how (1.4) yields relations analogous to (1.3) and hence (1.1) for open chains. The proof of (1.4) and of the well-definedness of the various K-operators is an application of the universal K-matrix formalism developed in [1, 2] which is built on the earlier works [1, 15]. More precisely, it relies on an extension of the theory of K-matrices for finite-dimensional representations of quantum affine algebras in [2] to level-0 representations of \(U_{q}(\widehat{\mathfrak{b}}^{+})\), which we discuss in Section 3. The key point is that, for the special case of the augmented q-Onsager algebra, there exists a universal element \(\mathcal{K}\), centralizing the augmented q-Onsager algebra up to a twist, with three desirable properties.
1. The element \(\mathcal{K}\) lies in (a completion of) the Borel subalgebra \(U_{q}(\widehat{\mathfrak{b}}^{+})\), so that the resulting family of linear maps is itself compatible with \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-intertwiners (which play an essential role in the algebraic theory of Baxter Q-operators).
2. The coproduct of \(\mathcal{K}\) is of a particularly simple form, which is relevant for the proof of the boundary factorization identity.
3. The linear operators accomplishing the action of \(\mathcal{K}\) in level-0 representations satisfy the untwisted reflection equation.
Thus we obtain the factorization identity (1.4) as a natural consequence of the representation theory of \(U_{q}(\widehat{\mathfrak{sl}}_{2})\). The main benefit of this universal approach is that laborious linear-algebraic computations are avoided; in particular, we do not even need explicit expressions for the various factors. Nevertheless, we do provide these explicit expressions, as we expect them to be useful in further work in this direction. We also give an alternative computational proof of (1.4), to illustrate the power of the universal approach. This is a 'boundary counterpart' to the level-0 theory of the universal R-matrix, which we also include for reference. We do this in Section 2, staying close to the original work by Drinfeld and Jimbo [11, 12, 13, 14]. In particular, Theorem 2.4 states that the grading-shifted universal R-matrix has a well-defined action as a linear-operator-valued formal power series on any tensor product of level-0 representations of \(U_{q}(\widehat{\mathfrak{b}}^{+})\) and \(U_{q}(\widehat{\mathfrak{b}}^{-})\) (including finite-dimensional representations). Often this well-definedness is tacitly assumed, see e.g. [18, Sec. 2.3]. It also follows from the Khoroshkin-Tolstoy factorization [10] of the universal R-matrix, see [1, 1, 12, 13]; however we are unaware of such a factorization for the universal K-matrix.
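For orientation, we recall the reflection equation and the diagonal one-parameter K-matrix referred to above, in Sklyanin's additive conventions (which may differ from the multiplicative conventions used later in this paper): \[R_{12}(u-v)\,K_{1}(u)\,R_{21}(u+v)\,K_{2}(v)=K_{2}(v)\,R_{12}(u+v)\,K_{1}(u)\,R_{21}(u-v),\] and for the XXZ R-matrix this equation admits the standard diagonal solution \[K(u)=\begin{pmatrix}\sinh(\xi+u)&0\\ 0&\sinh(\xi-u)\end{pmatrix},\] with \(\xi\) the free boundary parameter.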
### Outline
In Section 2 we study the action of the universal R-matrix of quantum affine \(\mathfrak{sl}_{2}\) on tensor products of level-0 representations of Borel subalgebras. Section 3 is a 'boundary counterpart' to Section 2, where we consider the augmented q-Onsager algebra. We show that its _(semi-)standard_ universal K-matrix, see [2, 2], has a well-defined action on level-0 representations of \(U_{q}(\widehat{\mathfrak{b}}^{+})\), see Theorem 3.6, and, with a simple correction, satisfies the above three desirable properties. In Section 4 we discuss the relevant representations of \(U_{q}(\widehat{\mathfrak{b}}^{+})\) in terms of (an extension of) the q-oscillator algebra, as well as the \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-intertwiner \(\mathcal{O}\). Various solutions of Yang-Baxter equations are obtained in Section 5 as actions of the universal R-matrix in tensor products of Borel representations. Similarly, in Section 6 we introduce solutions of the reflection equation as actions of the universal K-matrix in Borel representations. We revisit the SES approach to Baxter's Q-operators for the open XXZ spin chain in light of the universal K-matrix formalism in Section 7. Next, in Section 8 we give a diagrammatic motivation of the boundary factorization identity (1.4) for the open XXZ spin chain, and provide a short proof using the level-0 theory developed in Section 3. Finally, in Section 9 we summarize the main results and point out future work. Some supplementary material is given in appendices. Namely, Appendix A provides some background material on deformed Pochhammer symbols and exponentials. Appendix B contains derivations of the explicit expressions of the two R-operators appearing in (1.4). In Appendix C we provide an alternative proof of the boundary factorization identity (1.4), relying on the explicit expressions of all involved factors. The key tool of this proof is provided by Lemma C.1, which consists of two product formulas involving deformed Pochhammer symbols and exponentials.
### Acknowledgments
B.V. would like to thank A. Appel, P. Baseilhac and N. Reshetikhin for useful discussions. This research was supported in part by funding from EPSRC grant EP/R009465/1, from the Simons Foundation and the Centre de Recherches Mathematiques (CRM), through the Simons-CRM scholar-in-residence programme, and by the Galileo Galilei Institute (GGI) scientific programme on 'Randomness, Integrability and Universality'. R.W. would like to acknowledge and thank CRM and the GGI for their hospitality and support.
### Data availability statement
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
## 2. Quantum affine \(\mathfrak{sl}_{2}\) and its universal R-matrix
In this section we study the action of the universal R-matrix of the quasitriangular Hopf algebra quantum affine \(\mathfrak{sl}_{2}\) on tensor products of level-0 representations (including infinite-dimensional representations) of the Borel subalgebras. We give a basic survey of the algebras involved, the representations and the quasitriangular structure and show that the universal R-matrix has a well-defined action on tensor products of all level-0 representations of the Borel subalgebras.
### General overview of finite-dimensional R-matrix theory
To formulate a quantum integrable system in terms of a transfer matrix built out of R-matrices, one needs finite-dimensional representations of a suitable quasitriangular Hopf algebra. To get trigonometric R-matrices, one can proceed as follows.
Let \(\mathfrak{g}\) be a finite-dimensional simple Lie algebra and note that the untwisted loop algebra \(L\mathfrak{g}=\mathfrak{g}\otimes\mathbb{C}[z,z^{-1}]\) has a central extension \(\widehat{\mathfrak{g}}=L\mathfrak{g}\oplus\mathbb{C}c\). In turn, this can be extended to \(\widetilde{\mathfrak{g}}=\widehat{\mathfrak{g}}\oplus\mathbb{C}d\) where \(d\) satisfies \([d,\cdot]=z\frac{\mathrm{d}}{\mathrm{d}z}\). For a fixed Cartan subalgebra \(\mathfrak{h}\subset\mathfrak{g}\) we define \[\widehat{\mathfrak{h}}:=\mathfrak{h}\oplus\mathbb{C}c,\qquad\widetilde{\mathfrak{h}}:=\widehat{\mathfrak{h}}\oplus\mathbb{C}d.\] The Lie algebra \(\widetilde{\mathfrak{g}}\) is a Kac-Moody algebra and hence has a non-degenerate bilinear form \((\cdot,\cdot)\), which restricts to a non-degenerate bilinear form on \(\widetilde{\mathfrak{h}}\). See e.g. [10] for more detail. The universal enveloping algebras \(U(\widehat{\mathfrak{g}})\) and \(U(\widetilde{\mathfrak{g}})\) can be q-deformed, yielding non-cocommutative Hopf algebras (Drinfeld-Jimbo quantum groups) \(U_{q}(\widehat{\mathfrak{g}})\) and \(U_{q}(\widetilde{\mathfrak{g}})\), see e.g. [11, 12, 13, 14, 15]. The nondegenerate bilinear form \((\cdot,\cdot)\) lifts to \(U_{q}(\widetilde{\mathfrak{g}})\), inducing a pairing between the q-deformed Borel subalgebras and hence a quasitriangular structure. On the other hand, the subalgebra \(U_{q}(\widehat{\mathfrak{g}})\) has a rich finite-dimensional representation theory, see e.g. [13, 14, 15, 16]. The grading-shifted universal R-matrix has a well-defined action on tensor products of finite-dimensional representations of \(U_{q}(\widehat{\mathfrak{g}})\) as a formal power series, see e.g. [14, 15, 16, 17]. We now discuss the extension of this theory to level-0 representations of Borel subalgebras, including various infinite-dimensional representations. We will restrict to the case \(\mathfrak{g}=\mathfrak{sl}_{2}\) (but the theory naturally generalizes to any quantum untwisted affine algebra).
### Quantum affine \(\mathfrak{sl}_{2}\)
Denoting the canonical Cartan generator of \(\mathfrak{sl}_{2}\) by \(h_{1}\), \(\widehat{\mathfrak{h}}\) is spanned by \(h_{0}=c-h_{1}\) and \(h_{1}\). The bilinear form on \(\widetilde{\mathfrak{h}}\) is defined by \[(h_{0},h_{0})=(h_{1},h_{1})=-(h_{0},h_{1})=2,\qquad(h_{0},d)=1,\qquad(h_{1},d)=(d,d)=0.\] Fix \(\epsilon\in\mathbb{C}\) such that \(q=\exp(\epsilon)\) is not a root of unity. For all \(\mu\in\mathbb{C}\) we will denote \(\exp(\epsilon\mu)\) by \(q^{\mu}\). First, we define \(U_{q}(\mathfrak{g})\) as the algebra generated over \(\mathbb{C}\) by \(e\), \(f\) and invertible \(k\) subject to the relations \[ke=q^{2}ek,\qquad kf=q^{-2}fk,\qquad[e,f]=\frac{k-k^{-1}}{q-q^{-1}}. \tag{2.1}\] The following assignments determine a coproduct \(\Delta:U_{q}(\mathfrak{g})\to U_{q}(\mathfrak{g})\otimes U_{q}(\mathfrak{g})\): \[\Delta(e)=e\otimes 1+k\otimes e,\qquad\Delta(f)=f\otimes k^{-1}+1\otimes f,\qquad\Delta(k^{\pm 1})=k^{\pm 1}\otimes k^{\pm 1}. \tag{2.2}\] It uniquely extends to a Hopf algebra structure on \(U_{q}(\mathfrak{g})\). Now the main algebra of interest, \(U_{q}(\widehat{\mathfrak{g}})\), arises as follows.
**Definition 2.1** (Quantum affine \(\mathfrak{sl}_{2}\)).: We denote by \(U_{q}(\widehat{\mathfrak{g}})\) the Hopf algebra generated by two triples \(\{e_{i},f_{i},k_{i}\}\) (\(i\in\{0,1\}\)), such that:
1. the following assignments for \(i\in\{0,1\}\) define Hopf algebra embeddings from \(U_{q}(\mathfrak{g})\) to \(U_{q}(\widehat{\mathfrak{g}})\): \[e\mapsto e_{i},\qquad f\mapsto f_{i},\qquad k\mapsto k_{i}; \tag{2.3}\]
2. the following cross relations are satisfied: \[k_{i}k_{j}=k_{j}k_{i},\qquad k_{i}e_{j}=q^{-2}e_{j}k_{i},\qquad k_{i}f_{j}=q^{2}f_{j}k_{i},\qquad[e_{i},f_{j}]=0, \tag{2.4}\] \[[e_{i},[e_{i},[e_{i},e_{j}]_{q^{2}}]_{1}]_{q^{-2}}=[f_{i},[f_{i},[f_{i},f_{j}]_{q^{2}}]_{1}]_{q^{-2}}=0, \tag{2.5}\] for \(i\neq j\), where we have introduced the notation \([x,y]_{p}:=xy-pyx\). \(\varnothing\)

Consider the affine Cartan subalgebra \(\widehat{\mathfrak{h}}=\mathbb{C}h_{0}\oplus\mathbb{C}h_{1}\). Note that its q-deformation \(U_{q}(\widehat{\mathfrak{h}})=\langle k_{0}^{\pm 1},k_{1}^{\pm 1}\rangle\) is isomorphic to the group algebra of the affine co-root lattice \[\widehat{Q}^{\vee}=\mathbb{Z}h_{0}+\mathbb{Z}h_{1}\subset\widehat{\mathfrak{h}}. \tag{2.6}\] The nontrivial diagram automorphism \(\Phi\) of the affine Dynkin diagram, i.e. the nontrivial permutation of the index set \(\{0,1\}\), lifts to a linear automorphism \(\Phi\) of \(\widehat{\mathfrak{h}}\) which preserves the lattice \(\widehat{Q}^{\vee}\). Accordingly, it also lifts to an involutive Hopf algebra automorphism of \(U_{q}(\widehat{\mathfrak{g}})\), also denoted \(\Phi\), via the assignments \[\Phi(e_{i})=e_{\Phi(i)},\qquad\Phi(f_{i})=f_{\Phi(i)},\qquad\Phi(k_{i}^{\pm 1})=k_{\Phi(i)}^{\pm 1}\qquad\text{for }i\in\{0,1\}. \tag{2.7}\]
### Quantized Kac-Moody algebra
To define the quantized Kac-Moody algebra \(U_{q}(\widetilde{\mathfrak{g}})\), choose an extension \(\widetilde{Q}^{\vee}\) of \(\widehat{Q}^{\vee}\) (a lattice of rank \(3\) contained in \(\widetilde{\mathfrak{h}}\)) preserved by \(\Phi\).
_Remark 2.2_.: The standard extension of the affine co-root lattice \(\mathbb{Z}h_{0}+\mathbb{Z}h_{1}+\mathbb{Z}d\) is not so convenient for us, mainly in view of the construction of the universal K-matrix in Section 3.3. Namely, extensions of \(\Phi\) to \(\widetilde{\mathfrak{h}}\) which are compatible with the bilinear form on \(\widetilde{\mathfrak{h}}\) do not preserve this lattice, see also [16, Sec. 2.6] and [14, Sec. 3.14]. The most convenient choice is to use the _principal grading_ and set \[d_{\mathsf{pr}}:=-\frac{1}{8}h_{0}+\frac{3}{8}h_{1}+2d\in\widetilde{\mathfrak{h}}, \tag{2.8}\] so that \[(d_{\mathsf{pr}},h_{0})=(d_{\mathsf{pr}},h_{1})=1,\qquad(d_{\mathsf{pr}},d_{\mathsf{pr}})=0.\] Now we set \(\Phi(d_{\mathsf{pr}})=d_{\mathsf{pr}}\) and obtain a linear automorphism \(\Phi\) of \(\widetilde{\mathfrak{h}}\) preserving the lattice \[\widetilde{Q}^{\vee}:=\mathbb{Z}h_{0}+\mathbb{Z}h_{1}+\mathbb{Z}d_{\mathsf{pr}}.\] The corresponding dual map on \(\widetilde{\mathfrak{h}}^{*}\), also denoted by \(\Phi\), preserves the extended affine weight lattice \[\widetilde{P}=\{\lambda\in\widetilde{\mathfrak{h}}^{*}\,|\,\lambda(\widetilde{Q}^{\vee})\subseteq\mathbb{Z}\}. \tag{2.9}\] Accordingly, we define \(U_{q}(\widetilde{\mathfrak{g}})\) as the Hopf algebra obtained by extending \(U_{q}(\widehat{\mathfrak{g}})\) by a group-like element1\(g\) satisfying Footnote 1: It is equal to \(\exp(\epsilon d_{\mathsf{pr}})\) if we define \(U_{q}(\widetilde{\mathfrak{g}})\) as a topological Hopf algebra over \(\mathbb{C}[[\epsilon]]\). \[ge_{i}=qe_{i}g,\qquad gf_{i}=q^{-1}f_{i}g,\qquad gk_{i}=k_{i}g. \tag{2.10}\] Hence, the assignment \(\Phi(g)=g\) together with (2.7) defines an involutive Hopf algebra automorphism of \(U_{q}(\widetilde{\mathfrak{g}})\).
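As a quick check that \(d_{\mathsf{pr}}\) in (2.8) indeed has the stated pairings, using only the values of the bilinear form fixed above: \[(d_{\mathsf{pr}},h_{0})=-\tfrac{1}{8}(h_{0},h_{0})+\tfrac{3}{8}(h_{1},h_{0})+2(d,h_{0})=-\tfrac{1}{4}-\tfrac{3}{4}+2=1,\] \[(d_{\mathsf{pr}},h_{1})=-\tfrac{1}{8}(h_{0},h_{1})+\tfrac{3}{8}(h_{1},h_{1})+2(d,h_{1})=\tfrac{1}{4}+\tfrac{3}{4}+0=1,\] \[(d_{\mathsf{pr}},d_{\mathsf{pr}})=\tfrac{1}{64}(h_{0},h_{0})+\tfrac{9}{64}(h_{1},h_{1})-\tfrac{6}{64}(h_{0},h_{1})-\tfrac{1}{2}(h_{0},d)+\tfrac{3}{2}(h_{1},d)+4(d,d)=\tfrac{1}{32}+\tfrac{9}{32}+\tfrac{6}{32}-\tfrac{16}{32}=0.\]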
### Co-opposite Hopf algebra structure For any \(\mathbb{C}\)-algebra \(A\), denote by \(\sigma\) the algebra automorphism of \(A\otimes A\) which sends \(a\otimes a^{\prime}\) to \(a^{\prime}\otimes a\) for all \(a,a^{\prime}\in A\). If \(X\in A\otimes A\) we will also write \(X_{21}\) for \(\sigma(X)\). If \(A\) is a bialgebra with coproduct \(\Delta\), the _co-opposite bialgebra_, denoted \(A^{\mathsf{cop}}\), is the bialgebra with the same underlying algebra structure and counit as \(A\) but with \(\Delta\) replaced by \[\Delta^{\mathsf{op}}:=\sigma\circ\Delta \tag{2.11}\] (if \(A\) is a Hopf algebra with invertible antipode \(S\), then \(A^{\mathsf{cop}}\) is also a Hopf algebra with antipode \(S^{-1}\)). The assignments \[\omega(e_{i})=f_{i},\qquad\omega(f_{i})=e_{i},\qquad\omega(k_{i}^{\pm 1})=k_{i }^{\mp 1}\qquad\text{for $i\in\{0,1\}$},\qquad\qquad\omega(g)=g^{-1} \tag{2.12}\] define a bialgebra isomorphism from \(U_{q}(\widehat{\mathfrak{g}})\) to \(U_{q}(\widehat{\mathfrak{g}})^{\mathsf{cop}}\) (in particular, \((\omega\otimes\omega)\circ\Delta=\Delta^{\mathsf{op}}\circ\omega\)) which commutes with \(\Phi\). ### Weight modules We review some basic representation-theoretic notions for \(U_{q}(\widehat{\mathfrak{g}})\) by means of which its universal R-matrix can be described. Consider the commutative subalgebra \[U_{q}(\widehat{\mathfrak{h}})=\langle k_{0}^{\pm 1},k_{1}^{\pm 1},g^{\pm 1}\rangle \subset U_{q}(\widehat{\mathfrak{g}}). \tag{2.13}\] Call a \(U_{q}(\widehat{\mathfrak{g}})\)-module \(M\) a \(U_{q}(\widehat{\mathfrak{h}})\)-weight module if \[M=\bigoplus_{\lambda\in\widetilde{P}}M_{\lambda},\qquad M_{\lambda}=\{m\in M \,|\,k_{i}\cdot m=q^{\lambda(h_{i})}m\text{ for }i\in\{0,1\},\,g\cdot m=q^{\lambda(d_{\mathsf{pr}})}m\}.\] Elements of \(M_{\lambda}\) are said to have weight \(\lambda\). The adjoint action of \(U_{q}(\widehat{\mathfrak{h}})\) (with its generators acting by conjugation) endows \(U_{q}(\widehat{\mathfrak{g}})\) itself with a \(U_{q}(\widehat{\mathfrak{h}})\)-weight module structure, with elements of \(U_{q}(\widehat{\mathfrak{h}})\) of weight \(0\). More precisely, the weights of \(U_{q}(\widehat{\mathfrak{g}})\) are given by the affine root lattice \[\widehat{Q}:=\mathbb{Z}\alpha_{0}+\mathbb{Z}\alpha_{1}\subset\widetilde{P}\] (\(e_{i}\) has weight \(\alpha_{i}\), \(f_{i}\) has weight \(-\alpha_{i}\)). Furthermore, note that \(U_{q}(\widehat{\mathfrak{g}})\) is generated by \(U_{q}(\widehat{\mathfrak{h}})\) and the quantum analogues of the standard nilpotent subalgebras \[U_{q}(\widehat{\mathfrak{n}}^{+})=\langle e_{0},e_{1}\rangle,\qquad U_{q}( \widehat{\mathfrak{n}}^{-})=\langle f_{0},f_{1}\rangle. \tag{2.14}\] The action of \(U_{q}(\widehat{\mathfrak{h}})\) preserves these subalgebras \(U_{q}(\widehat{\mathfrak{n}}^{\pm})\) and the corresponding weights are the monoids \(\pm\widehat{Q}^{+}\) respectively, where \(\widehat{Q}^{+}:=\mathbb{Z}_{\geq 0}\alpha_{0}+\mathbb{Z}_{\geq 0}\alpha_{1}\). 
### Quasitriangularity The universal R-matrix for \(U_{q}(\widehat{\mathfrak{g}})\) is an element of a completion of \(U_{q}(\widehat{\mathfrak{g}})\otimes U_{q}(\widehat{\mathfrak{g}})\) satisfying \[\mathcal{R}\Delta(u)=\Delta^{\text{op}}(u)\mathcal{R}\qquad\text{ for all }u\in U_{q}(\widehat{\mathfrak{g}}), \tag{2.16}\] \[(\Delta\otimes\mathsf{id})(\mathcal{R})=\mathcal{R}_{13}\mathcal{ R}_{23},\qquad\qquad(\mathsf{id}\otimes\Delta)(\mathcal{R})=\mathcal{R}_{13} \mathcal{R}_{12} \tag{2.15}\] and hence \[\mathcal{R}_{12}\mathcal{R}_{13}\mathcal{R}_{23}=\mathcal{R}_{23}\mathcal{R}_ {13}\mathcal{R}_{12}. \tag{2.17}\] Consider the Hopf subalgebras \[U_{q}(\widehat{\mathfrak{b}}^{\pm})=\langle U_{q}(\widehat{\mathfrak{h}}),U_{ q}(\widehat{\mathfrak{n}}^{\pm})\rangle.\] The element \(\mathcal{R}\) arises as the canonical element of the bialgebra pairing between \(U_{q}(\widehat{\mathfrak{b}}^{+})\) and the algebra \(U_{q}(\widehat{\mathfrak{b}}^{-})^{\mathsf{op}}\) (the bialgebra isomorphic as a coalgebra to \(U_{q}(\widehat{\mathfrak{b}}^{-})\) but with the opposite multiplication), see [10, 11]. In particular, \(\mathcal{R}\) lies in a completion of \(U_{q}(\widehat{\mathfrak{b}}^{+})\otimes U_{q}(\widehat{\mathfrak{b}}^{-})\). Further, invariance properties of the bialgebra pairing imply \[(\omega\otimes\omega)(\mathcal{R})=\mathcal{R}_{21}, \tag{2.19}\] \[(\Phi\otimes\Phi)(\mathcal{R})=\mathcal{R}. \tag{2.18}\] Also, this pairing has a non-degenerate restriction to \(U_{q}(\widehat{\mathfrak{n}}^{+})_{\lambda}\times U_{q}(\widehat{\mathfrak{n} }^{-})_{-\lambda}\) for all \(\lambda\in\widehat{Q}^{+}\); denote the canonical element of this restricted pairing by \(\Theta_{\lambda}\). With our choice of the coproduct we have \[\mathcal{R}=\Theta^{-1}\cdot\kappa^{-1},\qquad\Theta=\sum_{\lambda\in\widehat{Q }^{+}}\Theta_{\lambda}, \tag{2.20}\] A priori, \(\Theta\) acts naturally on \(U_{q}(\widehat{\mathfrak{g}})\)-modules with a locally finite action of \(U_{q}(\widehat{\mathfrak{n}}^{+})\) or \(U_{q}(\widehat{\mathfrak{n}}^{-})\). We briefly explain one possible definition2 of the element \(\kappa\). The non-degenerate bilinear form \((\cdot,\cdot)\) on \(\widehat{\mathfrak{h}}\) induces one on \(\widehat{\mathfrak{h}}^{\ast}\), which we denote by the same symbol. If \(M,M^{\prime}\) are \(U_{q}(\widehat{\mathfrak{h}})\)-weight modules we define a linear map \(\kappa_{M}:M\otimes M^{\prime}\to M\otimes M^{\prime}\) by stipulating that it acts on \(M_{\lambda}\otimes M^{\prime}_{\lambda^{\prime}}\) (\(\lambda,\lambda^{\prime}\in\widetilde{P}\)) as multiplication by \(q^{(\lambda,\lambda^{\prime})}\). The family of these maps \(\kappa_{M}\), where \(M\) runs through all \(U_{q}(\widehat{\mathfrak{h}})\)-weight modules, is compatible with \(U_{q}(\widehat{\mathfrak{h}})\)-intertwiners. Hence it gives rise to a well-defined weight-0 element \(\kappa\) of the corresponding completion of \(U_{q}(\widehat{\mathfrak{g}})\otimes U_{q}(\widehat{\mathfrak{g}})\) which we call here _weight completion_. Similarly, we will define weight-0 elements of the weight completion of \(U_{q}(\widehat{\mathfrak{g}})\) itself using functions from \(\widetilde{P}\) to \(\mathbb{C}\). See also [1, Sec. 4.8] for more detail. ### Level-0 representations Consider the following subalgebras of \(U_{q}(\widehat{\mathfrak{g}})\): \[U_{q}(\widehat{\mathfrak{h}}^{\pm})=\langle U_{q}(\widehat{\mathfrak{h}}),U_ {q}(\widehat{\mathfrak{n}}^{\pm})\rangle. 
\tag{2.21}\] Then \(U_{q}(\widehat{\mathfrak{h}}^{+})\) is isomorphic to the algebra with generators \(e_{i}\), \(k_{i}\) (\(i\in\{0,1\}\)) subject to those relations in Definition 2.1 which do not involve the \(f_{i}\) (the proof of e.g. [1, Thm. 4.21] applies). We say that a \(U_{q}(\widehat{\mathfrak{h}}^{+})\)-module \(V\) is _level-0_ if it decomposes as \[V=\bigoplus_{t\in\mathbb{C}^{\times}}V(t),\qquad V(t)=\{v\in V\,|\,k_{0}\cdot v =t^{-1}v,\quad k_{1}\cdot v=tv\} \tag{2.22}\] with each \(V(t)\) finite-dimensional. Note that the class of finitely generated level-0 modules (this is somewhat more general than [11, Def. 3.8]) is closed under tensor products. By the \(U_{q}(\widehat{\mathfrak{g}})\)-relations we have \(e_{0}\cdot V(t)\subseteq V(q^{-2}t)\), \(e_{1}\cdot V(t)\subseteq V(q^{2}t)\). It is convenient to call the subset \(\{t\in\mathbb{C}^{\times}\,|\,\dim(V(t))\neq 0\}\) the _support_ of \(V\). If \(V\) is a finite-dimensional \(U_{q}(\widehat{\mathfrak{g}})\)-module then it is level-0 with support contained in \(\pm q^{\mathbb{Z}}\), see e.g. [1, Prop. 12.2.3]. _Remark 2.3_.: Let \(V\) be an irreducible level-0 \(U_{q}(\widehat{\mathfrak{h}}^{+})\)-module3. If \(\dim(V)>1\), the \(U_{q}(\widehat{\mathfrak{h}}^{+})\)-action does not extend to a \(U_{q}(\widehat{\mathfrak{h}}^{+})\)-action. To see this, for instance, one can choose distinct \(t,t^{\prime}\in\mathbb{C}^{\times}\) in the support of \(V\). By irreducibility, for any nonzero \(v\in V(t)\), \(v^{\prime}\in V(t^{\prime})\) there exist \(x,x^{\prime}\in U_{q}(\widehat{\mathfrak{h}}^{+})\) such that \(x\cdot v=v^{\prime}\), \(x^{\prime}\cdot v^{\prime}=v\). Without loss of generality, we may assume both \(x\) and \(x^{\prime}\) have no term in \(U_{q}(\widehat{\mathfrak{h}})\) and are hence non-invertible. If \(v\) is an eigenvector of the action of \(g\) on \(V(t)\) then applying \(g\) to \((x^{\prime}x)\cdot v=v\) results in a contradiction with (2.10). Footnote 3: By [11, Prop. 3.5] this includes all finite-dimensional irreducible \(U_{q}(\widehat{\mathfrak{g}})\)-modules. Analogous definitions and comments can be made for \(U_{q}(\widehat{\mathfrak{h}}^{-})\)-modules. ### The action of \(\mathcal{R}\) on tensor products of level-0 modules We wish to connect the quasitriangular structure of \(U_{q}(\widehat{\mathfrak{g}})\) with the level-0 representation theory of \(U_{q}(\widehat{\mathfrak{g}})\), i.e. let the universal R-matrix of \(U_{q}(\widehat{\mathfrak{g}})\) act on tensor products of level-0 modules. To do this, we follow the ideas from [10, Sec. 13] (also see [11, Sec. 4], [12, Sec. 1]). If we write the action of \(k_{1}\) on an arbitrary level-0 module \(V\) as \(\exp(\epsilon H_{V})\), then note that the factor \(\kappa\) naturally acts on tensor products \(V\otimes V^{\prime}\) of level-0 modules as \(\exp(\epsilon H_{V}\otimes H_{V^{\prime}}/2)\). To let \(\Theta\) act on such tensor products, we extend the field of scalars \(\mathbb{C}\) over which we defined \(U_{q}(\widehat{\mathfrak{g}})\) to the Laurent polynomial ring \(\mathbb{C}[z,z^{-1}]\), where \(z\) is a formal parameter. The action of \(\Theta\) is particularly well-behaved if we use the principal grading. That is, we define a Hopf algebra automorphism \(\Sigma_{z}\) of \(U_{q}(\widehat{\mathfrak{g}})[z,z^{-1}]\) such that \[\Sigma_{z}(e_{i})=ze_{i},\qquad\Sigma_{z}(f_{i})=z^{-1}f_{i},\qquad\Sigma_{z} |_{U_{q}(\widehat{\mathfrak{h}})}=\mathsf{id}. 
\tag{2.23}\] Straightforwardly one sees that \[\omega\circ\Sigma_{z} =\Sigma_{z^{-1}}\circ\omega, \tag{2.25}\] \[\Phi\circ\Sigma_{z} =\Sigma_{z}\circ\Phi. \tag{2.24}\] Let the height function \(\mathsf{ht}:\widehat{Q}\to\mathbb{Z}\) be defined by \(\mathsf{ht}(m_{0}\alpha_{0}+m_{1}\alpha_{1})=m_{0}+m_{1}\) for all \(m_{0},m_{1}\in\mathbb{Z}\) and note that the number of elements of \(\widehat{Q}^{+}\) of given height is finite. The key observation is that \[(\Sigma_{z}\otimes\mathsf{id})(\Theta)=(\mathsf{id}\otimes\Sigma_{z^{-1}})( \Theta)=\sum_{r\geq 0}z^{r}\sum_{\lambda\in\widehat{Q}^{+},\,\mathsf{ht}( \lambda)=r}\Theta_{\lambda}, \tag{2.26}\] is a formal power series in \(z\) whose coefficients are finite sums and hence lie in \(U_{q}(\widehat{\mathsf{n}}^{+})\otimes U_{q}(\widehat{\mathsf{n}}^{-})\). Hence \((\Sigma_{z}\otimes\mathsf{id})(\Theta)=(\mathsf{id}\otimes\Sigma_{z^{-1}})(\Theta)\) has a well-defined action as a linear-operator-valued formal power series on a tensor product of any \(U_{q}(\widehat{\mathsf{n}}^{+})\)-representation with any \(U_{q}(\widehat{\mathsf{n}}^{-})\)-representation. Consider now the _grading-shifted universal R-matrix_: \[\mathcal{R}(z):=(\Sigma_{z}\otimes\mathsf{id})(\mathcal{R})=(\mathsf{id} \otimes\Sigma_{z^{-1}})(\mathcal{R}). \tag{2.27}\] Note that by applying \(\Sigma_{z}\otimes\mathsf{id}\) to (2.15) we deduce that \(\mathcal{R}(z)\) commutes with \(\Delta(k_{1})=\Delta^{\mathsf{op}}(k_{1})=k_{1}\otimes k_{1}\). We collect the results obtained thus far. **Theorem 2.4**.: _Consider a pair of level-0 representations \(\pi^{\pm}:U_{q}(\widehat{\mathsf{b}}^{\pm})\to\mathrm{End}(V^{\pm})\). Then_ \[\mathcal{R}_{\pi^{+}\pi^{-}}(z):=(\pi^{+}\otimes\pi^{-})(\mathcal{R}(z))\in \mathrm{End}(V^{+}\otimes V^{-})[[z]] \tag{2.28}\] _is well-defined and commutes with \(\pi^{+}(k_{1})\otimes\pi^{-}(k_{1})\)._ From now on we will use the standard convention that if \(\pi\) is any level-0 representation then the corresponding grading-shifted representation is denoted by a subscript \(z\): \[\pi_{z}:=\pi\circ\Sigma_{z}. \tag{2.29}\] Hence we may write \[\mathcal{R}_{\pi^{+}\pi^{-}}(z)=(\pi_{z}^{+}\otimes\pi^{-})(\mathcal{R})=(\pi ^{+}\otimes\pi_{1/z}^{-})(\mathcal{R}).\] Consider two indeterminates \(z_{1},z_{2}\). Applying, say, \(\Sigma_{z_{1}}\otimes\mathsf{id}\otimes\Sigma_{1/z_{2}}\), to (2.17), we obtain a \(\mathbb{C}[[z_{1},z_{2}]]\)-version of the universal Yang-Baxter equation which can be evaluated on suitable triple tensor products. **Proposition 2.5**.: _If \(\pi^{+}:U_{q}(\widehat{\mathsf{b}}^{+})\to\mathrm{End}(V^{+})\), \(\pi:U_{q}(\widehat{\mathsf{g}})\to\mathrm{End}(V)\) and \(\pi^{-}:U_{q}(\widehat{\mathsf{b}}^{-})\to\mathrm{End}(V^{-})\) are level-0 representations, then we have the following identity of linear-operator-valued formal power series in two indeterminates:_ \[\mathcal{R}_{\pi^{+}\pi}(z_{1})_{12}\;\mathcal{R}_{\pi^{+}\pi^{-}}(z_{1}z_{2}) _{13}\;\mathcal{R}_{\pi\pi^{-}}(z_{2})_{23}=\mathcal{R}_{\pi\pi^{-}}(z_{2})_{ 23}\;\mathcal{R}_{\pi^{+}\pi^{-}}(z_{1}z_{2})_{13}\;\mathcal{R}_{\pi^{+}\pi}(z _{1})_{12}. \tag{2.30}\] Given a pair of level-0 representations \(\pi^{\pm}:U_{q}(\widehat{\mathsf{b}}^{+})\to\mathrm{End}(V^{\pm})\) it is often convenient to have an explicit expression of \(\mathcal{R}_{\pi^{+}\pi^{-}}(z)\) which does not rely on computing the coefficients of the series \(\mathcal{R}(z)\). Essentially following Jimbo's approach from [14], we may try to solve a linear equation for \(\mathcal{R}_{\pi^{+}\pi^{-}}(z)\). 
To derive such a linear equation, it is convenient to assume that, say, \(\pi^{+}\) extends to a representation of \(U_{q}(\widehat{\mathsf{g}})\). In this case4, one directly obtains the following result. Footnote 4: One can of course apply \(\pi_{z}^{+}\otimes\pi^{-}\) to (2.15) for arbitrary \(U_{q}(\widehat{\mathsf{b}}^{\pm})\)-representations \(\pi^{\pm}\), yielding (2.31) for all \(u\in U_{q}(\widehat{\mathsf{g}})\) such that \(\Delta(u)\) and \(\Delta^{\mathsf{op}}(u)\) both lie in \(U_{q}(\widehat{\mathsf{b}}^{+})\otimes U_{q}(\widehat{\mathsf{b}}^{-})\). However, by applying counits this subalgebra is seen to be equal to \(U_{q}(\widehat{\mathsf{b}}^{+})\cap U_{q}(\widehat{\mathsf{b}}^{-})=U_{q}( \widehat{\mathsf{b}})\). Hence, one would just recover the second statement of Theorem 2.4. **Proposition 2.6**.: _If \(\pi^{+}\) is a level-0 \(U_{q}(\widehat{\mathsf{g}})\)-representation and \(\pi^{-}\) a level-0 \(U_{q}(\widehat{\mathsf{b}}^{-})\)-representation, then for all \(u\in U_{q}(\widehat{\mathsf{b}}^{-})\) we have_ \[\mathcal{R}_{\pi^{+}\pi^{-}}(z)\cdot(\pi_{z}^{+}\otimes\pi^{-})(\Delta(u))=(\pi _{z}^{+}\otimes\pi^{-})(\Delta^{\mathsf{op}}(u))\cdot\mathcal{R}_{\pi^{+}\pi^{- }}(z). \tag{2.31}\] Obviously there is a counterpart of Proposition 2.6 with the role of \(U_{q}(\widehat{\mathsf{b}}^{-})\) replaced by \(U_{q}(\widehat{\mathsf{b}}^{+})\). _Remark 2.7_.: If the solution space of the linear equation (2.31) is 1-dimensional, Proposition 2.6 implies that any solution must be a scalar multiple of \(\mathcal{R}_{\pi^{+}\pi^{-}}(z)\) and hence satisfy the Yang-Baxter equation. This is well-known if both \(V^{\pm}\) extend to finite-dimensional \(U_{q}(\widehat{\mathfrak{g}})\)-modules. In this case the existence of the universal R-matrix implies the existence of a solution of the intertwining condition (2.31) depending rationally on \(z\). If \(\pi^{+}\) and \(\pi^{-}\) are both irreducible then it is known, see e.g. [10, Sec. 4.2] and [12, Thm. 3], that \(V^{+}((z))\otimes V^{-}\) is irreducible as a representation of \(U_{q}(\widehat{\mathfrak{g}})((z))\) (extension of scalars to formal Laurent series); hence an application of Schur's lemma yields the 1-dimensionality of the solution space of (2.31). In this case, the rational intertwiner is called _trigonometric R-matrix_. For more background and detail, see e.g. [13] and [11, Secs. 2.6 & 2.7]. In the absence of a linear relation such as (2.31), one can use the Yang-Baxter equation (2.30) to determine an explicit expression for one of \(\mathcal{R}_{\pi^{+}\pi}(z)\), \(\mathcal{R}_{\pi^{+}\pi^{-}}(z)\), or \(\mathcal{R}_{\pi\pi^{-}}(z)\), provided the other two are known. ### Adjusting the grading In this approach the use of the principal grading in Theorem 2.4 avoids further constraints on the representations (e.g. local finiteness conditions). For completeness we briefly explain how to extend the results of Section 2.8 to arbitrary grading. For nonnegative integers \(s_{0},s_{1}\) such that \(s_{0}+s_{1}\) is nonzero, define a more general Hopf algebra automorphism \(\Sigma_{z}^{s_{0},s_{1}}\) of \(U_{q}(\widehat{\mathfrak{g}})[z,z^{-1}]\) by \[\Sigma_{z}^{s_{0},s_{1}}(e_{i})=z^{s_{i}}e_{i},\qquad\Sigma_{z}^{s_{0},s_{1}} (f_{i})=z^{-s_{i}}f_{i},\qquad\Sigma_{z}^{s_{0},s_{1}}|_{U_{q}(\widehat{ \mathfrak{h}})}=\mathsf{id} \tag{2.32}\] (note that the choice \(s_{0}=0\), \(s_{1}=1\) is used in in [16, Eq. (2.11)]). 
Rather than giving generalized versions of the main results above and of various statements in the remainder of this work, we make an observation which will allow the reader to generate these statements, as required. Recalling the decomposition (2.22) and the associated terminology, suppose the level-0 \(U_{q}(\widehat{\mathfrak{h}}^{+})\)-module \(V\) is generated by a nonzero element of \(V(t_{0})\) for some \(t_{0}\in\mathbb{C}^{\times}\) (which includes all modules considered in this paper and all finite-dimensional irreducible \(U_{q}(\widehat{\mathfrak{g}})\)-modules). Then the support of \(V\) is contained in \(q^{2\mathbb{Z}}t_{0}\). Now for any indeterminate \(y\) and any integer \(m\), let \(y^{mD}\) denote the linear map on \(V\) which acts on \(V(q^{-2m}t_{0})[y,y^{-1}]\) as scalar multiplication by \(y^{m}\). Writing the corresponding representation as \(\pi:U_{q}(\widehat{\mathfrak{h}}^{+})\to\operatorname{End}(V)\), the more general grading-shifted representation \(\pi_{z}^{s_{0},s_{1}}:=\pi\circ\Sigma_{z}^{s_{0},s_{1}}\) can be related to the representation shifted by the principal grading as follows. Adjoining to the ring \(\mathbb{C}[z,z^{-1}]\) a square root \(Z\) of \(z\), we have \[\pi_{z}^{s_{0},s_{1}}=\operatorname{Ad}\left(Z^{(s_{0}-s_{1})D}\right)\circ \pi_{Z^{s_{0}+s_{1}}}, \tag{2.33}\] where on the right-hand side \(\operatorname{Ad}\) stands for 'conjugation by'. See [11, Sec. 5.2] for essentially the same point in the context of irreducible finite-dimensional \(U_{q}(\widehat{\mathfrak{g}})\)-representations. ## 3. The augmented q-Onsager algebra, its twist and its universal K-matrix In parallel with the previous section, we consider a particular subalgebra of \(U_{q}(\widehat{\mathfrak{g}})\) and extend some recent results on universal K-matrices [11, 12] in the context of (possibly infinite-dimensional) level-0 representations of Borel subalgebras of quantum affine \(\mathfrak{sl}_{2}\). For a related approach tailored to evaluation representations involving essentially the same subalgebra, see [1]. ### The twist map \(\psi\) We consider the following algebra automorphism and coalgebra antiautomorphism of \(U_{q}(\widehat{\mathfrak{g}})\): \[\psi:=\omega\circ\Phi. \tag{3.1}\] From (2.18-2.19) and (2.24-2.25), respectively, we immediately deduce \[(\psi\otimes\psi)(\mathcal{R}) =\mathcal{R}_{21}, \tag{3.3}\] \[\psi\circ\Sigma_{z} =\Sigma_{z^{-1}}\circ\psi. \tag{3.2}\] By the following result, P-symmetric R-matrices (\(\mathcal{R}(z)_{21}=\mathcal{R}(z)\)) naturally arise in tensor products of representations of the upper and lower Borel subalgebras on the same vector space, provided they are related through \(\psi\) and the principal grading is used in the definition of grading-shifted universal R-matrix \(\mathcal{R}(z)\), see (2.27). **Lemma 3.1**.: _Consider two pairs of level-0 representations \(\pi^{\pm},\varrho^{\pm}:U_{q}(\widehat{\mathfrak{b}}^{\pm})\to\mathrm{End}(V)\) such that_ \[\varrho^{\mp}=\pi^{\pm}\circ\psi. 
\tag{3.4}\] _Then \(\mathcal{R}_{\pi^{+}\pi^{-}}(z)_{21}=\mathcal{R}_{\varrho^{+}\varrho^{-}}(z)\)._ Proof.: Unpacking the definitions (2.28) and (2.27), we have \[\mathcal{R}_{\pi^{+}\pi^{-}}(z)_{21}=\Big{(}\big{(}(\pi^{+}\otimes\pi^{-}) \circ(\Sigma_{z}\otimes\mathfrak{id})\big{)}(\mathcal{R})\Big{)}_{21}=\big{(} (\pi^{-}\otimes\pi^{+})\circ(\mathfrak{id}\otimes\Sigma_{z})\big{)}\big{(} \mathcal{R}_{21}\big{)}.\] Now using (3.2-3.3) we deduce \[\mathcal{R}_{\pi^{+}\pi^{-}}(z)_{21}=\big{(}(\pi^{-}\otimes\pi^{+})\circ( \psi\otimes\psi)\circ(\mathfrak{id}\otimes\Sigma_{z^{-1}})\big{)}(\mathcal{R}).\] Applying (3.4) and using (2.28) and (2.27) once again, we obtain \(\mathcal{R}_{\varrho^{+}\varrho^{-}}(z)\) as required. ### The augmented \(\mathbf{q}\)-Onsager algebra The map \(\psi\) plays an important role in the theory of diagonal matrix solutions with a free parameter of the reflection equation in \(U_{q}(\widehat{\mathfrak{g}})\)-modules. Namely, fix a parameter \(\xi\in\mathbb{C}^{\times}\) and consider the following subalgebra of \(U_{q}(\widehat{\mathfrak{g}})\), also called the _(embedded) augmented q-Onsager algebra_: \[U_{q}(\mathfrak{k}):=\mathbb{C}\big{\langle}e_{0}-q^{-1}\xi^{-1}k_{0}f_{1},\,e _{1}-q^{-1}\xi k_{1}f_{0},\,k_{0}k_{1}^{-1},\,k_{0}^{-1}k_{1}\big{\rangle}. \tag{3.5}\] This is a left coideal: \[\Delta(U_{q}(\mathfrak{k}))\subseteq U_{q}(\widehat{\mathfrak{g}})\otimes U _{q}(\mathfrak{k}). \tag{3.6}\] The automorphism \(\psi\) is the trivial q-deformation of a Lie algebra automorphism of \(\widehat{\mathfrak{g}}\), also denoted \(\psi\), and \(U_{q}(\mathfrak{k})\) is the (\(\xi\)-dependent) coideal q-deformation of the universal enveloping algebra of the fixed-point subalgebra \(\mathfrak{k}=\widehat{\mathfrak{g}}^{\psi}\), in the style of [Ko14] but with opposite conventions. _Remark 3.2_.: See [VW20, Rmk. 2.3] for more background on this subalgebra. Note that the definition of \(U_{q}(\mathfrak{k})\) in _loc. cit._ has a misprint: \(\xi\) should be replaced by \(\xi^{-1}\). To connect with the universal K-matrix formalism of [AV22a, AV22b], let \(\widetilde{S}\) be the bialgebra isomorphism5 from \(U_{q}(\widehat{\mathfrak{g}})\) to \(U_{q}(\widehat{\mathfrak{g}})^{\mathsf{op},\mathsf{cop}}\) (also known as the _unitary antipode_) defined by the assignments Footnote 5: In particular, \(\widetilde{S}\), like the antipode \(S\) itself, is an algebra antiautomorphism and a coalgebra antiautomorphism. \[\widetilde{S}(e_{i})=-qk_{i}^{-1}e_{i},\qquad\widetilde{S}(f_{i})=-q^{-1}f_{i }k_{i},\qquad\widetilde{S}(k_{i}^{\pm 1})=k_{i}^{\mp 1},\qquad\widetilde{S}(g^{\pm 1})=g^{ \mp 1}. \tag{3.7}\] Note that \(\widetilde{S}^{2}=\mathfrak{id}\). Now consider6 the right coideal subalgebra Footnote 6: In general, each element or map in the right coideal setting of [Ko14, AV22a, AV22b] is denoted by a prime on the corresponding object in the current left coideal setting. \[U_{q}(\mathfrak{k})^{\prime}=\widetilde{S}(U_{q}(\mathfrak{k}))=\mathbb{C} \langle f_{0}-q\xi^{-1}e_{1}k_{0}^{-1},f_{1}-q\xi e_{0}k_{1}^{-1},k_{0}k_{1}^{- 1},k_{0}^{-1}k_{1}\rangle\] which is the subalgebra considered in [11, Sec. 9.7], forming part of a more general family of right coideal subalgebras (quantum symmetric pair subalgebras) of quantum affine algebras as considered in [12, 13, 14]. ### Universal K-matrix By [11, Thm. 
8.5], \(U_{q}(\widehat{\mathfrak{g}})\) is endowed with a so-called _standard_ universal K-matrix, which is an invertible element in a completion of \(U_{q}(\widehat{\mathfrak{b}}^{+})\) satisfying a twisted \(U_{q}(\mathfrak{k})\)-intertwining property and a twisted coproduct formula involving the universal R-matrix7 Footnote 7: Note that our convention for the coproduct is as in [11], but the ordering of the tensor product of the two Borel subalgebras is opposite. Hence the R-matrix in [11], denoted here by \(\mathcal{R}^{\prime}\), is equal to \(\mathcal{R}^{-1}_{21}\). \[\mathcal{R}^{\prime}=\mathcal{R}^{-1}_{21}. \tag{3.8}\] There is an action of invertible elements of a completion of \(U_{q}(\widehat{\mathfrak{g}})\), gauge-transforming the universal K-matrix and the twisting operator simultaneously, see [11, Sec. 3.6]. For the case under consideration, there exists a gauge transformation (a 'Cartan correction', see [11, Sec. 8.8]) that brings both the intertwining property and the coproduct formula for the universal K-matrix into a particularly nice form. Moreover, the gauge-transformed universal K-matrix still resides in a completion of \(U_{q}(\widehat{\mathfrak{b}}^{+})\) and, when shifted by the principal grading, acts as a linear-operator-valued formal power series for all level-0 \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-modules. To make this more precise, let \(\Omega:\widetilde{P}\to\mathbb{C}^{\times}\) be any group homomorphism such that \(\Omega(\alpha_{0})=-\xi\) and \(\Omega(\alpha_{1})=-\xi^{-1}\). Now define a function \(G^{\prime}:\widetilde{P}\to\mathbb{C}^{\times}\) by \[G^{\prime}(\lambda)=\Omega(\lambda)q^{-(\Phi(\lambda),\lambda)/2}. \tag{3.9}\] Note that this is not a group homomorphism. Define the corresponding linear operator acting on \(U_{q}(\widehat{\mathfrak{b}})\)-weight modules as follows: \[G^{\prime}\cdot v=G^{\prime}(\lambda)v,\qquad v\in V_{\lambda},\qquad\lambda \in\widetilde{P}. \tag{3.10}\] Analogously to our definition of the factor \(\kappa\) of the universal R-matrix, we thus obtain an invertible element \(G^{\prime}\) of the weight completion of \(U_{q}(\widehat{\mathfrak{g}})\). Finally, let \(\delta=\alpha_{0}+\alpha_{1}\) be the basic imaginary root of \(\widehat{\mathfrak{g}}\). Then the following result is a special case of [11, Sec. 9.7], with the coproduct formula a direct consequence of [11, (8.21)]. **Proposition 3.3**.: _There exists an invertible element_ \[\Upsilon^{\prime}=\sum_{\lambda\in\mathbb{Z}_{\geqslant 0}\delta}\Upsilon^{ \prime}_{\lambda},\qquad\Upsilon^{\prime}_{\lambda}\in U_{q}(\widehat{ \mathfrak{n}}^{+})_{\lambda}, \tag{3.11}\] _such that the invertible element_ \[\mathcal{K}^{\prime}:=G^{\prime}\cdot\Upsilon^{\prime} \tag{3.12}\] _satisfies_ \[\mathcal{K}^{\prime}\cdot u =\psi(u)\cdot\mathcal{K}^{\prime}\qquad\text{for all }u\in U_{q}(\mathfrak{k})^{\prime}, \tag{3.14}\] \[\Delta(\mathcal{K}^{\prime}) =(1\otimes\mathcal{K}^{\prime})\cdot(\psi\otimes\mathsf{id})( \mathcal{R}^{\prime})\cdot(\mathcal{K}^{\prime}\otimes 1). \tag{3.13}\] _Remark 3.4_.: This choice of \(\mathcal{K}^{\prime}\) is also known as the _semi-standard_ universal K-matrix, see [11, Sec. 8.10] and cf.[11, Ex. 3.6.3 (2)]. It corresponds to the choice of a _twist pair_\((\psi,J)\) where \(\psi\) is a bialgebra automorphism (e.g. a diagram automorphism) and \(J\) is the trivial Drinfeld twist \(1\otimes 1\), see [11, Sec. 2.4 and 2.5]; this choice is associated with the simple 3-factor coproduct formula (3.14). 
The semi-standard K-matrix is always available; what is rather special in the case of the augmented q-Onsager algebra is that it still lies in a completion of \(U_{q}(\widehat{\mathfrak{b}}^{+})\). Now we transform this formalism [1] for the right coideal subalgebra \(U_{q}(\mathfrak{k})^{\prime}\), expressed in terms of the universal R-matrix \(\mathcal{R}^{\prime}\), to a formalism for the left coideal subalgebra \(U_{q}(\mathfrak{k})=\widetilde{S}(U_{q}(\mathfrak{k})^{\prime})\), expressed in terms of the universal R-matrix \(\mathcal{R}\) as used in this paper. To do this, note that, when going from a \(U_{q}(\widehat{\mathfrak{g}})\)-weight module to its dual, weights transform as \(\lambda\mapsto-\lambda\). This defines the extension of \(S\) and \(\widetilde{S}\) to a map on the weight completion of \(U_{q}(\widehat{\mathfrak{g}})\). Therefore \(\widetilde{S}(\Omega)=\Omega^{-1}\) but the non-group-like factor of \(G^{\prime}\) is fixed by \(\widetilde{S}\). We define \(G:\widehat{P}\to\mathbb{C}^{\times}\) by \[G(\lambda):=\Omega(\lambda)q^{(\Phi(\lambda),\lambda)/2} \tag{3.15}\] so that \(G=\widetilde{S}(G^{\prime})^{-1}\). Also, we set \[\Upsilon:=\widetilde{S}(\Upsilon^{\prime})^{-1}=\sum_{\lambda\in\mathbb{Z}_{ \geqslant 0}\delta}\Upsilon_{\lambda},\qquad\Upsilon_{\lambda}\in\widetilde{S}(U _{q}(\widehat{\mathfrak{n}}^{+})_{\lambda})\subset U_{q}(\widehat{\mathfrak{h }})\cdot U_{q}(\widehat{\mathfrak{n}}^{+})_{\lambda}. \tag{3.16}\] **Proposition 3.5**.: _The element_ \[\mathcal{K}:=\widetilde{S}(\mathcal{K}^{\prime})^{-1}=G\cdot\Upsilon \tag{3.17}\] _satisfies_ \[\mathcal{K}\cdot u =\psi(u)\cdot\mathcal{K}\qquad\text{for all }u\in U_{q}( \mathfrak{k}), \tag{3.19}\] \[\Delta(\mathcal{K}) =(\mathcal{K}\otimes 1)\cdot(\mathsf{id}\otimes\psi)(\mathcal{R}) \cdot(1\otimes\mathcal{K}). \tag{3.18}\] Proof.: This follows straightforwardly from Proposition 3.3. Namely, we apply \(\widetilde{S}\) to (3.13) and \((\widetilde{S}\otimes\widetilde{S})\circ\sigma\) to (3.14), and use the fact that \(\widetilde{S}\circ\psi=\psi\circ\widetilde{S}\) and \((\widetilde{S}\otimes\widetilde{S})(\mathcal{R})=\mathcal{R}\). Note that \(U_{q}(\widehat{\mathfrak{h}}^{+})\) is a bialgebra and, as expected, the right-hand side of (3.19) lies in a completion of \(U_{q}(\widehat{\mathfrak{h}}^{+})\otimes U_{q}(\widehat{\mathfrak{h}}^{+})\), since \(\psi\) interchanges the two Borel subalgebras \(U_{q}(\widehat{\mathfrak{h}}^{\pm})\). The reflection equation satisfied by the universal element \(\mathcal{K}\) is as follows: \[\mathcal{R}\cdot(\mathcal{K}\otimes 1)\cdot(\mathsf{id}\otimes\psi)( \mathcal{R})\cdot(1\otimes\mathcal{K})=(1\otimes\mathcal{K})\cdot(\mathsf{ id}\otimes\psi)(\mathcal{R})\cdot(\mathcal{K}\otimes 1)\cdot\mathcal{R}. \tag{3.20}\] This is a consequence of the linear relation (2.15) for \(\mathcal{R}\) and the coproduct formula (3.19) for \(\mathcal{K}\), alongside (3.2) and \(\psi^{2}=\mathsf{id}\). ### The action of the universal K-matrix on level-0 representations To deduce that \(\mathcal{K}\) has a well-defined action on level-0 representations of, say, \(U_{q}(\widehat{\mathfrak{h}}^{+})\), we can proceed in a similar way to the case of the R-matrix. This builds on the finite-dimensional theory for more general quantum symmetric pair subalgebras in [1, Sec. 4]. First note that if \(\pi\) is a level-0 representation, \(\pi\) and the twisted representation \(\pi\circ\psi\) coincide on \(U_{q}(\widehat{\mathfrak{h}})\). Now let \(z\) once again be a formal variable. 
Note that by (3.15) the function \(G\) sends the basic imaginary root \(\delta\) to 1. Hence the proof of [1, Prop. 4.3.1 (3)] implies that the corresponding factor \(G\) of the universal K-matrix descends to level-0 modules. Furthermore, the argument that shows \(\Sigma_{z}(\Theta)\) is a \(U_{q}(\widehat{\mathfrak{n}}^{+})\otimes U_{q}(\widehat{\mathfrak{n}}^{-})\)-valued formal power series can be easily adapted to \(\Upsilon\); it yields a formal power series with coefficients in \(\widetilde{S}(U_{q}(\widehat{\mathfrak{n}}^{+}))\subset U_{q}(\widehat{ \mathfrak{h}}^{+})\): \[\Sigma_{z}(\Upsilon)=\sum_{r\geqslant 0}z^{r}\sum_{\lambda\in\mathbb{Z}_{ \geqslant 0}\delta,\,\mathsf{ht}(\lambda)=r}\Upsilon_{\lambda}.\] Now consider the grading-shifted universal K-matrix: \[\mathcal{K}(z)=\Sigma_{z}(\mathcal{K}). \tag{3.21}\] Noting that the form of \(\Upsilon\) implies that \(\mathcal{K}\) commutes with \(k_{1}\), we arrive at the following main result, which is a boundary analogue of Theorem 2.4. **Theorem 3.6**.: _Consider a level-0 representation \(\pi:U_{q}(\widehat{\mathfrak{b}}^{+})\to\operatorname{End}(V)\). Then_ \[\mathcal{K}_{\pi}(z):=\pi(\mathcal{K}(z))\in\operatorname{End}(V)\otimes \mathbb{C}[[z]] \tag{3.22}\] _is well-defined and commutes with \(\pi(k_{1})\)._ We will also need boundary counterparts of Propositions 2.5 and 2.6. Consider two indeterminates \(z_{1},z_{2}\). Applying \(\Sigma_{z_{1}}\otimes\Sigma_{z_{2}}\) to (3.20) and using (3.3), we obtain the following reflection equation for the grading-shifted universal operators: \[\begin{split}\mathcal{R}(z_{1}/z_{2})\cdot(\mathcal{K}(z_{1}) \otimes 1)\cdot(\mathsf{id}\otimes\psi)(\mathcal{R}(z_{1}z_{2}))\cdot(1 \otimes\mathcal{K}(z_{2}))=\\ =(1\otimes\mathcal{K}(z_{2}))\cdot(\mathsf{id}\otimes\psi)( \mathcal{R}(z_{1}z_{2}))\cdot(\mathcal{K}(z_{1})\otimes 1)\cdot\mathcal{R}(z_{1}/z_{ 2}).\end{split} \tag{3.23}\] Recalling that the universal R-matrix \(\mathcal{R}\) lies in a completion of \(U_{q}(\widehat{\mathfrak{b}}^{+})\otimes U_{q}(\widehat{\mathfrak{b}}^{-})\) and applying a tensor product of suitable representations to (3.23), one obtains the _right reflection equation_ with multiplicative spectral parameters for P-symmetric R-matrices, as we now state precisely. **Proposition 3.7**.: _Consider level-0 representations \(\pi^{+}:U_{q}(\widehat{\mathfrak{b}}^{+})\to\operatorname{End}(V^{+})\) and \(\pi:U_{q}(\widehat{\mathfrak{g}})\to\operatorname{End}(V)\) such that \(\pi\circ\psi=\pi\). Then_ \[\begin{split}\mathcal{R}_{\pi^{+}\pi}&(z_{1}/z_{2})( \mathcal{K}_{\pi^{+}}(z_{1})\otimes\mathsf{Id}_{V})\mathcal{R}_{\pi^{+}\pi}(z _{1}z_{2})(\mathsf{Id}_{V^{+}}\otimes\mathcal{K}_{\pi}(z_{2}))=\\ &=(\mathsf{Id}_{V^{+}}\otimes\mathcal{K}_{\pi}(z_{2}))\mathcal{R} _{\pi^{+}\pi}(z_{1}z_{2})(\mathcal{K}_{\pi^{+}}(z_{1})\otimes\mathsf{Id}_{V}) \mathcal{R}_{\pi^{+}\pi}(z_{1}/z_{2}).\end{split} \tag{3.24}\] The use of linear relations to find explicit solutions of reflection equations was proposed in [14, 15, 16]. As before, we assume that \(\pi\) extends to a \(U_{q}(\widehat{\mathfrak{g}})\)-representation8, in which case it restricts to a \(U_{q}(\mathfrak{k})\)-representation and we obtain the following result as a consequence of (3.3). Footnote 8: Analogous to the case of the R-matrix, we can observe that the intersection of \(U_{q}(\mathfrak{k})\) and \(U_{q}(\widehat{\mathfrak{b}}^{+})\) is contained in \(U_{q}(\widehat{\mathfrak{b}})\). 
Therefore, applying a level-0 representation \(\pi\) to (3.18) just recovers the second part of Theorem 3.6. **Proposition 3.8**.: _If \(\pi:U_{q}(\widehat{\mathfrak{g}})\to\operatorname{End}(V)\) is a level-0 representation such that \(\pi\circ\psi=\pi\), then_ \[\mathcal{K}_{\pi}(z)\cdot\pi_{z}(u)=\pi_{1/z}(u)\cdot\mathcal{K}_{\pi}(z) \qquad\text{for all $u\in U_{q}(\mathfrak{k})$}. \tag{3.25}\] We close this section with some comments parallel to Remark 2.7. _Remark 3.9_.: If the solution space of (3.25) is 1-dimensional, Proposition 3.8 implies that any solution must be a scalar multiple of \(\mathcal{K}(z)\) and hence automatically satisfy the reflection equation (3.24). In the case that \(\pi:U_{q}(\widehat{\mathfrak{b}}^{+})\to\operatorname{End}(V)\) extends to a representation and \(V\) is finite-dimensional, there is an analogue to Remark 2.7. Namely, the solution space of (3.25) for irreducible representations is 1-dimensional and the existence of a solution of the intertwining condition (3.25) depending rationally on \(z\) leads to a _trigonometric K-matrix_. See [1, Secs. 5 and 6] for more detail. To explicitly determine \(\mathcal{K}_{\pi^{+}}(z)\) in the cases where \(\pi^{+}:U_{q}(\widehat{\mathfrak{b}}^{+})\to\operatorname{End}(V)\) does not extend to a \(U_{q}(\widehat{\mathfrak{g}})\)-representation, we will use the reflection equation (3.24), with the other K-matrix \(\mathcal{K}_{\pi}(z)\) determined using Proposition 3.8. ## 4. Borel representations in terms of the q-oscillator algebra ### The infinite-dimensional vector space \(W\) The countably-infinite-dimensional vector space plays a central role in the theory of Baxter's Q-operators. We may define it as the free \(\mathbb{C}\)-module over a given set \(\{w_{j}\}_{j\in\mathbb{Z}_{\geqslant 0}}\): \[W=\bigoplus_{j\geqslant 0}\mathbb{C}w_{j}. \tag{4.1}\] Given this distinguished basis, elements of \(\operatorname{End}(W)\) naturally identify with infinite-by-infinite matrices with the property that all but finitely many entries of each column are zero. It is convenient to work with a particular subalgebra of \(\operatorname{End}(W)\) depending on the deformation parameter \(q\). More precisely, consider the \(\mathbb{C}\)-linear maps \(a\), \(a^{\dagger}\) on \(W\) defined by \[a\cdot w_{j+1}=w_{j},\qquad a\cdot w_{0}=0,\qquad a^{\dagger}\cdot w_{j}=\big{(} 1-q^{2(j+1)}\big{)}w_{j+1} \tag{4.2}\] for all \(j\in\mathbb{Z}_{\geqslant 0}\). These operators satisfy the relation \([a,a^{\dagger}]_{q^{2}}=1-q^{2}\). Note that each basis vector \(w_{j}\) is an eigenvector of the compositions \(aa^{\dagger}\) and \(a^{\dagger}a\) with eigenvalues \(1-q^{2(j+1)}\) and \(1-q^{2j}\), respectively. For the description of L-operators associated to \(U_{q}(\widehat{\mathfrak{g}})\) acting on \(W\otimes\mathbb{C}^{2}\) (particular solutions of the Yang-Baxter equation), it is convenient to consider a linear operator \(q^{D}\) which is a square root of \(1-a^{\dagger}a\), i.e. \(q^{D}\cdot w_{j}=q^{j}w_{j}\) for \(j\in\mathbb{Z}_{\geqslant 0}\). Note that \(q^{D}\) is invertible and we let \(q^{-D}\) denote its inverse. _Remark 4.1_.: Often the q-oscillator algebra is defined as the abstract algebra generated by \(a\), \(a^{\dagger}\) and \(q^{\pm D}\) subject to certain relations, which naturally embeds into \(\operatorname{End}(W)\). This version of the q-oscillator algebra appeared in the guise of a topological algebra for instance in [1, Sec. 2.3] and with slightly different conventions in [11]9. 
Footnote 9: The two vector spaces \(W_{1}\) and \(W_{2}\) introduced in [11, Sec. 2.3] are naturally isomorphic, so that the two algebras \(\operatorname{Osc}_{1}\) and \(\operatorname{Osc}_{2}\) can be identified with the same subalgebra of \(\operatorname{End}(W_{1})\cong\operatorname{End}(W_{2})\). ### Diagonal operators from functions and an extended q-oscillator algebra To accommodate the action of the universal R and K-matrices on certain level-0 modules, we will need an extension of the commutative subalgebra \(\langle q^{\pm D}\rangle\) and work over the commutative ring \(\mathbb{C}[[z]]\). Denote by \(\mathcal{F}\) the commutative algebra of functions from \(\mathbb{Z}_{\geqslant 0}\) to \(\mathbb{C}[[z]]\). For any \(f\in\mathcal{F}\) we define \(f(D)\in\operatorname{End}(W)\) via \[f(D)\cdot w_{j}=f(j)w_{j}. \tag{4.3}\] Thus, we obtain an algebra embedding \(\mathcal{F}\to\operatorname{End}(W)[[z]]\), whose image \(\mathcal{F}(D)\) is the subalgebra of diagonal operators on \(W\) with respect to the given basis. Now we combine this with the maps \(a\), \(a^{\dagger}\) defined above (viewed as maps on \(W[[z]]\cong W\otimes\mathbb{C}[[z]]\), acting trivially on the second factor). **Definition 4.2**.: The _(extended) q-oscillator algebra_ is the subalgebra \(\mathcal{A}\subset\operatorname{End}(W)[[z]]\) generated by \(a^{\dagger}\), \(a\) and \(\mathcal{F}(D)\). As can be verified on basis vectors, in \(\mathcal{A}\) one has the relations \[aa^{\dagger}=1-q^{2(D+1)},\qquad a^{\dagger}a=1-q^{2D},\qquad af(D)=f(D+1)a, \qquad f(D)a^{\dagger}=a^{\dagger}f(D+1). \tag{4.4}\] One straightforwardly verifies that the subalgebras \(\mathcal{F}(D)\), \(\langle a^{\dagger}\rangle\) and \(\langle a\rangle\) are self-centralizing. Note that the operator \[\bar{a}^{\dagger}:=-q^{-2D}a^{\dagger}\in\operatorname{End}(W) \tag{4.5}\] sends \(w_{j}\) to \((1-q^{-2(j+1)})w_{j+1}\). Clearly, \(\mathcal{A}\) is also generated by \(\bar{a}^{\dagger}\), \(a\) and \(\mathcal{F}(D)\). The transformation \(q\mapsto q^{-1}\) defines an algebra automorphism of \(\mathcal{A}\), preserving the subalgebra \(\mathcal{F}(D)\), fixing the generator \(a\) and interchanging the generators \(a^{\dagger}\) and \(\bar{a}^{\dagger}\). ### Enomorphisms of \(W\otimes W\) The linear maps \[a_{1}:=a\otimes\operatorname{\mathsf{Id}}_{W},\qquad a_{1}^{\dagger}:=a^{ \dagger}\otimes\operatorname{\mathsf{Id}}_{W},\qquad a_{2}:=\operatorname{ \mathsf{Id}}_{W}\otimes a,\qquad a_{2}^{\dagger}:=\operatorname{\mathsf{Id}}_ {W}\otimes a^{\dagger}\] together with \(\mathcal{F}(D_{1})\cup\mathcal{F}(D_{2})\) generate \(\mathcal{A}\otimes\mathcal{A}\) over \(\mathbb{C}[[z]]\). We will need a larger subalgebra of \(\operatorname{End}(W\otimes W)\): we will allow all functions of two nonnegative integers as well as formal power series in certain locally nilpotent endomorphisms. Denote by \(\mathcal{F}^{(2)}\) the commutative algebra of functions from \(\mathbb{Z}_{\geqslant 0}\times\mathbb{Z}_{\geqslant 0}\) to \(\mathbb{C}[[z]]\). Similarly, we denote by \(D_{1}\) and \(D_{2}\) the linear operators on the tensor product \(W\otimes W\) defined by \[D_{1}\cdot(w_{j}\otimes w_{k})=jw_{j}\otimes w_{k},\qquad D_{2}\cdot(w_{j} \otimes w_{k})=kw_{j}\otimes w_{k}. 
\tag{4.6}\] For any \(f\in\mathcal{F}^{(2)}\) we define \(f(D_{1},D_{2})\in\operatorname{End}(W\otimes W)[[z]]\) via \[f(D_{1},D_{2})\cdot(w_{j}\otimes w_{k})=f(j,k)w_{j}\otimes w_{k}, \tag{4.7}\] yielding an algebra embedding \(\mathcal{F}^{(2)}\to\operatorname{End}(W\otimes W)[[z]]\), whose image \(\mathcal{F}^{(2)}(D_{1},D_{2})\) is the subalgebra of diagonal operators on \(W\otimes W\). Now note that \(a_{1}a_{2}^{\dagger}\) and \(a_{1}^{\dagger}a_{2}\) are locally nilpotent endomorphisms of \(W\otimes W\). Hence, for any \(g_{k,\ell},h_{k,\ell}\in\mathcal{F}^{(2)}\) series of the form \[\sum_{k,\ell\geqslant 0}(a_{2}^{\dagger})^{\ell}g_{k,\ell}(D_{1},D_{2})a_{1}^{k},\qquad\sum_{k,\ell\geqslant 0}(a_{1}^{\dagger})^{k}h_{k,\ell}(D_{1},D_{2})a_{2 }^{\ell} \tag{4.8}\] truncate when applied to any basis vector \(w_{j}\otimes w_{j^{\prime}}\). We obtain a class of well-defined elements of \(\operatorname{End}(W\otimes W)[[z]]\). We denote by \(\mathcal{A}^{(2)}\) the \(\mathbb{C}[[z]]\)-span of the operator-valued formal series (4.8), which is easily seen to be a subalgebra of \(\operatorname{End}(W\otimes W)[[z]]\). ### The Borel representations We introduce four level-0 representations of \(U_{q}(\widehat{\mathfrak{b}}^{+})\). First of all, let \(\mu\in\mathbb{C}\) be a free parameter. It is straightforward to check that the following assignments define a representation \(\upsilon\) of \(U_{q}(\widehat{\mathfrak{g}})\) on \(W\): \[\upsilon(e_{0})=\upsilon(f_{1})=\frac{1}{1-q^{2}}a^{\dagger}, \upsilon(k_{0})=q^{-\mu+1+2D},\] \[\upsilon(e_{1})=\upsilon(f_{0})=\frac{q^{2}}{1-q^{2}}a(q^{-\mu}-q ^{\mu-2D}), \upsilon(k_{1})=q^{\mu-1-2D}, \tag{4.9}\] The module structure on \(W\) defined by \(\upsilon\) is the evaluation Verma module: affinizations of finite-dimensional irreducible \(U_{q}(\mathfrak{sl}_{2})\)-modules arise as quotients if \(\mu\in\mathbb{Z}_{>0}\) (also see [14, Sec. 2.2]). We will in addition consider three \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-representations which do not extend to representations of \(U_{q}(\widehat{\mathfrak{g}})\). A useful reducible representation \(\phi:U_{q}(\widehat{\mathfrak{b}}^{+})\to\operatorname{End}(W)\) is given by \[\phi(e_{0})=0,\qquad\phi(e_{1})=\frac{q}{1-q^{2}}a,\qquad\phi(k_{0})=q^{\mu+1+2D },\qquad\phi(k_{1})=q^{-\mu-1-2D} \tag{4.10}\] which is closely related to the special evaluation homomorphism defined in [14, Eq. (4.6)]. The following representations \(\varrho,\bar{\varrho}:U_{q}(\widehat{\mathfrak{b}}^{+})\to\operatorname{End}(W)\) play an essential role in the definition of Baxter Q-operators: \[\varrho(e_{0})=\frac{1}{1-q^{2}}a^{\dagger},\qquad\varrho(e_{1})=\frac{q^{2} }{1-q^{2}}a,\qquad\varrho(k_{0})=q^{2D}, \varrho(k_{1})=q^{-2D}, \tag{4.11}\] \[\bar{\varrho}(e_{0})=\frac{q^{2}}{1-q^{2}}\bar{a}^{\dagger},\qquad\bar{ \varrho}(e_{1})=\frac{1}{1-q^{2}}a,\qquad\bar{\varrho}(k_{0})=q^{2(D+1)}, \bar{\varrho}(k_{1})=q^{-2(D+1)}. \tag{4.12}\] They correspond to the representations \(L^{\pm}_{1,a}\) introduced in [12, Def. 3.7] for suitable \(a\in\mathbb{C}^{\times}\) (called _prefundamental_ representations in [12] which considers their role in the construction of Q-operators for closed chains). We will henceforth repeatedly denote grading-shifted representations by the notation (2.29). Note that the grading-shifted representations \(\varrho_{z}\), \(\bar{\varrho}_{z}\) correspond to the representations defined by [13, Eq. (3.5)]. _Remark 4.3_.: Note that the grading-shifted representation in [11, Eq. 
(2.9)] is related to \(\varrho_{z}\) by a factor of \(-1\) in the actions of \(e_{0}\) and \(e_{1}\): in other words it is equal to \(\varrho_{-z}\). Since the Baxter Q-operators only depend on \(z^{2}\), see [11, Lem. 4.5], there are no serious discrepancies. The benefit of the current choice is its consistency across the relevant level-0 representations, with \(\upsilon\) having the same sign convention as finite-dimensional representations such as \(\Pi\), see Section 5. \(\diameter{\diameter{\mathcal{E}}}\) ### The \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-intertwiner \({\mathcal{O}}\) The tensor products \(\varrho_{q^{-\mu/2}z}\otimes\bar{\varrho}_{q^{\mu/2}z}\) and \(\upsilon_{z}\otimes\phi_{z}\) of shifted representations are closely related in the following sense: the two induced \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-actions on \(W\otimes W\) are conjugate by an element in \(\mathcal{A}^{(2)}\) which is independent of \(z\). More precisely, consider the deformed exponential \[e_{q^{2}}(x)=\sum_{k=0}^{\infty}\frac{x^{k}}{(q^{2};q^{2})_{k}}. \tag{4.12}\] We refer to Appendix A for more detail on this formal power series. We now define the following invertible element of \(\operatorname{End}(W\otimes W)\): \[{\mathcal{O}}=e_{q^{2}}(q^{2}a_{1}\bar{a}_{2}^{\dagger})^{-1}q^{\mu(D_{1}-D_{2 })/2}. \tag{4.13}\] The following statement is [13, Eq. (4.4)] and connects to [12, Thm. 3.8]; for completeness we provide a proof in the present conventions. **Theorem 4.4**.: _The \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-representations \(\varrho_{q^{-\mu/2}z}\otimes\bar{\varrho}_{q^{\mu/2}z}\) and \(\upsilon_{z}\otimes\phi_{z}\) are intertwined by \({\mathcal{O}}\):_ \[{\mathcal{O}}\left(\varrho_{q^{-\mu/2}z}\otimes\bar{\varrho}_{q^{\mu/2}z} \right)\!\!\left(\Delta(u)\right)=\big{(}\upsilon_{z}\otimes\phi_{z}\big{)} \!\!\left(\Delta(u)\right)\,{\mathcal{O}}\qquad\text{for all $u\in U_{q}(\widehat{ \mathfrak{b}}^{+})$}. \tag{4.14}\] Proof.: The relations (A.13-A.15) can be evaluated at \(y=q^{2}\), yielding \[q^{\mu(D_{2}-D_{1})/2}e_{q^{2}}(q^{2}a_{1}\bar{a}_{2}^{\dagger}) \bar{a}_{2}^{\dagger}=\big{(}q^{-\mu/2}a_{1}^{\dagger}+q^{2(D_{1}+1)+\mu/2} \bar{a}_{2}^{\dagger}\big{)}q^{\mu(D_{2}-D_{1})/2}e_{q^{2}}(q^{2}a_{1}\bar{a}_ {2}^{\dagger}),\] \[q^{\mu(D_{2}-D_{1})/2}e_{q^{2}}(q^{2}a_{1}\bar{a}_{2}^{\dagger} )\big{(}a_{1}(q^{-2\mu}-q^{-2D_{1}})+q^{-2(D_{1}+1)}a_{2}\big{)}=\] \[=\big{(}a_{1}q^{-3\mu/2}+q^{-\mu/2-2(D_{1}+1)}a_{2}\big{)}q^{\mu( D_{2}-D_{1})/2}e_{q^{2}}(q^{2}a_{1}\bar{a}_{2}^{\dagger}),\] \[q^{\mu(D_{2}-D_{1})/2}e_{q^{2}}(q^{2}a_{1}\bar{a}_{2}^{\dagger} )q^{2(D_{1}+D_{2}+1)}=q^{2(D_{1}+D_{2}+1)}q^{\mu(D_{2}-D_{1})/2}e_{q^{2}}(q^{2 }a_{1}\bar{a}_{2}^{\dagger}),\] \[q^{\mu(D_{2}-D_{1})/2}e_{q^{2}}(q^{2}a_{1}\bar{a}_{2}^{\dagger} )q^{-2(D_{1}+D_{2}+1)}=q^{-2(D_{1}+D_{2}+1)}q^{\mu(D_{2}-D_{1})/2}e_{q^{2}}(q^ {2}a_{1}\bar{a}_{2}^{\dagger}).\] These directly imply (4.14) for \(u\in\{e_{0},e_{1},k_{0},k_{1}\}\). ### Formalism for \(U_{q}(\widehat{\mathfrak{b}}^{-})\) Recall the automorphism \(\psi\) defined by (3.1), interchanging the two Borel subalgebras. Note that \(\upsilon:U_{q}(\widehat{\mathfrak{g}})\to\operatorname{End}(W)\) satisfies \[\upsilon=\upsilon\circ\psi. \tag{4.15}\] Hence, it is natural to define representations of \(U_{q}(\widehat{\mathfrak{b}}^{-})\) corresponding to \(\varrho\), \(\bar{\varrho}\) and \(\phi\), as follows: \[\varrho^{-}:=\varrho\circ\psi,\qquad\bar{\varrho}^{-}:=\bar{\varrho}\circ\psi, \qquad\phi^{-}:=\phi\circ\psi. 
\tag{4.16}\] Explicitly, we have \[\varrho^{-}(f_{0}) =\frac{q^{2}}{1-q^{2}}a,\ \ \varrho^{-}(f_{1})=\frac{1}{1-q^{2}}a^{ \dagger},\ \ \varrho^{-}(k_{0})=q^{2D}, \varrho^{-}(k_{1})=q^{-2D},\] \[\bar{\varrho}^{-}(f_{0}) =\frac{1}{1-q^{2}}a,\ \ \bar{\varrho}^{-}(f_{1})=\frac{q^{2}}{1-q^{2}}\bar{a}^{ \dagger},\ \ \bar{\varrho}^{-}(k_{0})=q^{2(D+1)}, \bar{\varrho}^{-}(k_{1})=q^{-2(D+1)},\] \[\phi^{-}(f_{0}) =\frac{q}{1-q^{2}}a,\ \ \phi^{-}(f_{1})=0, \phi^{-}(k_{0})=q^{\mu+1+2D}, \phi^{-}(k_{1})=q^{-\mu-1-2D}. \tag{4.17}\] By (3.3), whereas the grading-shifted representations \(\varrho_{z}\), \(\bar{\varrho}_{z}\), \(\phi_{z}\) take values in \(\operatorname{End}(W)\otimes\mathbb{C}[z]\), their negative counterparts \(\varrho_{z}^{-}\), \(\bar{\varrho}_{z}^{-}\), \(\phi_{z}^{-}\) take values in \(\operatorname{End}(W)\otimes\mathbb{C}[z^{-1}]\). Since \(\psi\) is a coalgebra antiautomorphism, using (3.3) we immediately deduce the following characterization of the tensorial opposite of \(\mathcal{O}\). **Corollary 4.5**.: _The linear map_ \[\mathcal{O}_{21}=e_{q^{2}}(q^{2}\bar{a}_{1}^{\dagger}a_{2})^{-1}q^{\mu(D_{2}- D_{1})/2}\in\operatorname{End}(W\otimes W). \tag{4.18}\] _intertwines the \(U_{q}(\widehat{\mathfrak{b}}^{-})\)-representations \(\bar{\varrho}_{q^{-\mu/2}z}^{-\mu/2}\otimes\varrho_{q^{\mu/2}z}^{-}\) and \(\phi_{z}^{-}\otimes v_{z}\), viz._ \[\mathcal{O}_{21}\left(\bar{\varrho}_{q^{-\mu/2}z}^{-\mu/2}\otimes\varrho_{q^ {\mu/2}z}^{-}\right)(\Delta(u))=\big{(}\phi_{z}^{-}\otimes v_{z}\big{)}(\Delta (u))\ \mathcal{O}_{21}\qquad\text{for all $u\in U_{q}(\widehat{\mathfrak{b}}^{-})$}. \tag{4.19}\] ## 5. L-operators and R-operators In order to define L-operators, we recall the standard \(2\)-dimensional representation \(\Pi:U_{q}(\widehat{\mathfrak{g}})\to\operatorname{End}(\mathbb{C}^{2})\) determined by \[\Pi(e_{0}) =\Pi(f_{1})=\begin{pmatrix}0&0\\ 1&0\end{pmatrix}, \Pi(k_{0})=\begin{pmatrix}q^{-1}&0\\ 0&q\end{pmatrix},\] \[\Pi(e_{1}) =\Pi(f_{0})=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}, \Pi(k_{1})=\begin{pmatrix}q&0\\ 0&q^{-1}\end{pmatrix}. \tag{5.1}\] In analogy with (4.15), we have \[\Pi=\Pi\circ\psi. \tag{5.2}\] ### L-operators for \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-modules We will now obtain explicit formulas for certain scalar multiples of \(\mathcal{R}_{\partial\Pi}(z)\), \(\mathcal{R}_{\partial\Pi}(z)\), \(\mathcal{R}_{v\Pi}(z)\) and \(\mathcal{R}_{\phi\Pi}(z)\). In this case both Theorem 2.4 and Proposition 2.6 apply. It turns out that the relevant linear equations all have \(1\)-dimensional solution spaces over \(\mathbb{C}[[z]]\). The following linear operators are convenient scalar multiples. \[\mathcal{L}_{\varrho}(z) =\begin{pmatrix}q^{D}&a^{\dagger}q^{-D-1}z\\ aq^{D+1}z&q^{-D}-q^{D+2}z^{2}\end{pmatrix}, \tag{5.4}\] \[\mathcal{L}_{\tilde{\varrho}}(z) =\begin{pmatrix}q^{D+1}-q^{-D+1}z^{2}&\bar{a}^{\dagger}q^{-D}z\\ aq^{D}z&q^{-D-1}\end{pmatrix},\] (5.5) \[\mathcal{L}_{v}(z) =\begin{pmatrix}q^{D}-q^{-D+\mu}z^{2}&a^{\dagger}q^{-D-2+\mu}z\\ aq\big{(}q^{D-\mu}-q^{-D+\mu}\big{)}z&q^{-D-1+\mu}-q^{D+1}z^{2}\end{pmatrix},\] (5.6) \[\mathcal{L}_{\phi}(z) =\begin{pmatrix}q^{D+1}&0\\ aq^{D+1}z&q^{-D-\mu}\end{pmatrix}. \tag{5.3}\] _Remark 5.1_.: We have abused notation by representing linear operators on \(\operatorname{End}(W\otimes\mathbb{C}^{2})\) as \(2\times 2\) matrices with coefficients in \(\operatorname{End}(W)\) (as opposed to the conventional usage that realizes operators on \(\operatorname{End}(\mathbb{C}^{2}\otimes W)\) in this way). The following result is [14, Cor. 4.2]. 
**Theorem 5.2**.: _The above L-operators satisfy the following relation in \(\operatorname{End}(W\otimes W\otimes\mathbb{C}^{2})[[z]]\):_ \[\mathcal{O}_{12}\mathcal{L}_{\varrho}(q^{-\mu/2}z)_{13}\mathcal{L}_{\bar{ \varrho}}(q^{\mu/2}z)_{23}=\mathcal{L}_{\upsilon}(z)_{13}\mathcal{L}_{\phi}(z )_{23}\mathcal{O}_{12}. \tag{5.7}\] Proof.: From (2.16) one deduces \[\mathcal{L}_{\varrho}(q^{-\mu/2}z)_{13}\mathcal{L}_{\bar{\varrho }}(q^{\mu/2}z)_{23} \varpropto(\varrho_{q^{-\mu/2}z}\otimes\bar{\varrho}_{q^{\mu/2}z} \otimes\Pi)\big{(}(\Delta\otimes\operatorname{\sf id})(\mathcal{R})\big{)},\] \[\mathcal{L}_{\upsilon}(z)_{13}\mathcal{L}_{\phi}(z)_{23} \varpropto(\upsilon_{z}\otimes\phi_{z}\otimes\Pi)\big{(}( \Delta\otimes\operatorname{\sf id})(\mathcal{R})\big{)}.\] Now Theorem 4.4 implies (5.7) up to a scalar. By applying both sides to \(w_{0}\otimes w_{0}\otimes(\begin{smallmatrix}1\\ 0\end{smallmatrix})\) one observes that the scalar is \(1\). Given the L-operators for the various \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-representations, Lemma 3.1 provides us with L-operators for the corresponding \(U_{q}(\widehat{\mathfrak{b}}^{-})\)-representations: \(\mathcal{L}_{\pi}^{-}(z)=\mathcal{L}_{\pi}(z)_{21}\) for \(\pi\in\{\varrho,\bar{\varrho},\upsilon,\phi\}\). These are scalar multiples of \(\mathcal{R}_{\Pi\varrho^{-}}(z)\), \(\mathcal{R}_{\Pi\bar{\varrho}^{-}}(z)\), \(\mathcal{R}_{\Pi\upsilon}(z)\) and \(\mathcal{R}_{\Pi\phi^{-}}(z)\), respectively. Theorem 5.2 immediately yields the following result: **Corollary 5.3**.: _The following relation in \(\operatorname{End}(\mathbb{C}^{2}\otimes W\otimes W)[[z]]\) is satisfied:_ \[\mathcal{O}_{32}\mathcal{L}_{\varrho}^{-}(q^{-\mu/2}z)_{13}\mathcal{L}_{\bar {\varrho}}^{-}(q^{\mu/2}z)_{12}=\mathcal{L}_{\upsilon}^{-}(z)_{13}\mathcal{L}_ {\phi}^{-}(z)_{12}\mathcal{O}_{32}. \tag{5.8}\] ### Actions of \(\mathcal{R}(z)\) on tensor products of infinite-dimensional Borel representations By Theorem 2.4, the grading-shifted universal R-matrix also acts on the tensor product of the level-\(0\) modules \((W,\upsilon)\) and \((W,\phi^{-})\) and on the tensor product of the level-\(0\) modules \((W,\varrho)\) and \((W,\bar{\varrho}^{-})\) as \(\operatorname{End}(W\otimes W)\)-valued formal power series. It is convenient for us to use rescaled linear-operator-valued formal power series \[\mathcal{R}_{\varrho\bar{\varrho}}(z),\mathcal{R}_{\upsilon\phi}(z)\in \operatorname{End}(W\otimes W)\otimes\mathbb{C}[[z]], \tag{5.9}\] uniquely defined by the condition that they fix \(w_{0}\otimes w_{0}\): \[\mathcal{R}_{\varrho\bar{\varrho}}(z)\varpropto(\varrho\otimes \bar{\varrho}^{-})(\mathcal{R}(z)), \mathcal{R}_{\varrho\bar{\varrho}}(z)\cdot(w_{0}\otimes w_{0}) =w_{0}\otimes w_{0},\] \[\mathcal{R}_{\upsilon\phi}(z)\varpropto(\upsilon\otimes\phi^{-} )(\mathcal{R}(z)), \mathcal{R}_{\upsilon\phi}(z)\cdot(w_{0}\otimes w_{0}) =w_{0}\otimes w_{0}. \tag{5.10}\] These power series will appear in the boundary factorization identity. In appendix B we obtain explicit expressions for \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) and \(\mathcal{R}_{\upsilon\phi}(z)\), although we will not need these for the proof of the boundary factorization identity using the universal K-matrix formalism of Section 3. ## 6. K-matrices In this section we consider solutions of reflection equations associated to the subalgebra \(U_{q}(\mathfrak{k})\). 
### Right K-matrices By Theorem 3.6, applying any of the level-\(0\)\(U_{q}(\widehat{\mathfrak{b}}^{+})\)-representations \(\varrho\), \(\bar{\varrho}\), \(\upsilon\), \(\phi\) to the grading-shifted universal K-matrix associated to \(U_{q}(\mathfrak{k})\) we obtain \(\operatorname{End}(W)\)-valued formal power series, satisfying the reflection equation (3.7). Moreover, since these commute with the action of \(k_{1}\) they act diagonally with respect to the basis \(\{w_{j}\}_{j\geq 0}\). We will consider the scalar multiples of these linear operators which fix \(w_{0}\): \[\mathcal{K}_{\pi}(z)\varpropto\pi(\mathcal{K}(z)),\qquad\mathcal{K}_{\pi}(z) \cdot w_{0}=w_{0}. \tag{6.1}\] for \(\pi\in\{\varrho,\bar{\varrho},\upsilon,\phi\}\). It is convenient to obtain explicit expressions by applying Propositions 3.7 and 3.8. These could be found independently of the universal K-matrix formalism, either by solving the reflection equations directly in all cases or by following the approach outlined in [13, 14] (this relies on the irreducibility of certain tensor products as \(U_{q}(\mathfrak{k})((z))\)-modules; otherwise the reflection equation must be verified directly). First of all, the linear operator \[K_{\Pi}(z)=\begin{pmatrix}\xi z^{2}-1&0\\ 0&\xi-z^{2}\end{pmatrix}\in\operatorname{End}(\mathbb{C}^{2})[[z]] \tag{6.2}\] is, up to a scalar, the unique solution of the \(U_{q}(\mathfrak{k})\)-intertwining condition \[K_{\Pi}(z)\Pi_{z}(u)=\Pi_{1/z}(u)K_{\Pi}(z)\qquad\text{for all $u\in U_{q}( \mathfrak{k})$.} \tag{6.3}\] By Theorem 3.6, it is proportional to the action of the grading-shifted universal K-matrix in the representation \((\mathbb{C}^{2},\Pi)\). Recall that \(\Pi\circ\psi=\Pi\); hence, motivated by Proposition 3.7, for \(\pi\in\{\varrho,\bar{\varrho},\upsilon,\phi\}\), we consider the right reflection equation \[\mathcal{L}_{\pi}(\tfrac{y}{z})\mathcal{K}_{\pi}(y)\mathcal{L}_{\pi}(yz)K_{ \Pi}(z)=K_{\Pi}(z)\mathcal{L}_{\pi}(yz)\mathcal{K}_{\pi}(y)\mathcal{L}_{\pi}( \tfrac{y}{z})\in\operatorname{End}(W\otimes\mathbb{C}^{2})[[y/z,z]]. \tag{6.4}\] **Lemma 6.1**.: _We have_ \[\mathcal{K}_{\varrho}(z) =(-q^{-D}\xi)^{D}(q^{2}\xi^{-1}z^{2};q^{2})_{D}, \mathcal{K}_{\bar{\varrho}}(z) =(qz^{2})^{-D}(q^{2}\xi^{-1}z^{-2};q^{2})_{D}^{-1},\] \[\mathcal{K}_{\upsilon}(z) =z^{-2D}\frac{(q^{2-\mu}\xi^{-1}z^{2};q^{2})_{D}}{(q^{2-\mu}\xi^{ -1}z^{-2};q^{2})_{D}}, \mathcal{K}_{\phi}(z) =(-q^{-\mu-D-1}\;\xi)^{D}. \tag{6.5}\] Proof.: For \(\mathcal{K}_{\upsilon}(z)\), by a straightforward check, the intertwining condition \[\mathcal{K}_{\upsilon}(z)\upsilon_{z}(u)=\upsilon_{1/z}(u)\mathcal{K}_{ \upsilon}(z)\qquad\text{for all $u\in U_{q}(\mathfrak{k})$} \tag{6.6}\] can be solved to find \(\mathcal{K}_{\upsilon}(z)\), making use of Proposition 3.8. Since \(\mathcal{K}(z)\) commutes with the action of \(k_{1}\) it follows that \(\mathcal{K}_{\upsilon}(z)=f(D)\) for some \(f\in\mathcal{F}\). Now imposing (6.6) for the generators \(e_{0}-q^{-1}\xi^{-1}k_{0}f_{1}\) and \(e_{1}-q^{-1}\xi k_{1}f_{0}\) yields the recurrence relation \[\frac{f(D+1)}{f(D)}=\frac{1-q^{2(D+1)-\mu}\xi^{-1}z^{2}}{z^{2}-q^{2(D+1)-\mu} \xi^{-1}}.\] In particular, the linear relation (6.6) has a \(1\)-dimensional solution space. Together with the constraint \(f(0)=1\) it yields the formula given in (6.5). 
For \(\pi\in\{\varrho,\bar{\varrho},\phi\}\), it is convenient to consider the linear space \[\operatorname{\mathsf{RE}}_{\pi}:=\{\mathcal{K}_{\pi}(y)\in\mathcal{F}(D)[[y]]\,|\,\text{the right reflection equation (6.4) holds}\}. \tag{6.7}\] ### Left K-matrices We now obtain linear-operator-valued power series satisfying a reflection equation for the left boundary by using a well-established bijection, see [22, Eq. (15)], between its solution set and the solution set of the right reflection equation. For fixed \(\widetilde{\xi}\in\mathbb{C}^{\times}\) we define \[\widetilde{K}_{\Pi}(z):=(1-q^{2}\widetilde{\xi}^{-1}z^{2})^{-1}(1-q^{2}\widetilde{\xi}z^{2})^{-1}\big{(}K_{\Pi}(qz)^{-1}|_{\xi\mapsto\widetilde{\xi}^{-1}}\big{)}=\begin{pmatrix}q^{2}\widetilde{\xi}z^{2}-1&0\\ 0&\widetilde{\xi}-q^{2}z^{2}\end{pmatrix}. \tag{6.8}\] Also, for \(\pi\in\{\varrho,\bar{\varrho},\upsilon,\phi\}\) we define \[\widetilde{\mathcal{K}}_{\pi}(z):=\mathcal{K}_{\pi}(qz)^{-1}|_{\xi\mapsto\widetilde{\xi}^{-1}}. \tag{6.9}\] Similarly, note that \(\mathcal{L}_{\pi}(\gamma z)\) is invertible in \(\operatorname{End}(W\otimes\mathbb{C}^{2})[[z]]\) for all \(\gamma\in\mathbb{C}\). We define \[\widetilde{\mathcal{L}}_{\pi}(z)=\mathcal{L}_{\pi}(q^{2}z)^{-1}. \tag{6.10}\] **Lemma 6.2**.: _For all \(\pi\in\{\varrho,\bar{\varrho},\upsilon,\phi\}\) the left reflection equation holds:_ \[\widetilde{\mathcal{K}}_{\pi}(y)\widetilde{\mathcal{L}}_{\pi}(yz)\widetilde{K}_{\Pi}(z)\mathcal{L}_{\pi}(\tfrac{y}{z})=\mathcal{L}_{\pi}(\tfrac{y}{z})\widetilde{K}_{\Pi}(z)\widetilde{\mathcal{L}}_{\pi}(yz)\widetilde{\mathcal{K}}_{\pi}(y)\quad\in\operatorname{End}(W\otimes\mathbb{C}^{2})[[y/z,z]]. \tag{6.11}\] Proof.: The desired equation (6.11) can be rewritten as \[\widetilde{K}_{\Pi}(z)^{-1}\widetilde{\mathcal{L}}_{\pi}(yz)^{-1}\widetilde{\mathcal{K}}_{\pi}(y)^{-1}\mathcal{L}_{\pi}(\tfrac{y}{z})=\mathcal{L}_{\pi}(\tfrac{y}{z})\widetilde{\mathcal{K}}_{\pi}(y)^{-1}\widetilde{\mathcal{L}}_{\pi}(yz)^{-1}\widetilde{K}_{\Pi}(z)^{-1}.\] By (6.8-6.10), this is equivalent to the right reflection equation (6.4) with \(y\mapsto qy\), \(z\mapsto qz\) and \(\xi\mapsto\widetilde{\xi}^{-1}\). Using the explicit formulas (6.5) and the definition (6.9) we obtain that the solutions of the left reflection equation (6.11) are the following \(\operatorname{End}(W)\)-valued formal power series in \(z\): \[\begin{split}\widetilde{\mathcal{K}}_{\varrho}(z)&=(-q^{D}\widetilde{\xi})^{D}(q^{4}\widetilde{\xi}z^{2};q^{2})_{D}^{-1},&\widetilde{\mathcal{K}}_{\bar{\varrho}}(z)=(q^{3}z^{2})^{D}(\widetilde{\xi}z^{-2};q^{2})_{D},\\ \widetilde{\mathcal{K}}_{\upsilon}(z)&=(qz)^{2D}\frac{(q^{-\mu}\widetilde{\xi}z^{-2};q^{2})_{D}}{(q^{4-\mu}\widetilde{\xi}z^{2};q^{2})_{D}},&\widetilde{\mathcal{K}}_{\phi}(z)=(-q^{\mu+D+1}\widetilde{\xi})^{D}.\end{split} \tag{6.12}\] ## 7. Fusion intertwiners revisited In this short intermezzo we explain how the universal K-matrix formalism naturally leads to relations involving K-matrices and \(U_{q}(\mathfrak{b}^{+})\)-intertwiners called _fusion intertwiners_ which play a key role in the short exact sequence approach to the Q-operator. These intertwiners were discussed in [23] and the relevant relations with K-matrices were shown by a linear-algebraic computation relying on the explicit expressions of the various constituent factors, see [23, Lem. 3.2]. In other words, the representation-theoretic origin of these relations was unclear, which we now remedy. 
Level-0 representations of \(U_{q}(\widehat{\mathfrak{b}}^{+})\) are amenable to scalar modifications of the action of \(U_{q}(\mathfrak{h})=\langle k_{1}^{\pm 1}\rangle\), see also [10, Rmk. 2.5]. In particular, for \(r\in\mathbb{C}^{\times}\), define a modified Borel representation \(\varrho_{r}\) as follows: \[\varrho_{r}(e_{i})=\varrho(e_{i}),\qquad\varrho_{r}(k_{0})=r\varrho(k_{0}),\qquad\varrho_{r}(k_{1})=r^{-1}\varrho(k_{1}) \tag{7.1}\] and consider the grading-shifted representation \(\varrho_{r,z}:=(\varrho_{r})_{z}\). There exist \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-intertwiners \[\iota(r):(W,\varrho_{qr,qz})\to(W\otimes\mathbb{C}^{2},\varrho_{r,z}\otimes\Pi_{z}),\] \[\tau(r):(W\otimes\mathbb{C}^{2},\varrho_{r,z}\otimes\Pi_{z})\to(W,\varrho_{q^{-1}r,q^{-1}z}),\] called _fusion intertwiners_, which take part in the following short exact sequence: \[\begin{CD}0@>{}>{}>(W,\varrho_{qr,qz})@>{\iota(r)}>{}>(W\otimes\mathbb{C}^{2},\varrho_{r,z}\otimes\Pi_{z})@>{\tau(r)}>{}>(W,\varrho_{q^{-1}r,q^{-1}z})@>{}>{}>0\end{CD} \tag{7.2}\] Explicitly10, we have Footnote 10: The sign mismatch with [20, Eq. (3.1)] is explained in Remark 4.3. \[\iota(r)=\begin{pmatrix}q^{-D}a^{\dagger}\\ -q^{D+1}r\end{pmatrix},\qquad\tau(r)=\begin{pmatrix}q^{D},&q^{-D}r^{-1}a^{\dagger}\end{pmatrix}. \tag{7.3}\] Analogously to Theorem 5.2, fusion relations for the L-operators \(\mathcal{L}(r,z)\), defined as suitable scalar multiples of \((\varrho_{r,z}\otimes\Pi)(\mathcal{R})\), now follow from these intertwining properties and the coproduct formulas for \(\mathcal{R}\) (2.16), see [20, Eqns. (3.8) and (3.9)]. Recalling the universal object \(\mathcal{K}\) and Theorem 3.6, we define the corresponding K-operator \(\mathcal{K}_{\varrho}(r,z)\) as the unique scalar multiple of \(\varrho_{r,z}(\mathcal{K})\) which fixes \(w_{0}\) (cf. [20, Prop. 2.5]). Then \[(\varrho_{r,z}\otimes\Pi_{z})(\Delta(\mathcal{K}))\qquad\propto\qquad\mathcal{K}_{\varrho}(r,z)_{1}\mathcal{L}(r,z^{2})K_{\Pi}(z)_{2} \tag{7.4}\] as a consequence of (3.19). Since \(\mathcal{K}\) lies in a completion of \(U_{q}(\widehat{\mathfrak{b}}^{+})\), the intertwining properties of \(\iota(r)\) and \(\tau(r)\) now directly yield the following fusion relation for the K-operator: \[\mathcal{K}_{\varrho}(r,z)_{1}\mathcal{L}(r,z^{2})K_{\Pi}(z)_{2}\iota(r) \propto \iota(r)\mathcal{K}_{\varrho}(qr,qz)\] \[\tau(r)\mathcal{K}_{\varrho}(r,z)_{1}\mathcal{L}(r,z^{2})K_{\Pi}(z)_{2} \propto \mathcal{K}_{\varrho}(q^{-1}r,q^{-1}z)\tau(r),\] with the scalar factors determined by applying the two sides of the equation to \(w_{0}\), say. We will be able to prove a boundary counterpart of the factorization identity (5.7) using similar ideas. We recover, with a much smaller computational burden, the key result [20, Lemma 3.2] (a similar relation for left K-operators can easily be deduced from this, as explained in the last sentence of [20, Proof of Lemma 3.2]). In the approach to Baxter's Q-operator using short exact sequences, the fusion relations for L and K-operators induce fusion relations for 2-boundary monodromy operators, see [20, Lem. 4.2] from which Baxter's relation (1.1) follows by taking traces, see [20, Sec. 5.2]. 
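As a small sanity check of the exactness of (7.2), the composition \(\tau(r)\circ\iota(r)\) must vanish. Here is a minimal numerical sketch; the q-oscillator conventions \(Dw_{j}=jw_{j}\) and \(a^{\dagger}w_{j}=w_{j+1}\) are our assumption, since Section 4.2 is not reproduced here:

```python
import numpy as np

n, q, r = 8, 0.8, 1.3                          # truncation of W and generic parameters
qD = np.diag(q ** np.arange(n, dtype=float))   # q^D
qDi = np.diag(q ** -np.arange(n, dtype=float)) # q^{-D}
adag = np.diag(np.ones(n - 1), -1)             # a† w_j = w_{j+1} (assumed convention)

iota = np.vstack([qDi @ adag, -q * r * qD])    # block column (q^{-D}a†; -q^{D+1}r), cf. (7.3)
tau = np.hstack([qD, qDi @ adag / r])          # block row (q^D, q^{-D}r^{-1}a†), cf. (7.3)

print(np.max(np.abs(tau @ iota)))              # ≈ 0: tau(r) ∘ iota(r) = 0
```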
## 8. Boundary factorization identity In motivating and presenting the key boundary relations, it is very useful to introduce a graphical representation of spaces and operators. Let us introduce the following pictures for the different representations introduced in Sections 4 and 5: [diagrams omitted: each of the representations \(\varrho_{z}\), \(\bar{\varrho}_{z}\), \(\phi_{z}\), \(\varrho_{z}^{-}\), \(\bar{\varrho}_{z}^{-}\), \(\phi_{z}^{-}\), \(\upsilon_{z}\) and \(\Pi_{z}\) is drawn as an oriented line labelled by its spectral parameter \(z\).] For any vector spaces \(V\), \(V^{\prime}\), denote by \(\mathcal{P}\) the linear map from \(V\otimes V^{\prime}\) to \(V^{\prime}\otimes V\) such that \(\mathcal{P}(v\otimes v^{\prime})=v^{\prime}\otimes v\) for all \(v\in V\), \(v^{\prime}\in V^{\prime}\). Also set \(z=z_{1}/z_{2}\). We then have the corresponding pictures for L-operators and R-operators [diagrams omitted]. We now make the following definitions11: Footnote 11: These are the modified forms of the R-matrices that appear in the corresponding left reflection equations, see [13, Eq. (13)]. \[\widetilde{\mathcal{R}}_{\varrho\bar{\varrho}}(z):=\mathcal{R}_{\varrho\bar{\varrho}}(q^{2}z)^{-1},\qquad\widetilde{\mathcal{R}}_{\upsilon\phi}(z):=\mathcal{R}_{\upsilon\phi}(q^{2}z)^{-1}, \tag{8.1}\] and represent these modified R-matrices by the corresponding crossing pictures [diagrams omitted]. The various right-boundary K-matrices are represented as lines reflecting off a right boundary [diagrams omitted]. The left-boundary K-matrices defined in Section 6.2 are represented by the natural analogues of these pictures; for example, \(\widetilde{\mathcal{K}}_{\varrho}(z)\) is drawn as a reflection off a left boundary [diagram omitted]. Making use of these pictures, we see that Theorem 5.2 and Corollary 5.3 are represented by [diagrams omitted]. For the compatibility with the right boundary we claim the diagrammatic identity [diagram omitted], which corresponds to the following identity in \(\mathcal{A}^{(2)}\): \[\mathcal{K}_{\upsilon}(z)_{1}\mathcal{R}_{\upsilon\phi}(z^{2})\mathcal{K}_{\phi}(z)_{2}\,\mathcal{O}=\mathcal{O}\,\mathcal{K}_{\varrho}(q^{-\mu/2}z)_{1}\mathcal{R}_{\varrho\bar{\varrho}}(z^{2})\mathcal{K}_{\bar{\varrho}}(q^{\mu/2}z)_{2}, \tag{8.2}\] which we call the _right boundary factorization identity_. The diagrams above serve as a motivation for the identity, which we now prove using results from Section 3 (an alternative computational proof of Theorem 8.1 is given in Appendix C). **Theorem 8.1**.: _For all \(\mu\in\mathbb{C}\), all \(q\in\mathbb{C}^{\times}\) not a root of unity and all \(\xi\in\mathbb{C}^{\times}\), relation (8.2) is satisfied._ Proof.: The proof is analogous to the proof of Theorem 5.2. We first note that \[\big{(}\varrho_{q^{-\mu/2}z}\otimes\bar{\varrho}_{q^{\mu/2}z}\big{)}\big{(}(\mathsf{id}\otimes\psi)(\mathcal{R})\big{)} =\big{(}\varrho_{q^{-\mu/2}z}\otimes\bar{\varrho}_{q^{-\mu/2}z^{-1}}^{-}\big{)}(\mathcal{R}) \propto\mathcal{R}_{\varrho\bar{\varrho}}(z^{2}),\] \[\big{(}\upsilon_{z}\otimes\phi_{z}\big{)}\big{(}(\mathsf{id}\otimes\psi)(\mathcal{R})\big{)} =\big{(}\upsilon_{z}\otimes\phi_{z^{-1}}^{-}\big{)}(\mathcal{R}) \propto\mathcal{R}_{\upsilon\phi}(z^{2}).\] Noting the coproduct formula (3.19), we obtain \[\mathcal{K}_{\varrho}(q^{-\mu/2}z)_{1}\mathcal{R}_{\varrho\bar{\varrho}}(z^{2})\mathcal{K}_{\bar{\varrho}}(q^{\mu/2}z)_{2} \propto\quad\big{(}\varrho_{q^{-\mu/2}z}\otimes\bar{\varrho}_{q^{\mu/2}z}\big{)}(\Delta(\mathcal{K})),\] \[\mathcal{K}_{\upsilon}(z)_{1}\mathcal{R}_{\upsilon\phi}(z^{2})\mathcal{K}_{\phi}(z)_{2} \propto\quad\big{(}\upsilon_{z}\otimes\phi_{z}\big{)}(\Delta(\mathcal{K})).\] Now Theorem 4.4 implies (8.2) up to a scalar. 
The fact that all factors fix \(w_{0}\otimes w_{0}\) shows that the scalar is \(1\). Compatibility with the left boundary requires the analogous diagrammatic identity [diagram omitted]. The identity in \(\mathcal{A}^{(2)}\) corresponding to this is \[\widetilde{\mathcal{K}}_{\bar{\varrho}}(q^{\mu/2}z)_{2}\widetilde{\mathcal{R}}_{\varrho\bar{\varrho}}(z^{2})\widetilde{\mathcal{K}}_{\varrho}(q^{-\mu/2}z)_{1}\mathcal{O}^{-1}=\mathcal{O}^{-1}\widetilde{\mathcal{K}}_{\phi}(z)_{2}\widetilde{\mathcal{R}}_{\upsilon\phi}(z^{2})\widetilde{\mathcal{K}}_{\upsilon}(z)_{1}. \tag{8.3}\] **Theorem 8.2**.: _Relation (8.3) is satisfied._ Proof.: Given the definitions (6.9) and (8.1), this follows straightforwardly by inverting (8.2) and replacing \((z,\xi)\mapsto(qz,\widetilde{\xi}^{-1})\). ## 9. Discussion The main result of this paper is Theorem 8.1 which can be viewed as a boundary analogue of Theorem 5.2. To establish this result, we needed to first show that all R and K-operators involved in Equation (8.2) are well-defined actions of the universal elements \(\mathcal{R}\) and \(\mathcal{K}\) on the infinite-dimensional \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-modules involved. The key fact that allows for this is that \(\mathcal{R}\) and \(\mathcal{K}\) live in completions of \(U_{q}(\widehat{\mathfrak{b}}^{+})\otimes U_{q}(\widehat{\mathfrak{b}}^{-})\) and of \(U_{q}(\widehat{\mathfrak{b}}^{+})\), respectively. This is very familiar for \(\mathcal{R}\) but for \(\mathcal{K}\) relies on the recent works [1, 2]. Introducing the \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-intertwiner \(\mathcal{O}\) and the formula for \(\Delta(\mathcal{K})\) given by (3.19), relation (8.2) follows immediately from the intertwining property of \(\mathcal{O}\). The open Q-operator \(\mathcal{Q}(z)\) of [23] is the trace of a product of R and K-operators over the \(U_{q}(\widehat{\mathfrak{b}}^{+})\)-module \((W,\varrho_{z})\) and there is a similar construction of an open Q-operator \(\overline{\mathcal{Q}}(z)\). In a future paper, the authors will present this construction and the use of Theorem 8.2 in deriving a boundary analogue of the factorization relation \(\mathcal{T}_{\mu}(z)\varpropto\mathcal{Q}(zq^{-\mu/2})\overline{\mathcal{Q}}(zq^{\mu/2})\). They will also develop the analogous theory for different coideal subalgebras, in particular those for which non-diagonal solutions of the reflection equation are intertwiners. ## Appendix A Deformed Pochhammer symbols and exponentials This appendix is independent from the main text, but provides identities which are used there. We review some basic theory of deformed Pochhammer symbols and exponentials (as formal power series) with a deformation parameter \(p\in\mathbb{C}^{\times}\), which corresponds to \(q^{2}\) in the main text. ### Deformed Pochhammer symbols Let \(x\) be a formal variable. For \(n\in\mathbb{Z}\), the (finite) deformed Pochhammer symbol \((x;p)_{n}\in\mathbb{C}[[x]]\) is defined by (A.1) \[(x;p)_{n}:=\begin{cases}\prod_{m=0}^{n-1}(1-xp^{m})&\text{if $n\geq 0$},\\ \prod_{m=n}^{-1}(1-xp^{m})^{-1}&\text{if $n<0$}\end{cases}\] (the definition for \(n<0\) is understood as a product of geometric series); since its constant coefficient is nonzero, it is invertible. 
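For computations it is convenient to have (A.1) available numerically. A small Python sketch (the function name is ours) which also spot-checks the inversion identity \((x;p)_{-n}=(p^{-n}x;p)_{n}^{-1}\) recorded as (A.2) below:

```python
def dpoch(x, p, n):
    """Deformed Pochhammer symbol (x; p)_n of (A.1), for any integer n."""
    out = 1.0
    if n >= 0:
        for m in range(n):
            out *= 1 - x * p**m
    else:                          # n < 0: product of the inverted factors
        for m in range(n, 0):
            out /= 1 - x * p**m
    return out

x, p = 0.3, 0.49
for n in range(1, 6):
    assert abs(dpoch(x, p, -n) - 1 / dpoch(p**(-n) * x, p, n)) < 1e-12
```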
For all \(p\in\mathbb{C}^{\times}\) and \(n\in\mathbb{Z}_{\geq 0}\) we have the following basic identity, see [1, (I.2), (I.3)]: (A.2) \[(x;p)_{-n}=(p^{-n}x;p)_{n}^{-1}=(x/p;p^{-1})_{n}^{-1}=(-x)^{-n}p^{n(n+1)/2}(p/x;p)_{n}^{-1}.\] Assuming \(|p|<1\), the infinite deformed Pochhammer symbol (A.3) \[(x;p)_{\infty}:=\prod_{m=0}^{\infty}(1-xp^{m})\] is an invertible formal power series with well-defined coefficients in \(\mathbb{C}\). The following identity holds in \(\mathbb{C}[[x]]\), see [1, (I.5)]: (A.4) \[(x;p)_{n}=\frac{(x;p)_{\infty}}{(p^{n}x;p)_{\infty}}.\] ### Deformed exponentials From now on we assume that \(p\) is not a root of unity. In particular, \((p;p)_{k}\neq 0\) for all \(k\in\mathbb{Z}_{\geq 0}\). The _deformed exponential_ is the invertible formal power series (A.5) \[e_{p}(x):={}_{1}\phi_{0}(0;-;p,x)=\sum_{k=0}^{\infty}\frac{x^{k}}{(p;p)_{k}}.\] The ordinary exponential formal power series arises as the limit \(\lim_{p\to 1}e_{p}((1-p)x)=\mathrm{e}^{x}\). This series satisfies the functional relation (A.6) \[e_{p}(px)=(1-x)e_{p}(x),\] see [1, Sec. 1.3]. Using the fact that constants are the only formal power series which are invariant under \(x\mapsto px\), by an inspection of constant coefficients we obtain from (A.6) the identity (A.7) \[e_{p}(x)=\frac{1}{(x;p)_{\infty}}\qquad\text{if $|p|<1$}.\] Similarly we consider the invertible formal power series (A.8) \[E_{p}(x):={}_{0}\phi_{0}(-;-;p,-x)=\sum_{k=0}^{\infty}\frac{p^{k(k-1)/2}x^{k}}{(p;p)_{k}}.\] Then \(E_{p}(-x)^{-1}\) also satisfies (A.6) and by comparing constant coefficients again we deduce \(e_{p}(x)=E_{p}(-x)^{-1}\). By evaluating (A.2) at \(x=1\), we obtain \(E_{p}(-x)=e_{p^{-1}}(p^{-1}x)\) and hence (A.9) \[e_{p}(x)=e_{p^{-1}}(p^{-1}x)^{-1}\quad\in\mathbb{C}[[x]].\] Deformed exponentials in \(x\) and \(y\) satisfy various useful identities, in particular in deformations of the commutative algebra \(\mathbb{C}[[x,y]]\). For instance, in any algebra generated by the symbols \(x\) and \(y\) such that \(yx=\gamma xy\) for \(\gamma\in\mathbb{C}\), the definition implies the following identity: (A.10) \[ye_{p}(x)=e_{p}(\gamma x)y\] which we will use repeatedly. For a survey of product formulas analogous to \(\exp(x)\exp(y)=\exp(x+y)\), see [10]. We will need the following result. **Lemma A.1**.: _Let \(x,y\) be elements of an algebra such that \(yx=pxy\). The following identities hold as formal power series in \(x,y\):_ (A.11) \[e_{p}(x)e_{p}(y) =e_{p}(x+y),\] (A.12) \[e_{p}(y)e_{p}(x) =e_{p}\big{(}x(1-y)\big{)}e_{p}(y)=e_{p}(x)e_{p}(-xy)e_{p}(y)=e_{p}(x)e_{p}\big{(}(1-x)y\big{)}.\] Proof.: (A.11) is a direct consequence of the well-known q-binomial formula, see e.g. [11, Ex. 1.35]. For (A.12) see [10, Prop. 3.2]. ### Deformed exponentials as linear maps Let \(V\) be a \(\mathbb{C}\)-linear space. Call an operator \(f\) on \(V\) _locally nilpotent_ if for all \(v\in V\) there exists \(o(v)\in\mathbb{Z}_{\geqslant 0}\) such that \(f^{o(v)}(v)=0\) (note that nilpotent operators are locally nilpotent and if \(V\) is finite-dimensional the converse is true). The deformed exponential \(e_{p}(f)\) defines an invertible map on \(V\). More generally, if \(y\) is an indeterminate then \(e_{p}(yf)\) is a well-defined invertible element of \(\operatorname{End}(V)[[y]]\). Recall the infinite-dimensional vector space \(W\) and its linear operators \(a\), \(a^{\dagger}\), \(\bar{a}^{\dagger}\), \(f(D)\) (\(f\in\mathcal{F}\)) from Section 4.2. 
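Before specializing to \(W\otimes W\), here is a finite-dimensional numerical sanity check of Lemma A.1 (our setup, purely for illustration): take \(x\) a nilpotent shift matrix and \(y\) a small diagonal matrix so that \(yx=pxy\) holds exactly and all series converge rapidly.

```python
import numpy as np

p, n = 0.5, 6
X = 0.4 * np.diag(np.ones(n - 1), -1)              # nilpotent x
Y = 0.2 * np.diag(p ** np.arange(n, dtype=float))  # diagonal y with y x = p x y
assert np.allclose(Y @ X, p * (X @ Y))

def e_p(M, terms=120):
    """Deformed exponential (A.5) of a matrix, as a truncated series."""
    S, T, c = np.eye(n), np.eye(n), 1.0
    for k in range(1, terms):
        T = T @ M
        c *= 1 - p**k                              # accumulates (p;p)_k
        S = S + T / c
    return S

assert np.allclose(e_p(X) @ e_p(Y), e_p(X + Y))                     # (A.11)
assert np.allclose(e_p(Y) @ e_p(X), e_p(X) @ e_p(-X @ Y) @ e_p(Y))  # (A.12)
print("Lemma A.1 verified in a finite-dimensional model")
```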
In the case \(V=W\otimes W\), we have the following commutation relations for linear-operator-valued formal series. **Lemma A.2**.: _Let \(y\) be a formal variable. In \(\operatorname{End}(W\otimes W)[[y]]\) the following identities hold:_ (A.13) \[\big{[}e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),f(D_{1}+D_{2})\big{]}=\big{[}e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),a_{1}\big{]}=\big{[}e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),\bar{a}_{2}^{\dagger}\big{]}=0\] _for all \(f\in\mathcal{F}\) and_ (A.14) \[\big{[}e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),a_{1}^{\dagger}\big{]} =yp^{D_{1}}\bar{a}_{2}^{\dagger}e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),\] (A.15) \[\big{[}e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),p^{-D_{1}}a_{2}\big{]} =ye_{p}(ya_{1}\bar{a}_{2}^{\dagger})a_{1}p^{-D_{1}}.\] Proof.: Note that (A.13) follows directly from the definition of the deformed exponential. A straightforward inductive argument using (4.4) yields (A.16) \[[a^{k+1},a^{\dagger}] =(1-p^{k+1})p^{D}a^{k},\] (A.17) \[[(\bar{a}^{\dagger})^{k+1},a]_{p^{k+1}} =(1-p^{k+1})(\bar{a}^{\dagger})^{k},\] for all \(k\in\mathbb{Z}_{\geqslant 0}\), which imply (A.14) and (A.15), respectively. ## Appendix B Explicit expressions for R-operators In this appendix we derive explicit formulas for \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) and \(\mathcal{R}_{\upsilon\phi}(z)\), defined by (5.10) as images of the universal R-matrix \(\mathcal{R}\) fixing \(w_{0}\otimes w_{0}\). We expect that these will be useful in further studies of Baxter's Q-operators for the open XXZ spin chain; for now they will allow us to give a proof of the boundary factorization identity which does not rely on the universal K-matrix formalism. First we note that, by the second part of Theorem 2.4, \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) and \(\mathcal{R}_{\upsilon\phi}(z)\) lie in the centralizer (B.1) \[\mathcal{A}_{0}^{(2)}:=\Big{\{}X\in\mathcal{A}^{(2)}\,\Big{|}\,\big{[}X,q^{D_{1}+D_{2}}\big{]}=0\Big{\}}.\] One straightforwardly verifies that \(\mathcal{A}_{0}^{(2)}\) is generated by elements of the form (B.2) \[\sum_{k\geqslant 0}(\bar{a}_{2}^{\dagger})^{k}f_{k}(D_{1},D_{2})a_{1}^{k},\qquad\sum_{k\geqslant 0}(a_{1}^{\dagger})^{k}f_{k}(D_{1},D_{2})a_{2}^{k},\qquad\qquad f_{k}\in\mathcal{F}^{(2)}.\] Hence, elements of \(\mathcal{A}_{0}^{(2)}\) in fact commute with all elements of the form \(f(D_{1}+D_{2})\) (\(f\in\mathcal{F}\)). ### Explicit expression for \(\mathcal{R}_{\upsilon\phi}(z)\) We first state and prove an explicit formula for \(\mathcal{R}_{\upsilon\phi}(z)\). We keep using the shorthand notation \(p=q^{2}\). **Theorem B.1**.: _For all \(z\in\mathbb{C}\) we have_ (B.3) \[\mathcal{R}_{\upsilon\phi}(z)=e_{p}(za_{1}^{\dagger}a_{2})q^{(\mu-1)(D_{2}-D_{1})-2D_{1}(D_{2}+1)}.\] Proof.: From Proposition 2.6 we deduce that \(\mathcal{R}_{\upsilon\phi}(z)\) is a solution of the linear relation (B.4) \[X(\upsilon_{z}\otimes\phi^{-})(\Delta(u))=(\upsilon_{z}\otimes\phi^{-})(\Delta^{\mathrm{op}}(u))X\quad\text{for all $u\in U_{q}(\widehat{\mathfrak{b}}^{-})$}.\] First of all, note that the element in the right-hand side of (B.3) satisfies (B.4) with \(u\in\{k_{0},k_{1}\}\) and so it suffices to prove that the vector space (B.5) \[\mathcal{X}=\Big{\{}X\in\mathcal{A}_{0}^{(2)}\,\Big{|}\,X\text{ satisfies (B.4) for $u\in\{f_{0},f_{1}\}$}\Big{\}}\] is spanned by \(e_{p}(za_{1}^{\dagger}a_{2})q^{(\mu-1)(D_{2}-D_{1})-2D_{1}(D_{2}+1)}\). 
Using the explicit formulas (2.2), (4.9) and (4.17), we obtain that (B.4) is equivalent to the system \[X\Big{(}z^{-1}a_{1}(q^{-\mu}-q^{\mu-2D_{1}})q^{-\mu-2D_{2}-1}+q^{-1}a_{2}\Big{)} =\Big{(}z^{-1}a_{1}(q^{-\mu}-q^{\mu-2D_{1}})+q^{\mu-2(D_{1}+1)}a_{2}\Big{)}X,\] \[Xa_{1}^{\dagger}q^{\mu+1+2D_{2}} =a_{1}^{\dagger}X.\] Without loss of generality we may write \(X=\widehat{X}q^{(\mu-1)(D_{2}-D_{1})-2D_{1}(D_{2}+1)}\) with \(\widehat{X}\in\mathcal{A}_{0}^{(2)}\). Hence (B.4) is equivalent to (B.6) \[z^{-1}[\widehat{X},a_{1}(1-p^{\mu-D_{1}})]=p^{\mu-D_{1}-1}a_{2}\widehat{X}-\widehat{X}p^{D_{1}}a_{2},\qquad\qquad[\widehat{X},a_{1}^{\dagger}]=0.\] It is straightforward to check that the centralizer in \(\mathcal{A}_{0}^{(2)}\) of \(a_{1}^{\dagger}\) is the subalgebra generated by elements of the form \(\sum_{k\geq 0}(a_{1}^{\dagger})^{k}f_{k}(D_{2})a_{2}^{k}\) with \(f_{k}\in\mathcal{F}\). It follows that \(\widehat{X}\) is of this form. Therefore (B.4) is equivalent to the single equation \[\sum_{k\geq 0}\big{[}(a_{1}^{\dagger})^{k},a_{1}(1-p^{\mu-D_{1}})\big{]}f_{k}(D_{2})a_{2}^{k}=z\sum_{k\geq 0}(a_{1}^{\dagger})^{k}\big{(}p^{\mu-D_{1}-k-1}f_{k}(D_{2}+1)-p^{D_{1}}f_{k}(D_{2})\big{)}a_{2}^{k+1}.\] The commutator vanishes if \(k=0\) so in the left-hand side we replace \(k\) by \(k+1\). For \(k\geq 0\) we have \[\big{[}(a^{\dagger})^{k+1},a(1-p^{\mu-D})\big{]}=(a^{\dagger})^{k}(1-p^{k+1})(p^{\mu-D-k-1}-p^{D}).\] Hence (B.4) is equivalent to the recurrence relation \[(1-p^{k+1})\big{(}p^{\mu-D_{1}-k-1}-p^{D_{1}}\big{)}f_{k+1}(D_{2})=z\big{(}p^{\mu-D_{1}-k-1}f_{k}(D_{2}+1)-p^{D_{1}}f_{k}(D_{2})\big{)}.\] Viewing \(\mathcal{F}^{(2)}(D_{1},D_{2})\) as an \(\mathcal{F}(D_{2})\)-module, the elements \(p^{\pm D_{1}}\) are linearly independent. Hence the above recurrence relation is equivalent to the system \[(1-p^{k+1})f_{k+1}(D)=zf_{k}(D+1),\qquad f_{k}(D+1)=f_{k}(D).\] This is in turn equivalent to \(f_{k}(D)\in(p;p)_{k}^{-1}z^{k}\mathbb{C}\) for \(k\in\mathbb{Z}_{>0}\), as required. ### The automorphism \(\chi\) and the q-oscillator subalgebra \(\widetilde{\mathcal{A}}\) To obtain an expression for \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) in terms of deformed exponentials, it is very convenient to point out an additional automorphism \(\chi\). It cannot be defined on all of \(\mathcal{A}\) so we will consider a subalgebra \(\widetilde{\mathcal{A}}\). First, consider the subalgebra \(\widetilde{\mathcal{F}}(D)\subset\mathcal{F}(D)\) generated by \[p^{\pm D(D+1)/2},\qquad\gamma^{D},\qquad(p\widetilde{\gamma};p)_{D}^{\pm 1},\qquad(p\gamma z^{2};p)_{D},\qquad(-\gamma z^{2})^{-D}(p\gamma^{-1}z^{-2};p)_{D}^{-1}\] for all \(\gamma\in\mathbb{C}^{\times}\) and \(\widetilde{\gamma}\in\mathbb{C}^{\times}\backslash p^{\mathbb{Z}}\). For elements of \(\widetilde{\mathcal{F}}(D)\), unlike general elements of \(\mathcal{F}(D)\), the symbol \(D\) can be formally evaluated at negative integers. Accordingly, we define an involutive automorphism \(\chi\) of \(\widetilde{\mathcal{F}}(D)\) accomplishing the formal replacement \(D\mapsto-D-1\). 
To be more precise, we set (B.7) \[\begin{split}\chi\big{(}p^{\pm D(D+1)/2}\big{)}&=p^{\pm D(D+1)/2},\qquad\qquad\chi\big{(}\gamma^{D}\big{)}=\gamma^{-D-1},\\ \chi\big{(}(p\widetilde{\gamma};p)_{D}^{\pm 1}\big{)}&=(1-\widetilde{\gamma})^{\mp 1}p^{\pm D(D+1)/2}(-\widetilde{\gamma})^{\mp D}(p\widetilde{\gamma}^{-1};p)_{D}^{\mp 1},\\ \chi\big{(}(p\gamma z^{2};p)_{D}\big{)}&=(1-\gamma z^{2})^{-1}p^{D(D+1)/2}(-\gamma z^{2})^{-D}(p\gamma^{-1}z^{-2};p)_{D}^{-1},\\ \chi\big{(}(-\gamma z^{2})^{-D}(p\gamma^{-1}z^{-2};p)_{D}^{-1}\big{)}&=(1-\gamma z^{2})p^{-D(D+1)/2}(p\gamma z^{2};p)_{D}.\end{split}\] We denote the subalgebra of \(\operatorname{End}(W)\) generated by \(a^{\dagger}\), \(a\) and \(\widetilde{\mathcal{F}}(D)\) by \(\widetilde{\mathcal{A}}\). It is straightforward to check that \(\chi\) extends to a (non-involutive) algebra automorphism of \(\widetilde{\mathcal{A}}\) by means of the assignments (B.8) \[\chi(a)=\bar{a}^{\dagger},\qquad\chi(a^{\dagger})=a.\] We can formulate a completion of the tensor product \(\widetilde{\mathcal{A}}\otimes\widetilde{\mathcal{A}}\) in a similar way as for \(\mathcal{A}\otimes\mathcal{A}\). More precisely, we consider the subalgebra \(\widetilde{\mathcal{F}}^{(2)}\) of \(\mathcal{F}^{(2)}\) generated by the subsets \(\widetilde{\mathcal{F}}(D_{1})\), \(\widetilde{\mathcal{F}}(D_{2})\) and the special elements \(p^{\pm D_{1}(D_{2}+1)}\). The completed tensorial square of \(\widetilde{\mathcal{A}}\) is defined to be the subalgebra \(\widetilde{\mathcal{A}}^{(2)}\) of \(\operatorname{End}(W\otimes W)\) generated by the elements (4.8) with \(g_{k,\ell}\), \(h_{k,\ell}\in\widetilde{\mathcal{F}}^{(2)}\). Note that the boundary factorization identity (8.2) is an identity in the subalgebra \(\widetilde{\mathcal{A}}^{(2)}\subset\operatorname{End}(W\otimes W)[[z]]\). The automorphism (B.9) \[\chi^{(2)}:=\sigma\circ(\chi\otimes\chi^{-1})\] of \(\widetilde{\mathcal{A}}\otimes\widetilde{\mathcal{A}}\) naturally extends to an automorphism of \(\widetilde{\mathcal{A}}^{(2)}\), fixing \(p^{\pm D_{1}(D_{2}+1)}\) and acting termwise on power series in locally nilpotent operators. _Remark B.2_.: The map \(\chi\) can be seen as an infinite-dimensional version of conjugation by anti-diagonal matrices; certain \(U_{q}(\widehat{\mathfrak{h}}^{+})\)-representations are naturally related this way. For instance, for the 2-dimensional representation \(\Pi\), note that \(\operatorname{Ad}(J)\circ\Pi=\Pi\circ\Phi\) where \(\operatorname{Ad}\) denotes 'conjugation by' and \(J=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\). In the same way, \(\chi\) relates the prefundamental representations \(\varrho\) and \(\bar{\varrho}\) up to a twist by the diagram automorphism \(\Phi\): \(\chi\circ\varrho=\bar{\varrho}\circ\Phi\). Hence, the condition (2.19) and the 1-dimensionality of the solution space of the relevant linear equation implies \((\operatorname{Ad}(J)\otimes\chi)(\mathcal{L}_{\varrho}(z))=\mathcal{L}_{\bar{\varrho}}(z)\). At the same time, a suitable scalar multiple of \(\mathcal{R}_{\Pi\,\Pi}(z)\), i.e. the R-matrix for the XXZ chain, is fixed by \(\operatorname{Ad}(J\otimes J)\) and we will see in Section B.3 that the same statement is true for \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) and \(\chi^{(2)}\). From (3.5) it follows that \(\Phi(U_{q}(\mathfrak{k}))=U_{q}(\mathfrak{k})|_{\xi\mapsto\xi^{-1}}\). 
Hence, the boundary counterparts of the above relations also involve inversion of the free parameter \(\xi\): \[\operatorname{Ad}(J)\big{(}K_{\Pi}(z)\big{)}|_{\xi\mapsto\xi^{-1}}=-\xi^{-1}\,K_{\Pi}(z),\qquad\chi(\mathcal{K}_{\varrho}(z))|_{\xi\mapsto\xi^{-1}}=q^{-1}(z^{2}-\xi^{-1})^{-1}\mathcal{K}_{\bar{\varrho}}(z).\] In fact, applying \(\chi\otimes\operatorname{Ad}(J)\) to the reflection equation (6.4) with \(\pi=\varrho\) and inverting \(\xi\) we see that \[\mathcal{K}_{\varrho}(z)\mapsto\chi(\mathcal{K}_{\varrho}(z))|_{\xi\mapsto\xi^{-1}}\] defines a bijection: \(\operatorname{\mathsf{RE}}_{\varrho}\to\operatorname{\mathsf{RE}}_{\bar{\varrho}}\) of the solution spaces defined in (6.7). We can use the map \(\chi^{(2)}\) to generate further relations similar to those in Lemma A.2. **Lemma B.3**.: _Let \(y\) be a formal parameter. In \(\operatorname{End}(W\otimes W)[[y]]\) the following identities hold:_ (B.10) \[[\bar{a}_{2}^{\dagger},e_{p}(ya_{1}^{\dagger}a_{2})] =ye_{p}(ya_{1}^{\dagger}a_{2})a_{1}^{\dagger}p^{-D_{2}-1},\] (B.11) \[[\bar{a}_{1}^{\dagger}a_{2},e_{p}(ya_{1}\bar{a}_{2}^{\dagger})] =y\big{(}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})p^{-D_{1}-1}-p^{-D_{2}-1}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})\big{)}.\] Proof.: In this proof we consider the algebra \(\mathcal{A}\) as a subalgebra of \(\operatorname{End}(W)[[y]]\) instead of \(\operatorname{End}(W)[[z]]\), and similarly for \(\mathcal{A}^{(2)}\). To prove (B.10), first we apply \(\chi^{(2)}\) to (A.14), obtaining (B.12) \[[e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),a_{2}]=ya_{1}p^{-D_{2}-1}e_{p}(ya_{1}\bar{a}_{2}^{\dagger}).\] Now consider the unique involutive algebra anti-automorphism \(\eta:\mathcal{A}\to\mathcal{A}\) which exchanges \(a\) and \(a^{\dagger}\) and fixes \(f(D)\) for all \(f\in\mathcal{F}\) and the unique involutive algebra anti-automorphism \(\overline{\eta}:\mathcal{A}\to\mathcal{A}\) which exchanges \(a\) and \(\bar{a}^{\dagger}\) and fixes \(f(D)\) for all \(f\in\mathcal{F}\). Then \(\eta^{(2)}:=\eta\otimes\overline{\eta}\) is an algebra antiautomorphism of \(\mathcal{A}\otimes\mathcal{A}\). It extends in a natural way to an algebra antiautomorphism of \(\mathcal{A}^{(2)}\). By applying \(\eta^{(2)}\) to (B.12) we obtain (B.10). Finally, to prove (B.11), upon right-multiplying (A.15) by \(p^{D_{1}+D_{2}+1}\) we obtain (B.13) \[[e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),a_{1}p^{D_{2}}]=ye_{p}(ya_{1}\bar{a}_{2}^{\dagger})a_{1}p^{D_{2}}.\] From (A.14) and (B.13) it follows that (B.14) \[\begin{split}[e_{p}(ya_{1}\bar{a}_{2}^{\dagger}),a_{1}^{\dagger}a_{2}p^{D_{2}}]&=y\Big{(}\bar{a}_{2}^{\dagger}p^{D_{1}}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})a_{2}+a_{1}^{\dagger}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})a_{1}\Big{)}p^{D_{2}}\\ &=y\Big{(}p^{D_{1}}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})\big{(}p^{D_{2}}-1\big{)}+\big{(}1-p^{D_{1}}\big{)}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})p^{D_{2}}\Big{)}\\ &=y\big{(}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})p^{D_{2}}-p^{D_{1}}e_{p}(ya_{1}\bar{a}_{2}^{\dagger})\big{)}.\end{split}\] Now (B.11) follows as the \(\chi^{(2)}\)-image of (B.14). ### Explicit expression for \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) To aid the computation of \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\), consider the subalgebra \(\widetilde{\mathcal{A}}_{0}^{(2)}=\widetilde{\mathcal{A}}^{(2)}\cap\mathcal{A}_{0}^{(2)}\), which is preserved by \(\chi^{(2)}\). 
**Lemma B.4**.: \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) _is a \(\widetilde{\mathcal{A}}_{0}^{(2)}\)-valued formal power series whose coefficients are fixed by \(\chi^{(2)}\)._ Proof.: It is clear from (4.11) and (4.17) that \(\varrho\otimes\bar{\varrho}^{-}\) takes values in \(\widetilde{\mathcal{A}}\otimes\widetilde{\mathcal{A}}\subset\widetilde{\mathcal{A}}^{(2)}\). Now recall (2.20) and note that the factor \(\kappa\) acts as \(p^{D_{1}(D_{2}+1)}\). Furthermore, noting the form of \((\Sigma_{z}\otimes\mathsf{id})(\Theta)\) given by (2.26) with the components \(\Theta_{\lambda}\) lying in \(U_{q}(\widehat{\mathfrak{n}}^{+})_{\lambda}\otimes U_{q}(\widehat{\mathfrak{n}}^{+})_{-\lambda}\) (\(\lambda\in\widehat{Q}^{+}\)), we obtain that the action of \(\mathcal{R}(z)\) on \((W\otimes W,\varrho\otimes\bar{\varrho}^{-})\) is by an element of \(\widetilde{\mathcal{A}}_{0}^{(2)}\). For the second part, note that \[\chi^{(2)}\circ(\varrho\otimes\bar{\varrho}^{-})=(\chi^{-1}\otimes\chi)\circ(\bar{\varrho}^{-}\otimes\varrho)\circ\sigma=(\varrho\otimes\bar{\varrho}^{-})\circ(\omega\otimes\omega)\circ\sigma.\] Applying this to \(\mathcal{R}(z)\), making use of (2.27), (2.24) and (2.18), we obtain \(\chi^{(2)}(\mathcal{R}_{\varrho\bar{\varrho}}(z))=\mathcal{R}_{\varrho\bar{\varrho}}(z)\). In the derivation of the formula for \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\), we rely on the following result. **Lemma B.5**.: _The centralizer of the subset \(\{a_{1}^{\dagger},\bar{a}_{2}^{\dagger}\}\) in \(\mathcal{A}^{(2)}\) is equal to \(\mathbb{C}[[z]]\)._ Proof.: This centralizer is the intersection of the centralizer of \(a_{1}^{\dagger}\) and the centralizer of \(\bar{a}_{2}^{\dagger}\), which are easily found to be equal to \[\bigg{\{}\sum_{k,\ell\geq 0}(a_{1}^{\dagger})^{k}f_{k,\ell}(D_{2})a_{2}^{\ell}\,\bigg{|}\,f_{k,\ell}\in\mathcal{F}\bigg{\}},\qquad\bigg{\{}\sum_{k,\ell\geq 0}(\bar{a}_{2}^{\dagger})^{k}g_{k,\ell}(D_{1})a_{1}^{\ell}\,\bigg{|}\,g_{k,\ell}\in\mathcal{F}\bigg{\}},\] respectively. Clearly their intersection is trivial. Now we are ready to state and prove a formula for \(\mathcal{R}_{\varrho\bar{\varrho}}(z)\) in terms of deformed exponentials. **Theorem B.6**.: _For all \(z\) we have_ (B.15) \[\mathcal{R}_{\varrho\bar{\varrho}}(z)=e_{q^{2}}(q^{3}za_{1}\bar{a}_{2}^{\dagger})e_{q^{2}}(q^{-1}za_{1}^{\dagger}a_{2})q^{-2D_{1}(D_{2}+1)}.\] Proof.: Clearly, \(w_{0}\otimes w_{0}\) is fixed by the expression on the right-hand side of (B.15). In the following we initially work over the ring \(\mathbb{C}[[z,z_{2}]]\) for some new indeterminate \(z_{2}\) and write \(z_{1}=zz_{2}\). By applying \(\varrho_{z_{1}}\otimes\bar{\varrho}_{z_{2}}^{-}\otimes\Pi_{1}\) to (2.17) and left and right-multiplying by \(\mathcal{L}_{\bar{\varrho}}^{-}(z_{2}^{-1})_{32}^{-1}\) we obtain (B.16) \[\mathcal{R}_{\varrho\bar{\varrho}}(z)_{12}\mathcal{L}_{\varrho}(z_{1})_{13}\mathcal{L}_{\bar{\varrho}}^{-}(z_{2}^{-1})_{32}^{-1}=\mathcal{L}_{\bar{\varrho}}^{-}(z_{2}^{-1})_{32}^{-1}\mathcal{L}_{\varrho}(z_{1})_{13}\mathcal{R}_{\varrho\bar{\varrho}}(z)_{12},\] an equation in \((\widetilde{\mathcal{A}}^{(2)}\otimes\operatorname{End}(\mathbb{C}^{2}))[[z_{2}]]\). 
By a direct computation we obtain (B.17) \[\mathcal{L}_{\bar{\varrho}}^{-}(z_{2}^{-1})^{-1}=\frac{1}{z_{2}^{2}-1}\begin{pmatrix} q^{-D-1}z_{2}^{2}&\bar{a}^{\dagger}q^{-D-1}z_{2}\\ aq^{D-1}z_{2}&q^{D+1}z_{2}^{2}-q^{-D-1}\end{pmatrix}\in\operatorname{End}( \mathbb{C}^{2})\otimes\widetilde{\mathcal{A}}.\] Now we consider the equation (B.18) \[(z_{2}^{2}-1)X_{12}\mathcal{L}_{\varrho}(z_{1})_{13}\mathcal{L}_{\bar{\varrho} }^{-}(z_{2}^{-1})_{32}^{-1}=(z_{2}^{2}-1)\mathcal{L}_{\bar{\varrho}}^{-}(z_{ 2}^{-1})_{32}^{-1}\mathcal{L}_{\varrho}(z_{1})_{13}X_{12}\] in \((\widetilde{\mathcal{A}}^{(2)}\otimes\operatorname{End}(\mathbb{C}^{2}))[[z_{ 2}]]\), for some \(X\in\widetilde{\mathcal{A}}^{(2)}_{0}\) such that \(\chi^{(2)}(X)=X\). It suffices to prove that (B.19) \[\mathcal{X}=\Big{\{}X\in\widetilde{\mathcal{A}}^{(2)}_{0}\Big{|}\,X\text{ satisfies (B.18) and is fixed by }\chi^{(2)}\Big{\}},\] which by Lemma B.4 contains \((\varrho_{z}\otimes\bar{\varrho}^{-})(\mathcal{R})\), is spanned by the element given in the right-hand side of (B.15). By considering explicit expressions for \((z_{2}^{2}-1)\mathcal{L}_{\varrho}(z_{1})_{13}\mathcal{L}_{\bar{\varrho}}^{-}( z_{2}^{-1})_{32}^{-1}\) and \((z_{2}^{2}-1)\mathcal{L}_{\bar{\varrho}}^{-}(z_{2}^{-1})_{32}^{-1}\mathcal{L}_ {\varrho}(z_{1})_{13}\), we obtain that (B.18) amounts to the system \[X\big{(}q^{D_{1}-D_{2}-1}-a_{1}^{\dagger}a_{2}q^{-D_{1}+D_{2}-2}z \big{)}=\big{(}q^{D_{1}-D_{2}-1}-a_{1}\bar{a}_{2}^{\dagger}q^{D_{1}-D_{2}}z \big{)}X,\] \[X\Big{(}\big{(}\bar{a}_{2}^{\dagger}q^{D_{1}-D_{2}-1}+a_{1}^{ \dagger}q^{-D_{1}-D_{2}-2}z\big{)}-a_{1}^{\dagger}q^{-D_{1}+D_{2}}zz_{2}^{2} \Big{)}=\] \[=\Big{(}\bar{a}_{2}^{\dagger}q^{-D_{1}-D_{2}-1}-\big{(}a_{1}^{ \dagger}q^{-D_{1}-D_{2}-2}+\bar{a}_{2}^{\dagger}q^{D_{1}-D_{2}+1}z\big{)}zz_{2 }^{2}\Big{)}X,\] \[X\Big{(}a_{2}q^{-D_{1}+D_{2}-1}-\big{(}a_{1}q^{D_{1}-D_{2}}+a_{2 }q^{D_{1}+D_{2}+1}z\big{)}zz_{2}^{2}\Big{)}=\] \[=\Big{(}\big{(}a_{2}q^{D_{1}+D_{2}-1}+a_{1}q^{D_{1}-D_{2}}z \big{)}-a_{1}q^{D_{1}+D_{2}+2}zz_{2}^{2}\Big{)}X,\] \[X\Big{(}q^{-D_{1}+D_{2}+1}+q^{D_{1}-D_{2}+1}z^{2}-a_{1}\bar{a}_{2 }^{\dagger}q^{D_{1}-D_{2}}z\Big{)}=\Big{(}q^{-D_{1}+D_{2}+1}+q^{D_{1}-D_{2}+1} z^{2}-a_{1}^{\dagger}a_{2}q^{-D_{1}+D_{2}-2}z\Big{)}X\] for \(X\in\widetilde{\mathcal{A}}^{(2)}_{0}\) fixed by \(\chi^{(2)}\). Since \(\mathbb{C}[[z,z_{2}]]=(\mathbb{C}[[z]])[[z_{2}]]\), considering expansion coefficients with respect to \(z_{2}\), we can use \([X,q^{D_{1}+D_{2}}]=0\) to deduce that the above system is equivalent to (B.20) \[Xa_{2}q^{-2D_{1}} =\big{(}a_{2}+a_{1}q^{-2D_{2}+1}z\big{)}X, a_{1}X=X\big{(}a_{1}q^{-2(D_{2}+1)}+q^{-1}a_{2}z\big{)},\] (B.21) \[Xa_{1}^{\dagger}q^{2(D_{2}+1)} =\big{(}a_{1}^{\dagger}+\bar{a}_{2}^{\dagger}q^{2D_{1}+3}z\big{)}X, \bar{a}_{2}^{\dagger}X=X\big{(}\bar{a}_{2}^{\dagger}q^{2D_{1}}+q^{-1}a_{1}^{ \dagger}z\big{)},\] \[\big{[}X,q^{2D_{1}}\big{]} =\big{(}Xa_{1}^{\dagger}a_{2}q^{2D_{2}-1}-a_{1}\bar{a}_{2}^{ \dagger}q^{2D_{1}+1}X\big{)}z,\] \[\big{[}X,q^{2D_{2}}+q^{2D_{1}}z^{2}\big{]} =\big{(}Xa_{1}\bar{a}_{2}^{\dagger}q^{2D_{1}-1}-a_{1}^{\dagger}a_ {2}q^{2D_{2}-3}X\big{)}z.\] Note that \(q^{-2D_{1}(D_{2}+1)}\in\widetilde{\mathcal{A}}^{(2)}_{0}\) is fixed by \(\chi^{(2)}\). Hence without loss of generality we may write (B.22) \[X=\widetilde{X}q^{-2D_{1}(D_{2}+1)},\] for some \(\widetilde{X}\in\widetilde{\mathcal{A}}_{0}^{(2)}\) fixed by \(\chi^{(2)}\). 
The system (B.20-B.21) is equivalent to (B.23) \[[\widetilde{X},a_{2}] =q^{-2D_{2}+1}a_{1}\widetilde{X}z,\qquad[a_{1},\widetilde{X}] =\widetilde{X}q^{2D_{1}-1}a_{2}z,\] (B.24) \[[\widetilde{X},a_{1}^{\dagger}] =\bar{a}_{2}^{\dagger}q^{2D_{1}+3}\widetilde{X}z,\qquad[\bar{a}_{2}^{\dagger},\widetilde{X}] =\widetilde{X}a_{1}^{\dagger}q^{-2D_{2}-3}z,\] (B.25) \[\big{[}\widetilde{X},q^{2D_{1}}\big{]} =\big{(}\widetilde{X}a_{1}^{\dagger}a_{2}q^{2D_{1}-1}-a_{1}\bar{a}_{2}^{\dagger}q^{2D_{1}+1}\widetilde{X}\big{)}z,\] (B.26) \[\big{[}\widetilde{X},q^{2D_{2}}+q^{2D_{1}}z^{2}\big{]} =\big{(}\widetilde{X}a_{1}\bar{a}_{2}^{\dagger}q^{2D_{2}+3}-a_{1}^{\dagger}a_{2}q^{2D_{2}-3}\widetilde{X}\big{)}z.\] Since \(\chi^{(2)}\) fixes \(\widetilde{X}\), the equations in (B.23) and the equations in (B.24) are pairwise equivalent. At the same time, the system (B.23-B.24) implies (B.25) and (B.26). To show this, since \([\widetilde{X},q^{2D_{1}}]=[a_{1}^{\dagger}a_{1},\widetilde{X}]\) from (B.23-B.24) we obtain \[[\widetilde{X},q^{2D_{1}}]+a_{1}\bar{a}_{2}^{\dagger}q^{2D_{1}+1}\widetilde{X}z-\widetilde{X}a_{1}^{\dagger}a_{2}q^{2D_{1}-1}z=\] \[=a_{1}\bar{a}_{2}^{\dagger}q^{2D_{1}+1}\widetilde{X}z-[\widetilde{X},a_{1}^{\dagger}]a_{1}+a_{1}^{\dagger}[a_{1},\widetilde{X}]-\widetilde{X}a_{1}^{\dagger}a_{2}q^{2D_{1}-1}z\] \[=\big{(}\bar{a}_{2}^{\dagger}q^{2D_{1}+3}[a_{1},\widetilde{X}]-[\widetilde{X},a_{1}^{\dagger}]a_{2}q^{2D_{1}-1}\big{)}z,\] which vanishes, thereby recovering (B.25). Applying \(\chi^{(2)}\) to (B.25) we obtain \([\widetilde{X},q^{-2D_{2}}]=\big{(}\widetilde{X}a_{1}^{\dagger}a_{2}q^{-2D_{2}-1}-a_{1}\bar{a}_{2}^{\dagger}q^{-2D_{2}+1}\widetilde{X}\big{)}z\). Left-and-right multiplying this by \(q^{2D_{2}}\) and using (B.23-B.24) to rewrite the result we obtain (B.27) \[[\widetilde{X},q^{2D_{2}}]=\big{(}\bar{a}_{2}^{\dagger}\widetilde{X}a_{1}q^{2D_{2}+3}-q^{2D_{2}-1}a_{1}^{\dagger}\widetilde{X}a_{2}\big{)}z.\] Finally, using (B.27) and again (B.23-B.24), we derive that \[[\widetilde{X},q^{2D_{2}}+q^{2D_{1}}z^{2}]-\widetilde{X}a_{1}\bar{a}_{2}^{\dagger}q^{2D_{2}+3}z+a_{1}^{\dagger}a_{2}q^{2D_{2}-3}\widetilde{X}z=\] \[=\bar{a}_{2}^{\dagger}\widetilde{X}a_{1}q^{2D_{2}+3}z-q^{2D_{2}-1}a_{1}^{\dagger}\widetilde{X}a_{2}z+[\widetilde{X},q^{2D_{1}}]z^{2}+\] \[\qquad-(\bar{a}_{2}^{\dagger}\widetilde{X}-\widetilde{X}a_{1}^{\dagger}q^{-2D_{2}-3}z)a_{1}q^{2D_{2}+3}z+a_{1}^{\dagger}q^{2D_{2}-1}(\widetilde{X}a_{2}-a_{1}q^{-2D_{2}+1}\widetilde{X}z)z\] \[=\big{(}\widetilde{X}a_{1}^{\dagger}a_{1}-a_{1}^{\dagger}a_{1}\widetilde{X}+[\widetilde{X},1-a_{1}^{\dagger}a_{1}]\big{)}z^{2}\] which vanishes, thereby proving (B.26) as well. We have obtained that the system (B.23-B.26) is equivalent to the system (B.24). Writing \(p=q^{2}\), without loss of generality we set \[\widetilde{X}=Ye_{p}(q^{3}za_{1}\bar{a}_{2}^{\dagger})e_{p}(q^{-1}za_{1}^{\dagger}a_{2})\] for some \(Y\in\widetilde{\mathcal{A}}_{0}^{(2)}\) fixed by \(\chi^{(2)}\), noting that \(e_{p}(q^{3}za_{1}\bar{a}_{2}^{\dagger})\) and \(e_{p}(q^{-1}za_{1}^{\dagger}a_{2})\) lie in \(\widetilde{\mathcal{A}}_{0}^{(2)}\) and are fixed by \(\chi^{(2)}\). The theorem now follows from the following claim. _Claim:_ (B.24) is satisfied if and only if \(Y\in\mathbb{C}[[z]]\). 
In the special case \(Y=1\), (B.24) is indeed satisfied: \[[\widetilde{X},a_{1}^{\dagger}]-\bar{a}_{2}^{\dagger}q^{2D_{1}+3}z\widetilde{X} =\Big{(}[e_{p}(q^{3}za_{1}\bar{a}_{2}^{\dagger}),a_{1}^{\dagger}]-\bar{a}_{2}^{\dagger}q^{2D_{1}+3}ze_{p}(q^{3}za_{1}\bar{a}_{2}^{\dagger})\Big{)}e_{p}(q^{-1}za_{1}^{\dagger}a_{2}),\] \[[\bar{a}_{2}^{\dagger},\widetilde{X}]-\widetilde{X}a_{1}^{\dagger}q^{-2D_{2}-3}z =e_{p}(q^{3}za_{1}\bar{a}_{2}^{\dagger})\Big{(}[\bar{a}_{2}^{\dagger},e_{p}(q^{-1}za_{1}^{\dagger}a_{2})]-e_{p}(q^{-1}za_{1}^{\dagger}a_{2})a_{1}^{\dagger}q^{-2D_{2}-3}z\Big{)},\] with the expressions in parentheses vanishing by virtue of (A.14) and (B.10) (with \(y=q^{3}z\) and \(y=q^{-1}z\), respectively). For general \(Y\) we therefore have \[[\widetilde{X},a_{1}^{\dagger}]-\bar{a}_{2}^{\dagger}q^{2D_{1}+3}z\widetilde{X} =[Y,a_{1}^{\dagger}]e_{p}(q^{3}za_{1}\bar{a}_{2}^{\dagger})e_{p}(q^{-1}za_{1}^{\dagger}a_{2}),\] \[[\bar{a}_{2}^{\dagger},\widetilde{X}]-\widetilde{X}a_{1}^{\dagger}q^{-2D_{2}-3}z =[\bar{a}_{2}^{\dagger},Y]e_{p}(q^{3}za_{1}\bar{a}_{2}^{\dagger})e_{p}(q^{-1}za_{1}^{\dagger}a_{2}).\] Both right-hand sides vanish, i.e. (B.24) is indeed satisfied, if and only if \(Y\) lies in the centralizer in \(\widetilde{\mathcal{A}}^{(2)}\) of \(\{a_{1}^{\dagger},\bar{a}_{2}^{\dagger}\}\), which is trivial by Lemma B.5. This proves the claim. ## Appendix C An alternative proof of the main theorem In this part of the appendix we give a proof of the boundary factorization identity (8.2) which is independent of the universal K-matrix formalism. Before we state and prove a key lemma, note that expressions of the form \(e_{p}(\gamma^{D}y)\) where \(\gamma\in\mathbb{C}^{\times}\) and \(y\) is an indeterminate give rise to well-defined \(\operatorname{End}(W)\)-valued formal power series, sending \(w_{j}\) to \(e_{p}(\gamma^{j}y)w_{j}\). **Lemma C.1**.: _Let \(y\) be a formal parameter and let \(p\) be a nonzero complex number which is not a root of unity. In \(\operatorname{End}(W\otimes W)[[y]]\) we have the identities_ (C.1) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})(y;p)_{D_{1}} =(y;p)_{D_{1}}e_{p}(-a_{1}\bar{a}_{2}^{\dagger}p^{D_{1}}y)e_{p}(pa_{1}\bar{a}_{2}^{\dagger})\] (C.2) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})(p^{1-D_{1}}y;p)_{D_{1}}^{-1}e_{p}(py\bar{a}_{1}^{\dagger}a_{2}) =e_{p}(py\bar{a}_{1}^{\dagger}a_{2})(p^{1-D_{2}}y;p)_{D_{2}}^{-1}e_{p}(pa_{1}\bar{a}_{2}^{\dagger}).\] Proof.: Note that \[W\otimes W=\bigoplus_{m\in\mathbb{Z}_{\geq 0}}(W\otimes W)_{m},\qquad(W\otimes W)_{m}:=\bigoplus_{j,k\geq 0\atop j+k=m}\mathbb{C}w_{j}\otimes w_{k}.\] Because each factor in (C.1-C.2) preserves each finite-dimensional subspace \((W\otimes W)_{m}\), it suffices to prove the restrictions of (C.1-C.2) to \((W\otimes W)_{m}\), where \(m\in\mathbb{Z}_{\geq 0}\) is fixed but arbitrary. Note that on \((W\otimes W)_{m}\) the operators appearing as arguments of the deformed exponentials are nilpotent. Therefore the operators on the left- and right-hand side of the restricted equations depend rationally on \(p\) and hence it suffices to prove them with \(p\) restricted to an open subset of \(\mathbb{C}\). We will prove the restriction of (C.1) to \((W\otimes W)_{m}\) for all \(p\in\mathbb{C}\) such that \(|p|<1\). 
Combining (A.4) and (A.7) we obtain \((y;p)_{D}=\frac{e_{p}(p^{D}y)}{e_{p}(y)}\); as a consequence, (C.1) is equivalent to (C.3) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})e_{p}(p^{D_{1}}y)=e_{p}(p^{D_{1}}y)e_{p}(-a_ {1}\bar{a}_{2}^{\dagger}p^{D_{1}}y)e_{p}(pa_{1}\bar{a}_{2}^{\dagger}).\] But this equation follows directly from (A.12) and the observation \((a_{1}\bar{a}_{2}^{\dagger})(p^{D_{1}}y)=p(p^{D_{1}}y)(a_{1}\bar{a}_{2}^{ \dagger})\). On the other hand12, we will prove the restricted version of (C.2) for all \(p\in\mathbb{C}^{\times}\) such that \(|p|>1\). In this case, for all \(j\in\mathbb{Z}_{\geq 0}\) we have Footnote 12: We will need (C.2) with \(|p|<1\), but we are not aware of a direct proof of this. \[(p^{1-j}y;p)_{j}^{-1}=(y;p^{-1})_{j}^{-1}=\frac{(p^{-j}y;p^{-1})_{\infty}}{(y; p^{-1})_{\infty}}\in\mathbb{C}[[y]].\] From (A.7) and (A.9) we deduce the identity \((p^{-j}y;p^{-1})_{\infty}=e_{p^{-1}}(p^{-j}y)^{-1}=e_{p}(p^{1-j}y)\) of formal power series in \(y\). Hence, we have \((p^{1-D}y;p)_{D}^{-1}=(y;p^{-1})_{\infty}^{-1}e_{p}(p^{1-D}y)\) in \(\operatorname{End}(W)[[y]]\). As a consequence, (C.2) is equivalent to (C.4) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})e_{p}(p^{1-D_{1}}y)e_{p}(py\bar{a}_{1}^{ \dagger}a_{2})=e_{p}(py\bar{a}_{1}^{\dagger}a_{2})e_{p}(p^{1-D_{2}}y)e_{p}(pa_ {1}\bar{a}_{2}^{\dagger}).\] To prove (C.4), note that (B.11) can be evaluated at \(y=p\), and the result can be rewritten as \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})\big{(}p^{-D_{1}}+\bar{a}_{1}^{\dagger}a_{2} \big{)}=\big{(}p^{-D_{2}}+\bar{a}_{1}^{\dagger}a_{2}\big{)}e_{p}(pa_{1}\bar{a }_{2}^{\dagger}).\] By iteration we obtain (C.5) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})e_{p}\big{(}p^{1-D_{1}}y+py\bar{a}_{1}^{\dagger} a_{2}\big{)}=e_{p}\big{(}p^{1-D_{2}}y+py\bar{a}_{1}^{\dagger}a_{2}\big{)}e_{p}(pa_{1} \bar{a}_{2}^{\dagger}).\] Note that \((\bar{a}_{1}^{\dagger}a_{2})p^{1-D_{1}}=p\,p^{1-D_{1}}(\bar{a}_{1}^{\dagger}a_{2})\) and \(p^{1-D_{2}}(\bar{a}_{1}^{\dagger}a_{2})=p\,(\bar{a}_{1}^{\dagger}a_{2})p^{1-D_ {2}}\). Applying (A.11), we obtain (C.4), as required. Alternative proof of Theorem 8.1.: By virtue of (4.13), the desired identity, viz. (C.6) \[\mathcal{K}_{\upsilon}(z)_{1}\mathcal{R}_{\upsilon\phi}(z^{2})\mathcal{K}_{ \phi}(z)_{2}\,\mathcal{O}=\mathcal{O}\mathcal{K}_{\varrho}(q^{-\mu/2}z)_{1} \mathcal{R}_{\varrho\bar{\varrho}}(z^{2})\mathcal{K}_{\bar{\varrho}}(q^{\mu/2} z)_{2}\] for arbitrary \(\mu\in\mathbb{C}\) and \(q,\xi\in\mathbb{C}^{\times}\) such that \(q\) is not a root of unity, is equivalent to (C.7) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})\mathcal{K}_{\upsilon}(z)_{1} \mathcal{R}_{\upsilon\phi}(z^{2})\mathcal{K}_{\phi}(z)_{2}e_{p}(pa_{1}\bar{a}_ {2}^{\dagger})^{-1}=\] \[\qquad=q^{\mu(D_{1}-D_{2})/2}\mathcal{K}_{\varrho}(q^{-\mu/2}z)_{ 1}\mathcal{R}_{\varrho\bar{\varrho}}(z^{2})\mathcal{K}_{\bar{\varrho}}(q^{\mu /2}z)_{2}q^{\mu(D_{2}-D_{1})/2}\] where \(p=q^{2}\). The strategy of the proof is as follows. We move various simple factors in \(\mathcal{F}^{(2)}(D_{1},D_{2})\) to the right in both sides of (C.7), thus bringing them to a similar form. Then more advanced product formulas involving q-exponentials and finite q-Pochhammer symbols yield the desired equality. More precisely, we set \(\gamma=pq^{-\mu}\xi^{-1}\in\mathbb{C}^{\times}\) and from (A.2) deduce \[(\gamma z^{-2};p)_{j}^{-1}=p^{j(1-j)/2}(-\gamma^{-1}z^{2})^{j}(p^{1-j}\gamma^{ -1}z^{2};p)_{j}^{-1}\] for all \(j\in\mathbb{Z}_{\geqslant 0}\). 
Using the identities \(q^{-D^{2}}a^{\dagger}=-q\bar{a}^{\dagger}q^{-D^{2}}\) and \(q^{-D^{2}}a=aq^{2D-1}q^{-D^{2}}\), we obtain, for the left-hand side of (C.7), (C.8) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})\mathcal{K}_{\upsilon}(z)_{1} \mathcal{R}_{\upsilon\phi}(z^{2})\mathcal{K}_{\phi}(z)_{2}e_{p}(pa_{1}\bar{a}_ {2}^{\dagger})^{-1}=\] \[=e_{p}(pa_{1}\bar{a}_{2}^{\dagger})\big{(}-\gamma^{-1}q^{1-D_{1}} \big{)}^{D_{1}}(\gamma z^{2};p)_{D_{1}}(p^{1-D_{1}}\gamma^{-1}z^{2};p)_{D_{1}} ^{-1}e_{p}(z^{2}a_{1}^{\dagger}a_{2})\cdot\] \[\qquad\cdot q^{(2\mu-1)D_{1}-2D_{2}-2D_{1}D_{2}-D_{2}^{2}}(-\xi) ^{D_{2}}e_{p}(pa_{1}\bar{a}_{2}^{\dagger})^{-1}\] \[=e_{p}(pa_{1}\bar{a}_{2}^{\dagger})(\gamma z^{2};p)_{D_{1}}(p^{1-D _{1}}\gamma^{-1}z^{2};p)_{D_{1}}^{-1}e_{p}(p\gamma^{-1}z^{2}\bar{a}_{1}^{ \dagger}a_{2})(-q^{-D_{1}-D_{2}-2}\xi)^{D_{1}+D_{2}}e_{p}(pa_{1}\bar{a}_{2}^{ \dagger})^{-1}\] \[=e_{p}(pa_{1}\bar{a}_{2}^{\dagger})(\gamma z^{2};p)_{D_{1}}(p^{1-D _{1}}\gamma^{-1}z^{2};p)_{D_{1}}^{-1}e_{p}(p\gamma^{-1}z^{2}\bar{a}_{1}^{ \dagger}a_{2})e_{p}(pa_{1}\bar{a}_{2}^{\dagger})^{-1}(-q^{-D_{1}-D_{2}-2}\xi)^{ D_{1}+D_{2}}.\] Similarly, for the right-hand side of (C.7) we obtain (C.9) \[q^{\mu(D_{1}-D_{2})/2}\mathcal{K}_{\varrho}(q^{-\mu/2}z)_{1} \mathcal{R}_{\varrho\bar{\varrho}}(z^{2})\mathcal{K}_{\bar{\varrho}}(q^{\mu/2}z)_ {2}q^{\mu(D_{2}-D_{1})/2}=\] \[=(\gamma z^{2};p)_{D_{1}}q^{\mu(D_{1}-D_{2})/2-D_{1}^{2}}(-\xi)^{ D_{1}}e_{p}(q^{3}z^{2}a_{1}\bar{a}_{2}^{\dagger})e_{p}(q^{-1}z^{2}a_{1}^{ \dagger}a_{2})\cdot\] \[\qquad\cdot(p^{1-D_{2}}\gamma^{-1}z^{2};p)_{D_{2}}^{-1}q^{\mu(D_ {2}-D_{1})/2-2(D_{1}+D_{2})-2D_{1}D_{2}-D_{2}^{2}}(-\xi)^{D_{2}}=\] \[=(\gamma z^{2};p)_{D_{1}}e_{p}(-a_{1}\bar{a}_{2}^{\dagger}q^{2D_{1} }\gamma z^{2})e_{p}(p\gamma^{-1}z^{2}\bar{a}_{1}^{\dagger}a_{2})(p^{1-D_{2}} \gamma^{-1}z^{2};p)_{D_{2}}^{-1}(-q^{-D_{1}-D_{2}-2}\xi)^{D_{1}+D_{2}}.\] Therefore (C.7) is equivalent to (C.10) \[e_{p}(pa_{1}\bar{a}_{2}^{\dagger})(\gamma z^{2};p)_{D_{1}}(p^{1-D _{1}}\gamma^{-1}z^{2};p)_{D_{1}}^{-1}e_{p}(p\gamma^{-1}z^{2}\bar{a}_{1}^{ \dagger}a_{2})e_{p}(pa_{1}\bar{a}_{2}^{\dagger})^{-1}=\] \[=(\gamma z^{2};p)_{D_{1}}e_{p}(-a_{1}\bar{a}_{2}^{\dagger}p^{D_{1}} \gamma z^{2})e_{p}(p\gamma^{-1}z^{2}\bar{a}_{1}^{\dagger}a_{2})(p^{1-D_{2}} \gamma^{-1}z^{2};p)_{D_{2}}^{-1}.\] Applying (C.1) with \(y=\gamma z^{2}\) and (C.2) with \(y=\gamma^{-1}z^{2}\), we deduce (C.10), as required.
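The chain of equalities above can also be checked numerically. The sketch below verifies (C.10) on a truncation of \(W\otimes W\), in the concrete realization \(Dw_{j}=jw_{j}\), \(a^{\dagger}w_{j}=w_{j+1}\), \(aw_{j}=(1-q^{2j})w_{j-1}\), \(\bar{a}^{\dagger}=-q^{-2(D+1)}a^{\dagger}\); these conventions are our reconstruction from (A.16)-(A.17) and the identities used above, since Section 4.2 is not reproduced here. All factors preserve the total grade \(j+k\), so the truncated computation is exact on states with \(j+k\leq n-1\):

```python
import numpy as np

n = 7
q, z, xi, mu = 0.8, 0.3, 1.7, 0.4
p = q**2
gam = p * q**(-mu) / xi                         # gamma = p q^{-mu} xi^{-1}

I = np.eye(n)
a = np.diag(1 - p ** np.arange(1, n, dtype=float), 1)    # a w_j = (1-p^j) w_{j-1}
abar = -np.diag(p ** -np.arange(1, n, dtype=float), -1)  # abar† = -q^{-2(D+1)} a†
A = np.kron(a, abar)                            # a_1 abar_2†  (nilpotent here)
B = np.kron(abar, a)                            # abar_1† a_2  (nilpotent here)

def poch(x, k):                                 # (x; p)_k for k >= 0
    out = 1.0
    for m in range(k):
        out *= 1 - x * p**m
    return out

def e_p(M):                                     # deformed exponential, finite sum
    S, T = np.eye(n * n), np.eye(n * n)
    for k in range(1, n + 1):
        T = T @ M
        S = S + T / poch(p, k)
    return S

diag1 = lambda f: np.kron(np.diag([f(j) for j in range(n)]), I)
diag2 = lambda f: np.kron(I, np.diag([f(j) for j in range(n)]))
P1 = diag1(lambda j: float(p**j))               # p^{D_1}

E = e_p(p * A)
lhs = (E @ diag1(lambda j: poch(gam * z**2, j))
         @ diag1(lambda j: 1 / poch(p**(1 - j) * z**2 / gam, j))
         @ e_p(p * z**2 / gam * B) @ np.linalg.inv(E))
rhs = (diag1(lambda j: poch(gam * z**2, j))
         @ e_p(-gam * z**2 * A @ P1)
         @ e_p(p * z**2 / gam * B)
         @ diag2(lambda j: 1 / poch(p**(1 - j) * z**2 / gam, j)))

low = np.array([j + k <= n - 1 for j in range(n) for k in range(n)])
mask = np.outer(low, low)
print(np.abs(lhs - rhs)[mask].max() / np.abs(rhs)[mask].max())  # ~1e-12
```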
2304.01786
Distributionally robust stability of payoff allocations in stochastic coalitional games
We consider multi-agent coalitional games with uncertainty in the coalitional values. We provide a novel methodology to study the stability of the grand coalition in the case where each coalition constructs ambiguity sets for the (possibly) unknown probability distribution of the uncertainty. As a less conservative solution concept compared to worst-case approaches for coalitional stability, we consider a stochastic version of the so-called core set, i.e., the expected value core. Unfortunately, without exact knowledge of the probability distribution, the evaluation of the expected value core is an extremely challenging task. Hence, we propose the concept of distributionally robust (DR) core. Leveraging tools from data-driven DR optimization under the Wasserstein distance, we provide finite-sample guarantees that any allocation which lies in the DR core is also stable with respect to the true probability distribution. Furthermore, we show that as the number of samples grows unbounded, the DR core converges almost surely to the true expected value core. We dedicate the last section to the computational tractability of finding an allocation in the DR core.
George Pantazis, Barbara Franci, Sergio Grammatico, Kostas Margellos
2023-04-04T13:22:31Z
http://arxiv.org/abs/2304.01786v2
# Distributionally robust stability of payoff allocations in stochastic coalitional games ###### Abstract We consider multi-agent coalitional games with uncertainty in the coalitional values. We provide a novel methodology to study the stability of the grand coalition in the case where each coalition constructs ambiguity sets for the (possibly) unknown probability distribution of the uncertainty. As a less conservative solution concept compared to worst-case approaches for coalitional stability, we consider a stochastic version of the so-called core set, i.e., the expected value core. Unfortunately, without exact knowledge of the probability distribution, the evaluation of the expected value core is an extremely challenging task. Hence, we propose the concept of distributionally robust (DR) core. Leveraging tools from data-driven DR optimization under the Wasserstein distance, we provide finite-sample guarantees that any allocation which lies in the DR core is also stable with respect to the true probability distribution. Furthermore, we show that as the number of samples grows unbounded, the DR core converges almost surely to the true expected value core. We dedicate the last section to the computational tractability of finding an allocation in the DR core. ## I Introduction Coalitional games [1] are prevalent in applications ranging from engineering [2, 3, 4] to economics and social sciences [5]. Even though agents in such systems typically act as selfish entities, they are incentivized to form coalitions aiming at receiving higher individual gains or reducing their own costs. A challenging task, due to the agents' individual interests, is to distribute their payoffs in such a way that none of them has an incentive to deviate from the so-called _grand coalition_, i.e., the coalition where all agents work together. In the literature of coalitional game theory this problem is known as _stability of the grand coalition_ and the set of payoffs for which stability is achieved is known as the _core_ of the game. Due to its conceptual simplicity, the core has been widely used as a stability concept in coalitional games [1] and in turn intense research has been dedicated to finding allocations that lie within the core. Stability of the grand coalition is fundamentally connected to the values of each coalition. However, coalitional values are typically subject to uncertainty. As such, the mathematical framework of deterministic coalitional games needs to be revisited and extended. The seminal works [6, 7, 8] are the first on stochastic coalitional games. The work in [9] also studies uncertain coalitional games and shows that for a particular class, certain properties of the game, such as the non-emptiness of the core, continue to hold when uncertainty is introduced. Uncertain coalitional games were studied under the lens of Bayesian learning in [10, 11], while the work in [12] investigates which stability solution concepts maximize the probabilistic stability of allocations after the samples of the uncertainty have been revealed. Moreover, [13] and [14] focus on the dynamic evolution of repeated stochastic games. In [2], the concept of the so-called _robust core_ is introduced as a generalization to the traditional deterministic core. In this setting, the range of possible coalitional values is assumed to be known. 
The work in [15] extends the notion of the robust core to that of the _scenario core_, accounting for the more general case where both the support set and the probability distribution of the uncertainty affecting the coalitional value are unknown. As an alternative to the robust core [2] and its data-driven counterpart [15], in this paper we consider instead the significantly milder concept of stability in the mean sense that in turn gives rise to the so-called _expected value core_. Studying allocation stability in the mean sense might circumvent the possible emptiness of the core set, a fundamental technical challenge in coalitional game theory. Apart from very mild assumptions on the probability distribution of the uncertainty, here we consider both the support set and the probability distribution to be unknown. In other cases, the uncertain parameter affecting the coalitional game might not even admit a single distribution, but a range of possible distributions, quantified through data-driven approaches. As such, evaluating the expected core in this setting is extremely challenging. To address this, we follow an approach based on distributionally robust (DR) optimization [16, 17, 18, 19, 20, 21, 22], thus considering ambiguity sets that represent empirical sets in which the true probability distribution (in case the uncertainty admits one) is likely to be contained. The consideration of ambiguity sets leads to allocations that are _distributionally stable_. We call the set of all distributionally stable allocations the _distributionally robust (DR)_ core of the game. Leveraging results from data-driven DR optimization under the Wasserstein distance [23, 24], we provide finite-sample guarantees on the probability that any allocation in the DR core of the DR game approximation is also in the expected value core of the original game with a given confidence (Section III). Moreover, we prove almost-sure asymptotic convergence of the Wasserstein DR core to the expected core of the original game (Section IV.A). Finally, we provide the means to calculate an allocation in the Wasserstein DR core (Section IV.B). Specifically, we show that under certain conditions, the problem of finding such an allocation can be recast as a convex optimization problem, whose complexity both in the number of decision variables and constraints is inherently connected to the number of possible subcoalitions. Numerical simulations corroborate our theoretical findings (Section V). ## II Stochastic coalitional games ### _Allocation mean stability and expected value core_ We consider a coalitional game with \(N\) agents parameterized by the index set \(\mathcal{N}=\{1,\ldots,N\}\). We denote the number of possible subcoalitions except for the grand coalition by \(M\), i.e., \(M=2^{N}-1\). In this setting, the agents, though selfish, wish to form coalitions if that implies an increase in their individual payoffs. The total gain for each coalition is given by the so-called _value function_, which, depending on the coalition \(S\subseteq\mathcal{N}\), takes a real value representing the total payoff that agents participating in it would obtain from its formation. However, the values of each coalition are subject to uncertainty, thus rendering the value function of each coalition stochastic. 
**Definition 1**: _(Stochastic value function) The value function of a coalition \(S\subset\mathcal{N}\) is a function \(u_{S}:2^{\mathcal{N}}\times\Xi\to\mathbb{R}\) that, given the value of the uncertainty realization \(\xi\in\Xi\subseteq\mathbb{R}^{p}\), returns the total payoff for the agents forming a coalition \(S\). The value function of the grand coalition is deterministic, i.e., \(u_{\mathcal{N}}:2^{\mathcal{N}}\to\mathbb{R}\). \(\square\)_ An uncertain coalitional game is then defined as the tuple \(G_{\mathbb{P}}=\{\mathcal{N},\{u_{S}\}_{S\subseteq\mathcal{N}},\Xi,\mathbb{P}\}\), where \(\mathbb{P}\) denotes the probability distribution that the uncertainty \(\xi\in\Xi\) follows. To circumvent the fundamental issue of the emptiness of the robust core as defined in [2], let us consider the concept of stability of allocations in the mean sense, defined as follows. **Definition 2**: _(Stability in the mean sense) An allocation \(x=(x_{i})_{i\in\mathcal{N}}\) of the game \(G_{\mathbb{P}}=\{\mathcal{N},\{u_{S}\}_{S\subseteq\mathcal{N}},\Xi,\mathbb{P}\}\) is stable in the mean sense if i) \(\sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}}\) and ii) \(\sum_{i\in S}x_{i}\geq\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)],\ \forall\ S\subset\mathcal{N}\). \(\square\)_ The first condition is called the efficiency condition. Due to our assumption that the grand coalition value is deterministic, it means that the total increase in gains when all agents work together is known with certainty. This is the case when agents might know how efficient a fully-cooperative scheme is but have some level of uncertainty/ambiguity with respect to the potential outcomes of the subcoalitions. The second condition implies that, for every subcoalition \(S\subset\mathcal{N}\), the members of \(S\) collectively receive at least the expected value of forming \(S\), hence agents do not have an incentive to form \(S\). Otherwise, if \(\sum_{i\in S}x_{i}<\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\), agents would have the incentive to leave the grand coalition and form \(S\), thus receiving \(\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\) as opposed to \(\sum_{i\in S}x_{i}\). In this setting, we wish to study the stability of the grand coalition, where no agent has an incentive to deviate and create other subcoalitions. To this end, let us introduce an extension of the classic notion of the core, the so-called _expected value core_, which is the set of all stable allocations in the mean sense, as defined next. **Definition 3**: _(Expected value core) The expected value core \(C_{\mathrm{E}}(G_{\mathbb{P}})\) of the game \(G_{\mathbb{P}}\) is defined as the set_ \[C_{\mathrm{E}}(G_{\mathbb{P}})=\{x\in\mathbb{R}^{N}:\sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}},\ \sum_{i\in S}x_{i}\geq\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)],\ \forall\ S\subset\mathcal{N}\},\] _where_ \[\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]=\int_{\Xi}u_{S}(\xi)\mathbb{P}(d\xi). \tag{1}\] \(\square\)__ We now impose the following mild technical assumptions. **Assumption 1**: _(Light-tailed probability distribution) For the true probability measure \(\mathbb{P}\), there exists \(a>1\) such that_ \[A:=\mathbb{E}_{\mathbb{P}}[\exp(\|\xi\|^{a})]=\int_{\Xi}\exp(\|\xi\|^{a})\mathbb{P}(d\xi)<\infty.\] **Assumption 2**: _For the probability measure \(\mathbb{P}\) it holds that \(\mathbb{E}_{\mathbb{P}}[\|\xi\|]=\int_{\Xi}\|\xi\|\mathbb{P}(d\xi)<\infty\)._ Assumption 1 requires the tail of the true probability distribution \(\mathbb{P}\) to decay at an exponential rate. In case \(\Xi\) is a compact set, this assumption trivially holds. Assumption 2 requires that \(\mathbb{P}\) admits a finite first-order moment.
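When the expectation in (1) can be approximated, e.g., by a sample average over a large record of uncertainty realizations, Definitions 2 and 3 reduce to a finite set of linear conditions on the allocation \(x\). The following minimal Python sketch makes this check explicit; the value functions passed in are hypothetical placeholders for whatever problem-specific \(u_{S}\) one has, and the sample average is only a surrogate for the true expectation, which, as discussed next, cannot be evaluated exactly in our setting.

```python
import numpy as np

def is_mean_stable(x, u_grand, u_S_funcs, xi_samples, tol=1e-9):
    """Empirical check of Definition 2, with E_P[u_S] replaced by a
    sample average (a surrogate, reasonable only for large sample sets).

    x          : candidate allocation, shape (N,)
    u_grand    : deterministic value u_N of the grand coalition
    u_S_funcs  : dict mapping each subcoalition (frozenset of agent
                 indices) to a callable u_S(xi) -- problem-specific
    xi_samples : array of uncertainty samples, shape (K, p)
    """
    if abs(np.sum(x) - u_grand) > tol:           # condition i): efficiency
        return False
    for S, u_S in u_S_funcs.items():             # condition ii), each S
        if not S:                                # empty coalition: vacuous
            continue
        exp_uS = np.mean([u_S(xi) for xi in xi_samples])
        if sum(x[i] for i in S) < exp_uS - tol:
            return False
    return True
```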
In such a general set-up it is challenging, if not impossible, to compute the expected value integral given in (1), i.e., one cannot evaluate the expected value core \(C_{\mathrm{E}}(G_{\mathbb{P}})\). To circumvent this challenge, we propose a methodology based on distributionally robust optimization. ### _Distributionally robust stability of allocations_ In our setting, agent coalitions \(S\subset\mathcal{N}\) can construct ambiguity sets for the probability distribution \(\mathbb{P}\) of the uncertainty \(\xi\in\Xi\) that affects their coalitional values \(u_{S}(\xi)\). This is due to the lack of knowledge of \(\mathbb{P}\). In other words, we do not only have uncertainty affecting the coalitional game, but also uncertainty about the distribution of the uncertain parameter. We postulate that each coalition \(S\subset\mathcal{N}\) is allowed to construct its own ambiguity set. The heterogeneity of the coalitional ambiguity sets provides the necessary modelling freedom for our theory to be flexible for application purposes. To this end, we assume that each coalition \(S\subset\mathcal{N}\) has access to its own i.i.d. samples \(\xi_{K_{S}}=(\xi^{(1)},\ldots,\xi^{(K_{S})})\in\Xi^{K_{S}}\) and consider the distributionally robust version \(G_{\hat{\mathcal{P}}_{K}}\) of the original game \(G_{\mathbb{P}}\), defined as the tuple \(G_{\hat{\mathcal{P}}_{K}}=\{\mathcal{N},\{u_{S}\}_{S\subseteq\mathcal{N}},\Xi,\hat{\mathcal{P}}_{K}\}\), where \(K=\{K_{S}\}_{S\subset\mathcal{N}}\), while \(\hat{\mathcal{P}}_{K}=\{\hat{\mathcal{P}}_{K_{S}}\}_{S\subset\mathcal{N}}\) is the collection of ambiguity sets constructed based on the available data \(\xi_{K_{S}}\) of each subcoalition \(S\subset\mathcal{N}\). We now proceed to defining the notion of distributional stability of an allocation. **Definition 4**: _(Distributionally robust stability of allocations) For a given number of i.i.d. drawn samples \(\xi_{K_{S}}=(\xi^{(1)},\ldots,\xi^{(K_{S})})\in\Xi^{K_{S}}\) per coalition \(S\subset\mathcal{N}\), an allocation \(x=(x_{i})_{i\in\mathcal{N}}\) is distributionally stable with respect to the coalitional ambiguity sets \(\hat{\mathcal{P}}_{K_{S}}\), \(S\subset\mathcal{N}\), if_ 1. \(\sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}}\) _and_ 2. \(\sum_{i\in S}x_{i}\geq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)],\ \forall\ S\subset\mathcal{N}\)_. \(\square\)__ ## III Distributionally robust coalitional games based on the Wasserstein distance ### _Background on distributional robustness_ In this section, we introduce some basic concepts from distributionally robust optimization under the Wasserstein metric [17, 23, 24]. We show how one can leverage this framework in order to provide certificates of stability with respect to the true unknown coalitional game, along with a tractable approximation of its expected value core. We start by imposing a mild assumption on the probability distributions serving as candidates for the true distribution \(\mathbb{P}\). Specifically, we consider all distributions with bounded first-order moments, i.e., \(\mathbb{Q}\in\mathcal{M}(\Xi)\), where \(\mathcal{M}(\Xi)\) is the set of probability distributions with support \(\Xi\) that satisfy Assumption 2. We then need a measure of distance between two probability distributions to quantify how close a candidate probability distribution is to the true probability distribution \(\mathbb{P}\). Let us thus use the Wasserstein distance, defined as follows.
**Definition 5**: _(Wasserstein distance, [25]) The Wasserstein distance \(d_{W}:\mathcal{M}(\Xi)\times\mathcal{M}(\Xi)\rightarrow\mathbb{R}_{\geq 0}\) between two probability distributions \(\mathbb{Q}_{1},\mathbb{Q}_{2}\in\mathcal{M}(\Xi)\) is defined as_ \[d_{W}(\mathbb{Q}_{1},\mathbb{Q}_{2})= \inf_{\Pi}\Big\{\int_{\Xi^{2}}\|\xi_{1}-\xi_{2}\|\Pi(d\xi_{1},d\xi_{2}):\ \Pi\text{ is a joint distribution of }\xi_{1}\text{ and }\xi_{2}\text{ with marginals }\mathbb{Q}_{1}\text{ and }\mathbb{Q}_{2}\text{, respectively}\Big\},\] _where \(\|\cdot\|\) can be any norm on \(\mathbb{R}^{p}\)._ An alternative dual interpretation can be derived from the so-called Kantorovich-Rubinstein theorem: **Theorem 1**: _(Kantorovich-Rubinstein theorem, [25]) Given two probability distributions \(\mathbb{Q}_{1},\mathbb{Q}_{2}\in\mathcal{M}(\Xi)\), we have that_ \[d_{W}(\mathbb{Q}_{1},\mathbb{Q}_{2})=\sup_{f\in\mathcal{F}}\left\{\int_{\Xi}f(\xi)\mathbb{Q}_{1}(d\xi)-\int_{\Xi}f(\xi)\mathbb{Q}_{2}(d\xi)\right\},\] _where \(\mathcal{F}\) is the space of all Lipschitz continuous functions for which \(|f(\xi)-f(\xi^{\prime})|\leq\|\xi-\xi^{\prime}\|\) for all \(\xi,\xi^{\prime}\in\Xi\). \(\square\)_ ### _Finite-sample guarantees for the distributionally robust core_ In the subsequent developments, we consider that each coalition \(S\subset\mathcal{N}\) has its own independent samples from \(\mathbb{P}\). Any given coalition \(S\subset\mathcal{N}\) constructs its respective ambiguity set based on its collected data \(\xi_{K_{S}}=\left(\xi^{(k_{S})}\right)_{k_{S}=1}^{K_{S}}\in\Xi^{K_{S}}\). For each coalition \(S\subset\mathcal{N}\), the ambiguity set is given by the Wasserstein ball \[\hat{\mathcal{P}}_{K_{S}}=\mathbb{B}_{\varepsilon_{S}}(\hat{\mathbb{P}}_{K_{S}})=\{\mathbb{Q}_{S}\in\mathcal{M}(\Xi):d_{W}(\hat{\mathbb{P}}_{K_{S}},\mathbb{Q}_{S})\leq\varepsilon_{S}\}, \tag{2}\] where \(\hat{\mathbb{P}}_{K_{S}}=\frac{1}{K_{S}}\sum_{k_{S}=1}^{K_{S}}\delta_{\xi^{(k_{S})}}\) is the empirical probability distribution of each coalition \(S\) on the basis of the \(K_{S}\) i.i.d. samples drawn from the support set \(\Xi\) by coalition \(S\), with \(\delta_{\xi^{(k_{S})}}\) denoting the Dirac measure centred at \(\xi^{(k_{S})}\), \(k_{S}\in\{1,\ldots,K_{S}\}\). The following result relies on Assumption 1 to provide guarantees on the probability that a multi-sample will be drawn by coalition \(S\subset\mathcal{N}\) such that the true probability measure lies within the constructed Wasserstein ball with a given confidence. **Lemma 1**: _Let Assumption 1 hold and for any coalition \(S\subset\mathcal{N}\) fix \(\varepsilon_{S}>0\). We have that_ \[\mathbb{P}^{K_{S}}\left\{\xi_{K_{S}}\in\Xi^{K_{S}}:d_{W}(\mathbb{P},\hat{\mathbb{P}}_{K_{S}})\leq\varepsilon_{S}\right\}\geq 1-\beta_{S},\] _where_ \[\beta_{S}=\begin{cases}c\exp(-qK_{S}\varepsilon_{S}^{\max\{p,2\}}),&\text{ if }\varepsilon_{S}\leq 1\\ c\exp(-qK_{S}\varepsilon_{S}^{a}),&\text{ if }\varepsilon_{S}>1,\end{cases} \tag{3}\] _for all \(K_{S}\geq 1\) and \(p\neq 2\), where \(c,q\) are positive constants that only depend on the parameters \(a,A\) in Assumption 1 and the dimension of the support set \(p\)._ _Proof_: The proof is an adaptation of Theorem 2 in [17] applied to each Wasserstein ball \(\hat{\mathcal{P}}_{K_{S}}=\mathbb{B}_{\varepsilon_{S}}(\hat{\mathbb{P}}_{K_{S}})\) constructed around the empirical probability distribution \(\hat{\mathbb{P}}_{K_{S}}\) with a radius \(\varepsilon_{S}\) chosen by each coalition \(S\subset\mathcal{N}\). \(\blacksquare\)
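To make the role of the quantities in Lemma 1 tangible, the sketch below evaluates the confidence parameter \(\beta_{S}\) of (3) as a function of the radius and the sample size. Note that \(c\) and \(q\) are problem-dependent constants (they depend on \(a\), \(A\), and \(p\)); the unit defaults used here are placeholders for illustration, not derived values.

```python
import numpy as np

def beta_from_radius(eps, K, p, a, c=1.0, q=1.0):
    """Violation probability beta_S of Lemma 1, eq. (3).
    eps: Wasserstein radius, K: number of samples, p: dimension of
    the support set (p != 2), a: tail exponent from Assumption 1.
    c, q are placeholders for the (unknown) constants of the lemma."""
    exponent = max(p, 2) if eps <= 1 else a
    return c * np.exp(-q * K * eps ** exponent)

# Confidence 1 - beta_S improves exponentially with the sample size:
for K in (5, 50, 500):
    print(K, 1.0 - beta_from_radius(eps=0.3, K=K, p=1, a=1.5))
```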
Lemma 1 paves the way towards establishing finite-sample guarantees for the following Wasserstein-based version of the distributionally robust core. **Definition 6**: _(Distributionally robust core) The distributionally robust core \(C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}})\) of the game \(G_{\mathbb{P}}\) based on the Wasserstein distance is defined as the set_ \[C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}})=\Big\{x\in\mathbb{R}^{N}:\sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}},\ \sum_{i\in S}x_{i}\geq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)],\ \forall\ S\subset\mathcal{N}\Big\},\] _where \(\hat{\mathcal{P}}_{K_{S}}=\mathbb{B}_{\varepsilon_{S}}(\hat{\mathbb{P}}_{K_{S}})\). \(\square\)_ Throughout we assume that, for all multi-samples, the ambiguity sets are such that a non-empty DR core is returned. **Theorem 2**: _For each \(S\subset\mathcal{N}\), fix a Wasserstein radius \(\varepsilon_{S}\) and consider a multi-sample of size \(K_{S}\). It holds that_ \[\mathbb{P}^{K}\Big\{\xi_{K}\in\Xi^{K}:C_{\mathrm{E}}(G_{\mathbb{P}})\supseteq C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}})\Big\}\geq\beta, \tag{4}\] _where \(\beta=\prod_{S\subset\mathcal{N}}(1-\beta_{S})\) and each \(\beta_{S}\) is given by (3)._ _Proof_: We have that \[\mathbb{P}^{K}\Big\{\xi_{K}\in\Xi^{K}:\ C_{\mathrm{E}}(G_{\mathbb{P}})\supseteq C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}})\Big\}\] \[\geq\mathbb{P}^{K}\Big\{\xi_{K}\in\Xi^{K}:\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\leq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)],\ \forall\ S\subset\mathcal{N}\Big\}\] \[=\mathbb{P}^{K}\Big\{\bigcap_{S\subset\mathcal{N}}\Big\{\xi_{K}\in\Xi^{K}:\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\leq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\Big\}\Big\}\] \[=\prod_{S\subset\mathcal{N}}\mathbb{P}^{K_{S}}\Big\{\xi_{K_{S}}\in\Xi^{K_{S}}:\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\leq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\Big\} \tag{5}\] The last equality is due to the fact that each coalition constructs its ambiguity set based on its own (independent) samples. From Lemma 1, for each coalition \(S\subset\mathcal{N}\) we have \[\mathbb{P}^{K_{S}}\Big\{\xi_{K_{S}}\in\Xi^{K_{S}}:d_{W}(\mathbb{P},\hat{\mathbb{P}}_{K_{S}})\leq\varepsilon_{S}\Big\}\geq 1-\beta_{S}.\] Therefore, \[\prod_{S\subset\mathcal{N}}\mathbb{P}^{K_{S}}\Big\{\xi_{K_{S}}\in\Xi^{K_{S}}:\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\leq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\Big\}\] \[\geq\prod_{S\subset\mathcal{N}}\mathbb{P}^{K_{S}}\Big\{\xi_{K_{S}}\in\Xi^{K_{S}}:d_{W}(\mathbb{P},\hat{\mathbb{P}}_{K_{S}})\leq\varepsilon_{S}\Big\}\] \[\geq\prod_{S\subset\mathcal{N}}(1-\beta_{S}), \tag{6}\] where the first inequality follows from Theorem 3.5 in [23]. \(\blacksquare\) The following result provides guarantees when all coalitions use the same parameters. **Corollary 1**: _Consider Assumption 1. For each coalition \(S\subset\mathcal{N}\), fix a common Wasserstein radius \(\varepsilon\) and assume that each coalition draws a multi-sample of the same size \(K\). Then, it holds that_ \[\mathbb{P}^{K}\{\xi_{K}\in\Xi^{K}:\ C_{\mathrm{E}}(G_{\mathbb{P}})\supseteq C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}})\}\geq(1-\beta)^{M},\] _where \(\beta\) is given by (3) by setting \(\varepsilon_{S}=\varepsilon\) and \(K_{S}=K\).
\(\square\)_ _Proof_: Since \(\varepsilon_{S}=\varepsilon\) for all \(S\subset\mathcal{N}\) and the same number of samples \(K\in\mathbb{N}\) is drawn for all \(S\subset\mathcal{N}\), then by (3), \(\beta_{S}=\beta\) for all \(S\subset\mathcal{N}\). By Lemma 1 we then have that, for each coalition \(S\subset\mathcal{N}\), \[\mathbb{P}^{K}\Big\{\xi_{K}\in\Xi^{K}:d_{W}(\mathbb{P},\hat{\mathbb{P}}_{K})\leq\varepsilon\Big\}\geq 1-\beta,\] where \(\beta\) is given by (3). Following the same proof line as in Theorem 2 and setting \(\beta_{S}=\beta\) in the right-hand side of (6) concludes the proof. \(\blacksquare\) **Remark 1**: _The confidence parameter in Theorem 2 and Corollary 1 depends on the number of possible subcoalitions \(S\subset\mathcal{N}\). Corollary 1 shows that the dependence on the number of possible coalitions of the original problem is also inherited by the provided finite-sample guarantees. Though under a different approach, this is also observed in the _a priori_ results in [15], where the authors apply the results of [26, 27] to construct a data-driven version of the robust core. \(\square\)_ ### _Agent-level sampling_ Assume now that the ambiguity sets are constructed on the basis of samples drawn by the individual agents, i.e., with a slight abuse of notation, each agent \(i\in\mathcal{N}\) draws a multi-sample \(\xi_{K_{i}}\), which is then used by any coalition \(S\subset\mathcal{N}\) for which \(i\in S\). Fix a confidence parameter \(\hat{\beta}_{S}\in(0,1)\) for each \(S\subset\mathcal{N}\) and let \(\xi_{\hat{K}}=(\xi_{\hat{K}_{S}})_{S\subset\mathcal{N}}\), where \(\xi_{\hat{K}_{S}}\) are the samples of each coalition \(S\subset\mathcal{N}\). Then, it is easy to show that \[\mathbb{P}^{\hat{K}}\{\xi_{\hat{K}}\in\Xi^{\hat{K}}:\ C_{\mathrm{E}}(G_{\mathbb{P}})\supseteq C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{\hat{K}}})\}\geq\tilde{\beta},\] where \(\tilde{\beta}=\sum_{S\subset\mathcal{N}}(1-\hat{\beta}_{S})-M+1\) and each \(\hat{\beta}_{S}\) is given by (3), setting the radius to \(\varepsilon_{S}\) and the sample size to \(\hat{K}_{S}=\sum_{i\in S}K_{i}\). This result is obtained by applying Bonferroni's inequality to the third step of the proof of Theorem 2 and thus constitutes a rather conservative bound. Further work is required to leverage the data on the agents' level and translate it to guarantees on a coalitional level, taking into account the sharing of data among coalitions and thus improving those theoretical guarantees. This is a challenging task and thus left for future work. ## IV Asymptotic consistency and computational tractability of the DR core ### _Asymptotic consistency of the DR core_ In this subsection we show that, under Lipschitz continuity of the value functions and a careful choice of the radius of the Wasserstein ball \(\varepsilon_{S}\) and of the confidence parameter \(\beta_{S}\) for each coalition \(S\subset\mathcal{N}\), the DR core based on the Wasserstein distance converges almost surely to the true expected value core of the original problem. Let us impose the following assumption: **Assumption 3**: _For each coalition \(S\subset\mathcal{N}\), the value function \(u_{S}\) is \(L_{S}\)-Lipschitz continuous in \(\xi\) with \(L_{S}\geq 0\), i.e., \(|u_{S}(\xi)-u_{S}(\xi^{\prime})|\leq L_{S}\|\xi-\xi^{\prime}\|\) for all \(\xi,\xi^{\prime}\in\Xi\). \(\square\)_ Such assumptions are common in stability analysis for stochastic programming [24, 28] and also provide interesting insights in our setting.
In fact, the increase in the error between a DR coalitional value and the corresponding expected coalitional value is proportional to the estimation error between the true and the empirical distribution of that coalition, amplified by at most the Lipschitz constant of the corresponding value function. As opposed to the developments of the previous section, where we fix the radius \(\varepsilon_{S}\) and the number of samples \(K_{S}\) for any \(S\subset\mathcal{N}\) and calculate \(\beta_{S}\) based on (3), we now solve (3) with respect to \(\varepsilon_{S}\), thus obtaining the Wasserstein radius as a function of the confidence parameter \(\beta_{S}\) and the number of samples \(K_{S}\). In particular, we have that \[\varepsilon_{S}(\beta_{S},K_{S})=\begin{cases}\left(\frac{\ln(\frac{c}{\beta_{S}})}{qK_{S}}\right)^{\frac{1}{\max\{p,2\}}}&\text{if }K_{S}\geq\frac{\ln(\frac{c}{\beta_{S}})}{q}\\ \left(\frac{\ln(\frac{c}{\beta_{S}})}{qK_{S}}\right)^{\frac{1}{a}}&\text{if }K_{S}<\frac{\ln(\frac{c}{\beta_{S}})}{q}.\end{cases}\]
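The radius prescription above is straightforward to implement, and doing so makes visible the decay rate required by the convergence result that follows. In the sketch below, \(c\) and \(q\) are again placeholders for the problem-dependent constants of Lemma 1.

```python
import numpy as np

def radius_from_beta(beta, K, p, a, c=1.0, q=1.0):
    """Wasserstein radius eps_S(beta_S, K_S) obtained by inverting (3);
    c, q stand in for the unknown constants of Lemma 1."""
    t = np.log(c / beta) / q
    if K >= t:                       # corresponds to the eps <= 1 branch
        return (t / K) ** (1.0 / max(p, 2))
    return (t / K) ** (1.0 / a)      # corresponds to the eps > 1 branch

# The radius shrinks to zero as samples accumulate (cf. Theorem 3 below):
print([round(radius_from_beta(0.05, K, p=1, a=1.5), 3)
       for K in (10, 100, 1000)])
```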
The following theorem establishes almost-sure convergence of the DR core to the expected value core as the number of samples increases. **Theorem 3**: _Let Assumptions 1 and 3 hold. Suppose that, for each \(S\subset\mathcal{N}\), the sequence \(\{\beta_{S}^{K_{S}}\}_{K_{S}\in\mathbb{N}}\subset(0,1)\) satisfies \(\sum_{K_{S}=1}^{\infty}\beta_{S}^{K_{S}}<\infty\) and \(\lim_{K_{S}\to\infty}\varepsilon_{S}(\beta_{S}^{K_{S}},K_{S})=0\). Any sequence of the Wasserstein-based DR cores \(\{C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}})\}_{K\in\mathbb{N}^{M}}\), where \(K=(K_{S})_{S\subset\mathcal{N}}\), converges \(\mathbb{P}^{\infty}\)-almost surely to the true expected value core \(C_{\mathrm{E}}(G_{\mathbb{P}})\) as \(K_{S}\to\infty\) for all \(S\subset\mathcal{N}\). \(\square\)_ _Proof_: For each coalition \(S\subset\mathcal{N}\) we have that \[\mathbb{P}^{K_{S}}\{\xi_{K_{S}}\in\Xi^{K_{S}}:\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\leq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\}\] \[\geq\mathbb{P}^{K_{S}}\{\xi_{K_{S}}\in\Xi^{K_{S}}:d_{W}(\mathbb{P},\hat{\mathbb{P}}_{K_{S}})\leq\varepsilon_{S}(\beta_{S}^{K_{S}},K_{S})\}\geq 1-\beta_{S}^{K_{S}},\] where the last inequality is due to Lemma 1. Letting \(K_{S}\to\infty\) and since \(\lim_{K_{S}\to\infty}\beta_{S}^{K_{S}}=0\), we have that \[\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]\leq\lim_{K_{S}\to\infty}\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)] \tag{7}\] \(\mathbb{P}^{\infty}\)-almost surely. Following a methodology similar in spirit to [23], for each coalition \(S\subset\mathcal{N}\) and for every \(K_{S}\in\mathbb{N}\), by the definition of the supremum, for any \(\delta_{S}>0\) there exists \(\hat{\mathbb{Q}}_{K_{S}}\in\hat{\mathcal{P}}_{K_{S}}\) such that \[\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\leq\mathbb{E}_{\hat{\mathbb{Q}}_{K_{S}}}[u_{S}(\xi)]+\delta_{S}.\] By the Kantorovich-Rubinstein theorem (Theorem 1), we have that \[L_{S}d_{W}(\hat{\mathbb{Q}}_{K_{S}},\mathbb{P})\geq L_{S}\left(\mathbb{E}_{\hat{\mathbb{Q}}_{K_{S}}}\left[\frac{1}{L_{S}}u_{S}(\xi)\right]-\mathbb{E}_{\mathbb{P}}\left[\frac{1}{L_{S}}u_{S}(\xi)\right]\right),\] since, under Assumption 3, \(\frac{u_{S}(\xi)}{L_{S}}\) is a Lipschitz continuous function with Lipschitz constant less than or equal to 1. The relation above can then be written as \[\mathbb{E}_{\hat{\mathbb{Q}}_{K_{S}}}[u_{S}(\xi)]\leq\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]+L_{S}d_{W}(\hat{\mathbb{Q}}_{K_{S}},\mathbb{P}).\] We then have that \[\lim_{K_{S}\to\infty}\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\leq\lim_{K_{S}\to\infty}\mathbb{E}_{\hat{\mathbb{Q}}_{K_{S}}}[u_{S}(\xi)]+\delta_{S}\] \[\leq\lim_{K_{S}\to\infty}\big\{\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]+L_{S}d_{W}(\hat{\mathbb{Q}}_{K_{S}},\mathbb{P})\big\}+\delta_{S}=\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]+\delta_{S}\] \(\mathbb{P}^{\infty}\)-almost surely, since, by adapting [23, Lemma 3.7] to our setting, we have that for each \(S\subset\mathcal{N}\) \[\lim_{K_{S}\to\infty}d_{W}(\mathbb{P},\hat{\mathbb{Q}}_{K_{S}})=0,\ \mathbb{P}^{\infty}\text{-almost surely}.\] Letting \(\delta_{S}\downarrow 0\), we have that \[\lim_{K_{S}\to\infty}\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\leq\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)]. \tag{8}\] From relations (7) and (8) we have that, for any \(S\subset\mathcal{N}\), \[\lim_{K_{S}\to\infty}\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]=\mathbb{E}_{\mathbb{P}}[u_{S}(\xi)],\ \mathbb{P}^{\infty}\text{-almost surely}.\] As such, by Definitions 3 and 6, any sequence of DR cores \(\{C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}})\}_{K\in\mathbb{N}^{M}}\) for which, for all \(S\subset\mathcal{N}\), the sequence \(\{\beta_{S}^{K_{S}}\}_{K_{S}\in\mathbb{N}}\subset(0,1)\) satisfies \(\sum_{K_{S}=1}^{\infty}\beta_{S}^{K_{S}}<\infty\) and \(\lim_{K_{S}\to\infty}\varepsilon_{S}(\beta_{S}^{K_{S}},K_{S})=0\), with \(K=(K_{S})_{S\subset\mathcal{N}}\), converges \(\mathbb{P}^{\infty}\)-almost surely to the true expected value core \(C_{\mathrm{E}}(G_{\mathbb{P}})\) as \(K_{S}\to\infty\) for all \(S\subset\mathcal{N}\). \(\blacksquare\) ### _Finding allocations inside the DR core_ The results of this subsection hold irrespective of the type of sampling we perform. Leveraging results from [23], we show that an allocation inside the DR core can be computed by solving a finite-dimensional convex optimization problem. Here we impose the following assumption. **Assumption 4**: 1. For any \(S\subset\mathcal{N}\), the value function \(u_{S}(\xi)\) can be written as \(u_{S}(\xi)=\max_{m_{S}=1,\ldots,M_{S}}u_{m_{S}}(\xi)\), where \(-u_{m_{S}}(\xi)\) is proper, convex, and lower semi-continuous for all \(m_{S}\in\{1,\ldots,M_{S}\}\) and any \(S\subset\mathcal{N}\). 2. For any \(S\subset\mathcal{N}\), \(u_{S}\) does not take the value \(-\infty\) on \(\Xi\). 3. The support set \(\Xi\) is closed and convex. \(\square\) Under these assumptions we have the following result: **Lemma 2**: _Let Assumption 4 hold.
By drawing \(K_{S}\) samples and considering the dual variables \(\lambda_{S},\ell_{k_{S}},z_{k_{S}m_{S}},v_{k_{S}m_{S}}\) that correspond to the Wasserstein ball constraint of each coalition \(S\subset\mathcal{N}\), an allocation inside the DR core is found by solving the optimization problem_ \[P:\left\{\begin{aligned} &\min_{x,(\lambda_{S},\ell_{k_{S}},z_{k_{S}m_{S}},v_{k_{S}m_{S}})_{S\subset\mathcal{N}}}\|x\|_{2}^{2}\\ &\text{s.t.}\ \sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}}\\ &\lambda_{S}\varepsilon_{S}+\frac{1}{K_{S}}\sum_{k_{S}=1}^{K_{S}}\ell_{k_{S}}\leq\sum_{i\in S}x_{i},\ \forall\ S\subset\mathcal{N}\\ &[-u_{m_{S}}]^{*}(z_{k_{S}m_{S}}-v_{k_{S}m_{S}})+\sigma_{\Xi}(v_{k_{S}m_{S}})-z_{k_{S}m_{S}}^{\top}\xi^{(k_{S})}\leq\ell_{k_{S}},\\ &\qquad\|z_{k_{S}m_{S}}\|_{*}\leq\lambda_{S},\ \forall k_{S},\forall m_{S},\ \forall\ S\subset\mathcal{N}\end{aligned}\right.\] _where \([f]^{*}\) denotes the conjugate function of a function \(f\), i.e., \([f]^{*}(y)=\sup_{x\in dom(f)}(y^{\top}x-f(x))\), and \(\|\cdot\|_{*}\) is the dual norm, while \(\sigma_{\Xi}\) denotes the support function of the set \(\Xi\), i.e., the conjugate of its characteristic function. \(\square\)_ _Proof_: We wish to solve the following feasibility problem \[\left\{\begin{aligned} &\min_{x\in\mathbb{R}^{N}}\ \|x\|_{2}^{2}\\ &\text{s.t.}\ x\in C_{\mathrm{DR}}(G_{\hat{\mathcal{P}}_{K}}),\end{aligned}\right.\] which, by Definition 6 and considering the data-driven Wasserstein ball as the ambiguity set of each coalition, is equivalent to \[\left\{\begin{aligned} &\min_{x\in\mathbb{R}^{N}}\ \|x\|_{2}^{2}\\ &\text{s.t.}\ \sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}},\\ &\sum_{i\in S}x_{i}\geq\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)],\ \forall\ S\subset\mathcal{N}.\end{aligned}\right.\] Note that calculating the DR core based on the drawn \(K_{S}\) samples from each coalition \(S\) boils down to solving the worst-case expected value problem \[P_{S}^{\prime}\colon\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)],\ \forall\ S\subset\mathcal{N}.\] This is an infinite-dimensional optimization problem over probability measures. Under Assumption 4, each of these programs parameterized by \(S\) can be rewritten as a finite-dimensional convex program by Theorem 4.2 in [23], thus taking the dual form: \[\left\{\begin{aligned} &\min_{x}\ \|x\|_{2}^{2}\\ &\text{s.t.}\ \sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}}\ \text{ and }\forall\ S\subset\mathcal{N},\\ &\sum_{i\in S}x_{i}\geq\left\{\begin{aligned} &\min_{\lambda_{S},\ell_{k_{S}},z_{k_{S}m_{S}},v_{k_{S}m_{S}}}\lambda_{S}\varepsilon_{S}+\frac{1}{K_{S}}\sum_{k_{S}=1}^{K_{S}}\ell_{k_{S}}\\ &\text{s.t.}\ [-u_{m_{S}}]^{*}(z_{k_{S}m_{S}}-v_{k_{S}m_{S}})+\sigma_{\Xi}(v_{k_{S}m_{S}})-z_{k_{S}m_{S}}^{\top}\xi^{(k_{S})}\leq\ell_{k_{S}}\ \forall k_{S},\forall m_{S}\\ &\qquad\|z_{k_{S}m_{S}}\|_{*}\leq\lambda_{S}\ \forall k_{S},\forall m_{S}.\end{aligned}\right.\end{aligned}\right.\] The problem above can then be rewritten as \[\left\{\begin{aligned} &\min_{x}\ \|x\|_{2}^{2}\\ &\text{s.t.}\ \sum_{i\in\mathcal{N}}x_{i}=u_{\mathcal{N}}\ \text{ and }\forall\ S\subset\mathcal{N},\\ &\exists\ \lambda_{S},\ell_{k_{S}},z_{k_{S}m_{S}},v_{k_{S}m_{S}}:\ \lambda_{S}\varepsilon_{S}+\frac{1}{K_{S}}\sum_{k_{S}=1}^{K_{S}}\ell_{k_{S}}\leq\sum_{i\in S}x_{i},\\ &[-u_{m_{S}}]^{*}(z_{k_{S}m_{S}}-v_{k_{S}m_{S}})+\sigma_{\Xi}(v_{k_{S}m_{S}})-z_{k_{S}m_{S}}^{\top}\xi^{(k_{S})}\leq\ell_{k_{S}}\ \forall k_{S},\forall m_{S},\\ &\qquad\|z_{k_{S}m_{S}}\|_{*}\leq\lambda_{S}\ \forall k_{S},\forall m_{S}.\end{aligned}\right.\] Due to the existential operator, we can equivalently minimize with respect to \(\lambda_{S},\ell_{k_{S}},z_{k_{S}m_{S}},v_{k_{S}m_{S}}\) for all \(S\subset\mathcal{N}\). This concludes the proof. \(\blacksquare\) Note that the additional decision variables of \(P\) correspond to the Wasserstein distance constraints in the primal problems \(P_{S}^{\prime}\) for each \(S\subset\mathcal{N}\) [23, Theorem 4.2].
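The full dual reformulation \(P\) can be assembled with an off-the-shelf convex solver once the conjugates \([-u_{m_{S}}]^{*}\) and the support function \(\sigma_{\Xi}\) have been worked out for the value functions at hand. As a lighter-weight alternative, under Assumption 3 the Kantorovich-Rubinstein theorem gives \(\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\leq\frac{1}{K_{S}}\sum_{k_{S}}u_{S}(\xi^{(k_{S})})+\varepsilon_{S}L_{S}\), so enforcing the right-hand side yields a sufficient (possibly conservative) inner approximation of the DR core that is a simple quadratic program. The Python sketch below, using the CVXPY modelling package, implements this inner approximation, not the exact program \(P\); all inputs (value functions, Lipschitz constants, radii) are problem-specific placeholders of our own.

```python
import numpy as np
import cvxpy as cp

def allocation_in_dr_core(N, u_grand, u_S_funcs, L, eps, xi_samples):
    """Min-norm allocation in a Lipschitz-based inner approximation of
    the DR core: for each S we enforce
        sum_{i in S} x_i >= (1/K_S) sum_k u_S(xi^(k)) + eps_S * L_S,
    which is sufficient for distributional stability (Definition 4).

    u_S_funcs  : dict {frozenset S: callable u_S(xi)}  (hypothetical)
    L, eps     : dicts {frozenset S: L_S}, {frozenset S: eps_S}
    xi_samples : dict {frozenset S: array of K_S samples}
    """
    x = cp.Variable(N)
    constraints = [cp.sum(x) == u_grand]          # efficiency condition
    for S, u_S in u_S_funcs.items():
        emp_mean = float(np.mean([u_S(xi) for xi in xi_samples[S]]))
        constraints.append(cp.sum(x[list(S)]) >= emp_mean + eps[S] * L[S])
    problem = cp.Problem(cp.Minimize(cp.sum_squares(x)), constraints)
    problem.solve()
    return x.value if problem.status == cp.OPTIMAL else None
```

If this inner approximation turns out to be infeasible while the DR core itself is not, one must fall back on the exact reformulation \(P\) of Lemma 2.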
## V Numerical example We consider a stochastic coalitional game with \(N=3\) agents and uncertain coalitional values \(u_{S}(\xi)\), \(S\subset\mathcal{N}\). While our results allow for different Wasserstein radii and multi-sample sizes per coalition \(S\subset\mathcal{N}\), we assume here the same radius and the same multi-sample size for illustration purposes (see Corollary 1). Initially, we consider for each coalition a Wasserstein ball of radius \(\varepsilon=0.3\). Figure 1 focuses on coalition \(S=\{1\}\) and shows the range of normalized DR values for the expectation of \(u_{S}(\xi)\), denoted by \(\sup_{\mathbb{Q}_{S}\in\hat{\mathcal{P}}_{K_{S}}}\mathbb{E}_{\mathbb{Q}_{S}}[u_{S}(\xi)]\), over 500 simulations per multi-sample size \(K_{S}\). We note that a similar behaviour is exhibited by all other coalitions, with the graphs being centered at different values; as the observed pattern is similar across coalitions, it is not shown to avoid repetition. For the same number of samples per coalition varying in \([5,500]\), we observe that, as the number of samples increases, all the DR coalitional values, illustrated by the red shaded area in Figure 1, are above the corresponding expected value (blue solid line). Since the same pattern is observed across all coalitions, this implies (see Equation (6) in the proof of Theorem 2) that the DR core is contained within the expected value core as \(K_{S}\) increases for each \(S\subset\mathcal{N}\). This observation is in line with Theorem 2, since for an increasing multi-sample size per coalition, the confidence \(\beta\) in the provided theoretical guarantees tends to \(1\). Figure 2 shows that following the same approach as Figure 1 for a significantly smaller Wasserstein radius \(\varepsilon=0.03\) leads to a smaller empirical confidence, which is improved the more samples we obtain.
At \(K_{S}=500\), however, only a small portion of the DR values is below the expected value, which implies that DR stable allocations are stable in the mean sense with high confidence. Figure 3 illustrates the DR coalitional values for the empirical expectation of \(u_{S}(\xi)\) (red shaded areas), over 500 simulations, compared to the corresponding expected value (blue solid line) as the Wasserstein radius \(\varepsilon\) increases (for a fixed number of samples \(K_{S}=100\)). We note again that, for a radius \(\varepsilon\) larger than a certain threshold, all DR coalitional values are above their corresponding expected value; therefore, since this holds for all coalitions when the same number of samples is used, the DR core lies within the expected value core. This behaviour is expected because, for a fixed number of samples, the larger the Wasserstein ball, the more likely it is to include the true probability distribution. As such, obtaining allocations stable in the mean sense can also be achieved, even for a small number of samples, by tuning the Wasserstein radius of each coalition accordingly. We note that the comparison of Figures 1 and 2 is consistent with Figure 3. The percentage of the width of the shaded area acts as an empirical estimate of the confidence: the higher \(\varepsilon\) or \(K_{S}\) are, the lower \(\beta_{S}\), and as a result the higher the confidence. Compared to Figure 3, in Figure 4 a larger number \(K_{S}=250\) of samples is generated for each simulation. Drawing conclusions for the general case is not straightforward; however, when samples are drawn from distributions that admit a density and with certain concentration properties, we conjecture that the more samples are used, the more likely it is that the resulting empirical distributions \(\hat{\mathbb{P}}_{K_{S}}\) are closer to each other across simulations (i.e., for different multi-samples).

Fig. 1: DR coalitional values for the expectation of \(u_{S}(\xi)\) evaluated over 500 simulations (red shaded area) vs the corresponding expected coalitional value (blue solid line) for \(K_{S}\in\{5,10,30,50,100,200,500\}\) and \(\varepsilon=0.3\). For \(K_{S}\geq 10\) the DR core is contained with empirical confidence \(1\) in the expected value core.

Fig. 2: DR coalitional values for the expectation of \(u_{S}(\xi)\) evaluated over 500 simulations (red shaded area) vs the corresponding expected coalitional value (blue solid line) for \(K_{S}\in\{5,10,30,50,100,200,500\}\) and \(\varepsilon=0.03\). It is observed that a decrease in \(\varepsilon\) affects the confidence with which the DR core is contained within the expected value core. At \(K_{S}=500\), however, only a small portion of the DR values is below the expected value.

Fig. 3: Coalitional values (red shaded area), among 500 simulations, with increasing Wasserstein radius \(\varepsilon\) for \(K_{S}=100\). As the radius increases, the DR coalitional values of the expectation of \(u_{S}(\xi)\) are above their corresponding expected value (blue solid line) and thus, since the same pattern is repeated across coalitions, the DR core lies within the expected value core.

Fig. 4: Coalitional values (red shaded area), among 500 simulations, with increasing Wasserstein radius \(\varepsilon\) for \(K_{S}=250\). When samples are drawn from distributions that admit a density with certain concentration properties, we conjecture that, due to the more accurate empirical probability across simulations, the variability of the DR coalitional values is smaller compared to Figure 3.
As such, the centres of the Wasserstein balls would be closer, which in turn implies that the DR values for the expectation of \(u_{S}(\xi)\) would be closer to each other as well. In other words, the higher the number of samples, the smaller the variability of the resulting DR value for the expectation of \(u_{S}(\xi)\) across simulations. In line with this intuition, the width of the shaded area in Figure 4 (an empirical estimate of variability) is smaller compared to that of Figure 3. ## VI Conclusion We have introduced the concept of the distributionally robust core for coalitional games subject to distributional uncertainty, namely a set of payoff allocations that is robust in the expected value sense. We showed, both theoretically and numerically, that the concept of distributionally robust stability implies stability in the mean sense as more data becomes available, given a certain radius. Furthermore, one can obtain payoff allocations in the expected value core by tuning the Wasserstein radius even for a small sample size. This paper takes a first step towards studying the class of distributionally robust chance-constrained coalitional games. Future work will focus on improving the probabilistic guarantees and on designing distributed payoff allocation algorithms.
2302.12312
Advanced Accelerator Concepts: From Birth to High Impact Science
This recounting of the history of the last three-and-a-half decades of advanced accelerator concepts is offered from a decidedly parochial point of view -- that of the career of the author, Prof. James Rosenzweig of the UCLA Dept. of Physics and Astronomy. This short voyage through a by-now long career will illustrate the very beginning of the compelling field of advanced accelerators, proceed through their maturation into one of the fastest growing areas of beam-based science, and give a look into their emerging importance in applications. An important aspect of advanced accelerators is their relationship to other burgeoning fields, particularly free-electron lasers. The framework of this retelling lends itself particularly well to illustrating this relationship. Likewise, this quick summary serves to demonstrate the essential team nature of our field, and the contributions of participants from all levels, ranging from students to those scientists whose careers may have developed in previous eras of positive ferment in accelerator science.
James Rosenzweig
2023-02-23T20:18:33Z
http://arxiv.org/abs/2302.12312v1
# Advanced Accelerator Concepts: From Birth to High Impact Science ###### Abstract This recounting of the history of the last three-and-a-half decades of advanced accelerator concepts is offered from a decidedly parochial point of view - that of the career of the author, Prof. James Rosenzweig of the UCLA Dept. of Physics and Astronomy. This short voyage through a by-now long career will illustrate the very beginning of the compelling field of advanced accelerators, proceed through their maturation into one of the fastest growing areas of beam-based science, and give a look into their emerging importance in applications. An important aspect of advanced accelerators is their relationship to other burgeoning fields, particularly free-electron lasers. The framework of this retelling lends itself particularly well to illustrating this relationship. Likewise, this quick summary serves to demonstrate the essential team nature of our field, and the contributions of participants from all levels, ranging from students to those scientists whose careers may have developed in previous eras of positive ferment in accelerator science. wakefields, accelerators, lasers, electron beams, free-electron lasers ## I Introduction This article has been composed by the author, Prof. James Rosenzweig of the UCLA Dept. of Physics and Astronomy, to give a written appreciation and historic context for his award of the 2022 Advanced Accelerator Prize. This honor has been given with the accompanying citation, which follows: "For his seminal and pioneering contributions at the nexus of advanced accelerators, light sources and beam physics". There is quite a bit to account for in this citation, and so the occasion presents itself to recount the story of the modern development of advanced, high field accelerators, based on emerging new techniques. This retelling begins very near to the start of the field, and progresses to the current time period, in which advanced accelerators are beginning to have a high impact on science and applications. To tell this story in a familiar way, with the readers' indulgence, I will proceed to describe my contributions and observations on the field directly in the first person. I now appreciate, looking back, that my influence on the field of advanced accelerators has been extensive, ranging over a wide range of fields, from wakefields in plasmas and dielectrics, to laser-driven accelerators, and on to cryogenic, very-high-field accelerators. Indeed, with my vigorous research program and equally energetic educational efforts, my initiatives and accomplishments have marked not only the multitude of fields comprising advanced accelerator science, but also their burgeoning applications, particularly in the realm of new-generation light sources. These nominally diverse fields have been brought together by the dedicated efforts of my group at UCLA, the Particle Beam Physics Laboratory (PBPL). The approach instituted at the PBPL, which is a type of "school" in the sense of intellectual flavor, has indeed produced a body of work that embraces a coherent view of advanced accelerators, fundamental beam physics, and new light sources. This viewpoint is shared by the large cohort of students and postdocs trained in the PBPL program at UCLA.
This group has had a notable impact on the field, having been recognized with an impressive list of awards: (four) SLAC Panofsky Fellowships; (two) Young International FEL Prizes; the APS DPB Best PhD Thesis Award; the CERN Marie Curie Fellowship; the EPS Sacherer Prize for Young Accelerator Scientists; and the DOE HEP Early Career Award. It is instructive to review the research achievements motivating the recognition attendant to the 2022 AAC Prize, which have provided key direction to the development of the advanced accelerator concepts community for the last 35 years, or nearly the length of the field itself. I organize the list of these major accomplishments by theme, and include a discussion of present activity that follows on the previous work in the exciting field of advanced accelerator concepts. In order to properly set the stage, we begin with a brief discussion of my pre-UCLA years. ## II Prehistory Before the UCLA era began, my early career training occurred as a Wisconsin graduate student performing experimental and theoretical research at Argonne National Laboratory. The key players in advanced accelerator initiatives at Wisconsin set the tone for much of the subsequent work in the AAC field; they were Dave Cline, Fred Mills, and Sandro Ruggiero. Key support at the U.S. Dept. of Energy came from David Sutter, who was a keystone of the effort which gave birth to the field of advanced accelerators. At the pioneering ANL laboratory termed the Advanced Accelerator Test Facility, or AATF, we performed the first proof-of-principle experiments on plasma wakefield acceleration [1], uncovering critically important focusing effects [2, 3, 4, 5], and reporting the first aspects of nonlinear plasma waves [6]. This last work led to a deep investigation of nonlinear PWFA [7, 8, 9], and ultimately, at UCLA, to my now-recognized seminal proposal for operation of plasma-wakefield accelerators in the highly nonlinear "blowout" regime, discussed below. This emphasis on nonlinear operation, which initially seemed to be a curious distraction in the PWFA field, indeed turned out to be essential. During the time period of initial wakefield studies at ANL, I participated with some interest in the first demonstrations of the dielectric wakefield accelerator (DWA) [10]. This also turned out to be a formative initiative, as I embraced the DWA line of inquiry two decades later, also similarly emphasizing a nonlinear limit, where the properties of dielectrics begin to change due to field strengths at the gigavolt-per-meter level. The scientists at the AATF who collaborated on both PWFA and DWA included Jim Simpson (group leader), Sekazi Mtingwa, Paul Schoessow, Wei Gai, and Sandro Ruggiero. After the AATF, I accepted a Wilson Fellowship at Fermilab, one of the few instances for which this position was granted to a researcher from accelerator physics. At Fermilab, I concentrated on learning about colliders, and played a strong role in initiating the first wave of research on superconducting linear colliders [11], through comprehensive design studies. This initiative, initially aimed at the TESLA linear collider project, eventually morphed into the strong Fermilab effort towards realizing the International Linear Collider (ILC). A key collaborator in the effort at Fermilab was Helen Edwards. It should be noted that this formative period is by now thirty years past, and the majority of the protagonists from that era are no longer living. Their legacy is another matter; it lives on quite vigorously.
## III UCLA PWFA Research Upon arriving at UCLA, I initially placed my emphasis on continuing superconducting linear collider work, concentrating on linear collider-quality RF photoinjectors [12]. This direction was natural, as I began my time at UCLA collaborating experimentally on photoinjector development for the budding free-electron laser program there. The situation in which I de-emphasized PWFA research lasted only a few months, until I met a talented computational plasma physicist named Boris Breizman, who gave me his program, which was capable of simulating, with minimal computational resources, the nonlinear response of plasmas to beams that were denser than the ambient plasma electron distribution. In this way I uncovered the PWFA blowout regime. This initiative [13] has had a profound effect on the subsequent dramatic progress in the PWFA field, as it identified new, highly advantageous properties in the plasma "bubble" produced: linear focusing, and acceleration dependent only on wave phase. Thus, it has been possible to predict excellent beam phase space preservation, attributes that have since been verified experimentally. The community has since produced notable theoretical and experimental work exploring the PWFA in this nonlinear regime. My own experimental work in the PWFA blowout regime has ranged from path-breaking demonstrations of ion-focusing transport [14, 15] as well as thin lens focusing [16], to first experiments on acceleration in this scheme [17, 18]. I, along with many collaborators, investigated some of the first high impact proposals for injection into PWFAs, first using density transitions [19, 20, 21, 22], a technique that endures today. Most recently, I co-led with Bernhard Hidding the successful first demonstration of injection of low emittance electron beams through laser-induced ionization, in the E210 Trojan Horse experiment at FACET [23, 24]. These experiments have been accompanied by a wide range of theoretical studies. One particularly impactful thrust involved more controlled injection schemes, which give beams with the highly increased brightness one expects from scaling up the field and frequency of the wave capturing the beam [25]. This focus most notably included the original and continuing Trojan Horse theoretical treatments with Hidding [26, 27, 28, 29, 30, 31]. Another area of high impact has concentrated on the first predictions of ion collapse in the PWFA [32, 33], which has now led to an experiment proposal at FACET-II [34]. Another fundamental subject has been investigated in detail, that of the physics involved in the scaling of very nonlinear plasma wakes [35, 36], which explains the persistence of linear Cerenkov scaling even in extremely nonlinear interactions. PBPL PWFA research has continued in recent times to include experimental examinations of optimized transformer ratios [37] based on spatially shaped beams, as well as associated diagnostic schemes [38]. New schemes for beam injection that are offshoots of the Trojan Horse "plasma photocathode" approach are being investigated at FACET-II. Finally, acceleration at the TeV/m level in very dense plasmas, based on ultra-short beams or resonant excitation, is being pursued [39]. ## IV UCLA DWA Research In the past decade and one-half, the PBPL has led the effort to extend the DWA to the GV/m frontier [40] and beyond, with first experiments at the SLAC FFTB showing over 5 GV/m longitudinal field before breakdown.
These experiments led to follow-on measurements of coherent Cerenkov radiation production from the DWA [41] at the UCLA Neptune Lab. When the FFTB was replaced by FACET, the PBPL program in DWA there demonstrated sustained acceleration in structures up to 15 cm in length [42]. In this period of active DWA research at both FACET and the BNL ATF, new approaches to structure design were explored, including photonic concepts [43] and the exploitation of new symmetries [44], to control mode content in 3D and to suppress beam breakup (BBU) [45, 46]. As these are highly coupled structures, BBU is indeed thought to be a serious limitation of the DWA in application. Dielectric wakefield studies at GeV/m gradients have uncovered unexpected new physics effects in the DWA interaction, such as high-field-induced conductivity [47]. Recently the PBPL team has investigated acceleration of positrons in the DWA [48]. This work, along with the above-described PWFA research, has had an outsized effect on the programs at SLAC FACET/FACET-II, the Argonne Wakefield Accelerator, and the BNL ATF. These programs remain very active, with new experiments in DWA underway or planned, including a detailed understanding of normal and skew quadrupole effects in slab-symmetric structures. These strong quadrupole focusing effects can be harnessed to provide new mechanisms of BNS damping, permitting meter-scale and GeV energy gain experimental scenarios at FACET-II [49]. In recent years, wakefield acceleration at the PBPL has increasingly relied on the leading contributions of UCLA scientist Gerard Andonian. ## V Laser Acceleration Research I became interested in laser-based acceleration schemes in the mid-1990s and published several seminal papers that have strongly guided the development of the dielectric laser accelerator, or DLA [50, 51]. I led the GALAXIE DLA-based compact free-electron laser effort [52] for DARPA, which produced a deep understanding of the unique beam stability conditions in the DLA [53]. This program gave way to the highly successful ACHIP collaboration funded by the Moore Foundation. In this context at UCLA, Prof. Pietro Musumeci has been a key ACHIP protagonist. Laser-based acceleration at UCLA has another emphasis, also mainly led by Musumeci: that of the inverse free-electron laser (IFEL). I have had the pleasure of collaborating with him on several initiatives, including the first demonstration of very high energy gain in the IFEL [54], and the production of high-quality accelerated beams [55]. This work culminated in a joint project, mentioned in appropriate context below, in which an IFEL-derived beam was used in the production of beam-laser scattered Compton X-rays. ## VI Advanced Electron Sources Research on advanced electron sources has been critical in the development of the advanced accelerator and radiation production fields. As such, I have occupied myself with active research in this area. My resulting contributions range from the theoretical and experimental basis of high field particle dynamics [56] and emittance compensation [57, 58, 59], to the first exploration of the longitudinal blowout regime [60, 61, 62]. In this field, I have introduced a number of generations of high brightness RF photoinjectors [63], which have been enabling technology for photoinjector laboratories worldwide (SLAC, BNL, UCLA, INFN-LNF Frascati, Sincrotrone Trieste, FNAL, LLNL). This program has included not only standard 1.6 cell RF guns, but new integrated systems, including the hybrid photoinjector [64, 65, 66, 67].
Most recently, in collaboration with S. Tantawi and others, the PBPL has been developing cryogenic copper structures that we have collaboratively demonstrated to support surface fields up to 500 MV/m [68, 69]. This is an enabling technology for applications in high energy physics (the C\({}^{3}\) linear collider [70]) and free-electron lasers. At UCLA, in collaboration with SLAC, the first step in developing this approach is a new generation of very high brightness RF photoinjectors operated at 250 MV/m peak field [71, 72]. This device is capable of producing linear-collider-class asymmetric emittance beams when magnetized, and of driving new types of FELs (see below) when operated in high brightness mode. ## VII Electron Beam Manipulation and Diagnosis The PBPL program has played a pioneering role in the development of new methods for manipulating and diagnosing electron beams [73], so that new capabilities in advanced accelerator and light source research may be reached. The program has been a leader in the manipulation of high brightness beams, introducing or exploring new compression techniques based on chicanes [74, 75, 76], IFEL bunching [77], dogleg transport [78], and velocity bunching [79, 80], as well as beam shaping methods [81]. In this context we have necessarily introduced numerous influential measurement methods, including emittance diagnosis in the presence of space-charge [82], and a variety of coherent radiation-based [83, 84, 85, 86, 87] longitudinal diagnostics - addressing measurement challenges from the picosecond down to the attosecond level. ## VIII Advanced Light Sources Much of my work on electron sources was motivated by the needs of the first proof-of-principle self-amplified spontaneous emission free-electron laser (SASE FEL) experiments [88, 89, 90]. For the development of RF photoinjectors, and key participation (in the collaboration headed by Prof. Claudio Pellegrini) in the initial SASE FEL experiments, I was awarded, along with Ilan Ben-Zvi, the 2007 International Free-Electron Laser Prize. Special recognition was given for the introduction of start-to-end simulations to ascertain the microscopic physics of the beam-FEL interaction. I have in the past decade merged a considerable FEL research effort (including investigation of orbital angular momentum effects [91, 92, 93, 94]) with advanced accelerators to produce a new concept - the _5\({}^{th}\) generation light source_. The PBPL is now working on several manifestations of this new class of instrument, including: MEMS-based undulators [95] driven by DLAs; inverse FEL acceleration to produce Compton X-rays [96]; and demonstrator FELs based on plasma accelerators (with INFN-LNF through the EuPRAXIA initiative [97], and with the LBNL BELLA team [98]). Our work on the Compton sources is the culmination of a steady campaign [99, 100, 101, 102, 103, 104] to advance the physics of ICS sources. The major focus of PBPL efforts on next-generation XFELs based on advanced accelerator methods, however, concentrates on cryo-RF at high field. This proposal, recently published in a highly influential article in _New Journal of Physics_ [105], shows that the very high brightness beam produced by the new cryo-gun, accelerated in \(>\)100 MeV/m cryo-linacs, and paired with innovative compression methods, state-of-the-art undulators, and compact X-ray optics [106, 107, 108], can produce an extremely attractive XFEL.
This instrument, the ultra-compact XFEL (UC-XFEL), is aimed at revolutionizing access to XFEL facilities; its compact size (\(<\)40 m) and modest cost (\(\sim\)$35M) should permit it to be diffused widely in university or industry labs. The push towards the UC-XFEL is also recognized as highly synergistic with the needs of beam physics and technology development for C\({}^{3}\). ## IX New Directions Extending our interest in high gradient electron sources, of late the PBPL and collaborators (notably Peter Hommelhoff, Univ. of Erlangen) have utilized very high field (to 30 GV/m) laser-surface interactions in a nano-blade geometry to produce extremely low emittance, femtosecond electron pulses that may be ideal injectors for DLAs, or be extended to linear collider asymmetric emittance sources [109, 110]. The fields in this interaction are the largest non-destructive fields measured in such a device. Together with the DWA and cryo-RF initiatives, this illustrates an emerging theme in PBPL research, that of high field effects in solid-state matter. The nano-blade initiative, as well as those of the cryogenic gun and the UC-XFEL, are vital components of the successful NSF STC, the Center for Bright Beams, in which the PBPL has played a strong role for the last six years. In addition to our vigorous programs at national user facilities, we have now constructed a new, ambitious laboratory at UCLA, heir to previous efforts (this is the fourth photoinjector lab constructed by the PBPL at UCLA), that will be a venue for wakefield acceleration, compact light (FEL and ICS) sources, and frontier high brightness beam sources. This lab will host the frontier cryo-RF development at UCLA, and its proximity to the Basic Plasma Science Facility will permit new wakefield studies that explore phenomena related to long time-scale PWFA behavior, in the laboratory and in the space environment [111]. ## X Summary The contributions described above have produced several tangible lasting features, prominent among them the training of a very large cohort of graduate students who have had major secondary influence in the field of advanced accelerators. In addition, my laboratory program has produced 15 patents applied for and/or granted, and spun off a highly successful company, RadiaBeam Technologies, which has played a key role in the development of beam science, technology, and advanced accelerator methods. This contribution has led to a notable strengthening of the US industrial accelerator landscape, and gave further dimension to the impact of PBPL trainees.
It is only appropriate that we list here these trainees, those who spent all or part of their graduate or post-doctoral career at the UCLA PBPL training for a career in advanced accelerators and related fields: Gil Travish, Nikolai Barov, Eric Colby, Aaron Tremaine, Hyyong Suk, Matthew Thompson, Salime Boucher, Ron Agustsson, Adnan Doyuran, Alex Murokh, Oliver Williams, Kip Bishofberger, Gerard Andonian, Pedro Frigola, Scott Anderson, Rodney Yoder, Luigi Faillace, Alan Cook, Joel England, Yusuke Sakai, Atsushi Fukasawa, Alessandra Valloni, Agostino Marinelli, Gabriel Marcus, Erik Hemsing, Andrey Knyazik, Josh McNeur, Diktys Stratakis, Sam Barber, Aihua Deng, Yunfeng Xi, Alex Cahill, Brendan O'Shea, Finn O'Shea, Claudio Emma, Egor Dyunin, Ariel Nause, Phuc Hoang, Ryan Roussel, Ivan Gadjev, Nathan Majernik, Joshua Mann, Gerard Lawler, Pratik Manwani, Monika Yadav, Walter Lynn, and Fabio Bosco. I am truly grateful to have had the opportunity to provide mentorship to these colleagues. ## Acknowledgment I would like to thank the following agencies for their support over the course of my career: US Dept. of Energy High Energy Physics and Basic Energy Sciences; US National Science Foundation; Defense Advanced Research Projects Agency; US Domestic Nuclear Detection Office; Italian Istituto Nazionale di Fisica Nucleare; Israel Ministry of Defense; the Keck Foundation; and the Sloan Foundation.
2310.02310
Exploring the low-mass regime of galaxy-scale strong lensing: Insights into the mass structure of cluster galaxies
We aim at a direct measurement of the compactness of three galaxy-scale lenses in massive clusters, testing the accuracy of the scaling laws that describe the members in strong lensing (SL) models of galaxy clusters. We selected the multiply imaged sources MACS J0416.1$-$2403 ID14 ($z=3.221$), MACS J0416.1$-$2403 ID16 ($z=2.095$), and MACS J1206.2$-$0847 ID14 ($z=3.753$). Eight images were observed for the first SL system, and six for the latter two. We focused on the main deflector of each galaxy-scale SL system (identified as members 8971, 8785, and 3910, respectively), and modelled its total mass distribution with a truncated isothermal sphere. We accounted for the lensing effects of the remaining cluster components, and included the uncertainty on the cluster-scale mass distribution through a bootstrapping procedure. We measured a truncation radius value of $6.1^{+2.3}_{-1.1} \, \mathrm{kpc}$, $4.0^{+0.6}_{-0.4} \, \mathrm{kpc}$, and $5.2^{+1.3}_{-1.1} \, \mathrm{kpc}$ for members 8971, 8785, and 3910, respectively. Alternative non-truncated models with a higher number of free parameters do not lead to an improved description of the SL system. We measured the stellar-to-total mass fraction within the effective radius $R_e$ for the three members, finding $0.51\pm0.21$, $1.0\pm0.4$, and $0.39\pm0.16$, respectively. We find that a parameterisation of the properties of cluster galaxies in SL models based on power-law scaling relations with respect to the total luminosity cannot accurately describe their compactness over their full total mass range. Our results agree with modelling of the cluster members based on the Fundamental Plane relation. Finally, we report good agreement between our values of the stellar-to-total mass fraction within $R_e$ and those of early-type galaxies from the SLACS Survey. Our work significantly extends the regime of the current samples of lens galaxies.
Giovanni Granata, Pietro Bergamini, Claudio Grillo, Massimo Meneghetti, Amata Mercurio, Uros Meštrić, Antonio Ragagnin, Piero Rosati, Gabriel Bartosch Caminha, Luca Tortorelli, Eros Vanzella
2023-10-03T18:00:02Z
http://arxiv.org/abs/2310.02310v2
Exploring the low-mass regime of galaxy-scale strong lensing: Insights into the mass structure of cluster galaxies

###### Abstract

Context:Several recent studies have highlighted a discrepancy between the strong lensing (SL) properties of observed cluster galaxies and the predictions of \(\Lambda\) cold dark matter (CDM) cosmological hydrodynamical simulations. This discrepancy can be interpreted as the result of observed cluster members being more compact than their simulated counterparts. Aims:In this work, we aim at a direct measurement of the compactness of a few selected galaxy-scale lenses in massive clusters, testing the accuracy of the scaling laws adopted to describe the members in SL models of galaxy clusters. Methods:We selected the multiply imaged sources MACS J0416.1\(-\)2403 ID14 (\(z=3.221\)), MACS J0416.1\(-\)2403 ID16 (\(z=2.095\)), and MACS J1206.2\(-\)0847 ID14 (\(z=3.753\)). Eight multiple images were observed for the first SL system, and six for the latter two. We focused on the main deflector of each galaxy-scale SL system (identified as members 8971, 8785, and 3910, respectively), and modelled its total mass distribution with a truncated isothermal sphere. To account for the lensing effects of the remaining components of the cluster, we took the most accurate SL model of its mass distribution available. To include the uncertainty and the systematics affecting the cluster-scale mass models, we explored the posterior probability distribution of its parameters and extracted 100 cluster mass distributions. For each of them, we optimised the mass parameters of the galaxy-scale lens: the bootstrapping procedure allowed us to obtain a realistic estimate of the uncertainty on their values. Results:We measured a truncation radius value of \(6.1^{+2.3}_{-1.1}\) kpc, \(4.0^{+0.6}_{-0.4}\) kpc, and \(5.2^{+1.3}_{-1.1}\) kpc for members 8971, 8785, and 3910, corresponding to total mass values of \(M=1.2^{+0.3}_{-0.1}\times 10^{11}\,M_{\odot}\), \(M=1.0^{+0.2}_{-0.1}\times 10^{10}\,M_{\odot}\), and \(M=6.3^{+1.0}_{-1.1}\times 10^{10}\,M_{\odot}\), respectively. Alternative non-truncated models with a higher number of free parameters do not lead to an improved description of the SL system and show some parametric degeneracies. We measured the stellar-to-total mass fraction within the effective radius for the three cluster members, finding \(0.51\pm 0.21\), \(1.0\pm 0.4\), and \(0.39\pm 0.16\), respectively. Conclusions:We find that a parameterisation of the physical properties of cluster galaxies in SL models based on power-law scaling relations with respect to the observed total luminosity cannot accurately describe the compactness of the members over their full total mass range. Our results, instead, agree with recent modelling of the cluster members based on the Fundamental Plane relation. Finally, we report good agreement between our predicted values of the stellar-to-total mass fraction within the effective radius and those of early-type galaxies from the Sloan Lens ACS Survey. Our work significantly extends the regime of the current samples of lens galaxies, towards the mass range that will be probed by the _Euclid_, _Rubin_, and _James Webb_ Telescopes.

## 1 Introduction

Gravitational lensing has recently become an extremely effective technique to study the dark-matter (DM) distribution in galaxies and clusters of galaxies (e.g. Natarajan & Kneib, 1997; Treu, 2010).
The observed light deflection effects only depend on the total gravitational potential of the lens, without any discrimination between the baryonic and DM components. Due to their high mass and deep gravitational potential well, clusters often produce strong lensing (SL) of several tens of background sources. Strong lensing studies have led to measurements of the cluster total mass profile with an uncertainty of a few percent near the core, that is, within a few hundred kiloparsecs from the cluster centre (e.g. Grillo et al., 2015; Jauzac et al., 2015; Caminha et al., 2017a, b; Sharon et al., 2020; Acebron et al., 2022). DM accounts for more than 85% of the total mass budget of galaxy clusters, while the remaining, baryonic, mass component is dominated by a hot plasma, the intra-cluster medium (ICM), whose mass distribution can be estimated from X-ray data. Observations of SL can thus be combined with baryonic mass diagnostics to disentangle the number and mass distributions of the DM haloes of galaxy clusters from the total mass profile of the lens (e.g. Annunziatella et al., 2017; Sartoris et al., 2020). Several _Hubble_ Space Telescope (HST) photometric campaigns have supported the effort to identify multiply imaged sources in the cores of different samples of galaxy clusters, in order to build detailed SL models. The most notable examples are the Cluster Lensing And Supernova survey with _Hubble_ (CLASH, Postman et al., 2012), the _Hubble_ Frontier Fields (HFF, Lotz et al., 2017) programme, the Reionization Lensing Cluster Survey (RELICS, Coe et al., 2019), and the Beyond Ultra-deep Frontier Fields And Legacy Observations (BUFFALO, Steinhardt et al., 2020). Spectroscopic follow-up campaigns on the Very Large Telescope (VLT), carried out with multi-object spectrographs, such as CLASH-VLT (dark matter mass distributions of _Hubble_ treasury clusters and the foundations of LCDM structure formation models, Rosati et al., 2014), complemented with data from the integral-field Multi Unit Spectroscopic Explorer (MUSE, Bacon et al., 2010), have identified and confirmed up to more than 1000 cluster members (e.g. Mercurio et al., 2021) and more than 200 multiple images per cluster (e.g. Bergamini et al., 2023, hereafter B23). Thanks to these data, lensing models have reached extremely high resolution in mapping the mass distribution of all the cluster components, down to the scale of the single member galaxies. In the framework of the currently adopted \(\Lambda\)CDM cosmological model, dominated by CDM and with a cosmological constant \(\Lambda\), cosmological simulations allow us to describe the formation and evolution of DM haloes at different scales. They show that haloes form hierarchically from subsequent mergers of smaller structures (e.g. Tormen, 1997; Moore et al., 1999; Borgani and Kravtsov, 2011): numerous less massive haloes, or sub-haloes, are therefore found in the proximity of the most massive ones. The accuracy of simulations has been significantly increased in the last few years by the inclusion of baryons and of the physical effects of their interplay with DM. Simulations can therefore be used to obtain quantitative predictions on the expected number and mass distributions of haloes and sub-haloes in the Universe.
Any significant discrepancy between these results and what is inferred from observations may imply that the formation of structures does not proceed as forecast by the underlying cosmological hypotheses, and would therefore call into question the \(\Lambda\)CDM paradigm and/or our current understanding of the effects of baryons in shaping the mass distribution of the DM haloes. Comparing 25 clusters extracted from a suite of \(\Lambda\)CDM hydrodynamical simulations (Planelles et al., 2014; Rasia et al., 2015) with 11 state-of-the-art SL models of massive clusters observed in CLASH, Meneghetti et al. (2020) reported a discrepancy of around an order of magnitude between the simulated and observed probability for the clusters to produce galaxy-galaxy strong lensing (GGSL) events (i.e. SL phenomena in which several multiple images are observed around one or a few cluster members). This discrepancy can be interpreted as a result of simulated galaxy cluster members being less compact than their observed counterparts. To investigate the possible impact of the numerical setup chosen for the simulations, Meneghetti et al. (2022), Ragagnin et al. (2022), and Meneghetti et al. (2023) repeated the test by Meneghetti et al. (2020) with different simulation resolutions and baryonic feedback schemes, confirming the previously reported tension with observations. Focusing, instead, on the possible systematics affecting SL models, Granata et al. (2022), hereafter G22, tested the impact on the discrepancy of the power-law scaling relations used in lensing models to link the mass of the cluster members to their luminosity, replacing them with the Fundamental Plane (FP, Dressler et al., 1987; Djorgovski and Davis, 1987; Bender et al., 1992) relation, defined in the three-dimensional (\(\log\sigma_{0}\), \(\log R_{e}\), SB\({}_{e}\)) space, where \(\sigma_{0}\) is the central velocity dispersion, \(R_{e}\) is the effective radius, and SB\({}_{e}\) is the average surface brightness within \(R_{e}\) in mag arcsec\({}^{-2}\) units. The procedure allows for a more complex description of the physical properties of the members, but does not significantly reduce the observed discrepancy. In this work, we considered the reference sample of lens clusters included in Meneghetti et al. (2020): Abell S1063 (AS1063), at \(z=0.348\), MACS J0416.1\(-\)2403 (MACS J0416), at \(z=0.396\), and MACS J1206.2\(-\)0847 (MACS J1206), at \(z=0.439\). The first two clusters were part of the HFF sample, while all three were CLASH and CLASH-VLT targets. We examined all main GGSL events in the three clusters, looking for systems with a clear morphology, in which several multiple images are observed very close to a cluster member, providing us with stringent constraints on its total mass profile, and therefore on its compactness. We selected two GGSL systems in MACS J0416, identified as ID14 and ID16 in B23, and one in MACS J1206, identified as ID14 in Bergamini et al. (2019, hereafter B19). The first analysis of this system was performed by Grillo et al. (2014). Gravitational lensing, in combination with stellar dynamics, has allowed for a considerable improvement of our understanding of the internal structure of galaxies, such as their stellar-to-total mass fraction and the mass density distribution of their DM haloes (see Shajib et al., 2022).
In particular, observational campaigns such as the Lenses Structure and Dynamics (LSD; Treu and Koopmans, 2004), the Sloan Lens ACS Survey (SLACS; Bolton et al., 2006; Treu et al., 2006; Auger et al., 2010), the Strong Lensing Legacy Survey (SL2S; Gavazzi et al., 2012), and the Dark Energy Survey (DES; Abbott et al., 2019) have mostly focused on isolated, massive early-type galaxies, due to observational limitations. Their total mass distribution is very well fit by an isothermal profile out to a large distance from their centre (Treu et al., 2006). In the next few years, new, wide, and deep surveys such as _Euclid_ and the _Rubin_ Observatory Legacy Survey of Space and Time (LSST) will boost the number of known galaxy-scale lenses by a few orders of magnitude, and significantly extend the lower mass threshold of observed lenses (Collett, 2015). The intense gravitational field of the cluster leads to secondary critical lines forming around several faint galaxies. In this work, we build SL models for galaxies whose mass is too low for current surveys of lens galaxies to detect them in the field, probing a total lens mass range which will only be fully explored by the upcoming surveys. Furthermore, SL models of clusters favour truncated mass profiles for the cluster members, with a half-mass radius a few times larger than the half-light radius (G22). In this work, we evaluate the robustness of the assumption of truncated mass profiles for the cluster members and test possible alternative parametrisations. The paper is organised as follows. In Sect. 2, we give details on the reference SL models for MACS J0416 and MACS J1206, published in B23 and B19, respectively. In Sect. 3, we present our observations and the layout of the three GGSL systems. In Sect. 4, we build a SL model for the lens galaxies, and in the following Sect. 5, we discuss their inferred properties, with a special focus on their compactness. Finally, in Sect. 6, we summarise our results. In this work, we use a flat \(\Lambda\)CDM cosmology with \(\Omega_{\rm m}=0.3\) and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), in which \(1^{\prime\prime}\) corresponds to a scale of 5.34 kpc at \(z=0.396\), the redshift of MACS J0416, and of 5.68 kpc at \(z=0.439\), the redshift of MACS J1206. All magnitudes are expressed in the AB system.

## 2 Reference strong lensing models

As anticipated, we considered the most recent versions of the SL models of MACS J0416 and MACS J1206 adopted in Meneghetti et al. (2020). They were presented in B23 and B19, respectively. Both models were built using the publicly available code LensTool (Kneib et al., 1996; Jullo et al., 2007; Jullo & Kneib, 2009). All mass components were described with a parametric dual pseudo-isothermal elliptical (dPIE) mass density profile (Limousin et al., 2005; Eliasdottir et al., 2007), which is the ellipsoidal generalisation of a truncated isothermal sphere with a central core. The three-dimensional mass density profile of a spherical dPIE is (Eliasdottir et al., 2007) \[\rho(r)=\frac{\rho_{0}}{(1+r^{2}/r_{c}^{2})(1+r^{2}/r_{t}^{2})}, \tag{1}\] where \(r\) is the distance from the halo centre, while \(r_{c}\) and \(r_{t}\) are the core and the truncation radii, respectively. Equation (1) implies a smooth transition for the mass density between a central flat core for \(r<r_{c}\), an isothermal behaviour (\(\rho(r)\propto r^{-2}\)) for \(r_{c}<r<r_{t}\), and a steeper profile (\(\rho(r)\propto r^{-4}\)) for \(r>r_{t}\).
The truncation radius can also be interpreted as the three-dimensional half-mass radius (B19). The central density scale, \(\rho_{0}\), is related to the isothermal velocity dispersion \(\sigma\) by (Limousin et al., 2005) \[\rho_{0}=\frac{\sigma^{2}}{2\pi G}\,\frac{r_{c}+r_{t}}{r_{c}^{2}\,r_{t}}. \tag{2}\] The velocity dispersion parameter \(\sigma\) is connected to the observed aperture-averaged stellar velocity dispersion by a projection coefficient (see Appendix C of B19). The ellipticity is introduced by substituting the projected distance from the centre \(R\) with \(\hat{R}\) such that (Eliasdottir et al., 2007) \[\hat{R}^{2}=\frac{x^{2}}{\left(\frac{2a}{a+b}\right)^{2}}+\frac{y^{2}}{\left(\frac{2b}{a+b}\right)^{2}}, \tag{3}\] where \(a\) and \(b\) are the major and minor projected semi-axes of the ellipsoid, and \(x\) and \(y\) are the coordinates along them. Equation (3) implies that, compared to the spherical case, the area enclosed by a given iso-surface-density contour changes by a factor \((4ab)(a+b)^{-2}\). The diffuse DM haloes and the ICM were modelled with cluster-scale dPIEs. The total mass distribution of each cluster member was described with a spherical coreless dPIE. An external shear term was taken into account while modelling MACS J1206. Following Bonamigo et al. (2018), the ICM mass distribution was fixed from X-ray observations, while all remaining components were optimised. The optimisation of the free parameters in LensTool is driven by a \(\chi^{2}\)-based likelihood which quantifies how well a given set of parameters reproduces the observed positions of the multiple images. The function can be written as \[\chi^{2}(\vec{\theta})=\sum_{j=1}^{N_{\rm fam}}\sum_{i=1}^{N_{\rm im}^{j}}\left(\frac{\left|\mathbf{x}_{{\rm obs},i,j}-\mathbf{x}_{{\rm pred},i,j}(\vec{\theta})\right|}{\sigma_{x,i,j}}\right)^{2}, \tag{4}\] where \(N_{\rm fam}\) is the number of multiple-image families, \(N_{\rm im}^{j}\) is the number of images of the \(j\)-th family, \(\mathbf{x}_{{\rm obs},i,j}\) and \(\sigma_{x,i,j}\) are the observed position of the \(i\)-th image of the \(j\)-th family and its uncertainty, respectively, and \(\mathbf{x}_{{\rm pred},i,j}(\vec{\theta})\) is the position of the same image as predicted by the model defined by the set of parameter values \(\vec{\theta}\). The accuracy of a given model at reproducing a given set of multiple images is usually measured with the root mean square difference between their model-predicted and observed positions (indicated as \(\Delta_{\rm rms}\)). A large sample of multiple images is therefore crucial to increase the number of observational constraints for the determination of the best-fit set of parameters. The SL model of MACS J0416 presented in B23 is based on the largest sample of spectroscopically confirmed multiple images ever built for such a scope: 237 multiple images from 88 background sources in the redshift range \(z=0.94-6.63\). The best-fit model has \(\Delta_{\rm rms}=0.43^{\prime\prime}\) and includes 213 cluster members. The SL model of MACS J1206 from B19, on the other hand, includes 82 multiple images from 27 background sources in the redshift range \(z=1.01-6.01\). The best-fit model has \(\Delta_{\rm rms}=0.46^{\prime\prime}\) and includes 258 cluster members. The values of \(\Delta_{\rm rms}\) found by B23 and B19, summarised in Table 1, are significantly higher than the astrometric uncertainty on the positions of multiple images (typically smaller than \(0.01^{\prime\prime}\)).
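To make the definitions above concrete, the short Python sketch below (our own illustrative code, not part of LensTool; the function names and the test values are ours) evaluates the spherical dPIE profile of Eq. (1) with the normalisation of Eq. (2), checks its asymptotic log-slopes, and implements the positional \(\chi^{2}\) of Eq. (4).

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def dpie_density(r, sigma, r_c, r_t):
    """Spherical dPIE mass density of Eq. (1), normalised with the
    rho_0 of Eq. (2); r, r_c, r_t in kpc, sigma in km/s -> M_sun/kpc^3."""
    rho_0 = sigma**2 / (2.0 * np.pi * G) * (r_c + r_t) / (r_c**2 * r_t)
    return rho_0 / ((1.0 + (r / r_c)**2) * (1.0 + (r / r_t)**2))

def positional_chi2(x_obs, x_pred, sigma_x):
    """Positional chi^2 of Eq. (4): x_obs and x_pred are (N, 2) arrays of
    observed and model-predicted image positions, sigma_x their errors."""
    offsets = np.linalg.norm(np.asarray(x_obs) - np.asarray(x_pred), axis=1)
    return np.sum((offsets / np.asarray(sigma_x))**2)

# The log-slope should move from ~0 (r << r_c) through ~-2 (r_c < r < r_t)
# to ~-4 (r >> r_t), as stated after Eq. (1).
r = np.logspace(-3, 2, 200)  # kpc
rho = dpie_density(r, sigma=160.0, r_c=0.01, r_t=6.0)
print(np.round(np.gradient(np.log(rho), np.log(r))[[0, 100, -1]], 1))
```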
These \(\Delta_{\rm rms}\) values are, however, in line with the state of the art of parametric SL models of massive clusters: in spite of the availability of exquisite samples of spectroscopic multiple images, models are still affected by systematics that have an impact on their predicted image positions, such as the choice of the parametrisation of the cluster total mass distribution and the lensing effects of the cluster environment (Meneghetti et al., 2017; Acebron et al., 2017; Grillo et al., 2016; Johnson & Sharon, 2016). For instance, as found by Chirivi et al. (2018), the line-of-sight mass distribution of MACS J0416, which is not included in B23, has a significant impact on the reconstruction of the multiple images, although its inclusion is not sufficient to reconcile observations and model predictions. Galaxy-scale SL models are not as affected by intrinsic systematics, but we need to fully propagate the uncertainty on the cluster-scale mass distribution to the determination of the galaxy-scale parameters.

### Modelling the mass distribution of the cluster members

As described in the previous sub-sections, the cluster members of MACS J0416 and MACS J1206 were modelled with spherical dPIEs with a vanishing core radius. Their total mass only depends on two parameters: their velocity dispersion parameter \(\sigma\), and their truncation radius \(r_{t}\). In the case of a vanishing core radius, the velocity dispersion parameter \(\sigma\) of a dPIE profile, defined in Eq. (2), is well approximated by the measured stellar central velocity dispersion \(\sigma_{0}\) (B19). In order to reduce the number of free parameters during the model optimisation, power-law scaling relations were calibrated between the two free parameters and the total luminosity of the cluster members. These relations are usually expressed as \[\sigma_{i}=\sigma^{\rm ref}\left(\frac{L_{i}}{L_{0}}\right)^{\alpha}, \tag{5}\] \[r_{t,i}=r_{t}^{\rm ref}\left(\frac{L_{i}}{L_{0}}\right)^{\beta}, \tag{6}\] where \(L_{0}\) is the luminosity of the brightest cluster galaxy (BCG). Cluster-scale SL models are mostly sensitive to the total mass of the cluster members, rather than to the way it is distributed on the scale of the single galaxy, determining a clear degeneracy between \(\sigma\) and \(r_{t}\). An important step forward in the reduction of the impact of this degeneracy has been driven by MUSE integral-field spectroscopy, which allows for the measurement of the velocity dispersion of several cluster members, thus obtaining an independent (i.e. not related to the SL constraints) observational prior on the values of \(\alpha\) and \(\sigma^{\rm ref}\), the slope and the normalisation of the first scaling law presented in Eq. (5) (also known as the Faber-Jackson law, Faber & Jackson 1976). In total, 64 and 58 measured velocity dispersions have been included in the SL models of MACS J0416 and MACS J1206, respectively. This procedure partly breaks the degeneracy between the two parameters describing the cluster members. Using the SL forward-modelling code GravityFM (Bergamini et al., in preparation), B23 reconstructed the original surface brightness profile of some sources lensed by MACS J0416 and then traced back their images to the lens plane. They showed that, despite the parametric degeneracies involved, SL models can faithfully reproduce the observed morphology of the GGSL images, and that the accuracy of the reconstruction is enhanced by the inclusion of a larger sample of observed multiple images as a constraint for the optimisation of the model.
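As a toy illustration of how Eqs. (5) and (6) propagate a member's luminosity to its mass parameters, consider the sketch below (our own code; the reference values are invented for the example and are not the B23 or B19 best fits; the total mass formula anticipates Eq. (7) below):

```python
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / M_sun

def member_parameters(L_ratio, sigma_ref, rt_ref, alpha, beta):
    """Scaling relations of Eqs. (5) and (6): L_ratio = L_i / L_0, with
    L_0 the BCG luminosity; returns sigma_i (km/s) and r_{t,i} (kpc)."""
    return sigma_ref * L_ratio**alpha, rt_ref * L_ratio**beta

# Illustrative slopes: alpha = 0.3 and gamma = 0.2 give beta = 0.6
# through beta = gamma - 2*alpha + 1 (see Sect. 5.2).
L_ratio = np.array([0.01, 0.1, 1.0])
sigma, r_t = member_parameters(L_ratio, sigma_ref=280.0, rt_ref=15.0,
                               alpha=0.3, beta=0.6)
M_tot = np.pi * sigma**2 * r_t / G  # total mass of a coreless dPIE, Eq. (7)
for L, s, rt, M in zip(L_ratio, sigma, r_t, M_tot):
    print(f"L/L0={L:5.2f}  sigma={s:6.1f} km/s  r_t={rt:5.2f} kpc  M={M:.2e} Msun")
```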
The degeneracy between the values of the velocity dispersion and of the truncation radius of the cluster members has also been addressed by G22 by replacing the Faber-Jackson law from Eq. (5) with a newly calibrated FP, a more complex scaling law of which the Faber-Jackson is a projection. This allowed us to estimate the central velocity dispersion for all the cluster members included in the model from their observed magnitude and half-light radius. The \(\sigma-r_{t}\) degeneracy has a very significant impact on the estimate of the compactness of the cluster members. The same total mass value for the cluster members can be obtained with different combinations of the two parameters, resulting in a more (or less) compact mass distribution. On the other hand, in a GGSL system, the positions of the multiple images are stringent constraints on the critical lines around the lens galaxies, which strongly depend on its compactness, thus breaking the degeneracy.

## 3 The galaxy-galaxy strong lensing systems

In this section, we lay out the structure of the three GGSL systems on which we focus our work, the data available for the cluster members, and the procedure followed to obtain the multiple image identification which we later adopt in our lensing analyses.

### MACS J0416.1\(-\)2403 ID14

The source ID14, studied in detail by Vanzella et al. (2017), is composed of a pair of faint, young, and compact stellar systems. It has been included in B23 with a spectroscopic redshift of \(z=3.221\) (Balestra et al., 2016). It is split by the gravitational potential of MACS J0416 into three images separated by up to 50′′. One of the three images, shown in Fig. 1, falls close to a pair of elliptical cluster galaxies (members 8971 and 8980 in the reference SL model) and is further split into four images. Following Vanzella et al. (2017), we refer to the two sources as 1 and 2, and to the four images as a, b, c, and d. Knots 1 and 2 have a very similar magnitude in images a and b. Image 1c is the brightest observed, while 2c is much fainter and its position is hard to measure precisely. A detailed photometric study of the lensing system has been performed to build the most recent lensing models: we therefore adopted the positions reported therein for the images a, b, and c. As such, we also chose to consider 2c in the same position as 1c for the purpose of SL modelling, with a larger uncertainty on its value. In the case of image d, unlike B23, where only image 1d was included in the model, we identified the two components of the source and measured their positions separately. The adopted multiple-image positions are reported in Table 2.

### MACS J0416.1\(-\)2403 ID16

The double source identified as ID16 in Bergamini et al. (2021), at a spectroscopic redshift of \(z=2.095\) (Balestra et al., 2016), is split into three multiple images by the gravitational potential of the cluster main halo. One of the three images falls very close to a pair of elliptical cluster members (8785 and 9129 in the reference SL model, see Fig. 2). The light distribution in the bluest of the HFF bands (ACS F435W) suggests that two further multiple images might fall very close to the centre of cluster member 8785, one of them being almost completely superimposed on it. In order to correctly measure the position of this image and include it in our lensing model, we therefore separated its light from that of the cluster galaxy. We did this in the HFF ACS F606W band, in which both the cluster member and the multiple image are visible.
To perform this task, first, we masked the light from the lensed image. The masked pixels are determined in the HST F435W band (where ID16 is less affected by the light of the cluster member). This way, we could run GALFIT (Peng et al., 2010) on the masked F606W image to model the light of the cluster member and measure its structural parameters. In the GALFIT run, the position of the cluster member and its Sersic index \(n=0.5\) (Gaussian light distribution) were kept fixed. Parameters such as the magnitude and the half-light radius of the cluster member were manually tuned and fixed to minimise the residual image obtained after its subtraction. No significant over- or under-subtraction regions were observed in the residual image, and the residual values are less than 20% of the original image value in every pixel. We therefore used the residual F606W image to measure the positions of all the multiple images, determining their \(x\) and \(y\) coordinates with Gaussian fits of the light profile. We refer to the two sources as 1 and 2 and to the three images produced by the cluster member 8785 as c, d, and e. We also included image b in the model, which is only at a distance of around 1′′ from the cluster galaxy, and for which we have also measured the positions of the two components. The two sources are not clearly resolved in images d and e: we only included the brightest source, identified as 2, in our model. In Fig. 3, we confirm the new identification of two additional multiple images using the reference cluster-scale SL model. Even when only images a, b, and c are included in the optimisation process, the critical lines close to cluster member 8785 create images d and e. As shown in B23, SL forward modelling of the source ID16 performed with GravityFM allows us to very accurately reproduce the observed multiple image configuration and surface brightness distribution. As such, our new multiple image catalogue, presented in Table 3, has been adopted by B23.

\begin{table} \begin{tabular}{l c c c} \hline \hline Cluster & \(N_{\rm m}\) (\(N_{\rm m}^{\rm s}\)) & \(N_{\rm i}\) (\(N_{\rm s}\)) & \(\Delta_{\rm rms}\) \\ \hline MACS J0416 & 213 (64) & 237 (88) & 0.43′′ \\ MACS J1206 & 258 (58) & 82 (27) & 0.46′′ \\ \hline \hline \end{tabular} \end{table} Table 1: Reference lensing models of MACS J0416 (B23) and MACS J1206 (B19): relevant parameters. \(N_{\rm m}\) and \(N_{\rm m}^{\rm s}\) indicate the number of cluster members included in the model and the number of those with measured velocity dispersion, respectively. \(N_{\rm i}\) and \(N_{\rm s}\) indicate the number of multiple images included in the model and the number of sources, respectively.

Figure 1: _Hubble_ Frontier Fields RGB image of MACS J0416.1\(-\)2403 ID14. The two cluster members are identified as in B23. The two components of the source are identified as 1 and 2. The four multiple images are indicated with the letters a, b, c, and d.

### MACS J1206.2\(-\)0847 ID14

The double source ID14 was first included by Grillo et al. (2014) in a similar galaxy-scale lensing study, before a cluster-scale lensing model for MACS J1206 was available. It is treated by B19 as a single source, with a spectroscopic redshift of \(z=3.753\) (Biviano et al., 2013; Caminha et al., 2017). Five multiple images are observed, three of them very close to the centre of a cluster member (ID 3910), on which we focus our attention. Their positions are also influenced by the deflection caused by the second brightest cluster member (ID 2541).
We refer to the two sources as 1 and 2 and to the three images as a, b, and c, as shown in Fig. 4. We measured their \(x\) and \(y\) positions, reported in Table 4, using CLASH photometry in the ACS F435W band.

\begin{table} \begin{tabular}{c c c} \hline Reference & 64.034084 & \(-24.066743\) \\ \hline Position & \(x\) (′′) & \(y\) (′′) \\ \hline 1a & \(-1.34\) & \(-0.77\) \\ 2a & \(-1.26\) & \(-0.42\) \\ 1b & \(-0.34\) & \(0.93\) \\ 2b & \(-0.74\) & \(0.70\) \\ 1c (2c) & \(0.27\) & \(1.09\) \\ 1d & \(0.49\) & \(-0.59\) \\ 2d & \(0.38\) & \(-0.57\) \\ \hline 8980 & \(-0.36\) & \(-1.20\) \\ \hline \end{tabular} \end{table} Table 2: Multiple image positions for the GGSL system MACS J0416 ID14. We report the relative positions of the multiple images with respect to the centre of the cluster member 8971, for which we provide the values of R.A. and Dec. The images are identified as in Fig. 1. We also report the position of the centre of the cluster member 8980.

\begin{table} \begin{tabular}{c c c} \hline Reference & 64.032442 & \(-24.068485\) \\ \hline Position & \(x\) (′′) & \(y\) (′′) \\ \hline 1b & \(-0.51\) & \(-0.44\) \\ 2b & \(-0.70\) & \(-0.64\) \\ 1c & \(0.23\) & \(0.42\) \\ 2c & \(0.09\) & \(0.25\) \\ 2d & \(-0.03\) & \(0.18\) \\ 2e & \(0.05\) & \(-0.004\) \\ \hline 9129 & \(-1.02\) & \(0.82\) \\ \hline \end{tabular} \end{table} Table 3: Multiple image positions determined from HFF photometry for the GGSL system MACS J0416 ID16. We report the relative positions of the multiple images with respect to the centre of the cluster member 8785, for which we provide the values of R.A. and Dec. The images are identified as in Fig. 2. We also report the position of the centre of the cluster member 9129.

Figure 2: _Hubble_ Frontier Fields RGB image of MACS J0416.1\(-\)2403 ID16. The two cluster members are identified as in B23.

## 4 Strong lensing modelling

Our main aim is to use galaxy-scale SL modelling to directly constrain the compactness of the cluster members, and to compare our results with those obtained with cluster-scale modelling, to study their effectiveness at recovering the mass distribution of the cluster sub-structures. In previous galaxy-scale SL studies in clusters (e.g. Grillo et al., 2014; Parry et al., 2016), a simplified description of the cluster-scale mass distribution was included in the SL models to account for its effects on the image deflection. An incorrect determination of the cluster-scale mass distribution can significantly hinder the description of galaxy-scale lenses, especially as far as their azimuthal structure is concerned. In our case, instead, the cluster-scale mass distribution is constrained with accuracy, owing to SL models based on up to more than 200 multiple images of background sources. In SL models of massive galaxy clusters, the diffuse and galaxy-scale mass components are jointly optimised with the same set of constraints, as described in Sect. 2. The cluster-scale DM haloes dominate the total mass budget and determine the position of the primary critical lines; therefore, the parameters that describe their mass distribution have the largest impact on the positions of the multiple images (Meneghetti et al., 2017; Limousin et al., 2022). In addition, the scaling laws connecting the parameters adopted to model the cluster galaxies are mostly influenced by the properties of the most massive members, and power laws can be too simple to describe the wide mass and morphology range of the cluster members (see G22; Beauchesne et al., 2024).
The total mass distribution of the low- and intermediate-mass cluster galaxies is therefore not entirely constrained by cluster-scale SL models, as shown by the difference between the results obtained by describing them with different scaling laws (G22). When modelling a galaxy-scale lens, we thus only optimised the parameters defining the cluster galaxies whose mass distribution significantly impacts the positions of the multiple images, keeping the mass distribution of the rest of the galaxy cluster fixed. To build the \(\chi^{2}\) function driving the parametric optimisation (see Sect. 2), we only included the multiple images close to the main galaxy-scale lens (within 2\({}^{\prime\prime}\) from its centre). More distant multiple images could provide us with additional constraints on the position of the source, but their position is primarily determined by the mass distribution of other cluster mass components, whose description is subject to a higher uncertainty compared to the determination of the mass profile of the galaxy-scale lens. As such, their inclusion may propagate systematics affecting the total cluster mass onto the determination of the galaxy-scale mass distribution.

\begin{table} \begin{tabular}{c c c} \hline Reference & 181.566661 & \(-\)8.804784 \\ \hline Position & \(x\) (\({}^{\prime\prime}\)) & \(y\) (\({}^{\prime\prime}\)) \\ \hline 1a & 0.26 & 1.36 \\ 2a & 0.40 & 1.10 \\ 1b & 0.68 & \(-\)0.04 \\ 2b & 0.68 & 0.18 \\ 1c & 0.61 & \(-\)1.06 \\ 2c & 0.70 & \(-\)1.29 \\ \hline 2541 & 5.02 & \(-\)4.67 \\ \hline \end{tabular} \end{table} Table 4: Multiple image positions determined from CLASH photometry for the GGSL system MACS J1206 ID14. We report the relative positions of the multiple images with respect to the centre of the cluster member 3910, for which we provide the values of R.A. and Dec. The images are identified as in Fig. 4. We also report the position of the centre of the cluster member 2541.

Figure 4: CLASH RGB image of MACS J1206.2\(-\)0847 ID14. The two cluster members are identified as in B19. The two components of the source are identified as 1 and 2. The three multiple images are indicated with the letters a, b, and c.

Figure 3: _Hubble_ Frontier Fields imaging of MACS J0416.1\(-\)2403 ID16. Left panel: the two cluster members in the ACS F814W band. Right panel: the multiple images in the ACS F435W band. The two components of the source are identified as 1 and 2. The four multiple images are indicated with the letters b, c, d, and e. The critical lines predicted by the model by B23 for a source at redshift \(z=2.095\) (the same as ID16) are marked in red.

To account for this uncertainty on the cluster mass distribution, we did not limit ourselves to considering the best-fit mass models, which may be affected by a systematic bias in the studied region. Instead, we extracted 100 random sets of parameter values from the Markov Chain Monte Carlo (MCMC) sampling of the posterior probability distribution of the reference cluster-scale SL model. We then fixed the mass distribution of the cluster-scale components and of all the other members to one of the 100 realisations thus obtained, and optimised the galaxy-scale lenses. We repeated the optimisation for each of the 100 mass models of the cluster. This bootstrapping procedure also allowed us to estimate the uncertainty on the determination of the parameters of the galaxy-scale lenses, and the degeneracies between them.
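Schematically, the bootstrapping procedure can be summarised as below (a minimal sketch, not the actual LensTool pipeline; the two callables stand in for drawing one cluster-scale realisation from the MCMC chain and for the galaxy-scale optimisation, and the toy stand-ins at the bottom are purely illustrative):

```python
import numpy as np

def bootstrap_galaxy_lens(draw_cluster_model, optimise_galaxy_lens,
                          n_boot=100, seed=0):
    """Schematic bootstrap: for each cluster-scale mass model drawn from
    the posterior chain (then kept fixed), re-optimise the galaxy-scale
    lens parameters; summarise the n_boot best fits with the median and
    the 16th/84th percentiles, as described in the text."""
    rng = np.random.default_rng(seed)
    fits = np.array([optimise_galaxy_lens(draw_cluster_model(rng))
                     for _ in range(n_boot)])
    lo, med, hi = np.percentile(fits, [16, 50, 84], axis=0)
    return med, med - lo, hi - med  # value with asymmetric 1-sigma errors

# Toy usage with synthetic stand-ins for the two steps:
draw = lambda rng: rng.normal(0.0, 1.0)                    # fake cluster realisation
fit = lambda m: np.array([160.0 + 5.0 * m, 6.0 + 0.5 * m])  # fake (sigma, r_t)
print(bootstrap_galaxy_lens(draw, fit))
```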
Similarly to the reference models, we modelled the main lens galaxy of each galaxy-scale SL system as a spherical truncated dPIE. We also tested an elliptical non-truncated dPIE mass distribution to understand whether the preference for truncated models could arise from the insufficient azimuthal complexity of the spherical total mass model adopted. We performed all SL optimisations using LensTool.

### MACS J0416.1\(-\)2403 member galaxy 8971

The galaxy-scale SL system MACS J0416 ID14 was described in sub-section 3.1. Throughout the SL modelling procedure we adopted the multiple image catalogue presented in Table 2, comprising eight multiple images from two background sources. All of the multiple images of the source ID14 are observed close to the cluster member 8971 (hereafter member 8971), at an average angular distance of 1.07′′: we thus focused on constraining its truncation radius. Cluster member 8980 significantly influences the multiple-image configuration, so we also optimised its mass distribution. In B23, member 8971 was not included in the scaling relations adopted to describe the remaining cluster members, and it was modelled separately as a truncated dPIE profile. The values of the parameters of its total mass distribution are reported in Table 5, with an uncertainty provided by the MCMC sampling of their marginalised posterior probability distribution. The ellipticity of the halo converges to the upper limit of its prior, indicating that it is poorly constrained and perhaps compensates for some unaccounted shear. As anticipated, we first described member 8971 as a spherical truncated dPIE with a vanishing core (hereafter SISt), whose centre is fixed at the light centre. The two free parameters are thus the velocity dispersion \(\sigma\) and the truncation radius \(r_{t}\). The alternative model, an elliptical non-truncated dPIE with a vanishing core (hereafter SIE), has three free parameters: \(\sigma\), the ellipticity \(e=(a^{2}-b^{2})(a^{2}+b^{2})^{-1}\), where \(a\) and \(b\) are the major and minor semi-axes of the ellipse, and the orientation angle \(\theta_{e}\), the counter-clockwise angle between the semi-major axis and the positive \(x\) axis on the lens plane. As we wish to focus on studying the radial structure of member 8971, we considered simpler models for the mass distribution of the cluster member 8980. We tested spherical truncated and non-truncated isothermal models, and we noticed some degeneracy between the value of \(r_{t}\) for member 8980 and the parameters describing member 8971. To avoid this, we chose to describe member 8980 with a SIS mass profile. We optimised the two galaxy-scale SL models that we just outlined (with member 8971 parametrised as a SISt and a SIE, respectively) for each of the 100 cluster-scale total mass distribution realisations obtained as described earlier. With this bootstrapping procedure, we obtained a set of 100 best-fit values for the parameters of the galaxy-scale SL system: we used the median, the 16th, and the 84th percentiles of the resulting distribution of best-fit values to obtain an estimate of their value and its uncertainty. Studying the marginalised distribution of the 100 best-fit parameters also provides us with insights on the degeneracy between them.
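One simple way to inspect such degeneracies is the correlation between parameters across the 100 best fits; a sketch follows (synthetic numbers stand in for the actual LensTool outputs, and the mild anti-correlation injected below is purely illustrative):

```python
import numpy as np

# Synthetic stand-in for the 100 best-fit (sigma, r_t) pairs of a member;
# in the real analysis these come from the bootstrapping procedure.
rng = np.random.default_rng(1)
sigma = rng.normal(165.0, 7.0, size=100)                        # km/s
r_t = rng.normal(6.1, 1.5, size=100) - 0.1 * (sigma - 165.0)    # kpc

# Pearson correlation coefficient: values near -1 would signal the
# sigma-r_t degeneracy discussed in Sect. 2.1; values near 0 indicate
# that the multiple images break it.
r = np.corrcoef(sigma, r_t)[0, 1]
print(f"corr(sigma, r_t) = {r:+.2f}")
```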
Unlike in B23, where LensTool struggled to recover \(r_{t}\) and the ellipticity of member 8971 at the same time, all the parameters of the SISt and SIE models are well constrained, with a low uncertainty. Their values are reported in Table 6. In the case of the SISt mass profile, the total mass can be obtained as \[M=\frac{\pi\sigma^{2}r_{t}}{G}. \tag{7}\] We can thus estimate a total mass value of \(M=1.2^{+0.3}_{-0.1}\times 10^{11}\,M_{\odot}\) for 8971. For each best-fit model, we estimate the value of \(\Delta_{\rm rms}\), and we refer to the average value of \(\Delta_{\rm rms}\) obtained for the 100 realisations of the model as \(\langle\Delta_{\rm rms}\rangle\): we find \(\langle\Delta_{\rm rms}\rangle=0.08^{\prime\prime}\) for the SISt model and \(0.04^{\prime\prime}\) for the SIE model. We note that the SIE model allows for lower values of \(\Delta_{\rm rms}\), which might be due to having one additional parameter compared to the SISt model. This may imply that a spherical mass distribution is too simple to describe the total mass distribution in the region, perhaps also due to unaccounted shear from the cluster-scale mass distribution. This is also suggested by the high value of ellipticity reported by B23.

\begin{table} \begin{tabular}{c c} \hline \hline Parameter & Value \\ \hline \(\sigma\) (km s\({}^{-1}\)) & \(134^{+7}_{-6}\) \\ \(r_{t}\) (\({}^{\prime\prime}\)) & \(18.6^{+8.5}_{-8.3}\) \\ \(e\) & \(0.52^{+0.06}_{-0.11}\) \\ \(\theta_{e}\) (\({}^{\circ}\)) & \(-40^{+20}_{-15}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Best-fit values and \(1\sigma\) errors of the parameters describing the cluster member 8971 in the SL model of MACS J0416 by B23.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & SISt & SIE \\ \hline Degrees of freedom & 8 & 7 \\ \(\sigma_{8971}\) (km s\({}^{-1}\)) & \(164.9^{+6.8}_{-7.5}\) & \(132.0^{+1.5}_{-1.2}\) \\ \(r_{t,8971}\) (\({}^{\prime\prime}\)) & \(1.14^{+0.43}_{-0.20}\) & \(-\) \\ \(e\) & \(-\) & \(0.20^{+0.03}_{-0.03}\) \\ \(\theta_{e}\) (\({}^{\circ}\)) & \(-\) & \(57.5^{+3.8}_{-3.5}\) \\ \(\sigma_{8980}\) (km s\({}^{-1}\)) & \(67.2^{+3.5}_{-5.8}\) & \(45.6^{+3.1}_{-4.4}\) \\ \(\langle\Delta_{\rm rms}\rangle\) (\({}^{\prime\prime}\)) & \(0.08\) & \(0.04\) \\ \hline \hline \end{tabular} \end{table} Table 6: Best-fit values and \(1\sigma\) errors of the parameters of our galaxy-scale SL models of MACS J0416 ID14. In the first column 8971 is described as a SISt, in the second one as a SIE.

Given the significant difference in the best-fit value of \(\sigma\) between the models, we compare them with the measured stellar line-of-sight velocity dispersion (LOSVD) for member 8971. The NE region of MACS J0416 was included in the MUSE Deep Lens Field (Vanzella et al., 2021), a very deep (17.1 h integration time) MUSE observation with a seeing of approximately 0.6′′. The value of \(\sigma\) recovered by SL models is a central three-dimensional mass density scale, while the value of the LOSVD is a projected stellar velocity dispersion, which depends on the light distribution and on the point spread function of the observations, so the two values do not need to be exactly identical. We extracted the spectrum of member 8971 within a circular aperture centred on the centre of light of the galaxy and with a radius equal to the seeing of 0.6′′, to probe the central regions of the cluster galaxy.
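As an aside, Eq. (7) can be cross-checked numerically against the SISt best fit of Table 6 (a minimal sketch using astropy units; the 5.34 kpc arcsec\({}^{-1}\) conversion is the scale quoted in Sect. 1):

```python
import numpy as np
from astropy import units as u
from astropy.constants import G

def sist_total_mass(sigma, r_t):
    """Total mass of a coreless truncated isothermal sphere, Eq. (7)."""
    return (np.pi * sigma**2 * r_t / G).to(u.Msun)

# SISt best fit for member 8971 (Table 6); 1" = 5.34 kpc at z = 0.396
sigma = 164.9 * u.km / u.s
r_t = 1.14 * 5.34 * u.kpc
print(f"M = {sist_total_mass(sigma, r_t):.2e}")  # ~1.2e11 Msun, as quoted above
```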
We measured the LOSVD using pPXF (penalised pixel-fitting) by Cappellari and Emsellem (2004) and Cappellari (2017, 2023), comparing the observed spectra with a set of 463 UVB stellar templates from the X-shooter Spectral Library (XSL) DR2 (Gonneau et al., 2020) with a signal-to-noise ratio \((S/N)\) greater than 100 Å\({}^{-1}\), combined and convolved with a LOSVD. The fit, whose results are shown in Fig. 5, minimises a \(\chi^{2}\) function between the observed spectrum and a model, and provides a measured LOSVD value of \(178.0\pm 2.4\) km s\({}^{-1}\), for an average \(S/N\) value of 64.6 over the spectrum. The very low error on the measured value only includes the statistical uncertainty of the final LOSVD fit and does not include possible systematics introduced by the choice of the stellar templates adopted to fit the spectrum. The velocity dispersion value found by our SISt galaxy-scale SL model is consistent within approximately \(2\sigma\) with the measured LOSVD. The comparison with the value found by our SIE model is less straightforward, due to the change in the surface area within a given iso-density contour at a fixed \(\sigma\) determined by the introduction of the ellipticity through the transformation defined in Eq. (3). However, we note that our ellipticity value of \(e=0.18\) implies a change in iso-density areas of only 1% with respect to the spherical case, which allows us to compare the two values to a first approximation. While the SL and the kinematic estimates of \(\sigma\) do not probe exactly the same physical quantity, we would expect them to have similar values, and only our SISt model seems to allow for that. This comparison is not performed in the following sections for members 8785 and 3910, whose spectra have low \(S/N\) values, probably due to a very faint lens magnitude and a lower MUSE exposure time, respectively. The determination of the \(S/N\) threshold for a reliable LOSVD measurement requires further tests and larger samples, and will be presented in an upcoming work (Granata et al., in preparation). The predicted compactness of the cluster galaxy 8971 and the evidence for a truncated total mass density profile are discussed in Sect. 5.

### MACS J0416.1\(-\)2403 member galaxy 8785

The galaxy-scale SL system MACS J0416 ID16 was described in sub-section 3.2. Throughout the SL modelling procedure we adopted our new multiple image catalogue, presented in Table 3 and comprising six multiple images from two background sources. All six of the multiple images of the source ID16 are observed close to the cluster member 8785 (hereafter member 8785), at an average angular distance of only 0.43′′ from its centre. We thus focused on constraining the truncation radius of member 8785, and optimised the mass distribution of member 9129, which significantly influences the multiple-image configuration, as well. In B23, member 8785 was modelled using the scaling relations (Eqs. 5 and 6), leading to a velocity dispersion value \(\sigma=83.3^{+2.7}_{-6.7}\) km s\({}^{-1}\) and a truncation radius \(r_{t}=0.8^{+0.2}_{-0.1}\)′′, resulting in a total mass \(M=2.1^{+0.3}_{-0.2}\times 10^{10}\)\(M_{\odot}\). We first described member 8785 as a SISt, and we studied an alternative SIE model. Again, we tested several models for member 9129, and we chose a SIS mass profile to avoid degeneracies between parameters. We ran the SISt and SIE models for each of the 100 cluster-scale total mass distributions extracted from the MCMCs.
In Table 7, we present the results of our bootstrapping procedure: the median values of the parameters of the best-fit galaxy-scale lensing models, and the \(1\sigma\) uncertainties derived from their 16th and 84th percentiles. The offsets between the observed and model-predicted positions of the multiple images are extremely small, with a value of \(\langle\Delta_{\rm rms}\rangle\), the average of \(\Delta_{\rm rms}\) for the 100 model realisations, of 0.01′′ for both the SISt and the SIE models. This value is lower than the observational uncertainty on the determination of the positions of the multiple images. The distribution of the best-fit values of the free parameters, for the 100 cluster-scale mass distributions we consider, suggests that they are well constrained, with a low uncertainty and without clear degeneracies between them. Comparing our results for the SISt model with those from B23, we find lower values for \(\sigma\) and \(r_{t}\), corresponding to a lower total mass value of \(M=1.0^{+0.2}_{-0.1}\times 10^{10}\)\(M_{\odot}\). This discrepancy may be due to a bias introduced by the scaling relations adopted in B23: cluster-scale SL models that describe the sub-halo component with the FP find that power-law scaling relations can overpredict the total mass (G22) and the velocity dispersion (Beauchesne et al., 2024) of low- and intermediate-mass cluster galaxies. In Sect. 5, we discuss the inferred compactness of member 8785.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & SISt & SIE \\ \hline Degrees of freedom & 5 & 4 \\ \(\sigma_{\rm 8785}\) (km s\({}^{-1}\)) & \(57.6^{+0.3}_{-0.3}\) & \(55.5^{+0.6}_{-1.2}\) \\ \(r_{t,\rm 8785}\) (′′) & \(0.74^{+0.12}_{-0.08}\) & – \\ \(e\) & – & \(0.18^{+0.08}_{-0.06}\) \\ \(\theta_{e}\) (°) & – & \(110^{+10}_{-8}\) \\ \(\sigma_{9129}\) (km s\({}^{-1}\)) & \(98.8^{+2.4}_{-2.2}\) & \(88.3^{+1.6}_{-2.1}\) \\ \(\langle\Delta_{\rm rms}\rangle\) (′′) & 0.01 & 0.01 \\ \hline \hline \end{tabular} \end{table} Table 7: Best-fit values and \(1\sigma\) errors of the parameters of our galaxy-scale SL models of MACS J0416 ID16. In the first column 8785 is described as a SISt, in the second one as a SIE.

Figure 5: Fitting of the LOSVD of the MUSE spectrum of member 8971 with pPXF. The observed spectrum is shown in black, the red curve is the best-fit model, while the green points show the difference between the data and the model. The blue shaded regions along the wavelength axis were excluded in the fitting procedure, due to the presence of sky subtractions around emission lines or laser lines in the spectrum.

### MACS J1206.2\(-\)0847 member galaxy 3910

The galaxy-scale SL system MACS J1206 ID14 was described in sub-section 3.3. Throughout the SL modelling procedure we adopted our new multiple image catalogue, presented in Table 4 and comprising six multiple images from two background sources. All six of the multiple images of the source ID14 are observed close to the cluster member 3910 (hereafter member 3910), at an average angular distance of \(1.11^{\prime\prime}\) from its centre: we thus focused on constraining its truncation radius. Cluster member 2541, at a distance of \(6.86^{\prime\prime}\) from the centre of 3910, is the second brightest cluster galaxy, with a predicted total mass of \(9.55\times 10^{11}\,M_{\odot}\) in B19. Closer to 3910 is the low-mass (\(M=1.97\times 10^{9}\,M_{\odot}\)) cluster member 3920.
Due to its high total mass, we optimised the mass distribution of cluster member 2541 as well, whereas we kept the parameters describing 3920 fixed to the values predicted by cluster-scale modelling. Member 3910 was described by B19 with the two scaling relations (Eqs. 5 and 6), leading to best-fit values of the velocity dispersion and of the truncation radius of \(\sigma=136.6^{+7.6}_{-6.6}\) km s\({}^{-1}\) and \(r_{t}=0.53^{+0.09}_{-0.07}\,\arcsec\), for a total mass \(M=4.2^{+0.4}_{-0.4}\times 10^{10}\,M_{\odot}\). Again, we first described 3910 as a SISt, and we tested an alternative SIE model. With regard to cluster member 2541, we note that in this case we are able to constrain its truncation radius without unforeseen parametric degeneracies, and thus we adopted a SISt model. We ran the SISt and SIE models for each of the 100 cluster-scale total mass distributions extracted from the MCMCs. In Table 8, we present the median values of the parameters of the best-fit galaxy-scale lensing models, as derived from the bootstrapping procedure, with \(1\sigma\) uncertainties derived from their 16th and 84th percentiles. As in the case of MACS J0416 ID16, we find very small offsets between the observed and predicted multiple images, with an average \(\Delta_{\rm rms}\) value of \(\langle\Delta_{\rm rms}\rangle=0.01\arcsec\) for the SISt model and \(0.03\arcsec\) for the SIE model. In spite of having fewer free parameters, the SISt model allows for a more accurate reconstruction of the lensing observables, and all parameters are estimated with a low statistical uncertainty, as is clear from Table 8. On the other hand, the SIE model cannot constrain well the value of the velocity dispersion of member 2541, which tends to the upper limit of our prior, and the predicted ellipticity of member 3910 is very high, suggesting that it might be needed to compensate for the lack of truncation in the total mass profile (as such, the value of \(\sigma_{3910}\) of the SISt model cannot be compared directly to that of the SIE model). Comparing our SISt model with B19, we find lower values for \(\sigma\) and significantly higher values for \(r_{t}\), for a total mass value of \(M=6.3^{+1.0}_{-1.1}\times 10^{10}\,M_{\odot}\), higher than the best-fit value from B19. Again, the Faber-Jackson law adopted by B19 may be biasing the value of \(\sigma\) and therefore of the truncation radius. Grillo et al. (2014) used a one-parameter SIS mass profile to describe the cluster member, finding \(\sigma=97\pm 3\) km s\({}^{-1}\). The lower value of \(\sigma\) is expected, given the lack of a truncation of the mass profile. In the next section, we perform a more meaningful comparison, looking at the mass value within the effective radius.

## 5 Analysis and discussion

### Truncation radius of the cluster members

As detailed in the previous section, our galaxy-scale SL modelling procedure was aimed at estimating the truncation radius of three lens galaxies in massive clusters. The recovered values of \(r_{t}\) for members 8971, 8785, and 3910 are \(6.1^{+2.3}_{-1.1}\) kpc, \(4.0^{+0.6}_{-0.4}\) kpc, and \(5.2^{+1.3}_{-1.1}\) kpc, respectively. To better illustrate the radial scales at play, we built a cumulative total mass profile of the three members by comparing the 100 best-fit mass profiles obtained with the bootstrapping procedure and by taking the 50th percentile at each projected radius. We also estimated the statistical uncertainty on the total mass profile from the 16th and 84th percentiles.
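The construction of these profiles can be sketched as follows (our own illustrative code; the analytic projected mass of a coreless dPIE follows from Eliasdottir et al. 2007 and recovers Eq. (7) for \(R\to\infty\); the \((\sigma, r_{t})\) pairs below are synthetic stand-ins for the 100 best fits):

```python
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / M_sun

def sist_projected_mass(R, sigma, r_t):
    """Projected (2D) mass within radius R (kpc) of a coreless dPIE:
    M(<R) = (pi sigma^2 / G) * (R + r_t - sqrt(R^2 + r_t^2)),
    which tends to pi sigma^2 r_t / G (Eq. 7) for R >> r_t."""
    return np.pi * sigma**2 / G * (R + r_t - np.sqrt(R**2 + r_t**2))

# Median profile and 16th/84th percentile band from the bootstrap sample
rng = np.random.default_rng(2)
pars = np.column_stack([rng.normal(165.0, 7.0, 100),   # sigma, km/s
                        rng.normal(6.1, 1.5, 100)])    # r_t, kpc
R = np.logspace(-1, 1.5, 50)  # projected radius grid, kpc
profiles = np.array([sist_projected_mass(R, s, rt) for s, rt in pars])
m_lo, m_med, m_hi = np.percentile(profiles, [16, 50, 84], axis=0)
print(f"M(<10 kpc) = {m_med[np.searchsorted(R, 10.0)]:.2e} Msun")
```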
We performed the same procedure on the SL models of the three cluster members by B23 and B19, using their MCMC sampling of the parametric posterior probability distribution to quantify the uncertainty on their total mass distributions. These profiles are shown in Fig. 6. We note that this procedure only accounts for the statistical uncertainty on the mass distribution parameters, and not for the possible systematics. As shown in Figs. 6 and 7, the measured values of the truncation radius are, for all three members, within the range of the observed positions of the multiple images. In this radial range, SL allows for the highest accuracy in the reconstruction of the total mass profile of a lens, and is therefore more sensitive to its slope. In all three cases, we have tested an alternative non-truncated SIE mass parametrisation to inquire whether the truncated profile could actually be compensating for an insufficient description of the lens azimuthal structure. Despite having four free parameters, as opposed to the three of the SISt model, only in one of the three cases (member 8971) does the SIE model lead to a slightly lower value of \(\Delta_{\rm rms}\). In one case (member 3910), the SIE model predicts a very high value of the lens ellipticity, which is not suggested by the light distribution. In conclusion, in all three systems we find that a spherical truncated total mass distribution for the lens galaxy is able to reproduce the SL observations with a small \(\Delta_{\rm rms}\) and provide the value of \(r_{t}\) with a low statistical error. In two out of the three cases, a non-truncated model with a higher number of parameters does not improve the description of the system, in one case leading to unrealistic parameter values.

\begin{table} \begin{tabular}{c c c} \hline Parameter & SISt & SIE \\ \hline Degrees of freedom & 3 & 4 \\ \(\sigma_{3910}\) (km s\({}^{-1}\)) & \(129.2^{+5.1}_{-3.6}\) & \(113.8^{+2.5}_{-3.0}\) \\ \(r_{t,3910}\) (\(\arcsec\)) & \(0.90^{+0.20}_{-0.22}\) & – \\ \(e\) & – & \(0.77^{+0.06}_{-0.09}\) \\ \(\theta_{e}\) (\({}^{\circ}\)) & – & \(96.53^{+0.29}_{-0.21}\) \\ \(\sigma_{2541}\) (km s\({}^{-1}\)) & \(347^{+18}_{-16}\) & \(399^{+11}_{-5}\) \\ \(r_{t,2541}\) (\(\arcsec\)) & \(1.75^{+0.20}_{-0.18}\) & \(0.49^{+0.03}_{-0.02}\) \\ \(\langle\Delta_{\rm rms}\rangle\) (\(\arcsec\)) & \(0.01\) & \(0.03\) \\ \hline \end{tabular} \end{table} Table 8: Best-fit values and \(1\sigma\) errors of the parameters of our galaxy-scale SL models of MACS J1206 ID14. In the first column 3910 is described as a SISt, in the second one as a SIE.

Suyu & Halkola (2010) performed a similar truncation radius measurement for a satellite galaxy (\(z=0.351\)) of the massive elliptical lens SL2S \(J08544-0121\), which influences the shape of an Einstein ring determined by the main lens.
The authors suggest that very bright members may not have undergone strong stripping, as a result of being central galaxies prior to their accretion onto the cluster. Both Suyu & Halkola (2010) and Monna et al. (2015) based their SL modelling procedure on the surface brightness reconstruction of extended arcs, finding that this significantly improves the accuracy of the constraints on the value of the truncation radius. In this work, we only used the centroid position of the multiple images, rather than their full surface brightness. However, unlike in Suyu & Halkola (2010) and Monna et al. (2015), the cluster-scale mass distribution determines several different multiple images of the background source close to the galaxy-scale lens, rather than a single extended lensed arc. In our case, several multiple images are observed at different projected distances from the lens centre, providing us with detailed information about the galaxy total mass profile and its slope. Finally, we tested whether the inferred values of \(r_{t}\) may be biased by the parametrisation chosen by B23 and B19 for the total mass profile of the haloes included in the model. To do so, we built an alternative cluster-scale model for MACS J1206 in which all of the cluster members are described with singular isothermal sphere (SIS) mass profiles (i.e. with infinite \(r_{t}\)) and repeated the statistical analysis described above. We find that this does not significantly influence the estimated parameter values for the lens galaxy 3910, suggesting that our modelling procedure is robust with respect to the parametrisation choices adopted for the remaining cluster mass components.

### Compactness of the cluster galaxies

Galaxy-scale SL events in massive clusters allowed us to infer the physical properties of some selected cluster galaxies without relying on the scaling laws typically adopted to describe them. Cluster-scale SL modelling is mostly sensitive to the total mass of the cluster galaxies, rather than to the details of their mass density profiles. As such, \(\sigma\) and \(r_{t}\) suffer from a strong degeneracy and cannot be separately constrained in the absence of an observational prior: this significantly limits the insights on the compactness of the cluster galaxies. To break the degeneracy between \(\sigma\) and \(r_{t}\), B19 and B23 obtained a kinematic prior on the value of the slope of the Faber-Jackson law (marked as \(\alpha\) in Eq. 5). Assuming a total mass-to-light ratio \(M/L\propto L^{\gamma}\) leads to \(\beta=\gamma-2\alpha+1\) (\(\beta\) is defined in Eq. 6). From Eq. (7), \(M\propto\sigma^{2}r_{t}\); combining this with \(L\propto\sigma^{1/\alpha}\) and \(r_{t}\propto L^{\beta}\) from Eqs. (5) and (6), one can then derive a total mass-to-\(\sigma\) scaling law \[M\propto\sigma^{2+\beta/\alpha}=\sigma^{\frac{\gamma+1}{\alpha}}. \tag{8}\] B19 and B23 assumed \(\gamma=0.2\), as suggested by the FP. The reference values of the scaling laws (\(\sigma^{\rm ref}\) and \(r_{t}^{\rm ref}\) in Eqs. 5 and 6) are mostly determined by the high-mass cluster members, which have a stronger influence on the positions of the observed multiple images. On the other hand, once \(\gamma\) is fixed, the value of \(\alpha\) fixes the slopes of the two laws, determining the description of the total mass distribution of the low- and intermediate-mass cluster members. Considering the reference sample of clusters included in Meneghetti et al. (2020), B19 find \(\alpha=0.28^{+0.02}_{-0.02}\) for MACS J1206 and \(\alpha=0.27^{+0.04}_{-0.04}\) for AS1063, while Bergamini et al.
(2021) and B23 find \(\alpha=0.30^{+0.03}_{-0.03}\) for MACS J0416, corresponding to \(M\propto\sigma^{4.3}\), \(M\propto\sigma^{4.4}\), and \(M\propto\sigma^{4.0}\), respectively. The same procedure was adopted by Bergamini et al. (2023a) for the HFF cluster Abell 2744 (A2744), finding a value of \(\alpha=0.40^{+0.03}_{-0.03}\) (implying \(M\propto\sigma^{3.0}\)). In the top panel of Fig. 9, we compare the \(M\)-to-\(\sigma\) scaling law for these four cluster-scale SL models with the values of total mass and velocity dispersion for the galaxy-scale SL systems. In cluster-scale models, only the values \(\sigma^{\mathrm{ref}}\) and \(r_{t}^{\mathrm{ref}}\) are optimised: we estimated the uncertainty on the determination of the \(M\)-to-\(\sigma\) relation based on the MCMC sampling of the posterior probability distribution for these two parameters.

Figure 6: Projected cumulative total mass profiles for the lens galaxies studied in this work compared with the predictions of B23 and B19. From the top to the bottom panel: member 8971, member 8785, and member 3910. The vertical bars indicate the inferred values of the truncation radius. Shaded regions indicate the 16th and 84th percentiles for the mass profile and the truncation radius. The projected distances of the observed multiple images from the lens centre are marked with vertical black lines.

We notice that the scaling relations used in cluster-scale models have significantly different normalisation and slope values. In particular, the SL model of MACS J0416 consistently predicts higher mass values at a fixed \(\sigma\) compared to those of MACS J1206 and AS1063. On the other hand, the model of A2744 has a significantly lower slope, predicting higher total mass values at low \(\sigma\). Looking at the three cluster members studied in this work, only the \(M\)-to-\(\sigma\) relation found for A2744 seems to represent their compactness well. In the SL model of AS1063 by G22, the values of \(\sigma\) were fixed from the observed luminosity and half-light radius \(R_{e}\) of the cluster galaxies, while a proportionality law was calibrated between the observed \(R_{e}\) and \(r_{t}\). As such, the \(M\)-to-\(\sigma\) relation is not a power law and is able to include a realistic scatter. As shown by the bottom panel of Fig. 9, the relation significantly differs from those obtained with a power-law approach: a bi-logarithmic fit of the relation predicts a slope of 2.0. The panel also shows that the three galaxy-scale lenses that we have modelled in this work lie within the scatter of the scaling relation derived by G22 using the FP. While a sample of three objects is very small, it is interesting to note that the \(M\)-to-\(\sigma\) relation for the galaxy-scale lenses, derived exclusively with SL, is very close to the predictions of G22, where lensing observables are only used to estimate the ratio between \(R_{e}\) and \(r_{t}\). A more complex description of the cluster galaxies based on the FP leads to inferred properties which are compatible with our analysis. Beauchesne et al. (2024) recently modelled AS1063 adopting an intermediate approach between B19 and G22, where galaxies are described with the Faber-Jackson law or the FP depending on the observations available for them. The value of \(\alpha\) was optimised together with the parameters of the FP to avoid inconsistent slopes.
Choosing \(\gamma=0\), they find \(\alpha=0.34\), corresponding to \(M\propto\sigma^{3}\). On the other hand, fixed power-law scaling relations do not seem to be able to correctly describe the spatial structure of the cluster galaxies over the whole mass range included in SL models.

Figure 7: Truncation radius of members 8971, 8785, and 3910 superimposed on the RGB cutout of the respective galaxy-scale SL system. We show the best-fit value as a solid line and the 1\(\sigma\) uncertainty range with dashed lines.

Figure 8: Relation between the values of the velocity dispersion and of the truncation radius for the three cluster members included in this study (in blue) and for a satellite of SL2S J08544\(-\)0121, as measured by Suyu & Halkola (2010) (in red). The inferred uncertainty on the velocity dispersion of member 8785 is too small to be visible in this plot.

Figure 9: Comparison between the \(M\)-to-\(\sigma\) relation estimated by our analysis and those adopted for the cluster galaxies in the SL models of MACS J0416 (B23), AS1063 (B19; G22), MACS J1206 (B19), and A2744 (Bergamini et al. 2023a). Top panel: comparison with the SL models describing the cluster galaxies according to Eqs. (5) and (6). Bottom panel: comparison with the SL model of AS1063 by G22, based on the FP.

As shown by Fig. 6, we predict significantly different properties for members 8785 and 3910 compared to B23 and B19: the former has a similar value of \(r_{t}\) but a higher \(\sigma\), while the latter has a significantly larger \(r_{t}\). These differences may have a non-negligible impact on the magnification factor predicted close to the galaxy-scale lenses, which connects the observed and the unlensed magnitudes of the multiply imaged sources. We measured the magnification predicted by the three galaxy-scale models. For each of them, we used the 100 models from our bootstrapping procedure to estimate the uncertainty on the magnification value. In Fig. 10, we map the value of \(\xi=\log\left(\frac{\mu-\mu_{\rm CS}}{\sigma_{\mu}}\right)\), where \(\mu\) is the magnification predicted by our models, \(\sigma_{\mu}\) is the uncertainty on its value, and \(\mu_{\rm CS}\) is the magnification from the cluster-scale models (by B23 and B19). The upper panel of Fig. 10 shows that our work and B23 predict significantly different magnifications close to member 8971. However, these differences are probably due to the different mass parametrisations chosen by the two models, and are less pronounced close to the positions of the multiple images, where the total mass distribution is better constrained. As expected, close to members 8785 and 3910, the value of \(\xi\) is closer to zero as a consequence of choosing the same mass parametrisation in the two models. In all three cases, with a few exceptions, the difference between the predicted values of the magnification is relatively small close to the multiple images, where the total mass distribution of the lens is better reconstructed. In conclusion, the systematics related to the mass modelling of the cluster members affect the predicted magnification maps of lens clusters close to member galaxies. These predictions are more robust in the proximity of the observed positions of the multiple images, but a more realistic description of the total mass properties of lens galaxies can benefit the accuracy of studies of high-redshift lensed sources.
### Comparison with cosmological simulations

In this sub-section, we compare the compactness of the cluster members as obtained from our galaxy-scale SL models with the theoretical predictions of cosmological simulations. This study offers an excellent opportunity to contrast the properties of observed and simulated galaxy-scale lenses in massive clusters (although we note that member 8785 is not the main lens responsible for the secondary critical line that produces system ID16). As in Meneghetti et al. (2020), we adopt the maximum circular velocity of the cluster members, defined as \[v_{\rm max}={\rm max}\left(\sqrt{\frac{GM(<r)}{r}}\right), \tag{9}\] where \(v_{\rm max}=\sqrt{2}\sigma\) for an isothermal model, as a proxy for the compactness of cluster galaxies (see Meneghetti et al., 2020, 2022, 2023; Ragagnin et al., 2022): more compact objects have higher values of \(v_{\rm max}\) at a fixed total mass. In Fig. 11, we compare the \(v_{\rm max}\)-to-\(M\) relation from our work with those found in Ragagnin et al. (2022) for a set of zoom-in re-simulations of the Dianoga suite (Planelles et al., 2014; Rasia et al., 2015) of simulated galaxy clusters. These setups differ from one another in terms of their softening and feedback schemes. As in Ragagnin et al. (2022), we refer to the three models considered as R15 (presented in Rasia et al., 2015), RF18 (presented in Ragone-Figueroa et al., 2018), and B20 (presented in Bassini et al., 2020). The setups are also referred to as 1x or 10x if they have the same mass resolution as the Dianoga suite, or a ten times lower particle mass, respectively. Figure 11 shows that members 8971 and 3910 both have a maximum circular velocity higher than those of simulated sub-haloes with the same total mass, irrespective of the feedback scheme or the resolution considered in Ragagnin et al. (2022), indicating a level of compactness higher than that predicted by simulations.

Figure 10: Comparison between the magnification values close to the three galaxy-scale lenses predicted in this work and in B23 and B19. The colour map is based on the value of \(\xi=\log\left(\frac{\mu-\mu_{\rm CS}}{\sigma_{\mu}}\right)\) close to members 8971, 8785, and 3910, respectively. The red crosses indicate the observed positions of the multiple images included in this work.

This seems to suggest that the discrepancy between SL models and simulations cannot be entirely ascribed to systematics affecting the former as a result of the adoption of power-law scaling relations to describe the cluster members, as noted already by G22. Member 8785 falls in a total mass range (\(M<10^{10}\,M_{\odot}\)) which was excluded from the analyses of Meneghetti et al. (2020), because the current cosmological simulations do not have enough mass resolution.

### Stellar mass of the cluster members

In this sub-section, we study the stellar-to-total mass fraction of the cluster members. Measuring its value within the effective radius is an important probe of the interplay between the effects of baryonic feedback processes and the gravitational potential of DM during galaxy formation (see Shajib et al. 2022; Smith 2020), and of the stellar populations in early-type galaxies. These processes are particularly significant for lower-mass galaxies, which should have a higher stellar-to-total mass fraction compared to the very massive early-type galaxies that dominate the samples of lens galaxies.
Our work significantly extends the typical mass range of current studies, similar to what will be possible with the upcoming samples of galaxy-scale lenses unveiled by _Euclid_ and LSST. The values of the total stellar mass of the three lens galaxies have been measured in recent works. For members 8971 and 8785, we followed the best-fit relation between the stellar mass and the magnitude in the HST F160W band found by Grillo et al. (2015), \(\log(M^{*}/M_{\odot})=18.541-0.416\times\mathrm{F160W}\), where \(M^{*}\) is the stellar mass of the cluster member. To account for the scatter about this mean relation, we chose a 40% uncertainty on the derived stellar mass values of \((2.4\pm 1.0)\times 10^{10}\,M_{\odot}\) and \((4.0\pm 1.6)\times 10^{9}\,M_{\odot}\), respectively. For member 3910, we adopted the value measured by Grillo et al. (2014) of \((1.7\pm 1.0)\times 10^{10}\,M_{\odot}\). In both cases, the stellar mass measurements were based on the HST spectral energy distribution (SED) of the lens galaxies, using composite stellar population models and a Salpeter (Salpeter 1955) stellar initial mass function (IMF). In Fig. 12, we show the relation between the stellar mass and the velocity dispersion for the three cluster members studied in this work and for the 85 SLACS lenses presented in Auger et al. (2009), which are representative of the currently known population of lens galaxies. We note that the SLACS lenses were modelled with non-truncated total mass profiles, which could affect the recovered values of the velocity dispersion. The figure showcases the significant extension of the range of stellar mass and velocity dispersion values probed in this work compared to current samples of lens galaxies. Using GALFIT on the HFF (for MACS J0416) or CLASH (for MACS J1206) F814W band images, we measured the effective radii, \(R_{e}\), of the three galaxies, finding \(1.41\pm 0.02\,\mathrm{kpc}\)7, \(0.77\pm 0.03\,\mathrm{kpc}\), and \(2.13\pm 0.09\,\mathrm{kpc}\), respectively. They correspond to ratios between the truncation and the half-light radius, \(r_{t}/R_{e}\), of \(4.3\pm 1.6\), \(5.13\pm 0.72\), and \(2.77\pm 0.47\), all higher than the average value of 2.3 found by G22 for the cluster members of AS1063, although they fall within the scatter around their relation. From the total mass profile derived for each of the 100 best-fit models of the three galaxies, we obtained the total mass enclosed within the effective radius, \(M(<R_{e})\), and its uncertainty. The stellar-to-total mass fraction within the effective radius for the three members is therefore

Footnote 7: Compatible with the value measured by Tortorelli et al. (2023).

\[f^{*}(<R_{e})=\frac{M^{*}/2}{M(<R_{e})}. \tag{10}\]

We find \(0.51\pm 0.21\), \(1.0\pm 0.4\), and \(0.39\pm 0.16\) for members 8971, 8785, and 3910, respectively. We compare our values of the stellar-to-total mass fraction as a function of the stellar mass with the analogous relation found for the 85 SLACS lens galaxies presented in Auger et al. (2009) (similar results had previously been obtained by Grillo et al. 2008). We take their stellar mass values measured with a Salpeter stellar IMF and the stellar-to-total mass fraction within the effective radius measured in the rest-frame \(V\)-band (comparable with the F814W band for the two clusters considered in this work). As is clear from Fig. 12, we probe a lower stellar mass range, but find compatible values between the two samples.
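To make the quantities above easy to reproduce, the following minimal Python sketch evaluates the Grillo et al. (2015) stellar-mass relation, the stellar fraction of Eq. (10), and the compactness proxy \(v_{\rm max}\) of Eq. (9) from the previous sub-section. The input magnitude and mass-profile values are illustrative placeholders, not measurements from this work.

```python
import numpy as np

def stellar_mass_from_f160w(f160w):
    """Grillo et al. (2015): log(M*/Msun) = 18.541 - 0.416 * F160W."""
    return 10.0 ** (18.541 - 0.416 * f160w)

def stellar_fraction(m_star, m_tot_within_re):
    """Eq. (10): f*(<Re) = (M*/2) / M(<Re); half of M* is enclosed in Re."""
    return 0.5 * m_star / m_tot_within_re

def v_max(r_kpc, m_enclosed):
    """Eq. (9): maximum circular velocity (km/s) from a cumulative mass
    profile M(<r), with G = 4.301e-6 kpc (km/s)^2 / Msun."""
    G = 4.301e-6
    return np.sqrt(G * np.asarray(m_enclosed) / np.asarray(r_kpc)).max()

# Illustrative inputs only, not the measured values quoted in the text:
m_star = stellar_mass_from_f160w(19.5)
print(f"M* = {m_star:.2e} Msun")
print(f"f*(<Re) = {stellar_fraction(m_star, 2.0e10):.2f}")
print(f"v_max = {v_max([1.0, 2.0, 5.0], [4e10, 6e10, 8e10]):.0f} km/s")
```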
Our values agree with the results of G22 for the cluster members of AS1063. We also included in Fig. 12 the values found, starting from the same hypotheses, by Grillo (2010) for a selected sample of \(2\times 10^{5}\) SDSS early-type galaxies, which we find to be compatible both with our work and with that by Auger et al. (2009). These results suggest that the tidal truncation to which the three lens galaxies are subject, by virtue of the dense cluster environment in which they reside, only marginally affects their structure within the effective radius. This is in agreement with the conclusions of Grillo & Gobat (2010), Grillo et al. (2014), Parry et al. (2016), and G22, who compared the stellar fraction within the effective radius of cluster and field early-type galaxies.

## 6 Conclusions

In this article, we have presented the SL measurement of the truncation radii of three cluster galaxies. We considered the reference sample of galaxy clusters included in the analysis by Meneghetti et al. (2020) and selected galaxy-scale SL systems with a clear morphology and several multiple images close to one or a few member galaxies. We chose to focus on members 8971 and 8785 of MACS J0416, and member 3910 of MACS J1206. We built galaxy-scale SL models for the three cluster members and for the neighbouring galaxies which influence the lensing system the most. We accounted for the lensing effects of the remaining mass components of the cluster according to the predictions of the most recent and accurate SL models of MACS J0416 and MACS J1206, presented in B23 and B19, respectively.

Figure 11: Comparison between the \(v_{\mathrm{max}}\)-to-\(M\) relation obtained from our analysis and those predicted by the cosmological hydrodynamical simulations described in Ragagnin et al. (2022). The naming convention for the different suites is presented in the text.

To properly consider the uncertainty on the total mass distribution of the two clusters, we sampled the posterior probability distribution of the parameters of the two models and extracted 100 points, corresponding to 100 realisations of the cluster-scale mass distribution. For each of them, we optimised the models of the mass distribution of the three galaxy-scale lenses. This bootstrapping procedure allowed us to obtain a realistic estimate of the uncertainty on the lens parameters and of the degeneracies between them. We described the three members on which we focused our analysis with spherical truncated profiles and tested alternative ellipsoidal non-truncated models. The main conclusions of our analyses are summarised as follows: 1. We measured truncation radius values of \(6.1^{+2.3}_{-1.1}\,\mathrm{kpc}\), \(4.0^{+0.6}_{-0.4}\,\mathrm{kpc}\), and \(5.2^{+1.3}_{-1.1}\,\mathrm{kpc}\) for members 8971, 8785, and 3910, respectively. These values correspond to total masses of \(M=1.2^{+0.3}_{-0.1}\times 10^{11}\,M_{\odot}\), \(M=1.0^{+0.2}_{-0.1}\times 10^{10}\,M_{\odot}\), and \(M=6.3^{+1.0}_{-1.1}\times 10^{10}\,M_{\odot}\), respectively, and to velocity dispersion values of \(164.9^{+6.8}_{-7.5}\,\mathrm{km\,s^{-1}}\), \(57.6^{+0.3}_{-0.3}\,\mathrm{km\,s^{-1}}\), and \(129.2^{+5.1}_{-3.6}\,\mathrm{km\,s^{-1}}\), respectively. 2. The values of \(r_{t}\) are well constrained, with a low statistical uncertainty. We compare our results with those of Suyu & Halkola (2010) for the satellite galaxy of the lensing system SL2S J08544\(-\)0121, finding very similar \(r_{t}\) values for galaxies in the same total mass range. 3.
In the case of member 8971, the SIE model leads to a lower value of \(\Delta_{\mathrm{rms}}\), but the comparison between the SL-derived \(\sigma\) and the measured LOSVD value of \(178.0\pm 2.4\,\mathrm{km\,s^{-1}}\) strongly favours the SISt model. 4. In the other two instances, SIE models do not lead to an improved accuracy in the description of the SL observations, in spite of their more complex azimuthal structure and higher number of free parameters. In the case of member 3910, the parameters of the non-truncated models are not well constrained and show clear degeneracies. 5. Our inferred values of \(\sigma\) and \(r_{t}\) for the three cluster galaxies differ significantly from the results of B23 and B19, especially in the case of members 8785 and 3910, where they were derived with power-law scaling relations with respect to the galaxy total luminosity. 6. We compare our results with the total-mass-to-\(\sigma\) relations for MACS J0416, MACS J1206, AS1063, and A2744 from B23, B19, and Bergamini et al. (2023a), obtained with the power-law approach. We find that the scaling relations cannot consistently describe all three members studied in this work. Our results instead agree with the mass-to-\(\sigma\) relation derived by G22 for AS1063, based on the FP relation and showing a larger scatter. 7. We juxtapose the estimated compactness of the three lens galaxies with the predictions of the hydrodynamical cosmological simulation suites presented in Ragagnin et al. (2022), which differ in their feedback and softening schemes and in mass resolution. Members 8971 and 3910 fall in the total mass range included in the analyses performed on the simulations. Their measured compactness is higher than what is found for simulated sub-haloes of the same total mass, independently of the simulation set-up considered, confirming the discrepancy between observations and simulations first reported in Meneghetti et al. (2020). 8. We measured the stellar mass, the effective radius, and the stellar-to-total mass fraction within the effective radius for the three cluster galaxies. For the latter parameter, we find \(0.51\pm 0.21\), \(1.0\pm 0.4\), and \(0.39\pm 0.16\) for members 8971, 8785, and 3910, respectively. Our values span the same range as those observed for the members of AS1063 by G22, for the 85 SLACS lens galaxies presented in Auger et al. (2009), and for a selected sample of early-type SDSS galaxies by Grillo (2010), suggesting that the tidal truncation of cluster galaxies does not significantly affect their structure within the effective radius. As is clear from Fig. 12, our work significantly extends the mass range probed by current SL studies of early-type galaxies, towards the regimes that will be systematically explored by the upcoming lens surveys with the _Euclid_ and _Rubin_ telescopes. Ongoing integral-field observations with the _James Webb_ Space Telescope Near Infrared Spectrograph will provide us with spatially resolved kinematic constraints on galaxy-scale lenses in clusters, allowing for a more detailed reconstruction of the mass structure of member galaxies. ###### Acknowledgements. We thank the anonymous referee for some useful suggestions that helped us improve the paper. We acknowledge financial support by PRIN-MIUR 2017WSCC32 (P.I.: P. Rosati), PRIN-MIUR 20208KSTHZ (P.I.: C. Grillo), INAF main-stream 1.05.01.86.20 (P.I.: M. Nonino), and INAF 1.05.01.86.31 (P.I.: E. Vanzella). MM was supported by the INAF Grant "The Big Data era of cluster lensing".
MM also acknowledges support from the Aspen Center for Physics and the Simons Foundation.

Figure 12: Stellar mass of lens galaxies: comparison between this work and the 85 SLACS lens galaxies presented in Auger et al. (2009). Left panel: stellar mass as a function of the velocity dispersion. Right panel: stellar-to-total mass fraction measured within the effective radius; the 68% confidence interval found by Grillo (2010) for a selected sample of \(2\times 10^{5}\) SDSS early-type galaxies is also shown.

This work uses the following software packages: Lenstool (Jullo et al., 2007; Jullo & Kneib, 2009), Astropy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018), matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011; Harris et al., 2020), pPXF (Cappellari & Emsellem, 2004; Cappellari, 2023), Python (Van Rossum & Drake, 2009), SciPy (Virtanen et al., 2020).
2302.02101
GRANDE: a neural model over directed multigraphs with application to anti-money laundering
The application of graph representation learning techniques to the area of financial risk management (FRM) has attracted significant attention recently. However, directly modeling transaction networks using graph neural models remains challenging: Firstly, transaction networks are directed multigraphs by nature, which could not be properly handled with most of the current off-the-shelf graph neural networks (GNN). Secondly, a crucial problem in FRM scenarios like anti-money laundering (AML) is to identify risky transactions and is most naturally cast into an edge classification problem with rich edge-level features, which are not fully exploited by the prevailing GNN design that follows node-centric message passing protocols. In this paper, we present a systematic investigation of design aspects of neural models over directed multigraphs and develop a novel GNN protocol that overcomes the above challenges via efficiently incorporating directional information, as well as proposing an enhancement that targets edge-related tasks using a novel message passing scheme over an extension of edge-to-node dual graph. A concrete GNN architecture called GRANDE is derived using the proposed protocol, with several further improvements and generalizations to temporal dynamic graphs. We apply the GRANDE model to both a real-world anti-money laundering task and public datasets. Experimental evaluations show the superiority of the proposed GRANDE architecture over recent state-of-the-art models on dynamic graph modeling and directed graph modeling.
Ruofan Wu, Boqun Ma, Hong Jin, Wenlong Zhao, Weiqiang Wang, Tianyi Zhang
2023-02-04T05:54:25Z
http://arxiv.org/abs/2302.02101v1
# GRANDE: a neural model over directed multigraphs with application to anti-money laundering

###### Abstract

The application of graph representation learning techniques to the area of financial risk management (FRM) has attracted significant attention recently. However, directly modeling transaction networks using graph neural models remains challenging: Firstly, transaction networks are _directed multigraphs_ by nature, which could not be properly handled with most of the current off-the-shelf graph neural networks (GNN). Secondly, a crucial problem in FRM scenarios like anti-money laundering (AML) is to identify risky transactions and is most naturally cast into an _edge classification_ problem with _rich_ edge-level features, which are not fully exploited by the prevailing GNN design that follows node-centric message passing protocols. In this paper, we present a systematic investigation of design aspects of neural models over directed multigraphs and develop a novel GNN protocol that overcomes the above challenges via efficiently incorporating directional information, as well as proposing an enhancement that targets edge-related tasks using a novel message passing scheme over an extension of edge-to-node dual graph. A concrete GNN architecture called GRANDE is derived using the proposed protocol, with several further improvements and generalizations to temporal dynamic graphs. We apply the GRANDE model to both a real-world anti-money laundering task and public datasets. Experimental evaluations show the superiority of the proposed GRANDE architecture over recent state-of-the-art models on dynamic graph modeling and directed graph modeling.

* Equal contribution

## I Introduction

Recent years have witnessed an increasing trend of adopting modern machine learning paradigms in financial risk management (FRM) scenarios [25]. As a typical use case in operational risk scenarios like fraud detection and anti-money laundering, the identification of risky entities (user accounts or transactions) is cast into a supervised classification problem using behavioral data collected from the operating financial platform [6, 20]. For institutions like commercial banks and online payment platforms, the most important source of behavioral information is the _transaction records_ between users, making _transaction networks_ (with users as nodes and transactions as edges) a direct and appropriate data model. Unlike standard pattern recognition tasks like image recognition, where decisions are made according to information on individual objects, the identification of risky patterns over a transaction network requires reasoning beyond any individual scope. The phenomenon is particularly evident in the area of anti-money laundering (AML), where suspicious transactions are usually related through several users or accounts, with the transactions between them being highly correlated, thereby exhibiting a cascading pattern that makes i.i.d. approaches in machine learning unsuitable. The surging developments of machine learning models over graphs, especially graph representation learning [11], have attracted significant attention in the financial industry and have shown promising results in the area of FRM [23, 21]. The dominant practice in graph representation learning is to utilize the panoply of graph neural networks (GNN) [3] that produce node-level representations via principled aggregation mechanisms, which are generally described via message passing protocols [9] or spectral mechanisms [16].
Despite their convincing performance, the majority of the existing GNN models operate over _undirected graphs_, which makes them inadequate for the direct modeling of transaction networks. Firstly, many graphs that arise in FRM applications are directed by nature: in the case of a transaction network with users as nodes and transactions as edges, the direction of an edge is typically understood as the direction of its corresponding cash flow. In areas like anti-money laundering (AML), directional information is generally perceived to be of significant importance and shall not be neglected [28]. Secondly, there might exist multiple transactions between certain pairs of users. Thirdly, transactions are naturally associated with timestamps that indicate their time of occurrence. Therefore, to fully utilize the graphical structure of transaction networks, we need representation learning frameworks that support _temporal directed multigraphs_. While recent progress on _dynamic graph neural networks_ [38, 15] provides appropriate methods to handle temporality, discussions of neural architectures that support directed multigraphs remain nascent [9, 24, 31, 30, 44]. From a practical point of view, the targeted risky entities may be either nodes (i.e., malicious users) or edges (i.e., suspicious transactions). Conventional GNN architectures produce node-level representations via encoding information of each node's rooted subtrees [39], making them a good fit for _user or account level_ risk identification. When the underlying task is to detect risky transactions, the prevailing practice is to represent edges using a combination of the node representations corresponding to both ends of the edges. While such a design may be adequate for tasks like link prediction, it lacks a way to effectively integrate edge-level information into the edge representation. Since financial networks usually contain rich edge-level features (i.e., detailed transaction-related information), refinements on edge-level representations are needed. For example, to accurately represent a transaction, we need to combine the information of its buyer (cash sender) and seller (cash receiver) and the transaction-related information, with each of them requiring aggregating relevant information from related users and transactions. A recent line of work [13, 4, 14] focused directly on _learned edge representations_ using the idea of _edge-to-node duality_ and obtained satisfactory performance over downstream tasks like edge classification. However, previous works on edge representation learning all apply to undirected graphs, making the extension to transaction networks highly non-trivial. In this paper, we propose a general message passing neural network protocol that simultaneously outputs node and edge representations over directed multigraphs. Based on this protocol, we derive a GNN architecture called GRANDE with an extension to temporal graphs that efficiently leverages the underlying structural property of transaction networks. More specifically, we summarize our contributions as follows:

* We develop a novel bi-directional message passing protocol with duality enhancement (BiMPNN-DE) that strengthens previous proposals on message passing neural architectures over directed multigraphs. The improvement is two-fold: Firstly, it effectively combines neighborhood information from both incoming and outgoing edges of nodes.
Secondly, it simultaneously outputs node and edge representations via performing message passing over both the original graph and its _augmented_ edge adjacency graph.
* We derive a concrete GNN architecture following the proposed BiMPNN-DE protocol, called GRANDE, that employs the acclaimed transformer [32] mechanism for neighborhood aggregation. The proposed GRANDE framework is made compatible with temporal directed multigraphs through the integration of a generic time encoding module that further extends previous works on dynamic graph modeling [38].
* To show the practical effectiveness of GRANDE, we apply it to a suspicious transaction identification task in anti-money laundering, with the underlying transaction network data collected from one of the world's leading online payment platforms. Comparisons against various undirected and directed GNN baselines show the superiority of the proposed model. We also provide evaluations on two public datasets generated from transaction networks to further verify the strength of the GRANDE framework when the underlying graph features are relatively weak.

## II Methodology

### _Problem formulation_

Under the context of financial risk management, we consider the following _event stream_ representation of recorded transaction data that is available in most online transaction systems: \[\mathcal{E}=\{(u_{1},v_{1},t_{1},\chi_{1}),(u_{2},v_{2},t_{2},\chi_{2}),\ldots\} \tag{1}\] Each event \((u,v,t,\chi)\) is interpreted as a transaction from user \(u\) to user \(v\) that occurred at time \(t\), with related features \(\chi\) that could often be further decomposed into user-level features like user account information and event-level features like transaction amount and channel. In this paper, we focus on the representative task of _transaction property prediction_, which typically takes the form of binary classification aimed at identifying illicit or fraudulent transactions. The task could be cast into a graph learning problem of edge classification in a straightforward manner.

Fig. 1: Illustration of the deficiency of the directed message passing protocol in [3]: suppose the node of interest is \(n_{0}\); using GNNs designed according to the protocol in [3], it becomes impossible for \(n_{0}\) to aggregate information of \(n_{9}\) and \(n_{10}\). Under the context of financial risk management, suppose \(n_{9}\) and \(n_{10}\) correspond to known fraudsters, and edges correspond to transactions. Although the riskiness of \(n_{5}\) might be undetermined, the transaction pattern makes it highly suspicious and therefore uplifts the riskiness of \(n_{0}\). To build models that behave coherently with the above reasoning process, GNN protocols that aggregate information from _both directions_ are required.

We consider the temporal graph modeling paradigm [15] that views the underlying temporal graph as being generated from the event stream \(\mathcal{E}\). Therefore, given a time period \(\mathcal{T}=[\tau_{\text{start}},\tau_{\text{end}}]\), we construct the graph data as the snapshot \(G(\mathcal{T})=(V(\mathcal{T}),E(\mathcal{T}))\) of the underlying temporal graph. Since there may exist multiple transactions between the same set of users, we consider \(G(\mathcal{T})\) to be a _directed multigraph_, with each edge in the edge multiset \(E(\mathcal{T})\) representing an event that happens inside the time interval \(\mathcal{T}\), and the node set \(V(\mathcal{T})\) consisting of the users involved in the included events.
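As a minimal illustration of this construction (the field and function names below are ours, not from the paper), the following Python sketch materializes a snapshot \(G(\mathcal{T})\) from an event stream while preserving edge multiplicity:

```python
from dataclasses import dataclass

@dataclass
class Event:
    u: str      # sender
    v: str      # receiver
    t: float    # timestamp
    chi: dict   # event-level features

def snapshot(events, t_start, t_end):
    """Build G(T) = (V(T), E(T)) as a directed multigraph: every event in
    the window becomes its own edge, so parallel transactions are kept."""
    edges = [(e.u, e.v, e.t, e.chi) for e in events if t_start <= e.t <= t_end]
    nodes = {x for (u, v, _, _) in edges for x in (u, v)}
    return nodes, edges

# Toy stream; the two parallel u -> v transactions illustrate multiplicity.
stream = [Event("u", "v", 1.0, {"amount": 10}),
          Event("u", "v", 2.0, {"amount": 25}),
          Event("v", "w", 3.0, {"amount": 5})]
V, E = snapshot(stream, 0.0, 2.5)
print(len(V), len(E))  # 2 nodes, 2 edges (both u -> v events retained)
```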
During the training stage, we construct a snapshot \(G(\mathcal{T}_{\textsf{train}})\) and obtain a possibly incomplete set of edge labels that are understood as edge properties annotated using expert knowledge. During the testing stage, we perform inference over snapshots \(G(\mathcal{T}_{\textsf{test}})\) that are based on later time intervals than \(\mathcal{T}_{\textsf{train}}\). We assume \(\mathcal{T}_{\textsf{train}}\cap\mathcal{T}_{\textsf{test}}=\emptyset\); hence the problem of interest could be viewed as _inductive edge classification over temporal graphs_.

### _Message passing protocols and directed graphs_

Let \(G=(V,E)\) be a directed multigraph with node set \(V\) and edge multiset \(E\). For any pair of nodes \((u,v)\), denote \(\mu(u,v)\) as the number of edges going from \(u\) to \(v\). Then \(G\) becomes a (simple) graph when \(\max_{u\in V,v\in V}\mu(u,v)\leq 1\). For each \(v\in V\), denote \(N^{+}(v)=\{u:(v,u)\in E\}\) and \(N^{-}(v)=\{u:(u,v)\in E\}\) as its out-neighborhood and in-neighborhood, respectively, and let \(N(v)=N^{+}(v)\cup N^{-}(v)\) be its neighborhood. For the sake of presentation clarity, we will overload the notation \(uv\) for both an edge in an undirected graph and a directed edge from \(u\) to \(v\) in a directed (multi)graph, with its exact meaning being clear from the context. We are interested in the general case where both node features \(X=\{x_{v}\}_{v\in V}\) and edge features \(Z=\{z_{uv}\}_{(u,v)\in E}\) are available, where we assume both kinds of features to be of dimension \(d\). In this paper we focus on neural approaches to such directed multigraphs. A good starting point is the neural message passing scheme for undirected graphs [9]: let \(h^{(l)}_{v}\) denote the hidden representation of node \(v\) at the \(l\)-th layer of the network, and \(h^{(0)}_{v}=x_{v},\forall v\in V\). The message passing graph neural network protocol (abbreviated as GNN hereafter) is described recursively as: \[h^{(l+1)}_{v}=\textsf{COMBINE}\left(h^{(l)}_{v},\textsf{AGG}\left(\textsf{MESSAGE}(h^{(l)}_{v},h^{(l)}_{u},z_{uv}),u\in N(v)\right)\right) \tag{2}\] Different combinations of COMBINE, AGG and MESSAGE mechanisms thus form the _design space_ of undirected GNNs [41]. To the best of our knowledge, there are three types of generalization strategies to directed graphs:

**Symmetrization** The most ad-hoc solution is to "make it undirected" via padding the necessary reverse edges so that \(N(v)=N^{+}(v)=N^{-}(v)\), and apply standard graph neural networks that operate on undirected graphs, like GCN or GAT. Despite its simplicity and clarity, the symmetrization approach discards directional information in the digraph and may raise subtleties when dealing with multigraphs.

**DiGraph-theoretic motivations** A more recent line of work [24, 31, 30, 44] drew insights from directed graph theory, especially the spectral branch [7]. The proposed models are mostly digraph analogs of GCN, without consideration of edge features, which severely limits the design space of directed message passing GNNs.

**Directed Protocol** In the seminal work [3, Algorithm 1], the authors proposed a GNN protocol that operates on directed multigraphs with edge features via aggregating messages from only the in-neighborhood, i.e., replacing \(N(v)\) in (2) with \(N^{-}(v)\).1 While being a natural extension, this kind of GNN protocol loses information from the outgoing direction of each node. We present a pictorial illustration in figure 1.
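To make the protocol concrete, here is a minimal, self-contained Python sketch of one step of (2). The specific choices (sum aggregation, linear message and combine maps with a ReLU) are ours for illustration, not prescribed by the protocol; restricting the loop to in-edges only would yield the directed protocol of [3].

```python
import numpy as np

def mpnn_layer(H, Z, edges, W_msg, W_comb):
    """One undirected message passing step, Eq. (2): MESSAGE is a linear
    map of [h_v, h_u, z_uv], AGG is a sum, COMBINE is ReLU of a linear
    map of [h_v, aggregated message]."""
    agg = np.zeros_like(H)
    for (u, v), z in zip(edges, Z):
        # undirected: each edge contributes a message in both directions
        agg[v] += np.concatenate([H[v], H[u], z]) @ W_msg
        agg[u] += np.concatenate([H[u], H[v], z]) @ W_msg
    return np.maximum(np.concatenate([H, agg], axis=1) @ W_comb, 0.0)

# Toy run: 3 nodes, 2 edges, feature dimension 4.
rng = np.random.default_rng(0)
H, Z = rng.normal(size=(3, 4)), rng.normal(size=(2, 4))
W_msg, W_comb = rng.normal(size=(12, 4)), rng.normal(size=(8, 4))
print(mpnn_layer(H, Z, [(0, 1), (1, 2)], W_msg, W_comb).shape)  # (3, 4)
```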
Footnote 1: The original version also considered the incorporation of a _global node_ that aggregates information from the whole graph regardless of the connectivity structure. While such a design choice may have some gains in moderate-size graphs [36], it does not scale to large graphs. Therefore, we will not consider such a design choice in this paper.

To address the aforementioned shortcomings, we propose a novel GNN protocol that operates on directed multigraphs, termed _bi-directional message passing neural network (BiMPNN)_. The protocol extends the standard undirected protocol (2) via enabling each node to aggregate information from both its in-neighborhood and out-neighborhood: \[h^{(l+1)}_{v}=\textsf{MERGE}\left(\phi^{(l+1)}_{v},\psi^{(l+1)}_{v}\right) \tag{3}\] \[\phi^{(l+1)}_{v}=\textsf{COMBINE}_{\textsf{in}}\left(h^{(l)}_{v},\textsf{AGG}_{\textsf{in}}\left(\textsf{MESSAGE}_{\textsf{in}}(h^{(l)}_{v},h^{(l)}_{u},z_{uv}),u\in N^{-}(v)\right)\right)\] \[\psi^{(l+1)}_{v}=\textsf{COMBINE}_{\textsf{out}}\left(h^{(l)}_{v},\textsf{AGG}_{\textsf{out}}\left(\textsf{MESSAGE}_{\textsf{out}}(h^{(l)}_{v},h^{(l)}_{r},z_{vr}),r\in N^{+}(v)\right)\right)\] To further obtain edge representations, we appeal to the idea of edge-to-node duality, which is formalized by the notion of line graphs.

**Definition 1** (Line graph and line digraph [10, 2]): For both undirected graphs and directed (multi)graphs \(G=(V,E)\), where we overload notation without risk of misunderstanding, the node set of the line graph \(L(G)=(L(V),L(E))\) is defined as the edge (multi)set \(L(V)=E\). **Undirected graph** the edge set of the line graph is defined as \[L(E)=\{(uv,rs):(u,v)\in E,(r,s)\in E,\{u,v\}\cap\{r,s\}\neq\emptyset\} \tag{4}\] **Directed (multi)graph** the edge set of the line graph is defined as \[L(E)=\{(uv,rs):(u,v)\in E,(r,s)\in E,v=r\} \tag{5}\]

For undirected graphs, line graphs provide a natural way to update edge representations under standard message passing protocols like (2). However, trivially extending (3) using the definition of line digraphs may incur significant information loss: taking the graph in figure 1 as an example, its line digraph has an _empty_ edge set, which makes the message passing framework useless over the derived line graph. While the graph-theoretic definition enjoys some nice properties [2], the adjacency criterion might be overly stringent for deriving useful GNN architectures. Intuitively, we may expect different transactions triggered by the same account to be correlated rather than independent, which makes connecting edges like \((n_{5},n_{9})\) and \((n_{5},n_{0})\) desirable.
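The information loss is easy to see in code; the following small sketch (ours, for illustration) builds the line digraph of Eq. (5) for a hub-like pattern in which all edges leave the same account, as in figure 1:

```python
def line_digraph(edges):
    """Directed line graph per Eq. (5): dual nodes are edge indices, and a
    dual edge i -> j exists iff the head of edge i is the tail of edge j."""
    n = len(edges)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and edges[i][1] == edges[j][0]]

# All transactions leave the shared account n5 (figure-1-like pattern):
print(line_digraph([(5, 0), (5, 9), (5, 10)]))  # [] -- the dual is empty
```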
Therefore, we propose the following _augmentation strategy_ to obtain an _augmented edge adjacency graph_ \(\overline{L(G)}=(\overline{L(V)},\overline{L(E)},T(E))\): the node set is still defined as \(\overline{L(V)}=E\), and we augment the edge set using the undirected adjacency criterion (4). To retain directional information, we encode the adjacency pattern of two edges (with four possible patterns: _head-to-head_, _head-to-tail_, _tail-to-head_, _tail-to-tail_) into a categorical vector, which we denote as \(T(E)=\{\textsf{type}(uv,rs):(uv,rs)\in\overline{L(E)}\}\). By construction, for each edge in \(\overline{L(E)}\), its reverse is also in \(\overline{L(E)}\), with a possibly different edge type. We provide a pictorial illustration in the left part of figure 2. To derive an edge representation update rule, we follow the spirit of the BiMPNN node update rule (3): let \(N_{L}^{+}(uv),N_{L}^{-}(uv)\) be the out- and in-neighborhoods in the ordinary line graph \(L(G)\) of \(G\), and \(\overline{N_{L}^{+}(uv)},\overline{N_{L}^{-}(uv)}\) those in \(\overline{L(G)}\), respectively. For each edge \((uv,rs)\in\overline{L(E)}\), we use \(\textsf{C}(uv,rs)\in V\) to denote the common incident node of the edges \(uv\) and \(rs\). The following updating rule enhances the BiMPNN protocol with duality information, which we term BiMPNN-DE: \[h_{v}^{(l+1)}=\textsf{MERGE}^{\textsf{node}}\left(\phi_{v}^{(l+1)},\psi_{v}^{(l+1)}\right) \tag{6}\] \[g_{uv}^{(l+1)}=\textsf{MERGE}^{\textsf{edge}}\left(\theta_{uv}^{(l+1)},\gamma_{uv}^{(l+1)}\right)\] \[\phi_{v}^{(l+1)}=\textsf{COMBINE}^{\textsf{node}}_{\textsf{in}}\left(h_{v}^{(l)},\textsf{AGG}^{\textsf{node}}_{\textsf{in}}\left(\textsf{MESSAGE}^{\textsf{node}}_{\textsf{in}}(h_{v}^{(l)},h_{u}^{(l)},g_{uv}^{(l)}),u\in N^{-}(v)\right)\right)\] \[\psi_{v}^{(l+1)}=\textsf{COMBINE}^{\textsf{node}}_{\textsf{out}}\left(h_{v}^{(l)},\textsf{AGG}^{\textsf{node}}_{\textsf{out}}\left(\textsf{MESSAGE}^{\textsf{node}}_{\textsf{out}}(h_{v}^{(l)},h_{r}^{(l)},g_{vr}^{(l)}),r\in N^{+}(v)\right)\right)\] \[\theta_{uv}^{(l+1)}=\textsf{COMBINE}^{\textsf{edge}}_{\textsf{in}}\left(g_{uv}^{(l)},\textsf{AGG}^{\textsf{edge}}_{\textsf{in}}\left(\textsf{MESSAGE}^{\textsf{edge}}_{\textsf{in}}(g_{uv}^{(l)},g_{rs}^{(l)},\tilde{h}_{uv,rs}^{(l)}),rs\in\overline{N_{L}^{-}(uv)}\right)\right)\] \[\gamma_{uv}^{(l+1)}=\textsf{COMBINE}^{\textsf{edge}}_{\textsf{out}}\left(g_{uv}^{(l)},\textsf{AGG}^{\textsf{edge}}_{\textsf{out}}\left(\textsf{MESSAGE}^{\textsf{edge}}_{\textsf{out}}(g_{uv}^{(l)},g_{rs}^{(l)},\tilde{h}_{uv,rs}^{(l)}),rs\in\overline{N_{L}^{+}(uv)}\right)\right)\] \[\tilde{h}_{uv,rs}^{(l)}=\textsf{COMBINE}^{\textsf{type}}\left(h_{\textsf{C}(uv,rs)}^{(l)},T_{\textsf{type}(uv,rs)}\right)\] where we use \(g_{uv}^{(l)}\) to denote the hidden representation of edge \(uv\) at the \(l\)-th layer of GNNs derived from the BiMPNN-DE protocol. The protocol adds an edge representation update component that mirrors the BiMPNN protocol over the augmented edge adjacency graph \(\overline{L(G)}\) (see the last four equations in display (6)). To obtain an edge representation counterpart \(\tilde{h}_{uv,rs}^{(l)}\) during the aggregation process over \(\overline{L(G)}\), we use an additional \(\textsf{COMBINE}^{\textsf{type}}\) mechanism that combines the features of the common incident node with the information of the adjacency type, which is encoded into a learnable edge type embedding matrix \(T\in\mathbb{R}^{4\times d}\). The BiMPNN-DE protocol (6) offers a much larger design space than that of the BiMPNN protocol: in its full generality, we may specify up to \(15\) different mechanisms corresponding to different MERGE, COMBINE, AGG and MESSAGE operations.
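Before discussing concrete choices, we give a minimal Python sketch (again ours, illustrative only) of the augmented edge adjacency graph construction described above; on the same hub pattern as before, the dual is now fully connected, with tail-to-tail edge types:

```python
from itertools import combinations

def augmented_edge_adjacency(edges):
    """Build the augmented dual of a directed (multi)graph given as (u, v)
    pairs: dual nodes are edge indices; two edges sharing any endpoint are
    connected in both directions, labelled 'hh' (head-to-head), 'ht', 'th',
    or 'tt'. If two edges touch in several ways, the first matching pattern
    is kept (a simplification of this sketch)."""
    dual = []
    for i, j in combinations(range(len(edges)), 2):
        (u1, v1), (u2, v2) = edges[i], edges[j]
        if not {u1, v1} & {u2, v2}:
            continue
        if v1 == v2:   t = "hh"
        elif v1 == u2: t = "ht"
        elif u1 == v2: t = "th"
        else:          t = "tt"   # shared tail: u1 == u2
        rev = {"hh": "hh", "tt": "tt", "ht": "th", "th": "ht"}[t]
        dual += [(i, j, t), (j, i, rev)]  # reverse edge, mirrored type
    return dual

# Edges all leaving the shared account n5, as in the figure-1 example:
print(augmented_edge_adjacency([(5, 0), (5, 9), (5, 10)]))
# six 'tt' dual edges -- non-empty, unlike the plain line digraph
```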
From a practical point of view, we may design the aforementioned operations using parameterized functions that share the same underlying structure.

### _The GRANDE architecture_

In this section, we instantiate the previously developed BiMPNN-DE protocol (6) to derive a concrete GNN architecture that simultaneously outputs node and edge representations, along with an improvement strategy that targets edge-property prediction tasks. We base our design upon the acclaimed Transformer architecture [32], which has seen abundant adaptations to GNNs recently [38, 8, 40]. We define the multiplicative attention mechanism that incorporates edge information as follows: \[\textsf{ATTN}(h_{v},\{h_{u},g_{uv}\}_{u\in N(v)})=\sum_{u\in N(v)\cup\{v\}}\alpha_{uv}W_{N}h_{u}+\beta_{uv}W_{E}g_{uv}\] \[\alpha_{uv}=\frac{\exp\left(\langle W_{Q}h_{v},W_{K}h_{u}\rangle/\sqrt{d}\right)}{\sum_{u^{\prime}\in N(v)\cup\{v\}}\exp\left(\langle W_{Q}h_{v},W_{K}h_{u^{\prime}}\rangle/\sqrt{d}\right)} \tag{7}\] \[\beta_{uv}=\frac{\exp\left(\langle W_{Q}h_{v},W_{E}g_{uv}\rangle/\sqrt{d}\right)}{\sum_{u^{\prime}\in N(v)\cup\{v\}}\exp\left(\langle W_{Q}h_{v},W_{E}g_{u^{\prime}v}\rangle/\sqrt{d}\right)}\] We include the commonly used operations of a transformer block, namely LayerNorm (LN), skip connections, and a learnable two-layer MLP (FF) as the nonlinearity [32], and wrap them as: \[\begin{split}\textsf{TRANSFORMER}(h_{v},\{h_{u},g_{uv}\}_{u\in N(v)})=\text{LN}(\tilde{h}_{v}+\text{FF}(\tilde{h}_{v}))\\ \tilde{h}_{v}=\text{LN}(h_{v}+\textsf{ATTN}(h_{v},\{h_{u},g_{uv}\}_{u\in N(v)}))\end{split} \tag{8}\] After defining the basic mechanisms, we write the node and edge update rules as follows: \[h_{v}^{(l+1)}=\textsf{CONCAT}\left(\phi_{v}^{(l+1)},\psi_{v}^{(l+1)}\right) \tag{9}\] \[g_{uv}^{(l+1)}=\textsf{CONCAT}\left(\theta_{uv}^{(l+1)},\gamma_{uv}^{(l+1)}\right)\] \[\phi_{v}^{(l+1)}=\textsf{TRANSFORMER}\left(h_{v}^{(l)},\{h_{u}^{(l)},g_{uv}^{(l)}\}_{u\in N^{-}(v)}\right)\] \[\psi_{v}^{(l+1)}=\textsf{TRANSFORMER}\left(h_{v}^{(l)},\{h_{r}^{(l)},g_{vr}^{(l)}\}_{r\in N^{+}(v)}\right)\] \[\theta_{uv}^{(l+1)}=\textsf{TRANSFORMER}\left(g_{uv}^{(l)},\{g_{rs}^{(l)},\tilde{h}_{uv,rs}^{(l)}\}_{rs\in\overline{N_{L}^{-}(uv)}}\right)\] \[\gamma_{uv}^{(l+1)}=\textsf{TRANSFORMER}\left(g_{uv}^{(l)},\{g_{rs}^{(l)},\tilde{h}_{uv,rs}^{(l)}\}_{rs\in\overline{N_{L}^{+}(uv)}}\right)\] For each edge \(e\) in the snapshot, we have its time of occurrence \(t_{e}\). The causal pruning strategy deletes edges in \(\overline{L(G)}\) whose head node occurred earlier than their tail node. When the causal pruning strategy is applicable, we may prune up to \(50\%\) of the edges in \(\overline{L(G)}\). An illustration is provided in the right part of figure 2. Note that the proposed strategy is closely related to the construction of causal temporal subgraphs in the temporal graph modeling literature [38, 29].
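Continuing the dual-graph sketch above, causal pruning amounts to a one-line filter over the typed dual edges; the convention that a dual edge runs from its tail edge \(i\) to its head edge \(j\) follows the description above:

```python
def causal_prune(dual_edges, t):
    """Keep a dual edge (i -> j) only if edge j did not occur before edge i,
    i.e. drop it when the head edge predates the tail edge."""
    return [(i, j, typ) for (i, j, typ) in dual_edges if t[j] >= t[i]]

# Dual edges from the sketch above, with occurrence times t0 < t1 < t2:
dual = [(0, 1, "tt"), (1, 0, "tt"), (0, 2, "tt"),
        (2, 0, "tt"), (1, 2, "tt"), (2, 1, "tt")]
print(causal_prune(dual, t=[1.0, 2.0, 3.0]))  # exactly half survive
```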
### _Scalability and complexity_

Most real-world financial networks, transaction networks included, are _sparse_: most people only transact with a few others within a finite time window. Consequently, the computational complexity of any message passing neural network could be roughly regarded as \(O(Ed^{2})\). Extending ordinary MPNN architectures to the BiMPNN protocol doubles the computation cost, which could be easily resolved through parallelization in modern deep learning frameworks like TensorFlow [1]. The extra computational cost brought by introducing the dual component (6) requires more care: even when the original graph is sparse, its augmented edge adjacency graph might be dense or even complete. Such cases do happen in realistic scenarios, since large hubs frequently exist in transaction networks, and a hub corresponds to a complete subgraph in the dual graph. Therefore, a worst-case computation cost of \(O(E^{2}d^{2})\) is sometimes inevitable in architectures derived from the BiMPNN-DE protocol (6). Hence, to meet the computational requirements of GNN architectures like GRANDE, performing GNN training/inference over the whole graph is unrealistic. Instead, we resort to a _local_ computation alternative implemented by the AGL system [43], which extracts the \(K\)-hop rooted subgraph of each target node and performs batched stochastic training and efficient parallel inference given distributed infrastructures like MapReduce [43]. In practical scenarios, it is often reasonable to set an upper bound \(M_{\mathsf{max}}\) on the number of edges of any \(K\)-hop rooted subgraph and devise proper sampling methods to meet the requirement. The resulting computational complexity during training is reduced to \(O(BM_{\mathsf{max}}^{2}d^{2})\), where \(B\) denotes the batch size. Since we may control \(M_{\mathsf{max}}\) so that a whole batch of subgraphs fits into the memory of high-performance hardware like GPUs, the computational cost of running GRANDE becomes fully affordable for industry-scale distributed training and inference.

## III Related Works

### _Neural models over directed graphs_

Directional extensions of the message passing GNN protocol were mentioned in pioneering works [9, 3] without empirical evaluation. Recent developments toward designing GNNs for digraphs are mostly inspired by different types of graph Laplacians defined over digraphs. For example, [30] used the definition in [7], and [44] used the Hermitian magnetic Laplacian to decouple the aggregation process of graph connectivity and edge orientations.

### _GNNs for edge representation learning_

The idea of utilizing node-to-edge duality was explored in early works like LGNN [5], where the authors drew insights from the community detection literature and used the non-backtracking walk operator [17] to define the dual graph, performing GCN-like aggregations simultaneously over both graphs. Later developments [13, 4, 14] focused on variants of LGNN with alternative definitions of the dual, such as the standard line graph [10].

### _Transformer architectures over graphs_

The renowned GAT architecture [33] could be regarded as using the _additive_ attention mechanism to form the attention layer, as opposed to the _multiplicative_ attention mechanism adopted by the Transformer architecture [32]. Adaptations of the original Transformer to the graph context have appeared recently: [8] replaced the additive attention in GAT with inner product attention and used spectral embeddings as a proxy for the positional embedding component of the original transformer architecture. In [40], the authors proposed to use _full-attention_ transformers with graph-theoretical attributes of nodes and edges guiding the attention procedure. While the results were shown to be competitive over biological benchmarks, the computational overhead is prohibitive for industrial-scale graph applications.

## IV Experiments

In this section, we report empirical evaluations of GRANDE over an industrial application as well as assessments over public datasets. We focus on the edge classification task over temporal directed multigraphs. Finally, we present a detailed ablation study to decompose the contributions of the different constituents of GRANDE.

### _Datasets_

We use one industrial dataset and two public datasets, with their summary statistics listed in table III in appendix A.

**AML dataset** This dataset is generated from transaction records collected from one of the world's leading online payment systems.
The business goal is to identify transactions that exhibit risky patterns as being highly suspicious of money laundering. The underlying graph is constructed by treating users as nodes and transactions as directed edges with arbitrary multiplicity. We engineer both node and edge features using a two-stage process: we first obtain raw node features via statistical summaries of the corresponding user's behavior on the platform during specific time periods, while raw edge features consist of transaction properties as well as related features of the two users involved in the transaction. 2 The decision tree feature transform [12] is then applied to both kinds of features, so that after the transform, the input node and edge features for all the assessed models are sparse categorical features of dimension \(6400\). For both training and testing, we collect data over a \(10\)-day period with no overlap between the training period and the testing period. A random subset corresponding to 10% of the testing data is held out for validation.

Footnote 2: Per organizational regulations, the detailed feature engineering logic is not fully described. We will consider (partially) releasing the AML dataset, as well as the source code, after passing the relevant security checks of the company.

Fig. 2: An illustration of the proposed line graph augmentation strategy: the left figure shows the augmented edge adjacency graph \(\overline{L(G)}\) for the digraph depicted in figure 1. We use colored edges to represent edge types (head-to-head and tail-to-tail); note that the remaining two kinds of edge types do not appear in \(\overline{L(G)}\). The right figure shows the effect of the causal pruning strategy under the additional temporal constraint that \(t_{0}<t_{1}<\cdots<t_{9}\), with \(t_{i}\) being the occurrence time of edge \(e_{i}\) for \(i\in\{0,\ldots,9\}\).

**Bitcoin datasets** We use two who-trusts-whom networks of people who trade using Bitcoin on two different platforms, Bitcoin OTC and Bitcoin Alpha [19, 18]. Both networks are directed without edge multiplicities; each edge is associated with a timestamp and a trust score ranging from \(-10\) to \(10\). We consider the task of binary edge classification, with edge labels indicating whether the trust score is negative. Node features are represented as the concatenation of one-hot encodings of the in- and out-degrees of the nodes. For both datasets, we use a chronological split with \(70\%\) of the data for training, \(10\%\) for validation, and \(20\%\) for testing.
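As a minimal sketch of this preparation (our illustrative code; the tuple layout is an assumption, not the datasets' release format), labels and the chronological split can be derived as follows:

```python
import numpy as np

def make_labels_and_split(edges, f_train=0.7, f_val=0.1):
    """edges: list of (u, v, timestamp, trust_score) tuples. Label an edge
    positive when its trust score is negative, then split chronologically
    (70/10/20 by default, as described above)."""
    edges = sorted(edges, key=lambda e: e[2])            # sort by time
    y = np.array([1 if e[3] < 0 else 0 for e in edges])  # negative trust -> 1
    n = len(edges)
    i, j = int(f_train * n), int((f_train + f_val) * n)
    return (edges[:i], y[:i]), (edges[i:j], y[i:j]), (edges[j:], y[j:])

toy = [(0, 1, 10.0, 5), (1, 2, 11.0, -3), (2, 0, 12.0, 8), (0, 2, 13.0, -9)]
train, val, test = make_labels_and_split(toy)
```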
### _Baselines_

We compare the proposed GRANDE framework with the following types of baselines:

**Undirected approaches** We consider two representative GNN architectures, GCN [16] and GAT [33], that operate on undirected graphs. Since temporal information is available in all three datasets, we also include the TGAT architecture [38]. As all the aforementioned methods produce node-level representations, we use the concatenation of node representations as the edge representation according to the adjacency structure. As frameworks that directly output edge representations remain few, we include the EHGNN architecture [14] as a strong baseline. To make the undirected architectures compatible with directed (multi)graphs, we add reverse edges with duplicated edge features if there exist no edge multiplicities in the digraph. Otherwise, we keep only one edge between each pair of nodes, with the corresponding edge feature generated via aggregating the original edge features (according to the "multigraph to graph" hierarchy) using the DeepSet method [42], and add reverse edges thereafter.

**DiGraph-oriented approaches** We consider two digraph GNN architectures that utilize different notions of directed graph Laplacians, DGCN [24] and MagNet [44]. The aforementioned baselines exclude some recently proposed state-of-the-art GNN models, like Graphormer [40] for undirected graphs or directed approaches like DiGCN [30], due to scalability issues: they require either full-graph attention or solving eigen programs over the full graph Laplacian, which is computationally infeasible for industry-scale graphs.

### _Experimental setup_

Across all datasets and models, we use a two-layer architecture with hidden dimension \(d=128\) without further tuning. For models with generic time encodings, we fix the dimension of the time encoding to be \(128\). For transformer-related architectures, we follow the practice in [32] and use a two-layer MLP with ReLU activation and hidden dimension \(512\). As all the relevant tasks are binary classifications, we adopt the binary cross entropy loss as the training objective, with \(\ell_{2}\) regularization under a coefficient of \(0.0001\) uniformly across all experiments. The graph data are constructed via the GraphFlat component of the AGL system [43], which transforms the raw graph data into batches of subgraphs with appropriate sampling. 3 We use the Adam optimizer with a learning rate of \(0.0001\) across all tasks and models. For the Bitcoin datasets, we train each model for \(10\) epochs using a batch size of \(128\) and select the best-performing one according to the roc-auc score on the validation data, with periodic evaluations every \(100\) steps. For the AML dataset, we train the model for \(2\) epochs with a batch size of \(256\), as the size of the dataset is sufficiently large. We adopt a model selection criterion similar to that of the Bitcoin datasets, with periodic evaluations every \(500\) steps.

Footnote 3: The AGL framework is particularly useful when dealing with industry-scale graphs that are barely possible to process as a whole. However, it may lose some information in the sampling stage of the preprocessing phase. To fully mimic the industrial setup, we preprocess all three datasets using AGL; therefore, the results on the Bitcoin datasets are not directly comparable to previously published results.

**Metrics** Since the primary focus of this paper is applications to the FRM scenario, we choose three representative metrics, namely the roc-auc score (AUC), the Kolmogorov-Smirnov statistic (KS), and the F1 score (F1).

### _Performance_

We present evaluation results in table I. Apart from the proposed GRANDE architecture, we report a _reduced_ version of GRANDE obtained by discarding all operations on the augmented edge adjacency graph, as well as the cross-query attention module (10). The resulting model could be considered a time-aware variant of the graph transformer under the BiMPNN protocol. We summarize our experimental findings as follows:

* For the Bitcoin datasets, which could be considered to lie in the _weak feature_ regime, the GRANDE architecture obtains substantial performance improvements: on the Bitcoin-OTC dataset, the relative improvements over the best baselines are \(10.1\%\), \(30.7\%\), and \(22.6\%\) with respect to AUC, KS, and F1.
On the Bitcoin-Alpha dataset, the relative improvement is more significant, at \(19.2\%\), \(67.3\%\), and \(35.4\%\), respectively. We attribute the improvements to both the directional information and the duality information that GRANDE utilizes. The benefit of the directional information can be inferred from the results of the reduced GRANDE variant, which exhibits solid improvements over all the baselines. The incorporation of edge-to-node duality and cross-query attention systematically encodes more structural information, thereby yielding further improvements. * For the AML dataset, which can be regarded as under the _strong feature_ regime, the performance improvement is significant with respect to the KS and F1 metrics while being less significant with respect to AUC. Such improvements are still valuable in FRM applications, since a higher F1 score potentially suggests better patterns of the precision-recall (PR) curve, which we plot in figure 3. The PR curve shows the dominant performance of GRANDE against the baselines: under various precision levels, the recall of GRANDE surpasses the best baseline (TGAT) by as much as \(5.29\%\) in absolute value and \(13.4\%\) in relative terms. ### _Ablation study_ We evaluate the following variants of GRANDE over all three datasets to investigate the contributions of different constituents: **Reduced version**: the variant reported in table I. **Without causal pruning**: this variant retains the full edge adjacency graph without pruning, which is computationally heavier than the GRANDE architecture. **Without time encoding**: this variant discards the temporal component of GRANDE and uses the update rule (9). **Without cross-query attention**: this variant discards the cross-query attention module (10) and uses \(\mathsf{CONCAT}(g_{uv},h_{v},h_{u})\) as the output embedding for edge \((u,v)\). **With line graph**: this variant uses the ordinary directed line graph instead of the proposed augmented edge adjacency graph, i.e., we replace \(\overline{N_{L}^{+}(uv)}\) and \(\overline{N_{L}^{-}(uv)}\) in (9) with \(N_{L}^{+}(uv)\) and \(N_{L}^{-}(uv)\), respectively. **Results**: we report results in table II using the same training configuration and evaluation metrics as in section IV-C. There are a couple of notable observations: Firstly, the causal pruning procedure saves computation as well as improves performance, providing a solid relational inductive bias in temporal graph modeling. Secondly, the incorporation of time encoding and cross-query attention is in general helpful. Finally, using the ordinary line graph performs on par with the reduced model, showing the insufficiency of the additional information provided by line digraphs and thereby verifying the necessity of the augmented edge adjacency graph.
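To make the reported metrics concrete, the following sketch computes AUC, KS, and F1 from binary labels and model scores; the KS statistic is taken as the maximum gap between the true-positive and false-positive rates along the ROC curve, a standard formulation in risk modeling. This is an illustrative snippet of the metric definitions, not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, f1_score

def frm_metrics(y_true, y_score, threshold=0.5):
    """Compute AUC, KS, and F1 for binary risk scores."""
    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ks = np.max(tpr - fpr)  # Kolmogorov-Smirnov statistic over the ROC curve
    f1 = f1_score(y_true, (np.asarray(y_score) >= threshold).astype(int))
    return auc, ks, f1

# Toy usage with synthetic labels and noisy scores
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
s = np.clip(y * 0.3 + rng.normal(0.4, 0.2, size=1000), 0, 1)
print(frm_metrics(y, s))
```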
2303.04881
Energy Band Structure of Relativistic Quantum Plasmon Excitation
In this paper we use the effective Schr\"{o}dinger-Poisson and square-root Klein-Gordon-Poisson models to study the quantum and relativistic quantum energy band structure of a finite temperature electron gas in a neutralizing charge background. Based on the plasmon band gap appearing above the Fermi level, new definitions of plasmonic excitations and plasma parameters in a wide electron temperature-density regime are suggested. The new equation of state (EoS) for electrons excited to the plasmon band leads to novel aspects of relativistic collective quantum excitations, such as the plasmon black-out and quantum pressure collapse, which are studied using both non-relativistic and relativistic quantum models. The plasmon black-out effect may be used to explain why metallic elements do not show collective behavior at low temperatures. The model can be used to predict phases of matter in which plasmonic activity is shut down, hence, it may behave like a mysterious dark matter. On the other hand, the energy band structure model predicts the plasmon pressure collapse in temperature-density coordinates matching those of a white dwarf star. The prediction of the energy band structure of collective quantum excitations may have direct implications for inertial confinement fusion (ICF), the EoS of warm dense matter (WDM) and the evolution of stellar and other unknown cosmological structures. It is found that predictions of the non-relativistic and relativistic quantum excitation models closely match up to the temperature-density of degenerate stars, which confirms the relevance of non-relativistic plasmon models used in the warm and dense matter regime. The effect of positrons on the band structure of collective quantum excitations is also studied.
M. Akbari-Moghanjoughi
2023-02-19T04:15:27Z
http://arxiv.org/abs/2303.04881v1
# Energy Band Structure of Relativistic Quantum Plasmon Excitation ###### Abstract In this paper we use the effective Schrodinger-Poisson and square-root Klein-Gordon-Poisson models to study the quantum and relativistic quantum energy band structure of a finite temperature electron gas in a neutralizing charge background. Based on the plasmon band gap appearing above the Fermi level, new definitions of plasmonic excitations and plasma parameters in a wide electron temperature-density regime are suggested. The new equation of state (EoS) for electrons excited to the plasmon band leads to novel aspects of relativistic collective quantum excitations, such as the plasmon black-out and quantum pressure collapse, which are studied using both non-relativistic and relativistic quantum models. The plasmon black-out effect may be used to explain why metallic elements do not show collective behavior at low temperatures. The model can be used to predict phases of matter in which plasmonic activity is shut down, hence, it may behave like a mysterious dark matter. On the other hand, the energy band structure model predicts the plasmon pressure collapse in temperature-density coordinates matching those of a white dwarf star. The prediction of the energy band structure of collective quantum excitations may have direct implications for inertial confinement fusion (ICF), the EoS of warm dense matter (WDM) and the evolution of stellar and other unknown cosmological structures. It is found that predictions of the non-relativistic and relativistic quantum excitation models closely match up to the temperature-density of degenerate stars, which confirms the relevance of non-relativistic plasmon models used in the warm and dense matter regime. The effect of positrons on the band structure of collective quantum excitations is also studied. ## I Introduction Technology is rapidly moving towards plasmonic and nanoplasmonic designs [1; 2; 3], which provide efficient and more reliable methods of communication and energy transport. The terahertz-scale response of electron oscillations makes plasmonics an ideal candidate for fast electronic switches in integrated circuits [4; 5]. Also, due to collective effects in plasmon excitations, a large number of electrons contribute to quantum phenomena, instead of the single electron-hole mechanisms of ordinary semiconductor devices [6; 7]. Plasmonics has become an active field of interdisciplinary research since the mid-twentieth century, after the realization of the surface plasmon-polariton resonance effect and the surface-enhanced Raman scattering (SERS) effect in the 1970s [8]. In plasmonic devices, collective oscillations of free electrons are driven either by electromagnetic radiation on an appropriate plasmonic material, causing the localised surface plasmon resonance (LSPR) [9], or by external field stimulation coupled to the device, causing hot electron ejection; these electrons are then collected in a Schottky junction with an appropriate semiconductor coating. Due to the extreme sensitivity of plasmonic devices to size and geometry, a wide range of applications is foreseeable for this technology [10; 11]. However, efficient plasmonic design is entangled with many technological obstacles that slow down rapid development of the kind that occurred in the semiconductor industry [12; 13; 14; 15]. Plasmon effects dominate mostly at the nanoscale, which is relatively costly to fabricate.
By overcoming these limitations, plasmonic devices can find their full potential in fields such as plasmon focusing [16], optical emitters [17], solar cells [18], nanoscale waveguiding [19], optical antennas [20], communication devices [21], plasmonic sensors [22], modulators [23], nanoscale switches [24], and even spasers [25]. A plasmonic energy conversion device may require a few important technological considerations [26; 27; 28; 29; 30; 31; 32] with respect to materials, integration, low-dimensional fabrication, and final assembly [33]. Investigations have shown that harvesting of electromagnetic energy depends on the right choice of plasmonic geometries and the fine-tuned design of the nanoplasmonic device in order to make solar-cell technology more effective. In this way, the solar energy is efficiently transferred to a large collection of electrons and can be absorbed in a single quantum well, in an array of quantum dots, or in molecular chromophores [34]. Collective electron excitations play an inevitable role in astrophysical plasmas [35; 36; 37; 38], on the other hand. They contribute to many important linear and nonlinear effects at laboratory and cosmological scales, such as charge screening, communication black-out, collisionless Landau damping, a large variety of instabilities, localized density formations such as solitons and shock waves, harmonic generation, and endless other phenomena [39]. In extreme temperature and density conditions, collective electron effects combine with quantum and relativistic effects to produce novel phenomena such as stellar pressure collapse beyond the Chandrasekhar mass limit [40; 41]. In the laboratory, the inertial confinement fusion (ICF) [42] device uses a delicate design to compress and heat a pellet containing thermonuclear fuel in order to ignite efficient nuclear fusion towards an efficient and clean source of energy. The benefits of numerous plasma applications are indebted to the theoretical and experimental developments which took place over the past century through the pioneering works of many people [43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. There has been a large effort toward developing efficient tools to cope with many-body collective effects in quantum plasmas in the recent decade. From the Wigner-Poisson-Maxwell kinetic theory and corresponding quantum hydrodynamic developments [60; 61; 62; 63] to density functional theory [64; 65] and the quantum Monte Carlo technique, all are effective tools with their pros and cons. Due to its simplicity, quantum hydrodynamic theory has attracted increasing attention over the past few years [66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77]. However, due to the dual-scale nature of electron oscillations in quantum plasmas, which was not incorporated in the original quantum hydrodynamic theory, some results have led to intense recent debate among researchers [78; 79; 80; 81; 82; 83; 84; 85; 86]. On the other hand, the effective Schrodinger-Poisson model [87] has shown some recent success in capturing essential features of collective quantum excitations [88; 89; 90; 91; 92; 93; 94]. The latter model suggests that collective quasiparticles in a dense quantum plasma can behave as if they possess two distinct de Broglie wavelengths corresponding to wave-like and particle-like plasmon oscillations. It has been shown that this fundamental aspect leads to new features of collective quantum excitations.
In the current work we use the square-root Klein-Gordon-Poisson model [95] to deduce the energy band structure of relativistic quantum plasmon excitations, which extends our previous work to applications in a wide range of densities and temperatures, including astrophysical plasmas. Within the relativistic quantum band structure model we study various thermodynamic quantities of plasmon excitations in extreme density-temperature regimes, where non-relativistic theories cannot be used. ## II Collective quantum electron excitations The collective electron excitations in a non-relativistic electron gas of arbitrary degeneracy with a positive neutralizing background are modeled via the following coupled effective Schrodinger-Poisson system [87, 91] \[i\hbar\frac{\partial\mathcal{N}(\mathbf{r},t)}{\partial t}=-\frac{ \hbar^{2}}{2m}\Delta\mathcal{N}(\mathbf{r},t)-e\phi(\mathbf{r})\mathcal{N}( \mathbf{r},t)+\mu\mathcal{N}(\mathbf{r},t), \tag{1a}\] \[\Delta\phi(\mathbf{r})=4\pi e\left[|\mathcal{N}(\mathbf{r})|^{2}- n_{0}\right], \tag{1b}\] where \(\mathcal{N}(\mathbf{r},t)=\psi(\mathbf{r},t)\exp[iS(\mathbf{r},t)/\hbar]\) characterizes the statefunction, with \(n(\mathbf{r})=\psi(\mathbf{r})\psi^{*}(\mathbf{r})\) being the local electron number density and \(\mathbf{p}(\mathbf{r},t)=\nabla S(\mathbf{r},t)\) being the electron momentum. The electrostatic interaction between electrons is modeled via the scalar potential \(\phi(\mathbf{r})\), which couples the Schrodinger equation to the Poisson relation, and \(\mu\) is the chemical potential, which is defined through the following non-relativistic isothermal equation of state (EoS) \[n_{e}(\mu,T) =\frac{2^{1/2}m^{3/2}}{\pi^{2}\hbar^{3}}\int_{0}^{+\infty}\frac{ \sqrt{\varepsilon}d\varepsilon}{e^{\beta(\varepsilon-\mu)}+1}, \tag{2a}\] \[P_{e}(\mu,T) =\frac{2^{3/2}m^{3/2}}{3\pi^{2}\hbar^{3}}\int_{0}^{+\infty}\frac{ \varepsilon^{3/2}d\varepsilon}{e^{\beta(\varepsilon-\mu)}+1}, \tag{2b}\] where \(\beta=1/k_{B}T\), with \(T\) being the equilibrium electron temperature and \(P_{e}\) being the quantum statistical electron gas pressure. The simple thermodynamic relation \(n_{e}\nabla\mu=\nabla P_{e}(n_{e})\) holds between the dependent thermodynamic quantities. In the quasi-stationary limit \(p=0\), the statefunction may be decomposed into separated-variable functionals, \(\psi(\mathbf{r},t)=\psi(t)\psi(\mathbf{r})\), and one arrives at the following linear coupled pseudoforce system \[i\hbar\frac{d\psi(t)}{dt}=\varepsilon\psi(t), \tag{3a}\] \[\Delta\Psi(\mathbf{r})+\Phi(\mathbf{r})+E\Psi(\mathbf{r})=0,\] (3b) \[\Delta\Phi(\mathbf{r})-\Gamma\Psi(\mathbf{r})=0, \tag{3c}\] where we have used the expansion scheme \(\{\psi^{0}=1,\phi^{0}=0,\mu^{0}=\mu_{0}\}\) and the normalized functionals \(\Psi(\mathbf{r})=\psi(\mathbf{r})/n_{0}\), with \(n_{0}\) being the equilibrium electron number density, and \(\Phi(\mathbf{r})=e\phi(\mathbf{r})/E_{0}\), with \(E_{0}=m_{0}c^{2}\) being the electron rest energy. The parameter \(\Gamma=8\alpha R^{3}/3\pi\) characterizes the collective electrostatic interaction strength, with \(R=(n_{0}/n_{c})^{1/3}\) being the relativity parameter and \(n_{c}=k_{c}^{3}/3\pi^{2}\) being the characteristic Compton number density defined through the Compton wavenumber \(k_{c}=m_{0}c/\hbar\). Note that the number density \(n_{c}\) corresponds to a single electron inside the Compton sphere of radius equal to the Compton wavelength \(\lambda_{c}=h/m_{0}c\).
The reason for normalization in relativistic units is the later comparison between the non-relativistic and the fully relativistic quantum electron gas models. The normalized energy \(E=(\epsilon-\mu_{0})/E_{0}\) is the kinetic energy of electrons as measured from the top of the electron Fermi sea, which in this case is \(\mu_{0}\) and depends on both the temperature and density of the arbitrarily degenerate electron gas. In this normalization, the space and time variables are normalized to the Compton wavelength and the characteristic Compton frequency, \(\omega_{c}=\hbar k_{c}^{2}/2m_{0}\), respectively. Figure 1 shows the characteristic scale units and parameters for our normalization. The variation of the relativity parameter with the electron density is shown in Fig. 1(a). This parameter increases sharply with the electron number density and reaches unity when the electron density coincides with the critical Compton number density \(n_{c}\simeq 5.86\times 10^{29}\)cm\({}^{-3}\), which is typical of white dwarfs and leads to the gravitational stellar collapse at the Chandrasekhar limit [41]. Figure 1: (a) Variation of the relativity parameter with electron density. The dashed line indicates the value at the Compton electron density. (b) Variation of the conventional plasmon wavelength in terms of electron number density. (c) Variation of the collective interaction strength parameter with electron number density. (d) Variation of the conventional plasmon energy versus the electron number density. It is well known that the electron gas pressure changes to a polytropic dependence beyond this critical relativity parameter value [97]. Figure 1(b) depicts the variation of the plasmon wavelength \(2\pi/k_{p}\), with \(k_{p}=\sqrt{2m_{0}E_{p}}/\hbar\) being the plasmon wavenumber and \(E_{p}=\hbar\omega_{p}\) being the plasmon energy, with \(\omega_{p}=\sqrt{4\pi e^{2}n_{0}/m_{0}}\) being the plasmon frequency. The index zero in \(\lambda_{p0}\) indicates that this parameter depends on the total equilibrium electron number density \(n_{0}\) of the gas, which is used to define the conventional plasmon parameters. It will be shown that these definitions only apply in the density-temperature regimes where all electrons are excited to the so-called plasmon energy band. The conventional plasmon wavelength is seen to decrease with increase in the electron number density. The plasmon wavelength at the critical Compton density can be as low as \(\lambda_{p0}\simeq 0.0727\)Å, which is much lower than the value of the Bohr radius, \(r_{B}=\hbar^{2}/m_{0}e^{2}\simeq 0.53\)Å. The variation of the electrostatic interaction strength parameter \(\Gamma\) is shown in Fig. 1(c). It increases sharply with electron concentration and vanishes in the single electron limit, reducing the system (3) to the original Schrodinger equation. The variation of the conventional plasmon energy with electron number density is shown in Fig. 1(d). While for metallic elements this energy varies in a range of a few electron volts, it increases to the MeV range for fully relativistic quantum electron concentrations. The Fourier analysis of the normalized system leads to the generalized energy dispersion \(E=k^{2}/2+\Gamma/k^{2}\), in which the energy and wavenumber are normalized to the electron rest energy and Compton wavenumber, respectively. Note that in the zero electron density limit, \(\Gamma\to 0\) and \(\mu_{0}\to 0\), one obtains the non-relativistic free electron dispersion \(E=\hbar^{2}k^{2}/2m_{0}\) in dimensional units.
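To make the normalized dispersion concrete, the sketch below (an illustration of the definitions above, not code from the paper) evaluates \(E=k^{2}/2+\Gamma/k^{2}\) and locates the plasmon conduction valley: setting \(dE/dk=k-2\Gamma/k^{3}=0\) gives \(k_{\min}=(2\Gamma)^{1/4}\) and a minimum (band-gap) energy \(E_{\min}=\sqrt{2\Gamma}\). The value of \(\Gamma\) is built from \(\Gamma=8\alpha R^{3}/3\pi\) with \(R^{3}=n_{0}/n_{c}\) and \(\alpha\) the fine-structure constant.

```python
import numpy as np

ALPHA = 1 / 137.035999   # fine-structure constant
N_C = 5.86e29            # Compton number density (cm^-3), from the text

def gamma(n0_cm3):
    """Collective interaction strength: Gamma = 8*alpha*R^3/(3*pi), R^3 = n0/n_c."""
    return 8 * ALPHA * (n0_cm3 / N_C) / (3 * np.pi)

def energy(k, g):
    """Normalized non-relativistic plasmon dispersion E = k^2/2 + Gamma/k^2."""
    return 0.5 * k**2 + g / k**2

n0 = 1e29                      # example electron density in cm^-3
g = gamma(n0)
k_min = (2 * g) ** 0.25        # conduction valley from dE/dk = 0
print(f"Gamma = {g:.3e}, k_min = {k_min:.4f}, E_gap = {np.sqrt(2 * g):.4e}")
```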
Note that the dispersion relation is composed of two distinct branches due to both the particle-like and wave-like behavior of the electron gas. Figure 2(a) depicts the energy band structure of the non-relativistic electron gas for different electron number densities in the relativistic quantum regime. The dashed curve shows the non-relativistic free electron dispersion curve. The dispersion curve for a given electron density is composed of two branches connecting at the minimum plasmon conduction energy. Note that below this energy, collective quantum excitations become unstable. This feature is analogous to the energy band structure of crystalline solids, in which electrons with energies below the conduction level do not contribute to single electron-hole excitation phenomena such as electronic transport. In our case, however, electrons at the Fermi level need enough energy to excite to the plasmon conduction band in order to take part in various collective phenomena. Note that the occurrence of the energy band gap is due to the electrostatic interaction and vanishes in the single electron limit. It is revealed that the plasmon band gap increases with increase in the electron number density, and the plasmon conduction wavenumber moves to higher values. Figure 2(b) shows the variation of the non-relativistic effective electron mass ratio to the rest mass for different electron concentrations. It is noted that the effective mass ratio is vanishingly low for long wavelength collective excitations and reaches the rest mass limit in the small wavelength limit. It is also remarked that the effective mass decreases with increase in the electron number density of the Fermi gas. Figure 2: (a) The non-relativistic plasmon energy band structure for different electron number densities. (b) The non-relativistic effective electron mass ratio for different electron number densities. (c) The non-relativistic plasmon group speed for different electron number densities. (d) The non-relativistic phase speed of plasmon excitations for different electron number densities. The dashed curves indicate the free electron value. The increase in the thickness of curves indicates an increase in the value of the parameter varied above each panel. The appropriately normalized fractional electron mass parameter is given as \[\frac{m}{m_{0}}=\left(\frac{d^{2}E}{dk^{2}}\right)^{-1}=\frac{k^{4}}{k^{4}+6 \Gamma}, \tag{4}\] where the single electron limit \(\Gamma=0\) reduces to \(m=m_{0}\). The variation of the normalized group speed (to the speed of light in vacuum), \(v_{g}=dE/dk=k-2\Gamma/k^{3}\), of collective plasmon excitations for different electron densities is shown in Fig. 2(c). The dashed line represents the free electron (zero density) group speed limit. It is remarked that the group speed varies unboundedly from negative to positive values, approaching the free electron value in the small wavelength limit. It is also noted that an increase in electron concentration lowers the group speed of collective electron excitations. The phase speed is shown in Fig. 2(d), with the dashed line corresponding to the single electron limit. The phase speed has a minimum value for a given electron number density and approaches the dashed line in the large wavenumber excitation limit. It is revealed that the phase speed increases with increase in electron density, with its minimum value moved to lower wavelengths.
The unboundedness of the collective wave speed is by no means a violation of special relativity, which puts a limit on the single electron speed even in the current non-relativistic model. ## III Relativistic quantum energy dispersion In order to study collective quantum electron excitations in the relativistic gas, we use the relativistic energy dispersion \(\varepsilon=\sqrt{E_{0}^{2}+p^{2}c^{2}}\), in which \(p\) is the relativistic electron momentum. Consider the Hamiltonian \(\mathcal{H}=K+E_{0}-e\phi(\mathbf{r})+\mu\), in which \(K\) denotes the relativistic kinetic energy. Using the identity \(\mathcal{H}\mathcal{N}(\mathbf{r},t)=\varepsilon\mathcal{N}(\mathbf{r},t)\) and applying the quantum operators \(E\to i\hbar\partial/\partial t\) and \(p\rightarrow-i\hbar\nabla\), we arrive at the following square-root Klein-Gordon system [95] \[i\hbar\frac{\partial\mathcal{N}(\mathbf{r},t)}{\partial t}= \left(\sqrt{E_{0}^{2}-c^{2}\hbar^{2}\Delta}\right)\mathcal{N}(\mathbf{r},t)+e\phi (\mathbf{r})\mathcal{N}(\mathbf{r},t)-\mu\mathcal{N}(\mathbf{r},t), \tag{5a}\] \[\Delta\phi(\mathbf{r})=4\pi e\left[|\mathcal{N}(\mathbf{r})|^{2}- n_{0}\right]. \tag{5b}\] The isothermal EoS of a relativistic quantum electron gas can be expressed as \[n_{e}(\eta,\zeta) =8\pi\sqrt{2}\frac{m_{0}^{3}c^{3}}{h^{3}}\zeta^{3/2}\left[F_{1/2} \left(\eta,\zeta\right)+\left(\zeta/2\right)F_{3/2}\left(\eta,\zeta\right)\right], \tag{6a}\] \[P_{e}(\eta,\zeta) =\frac{16\pi\sqrt{2}}{3}\frac{m_{0}^{4}c^{5}}{h^{3}}\zeta^{5/2} \left[F_{3/2}\left(\eta,\zeta\right)+\left(\zeta/2\right)F_{5/2}\left(\eta, \zeta\right)\right],\] (6b) \[U_{e}(\eta,\zeta) =8\pi\sqrt{2}\frac{m_{0}^{4}c^{5}}{h^{3}}\zeta^{5/2}\left[F_{3/2} \left(\eta,\zeta\right)+\zeta F_{5/2}\left(\eta,\zeta\right)\right], \tag{6c}\] where \(\eta=\mu/E_{0}\), \(\zeta=k_{B}T/E_{0}\) and \(F_{k}\) is the Fermi-Dirac integral defined as \[F_{k}\left(\eta,\zeta\right)=\int\limits_{0}^{\infty}\frac{x^{k}\sqrt{1+\zeta x /2}}{\exp\left(x-\eta/\zeta\right)+1}dx. \tag{7}\] Note also that the thermodynamic identity \(n_{e}\nabla\mu=\nabla P_{e}(n_{e})\) is satisfied in this case as well. Using a similar separation-of-variables technique as in the non-relativistic case, and normalizing the linear system, one arrives at the relativistic pseudoforce system \[i\hbar\frac{d\psi(t)}{dt}=\varepsilon\psi(t), \tag{8a}\] \[\left(\sqrt{1-\Delta}\right)\Psi(\mathbf{r})+\Phi(\mathbf{r})+E \Psi(\mathbf{r})=0,\] (8b) \[\Delta\Phi(\mathbf{r})-\Gamma\Psi(\mathbf{r})=0. \tag{8c}\] Due to the asymmetric operation on space and time, the square-root Klein-Gordon system cannot be solved analytically. However, the collective electrostatic wave dispersion in the Wigner-Poisson system has recently been studied using this system [87]. The energy dispersion of the coupled system (8) can be readily obtained by the series expansion of the momentum functional \(f(\mathbf{p})=\sqrt{1+p^{2}}\) prior to the replacement \(p\rightarrow-i\hbar\nabla\), followed by Fourier analysis and recollection of terms. We then obtain the eigenvalue system \[\left(\begin{array}{cc}\sqrt{1+k^{2}}-E&-1\\ \Gamma&k^{2}\end{array}\right)\left(\begin{array}{c}\Psi\\ \Phi\end{array}\right)=\left(\begin{array}{c}0\\ 0\end{array}\right), \tag{9}\] which readily leads to the generalized dispersion \(E=\sqrt{1+k^{2}}+\Gamma/k^{2}\), with the energy and wavenumber normalized to the rest electron energy and the Compton wavenumber, respectively. In the relativistic single electron limit the dispersion reduces to \(E=\sqrt{1+k^{2}}\).
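A short numerical sketch may help compare the two regimes: unlike the non-relativistic case, the conduction valley of \(E=\sqrt{1+k^{2}}+\Gamma/k^{2}\) has no closed form, so the root of \(dE/dk=k/\sqrt{1+k^{2}}-2\Gamma/k^{3}=0\) is found numerically below. The value of \(\Gamma\) is an illustrative placeholder, not a number taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def E_rel(k, g):
    """Relativistic plasmon dispersion E = sqrt(1+k^2) + Gamma/k^2 (normalized units)."""
    return np.sqrt(1 + k**2) + g / k**2

def dE_rel(k, g):
    """Group speed dE/dk = k/sqrt(1+k^2) - 2*Gamma/k^3."""
    return k / np.sqrt(1 + k**2) - 2 * g / k**3

g = 0.05                                           # illustrative interaction strength
k_star = brentq(dE_rel, 1e-4, 10.0, args=(g,))     # conduction valley wavenumber
k_nr = (2 * g) ** 0.25                             # non-relativistic valley for comparison
print(f"relativistic k_min = {k_star:.4f} (E = {E_rel(k_star, g):.4f}), "
      f"non-relativistic k_min = {k_nr:.4f}")
```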
The small wavenumber expansion of the normalized relativistic kinetic energy dispersion, i.e., \(K=E-1\), leads to \[K=\frac{k^{2}}{2}+\frac{\Gamma}{k^{2}}-\frac{k^{4}}{8}+O(k)^{5}, \tag{10}\] which clearly coincides with the non-relativistic energy dispersion at the lowest orders. Figure 3(a) shows the relativistic plasmon energy band structure, with the dashed curve corresponding to the relativistic single electron energy dispersion. The photon lines, \(E=pc\), are asymptotes to the dispersion curves of the particle-like branches. It is remarked that there is an energy band gap below the plasmon conduction energy which increases with increase in electron concentration. Comparing this plot with Fig. 2(a) reveals that the increase of the energy gap with electron density is more significant in the case of the relativistic quantum excitations. The variation of the relativistic mass ratio is depicted in Fig. 3(b) for different electron concentrations. The fractional mass is given as \[\frac{m}{m_{0}}=\left(\frac{d^{2}E}{dk^{2}}\right)^{-1}=\frac{k^{4}{(1+k^{2})}^{ 3/2}}{k^{4}+6\Gamma{(1+k^{2})}^{3/2}}. \tag{11}\] The dashed curve corresponds to the relativistic mass of a single electron. It is seen that the fractional mass starts from zero for long wavelength collective excitations and grows toward infinity, approaching the single electron limit in the large wavenumber regime. It is also remarked that the fractional relativistic mass decreases with increase of the electron density in the gas. The relativistic group speed is shown in Fig. 3(c), with the dashed curve indicating the relativistic single electron case, \[v_{gr}=\frac{dE}{dk}=\frac{k}{\sqrt{1+k^{2}}}-\frac{2\Gamma}{k^{3}}. \tag{12}\] The group speed increases with increase of the excitation wavenumber and approaches the speed of light in vacuum in the large wavenumber limit. Finally, the phase speed decreases from infinity to the ultra-relativistic limit at small wavelengths. The phase speed is seen to have larger values at large electron concentrations. The collective quantum statistical behavior of the electron gas may be studied using a procedure similar to that developed for electron excitations. The normalized thermodynamic quantities follow \[n_{p}(\zeta,\Gamma) =\int\limits_{0}^{\infty}\frac{g(k,\Gamma)d_{k}E(k,\Gamma)dk}{1+ \exp[E(k,\Gamma)/\zeta]}, \tag{13a}\] \[U_{p}(\zeta,\Gamma) =\int\limits_{0}^{\infty}\frac{g(k,\Gamma)E(k,\Gamma)d_{k}E(k, \Gamma)dk}{1+\exp[E(k,\Gamma)/\zeta]},\] (13b) \[C_{p}(\zeta,\Gamma) =\int\limits_{0}^{\infty}g(k,\Gamma)E(k,\Gamma)d_{\zeta}F(k, \Gamma,\zeta)d_{k}E(k,\Gamma)dk,\] (13c) \[P_{p}(\zeta,\Gamma) =\int\limits_{0}^{\infty}\frac{g(k,\Gamma)d_{k}E(k,\Gamma)\sqrt{E ^{2}(k,\Gamma)-1}}{1+\exp[E(k,\Gamma)/\zeta]}dk, \tag{13d}\] where \(d_{k}E=v_{g}\) is the group speed of collective excitations, and \(n_{p}\), \(U_{p}\), \(C_{p}\) and \(P_{p}\), respectively, denote the effective plasmon electron number density, the internal plasmon energy, the heat capacity of collective excitations, and the plasmon quantum pressure. The parameter \(\zeta=T/T_{0}\) denotes the normalized temperature, with \(T_{0}=E_{0}/k_{B}\) being the electron rest temperature.
The plasmon band density of states (DoS) is \(g=(dN/dk)/|dE/dk|\), with \(N=4\pi k^{3}/3\) being the number of plasmon modes within the spherical wavenumber volume. The DoS of non-relativistic excitations follows \[g_{nr}(k,\Gamma)=\frac{4\pi k^{5}}{|k^{4}-2\Gamma|}, \tag{14}\] and for the relativistic excitations we have \[g_{r}(k,\Gamma)=\frac{4\pi k^{5}\sqrt{1+k^{2}}}{\left|k^{4}-2\Gamma\sqrt{1+k^{ 2}}\right|}. \tag{15}\] Figure 4(a) shows the DoS for non-relativistic plasmon excitations. A Van Hove singularity is present due to the coupling between the single-electron and collective electrostatic interactions. The DoS vanishes in the long wavelength limit and approaches the single-electron limit (dashed line) for large wavenumbers. The singularity moves to large wavenumbers with increase in the electron density. Also, the density of states increases with increase of the electron concentration in the gas. The non-relativistic plasmon occupation function is depicted in Fig. 4(b) for different electron number densities. The dashed curve shows the occupation for single electron states. It is seen that the occupation of states with long wavelength collective excitations is strongly limited in the electron gas, and the occupation probability in the non-relativistic gas decreases with increase in electron density. The internal energy of collective non-relativistic quasiparticle excitations is depicted in Fig. 4(c). The dashed line denotes the energy of the free electron gas, which is independent of the electron concentration. It is indeed shown that the internal energy of plasmon excitations depends on the electron concentration and vanishes at some critical electron density. This quantity increases with increase of the electron gas temperature. The latter feature is a unique behavior of collective excitations in the electron gas and leads to the plasmon black-out effect at high electron densities. The thermal capacity of collective excitations is shown in Fig. 4(d), indicating an increase in this quantity with increase of electron concentration. The free electron gas heat capacity is shown as the dashed curve. At very high electron density, the thermal capacity of a non-relativistic plasmon approaches that of the free electron gas. Figure 5(a) shows the DoS of the relativistic electron gas, with the dashed curve being the relativistic free electron gas DoS. It is seen that the Van Hove-like singularity is present in the relativistic case as well. The increase in DoS with increase of electron density is more profound in this case as compared to the non-relativistic DoS in Fig. 4(a). Figure 5(b) shows that the occupation of the relativistic gas also drops for long wavelength excitations but approaches the relativistic single electron curve (dashed curve) in the short wavelength limit. The increase of the electron density leads to a relatively sharp drop of the collective quasiparticle occupation probability. Figure 5(c) shows the internal plasmon energy of the relativistic electron gas, with the dashed line denoting the value for the relativistic free electron gas. It is seen that the internal energy in this case also drops to zero at a critical electron density. The internal energy increases with increase of the electron gas temperature. The thermal capacity of the relativistic interacting electron gas is shown in Fig. 5(d), with the dashed curve denoting the heat capacity of the relativistic non-interacting electron gas.
The heat capacity increases sharply with increase of temperature and approaches the free electron curve, which shows an oscillatory behavior. Note that for a given electron gas temperature, the plasmon heat capacity is lower for higher electron number density. Figure 5: (a) The relativistic plasmon density of states (DoS) for various electron concentrations. (b) The relativistic plasmon band occupation function for various electron concentrations. (c) The relativistic plasmon gas internal energy variations with electron number density for different electron gas temperatures. (d) The relativistic plasmon gas thermal capacity variations with electron gas temperature for different electron number densities. The dashed curves indicate the free electron value. The increase in the thickness of curves indicates an increase in the value of the parameter varied above each panel. The effective plasmon electron number density is given as \[n_{p}\left(T,n\right)=\int\limits_{0}^{\infty}\frac{4\pi k^{2}dk}{1+\exp[E(k,n)/ \zeta\left(T\right)]}/\int\limits_{0}^{\infty}\frac{4\pi k^{2}dk}{1+\exp[E_{e}( k)/\zeta\left(T\right)]}, \tag{16}\] where \(E\) is the plasmon energy dispersion relation and \(E_{e}=E(\Gamma=0)\) is the corresponding free electron dispersion. Note that the maximum value \(n_{p}=n_{0}\) corresponds to the case where all Fermi electrons are excited to the plasmon band. All the plasmon parameters may then be defined in terms of this effective plasmon electron number density. For instance, the plasmon energy is given as \(E_{p}(T,n_{0})=\hbar\sqrt{4\pi e^{2}n_{p}/m_{0}}\), and the plasmon wavelength is redefined as \(\lambda_{p}(T,n_{0})=2\pi\hbar/\sqrt{2m_{0}E_{p}(T,n_{0})}\). Note that in the current definitions all plasma parameters depend on the total equilibrium number density as well as the electron gas temperature. Figure 6(a) shows the variation of the relativistic plasmon number density in terms of the electron temperature for different total electron number densities. It is remarked that with increase of the temperature the plasmon density increases and reaches a saturated value corresponding to, so to speak, the complete plasmonization state. It is also remarked that an electron gas with lower electron density reaches this state faster. We show the plasmon electron EoS in Fig. 6(b). The dashed line denotes the complete plasmonization state. It is remarked that, for a given temperature, the electron gas is in the state of complete plasmonization up to a critical electron density, and with further increase of the electron density the plasmon density sharply drops to zero. We refer to the state of zero plasmon electron density as the plasmon black-out state; it takes place sooner at lower electron gas temperatures. Figure 6(c) depicts the variation of the plasmon wavelength, which is a characteristic length for many plasmonic and photo-plasmonic phenomena. It is seen that the plasmon wavelength decreases with increase of electron temperature and becomes constant after the critical temperature for a given electron number density is reached. Note that the higher the electron density, the lower the saturated plasmon wavelength. Figure 6(d) reveals an important feature of the electron gas. It is remarked that the plasmon wavelength has a minimum value at a particular electron number density for a given electron gas temperature. This minimum plasmon wavelength shifts to larger electron number density with increase of the temperature, and the corresponding wavelength becomes lower.
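As a numerical illustration of Eq. (16), the sketch below evaluates the plasmonization fraction \(n_{p}/n_{0}\) for the relativistic dispersion. It assumes that the kinetic branch \(K=E-1\) enters the occupation factor (measured from the Fermi level, as in the normalization above); the values of \(\Gamma\) and \(\zeta\) are placeholders chosen only to exhibit the black-out behavior, not figures from the paper.

```python
import numpy as np
from scipy.integrate import quad

def occ(E, zeta):
    """Plasmon band occupation 1/(1+exp(E/zeta)); E measured from the Fermi level."""
    return 1.0 / (1.0 + np.exp(np.clip(E / zeta, -700, 700)))

def n_plasmon_fraction(gamma, zeta):
    """Eq. (16): ratio of plasmon-band to free-electron occupied states."""
    E_pl = lambda k: np.sqrt(1 + k**2) - 1 + gamma / k**2   # kinetic plasmon dispersion
    E_fr = lambda k: np.sqrt(1 + k**2) - 1                  # free electron limit (Gamma = 0)
    num, _ = quad(lambda k: 4 * np.pi * k**2 * occ(E_pl(k), zeta), 0, np.inf, limit=200)
    den, _ = quad(lambda k: 4 * np.pi * k**2 * occ(E_fr(k), zeta), 0, np.inf, limit=200)
    return num / den

# Lowering zeta (temperature) at fixed Gamma drives the fraction toward zero (black-out)
for zeta in (0.01, 0.05, 0.2):
    print(zeta, n_plasmon_fraction(gamma=1e-3, zeta=zeta))
```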
Figure 7(a) shows the unstable collective excitation region due to the plasmon energy band gap for a relativistic electron gas. It is noted that up to the critical density of a white dwarf star the band gap remains constant and equal to the rest energy of an electron, but increases sharply for higher electron number densities. In Fig. 7(b), the plasmon conduction valley wavelength (normalized to the Compton wavelength) is plotted in terms of the electron number density. This wavelength decreases monotonically as the electron density increases. The plasmon black-out region in the temperature-density plane is depicted in Fig. 7(c). This is the region where collective excitations become ineffective and electrons fall below the Fermi energy level (the Fermi sea). The border of the plasmon black-out region indicates a quantum jumping feature which should be further studied and is beyond the scope of the current investigation. For the critical Compton electron density \(n_{c}\), the critical temperature below which the plasmon black-out sets in is approximately \(T_{c}\simeq 8.8938\times 10^{6}\)K. Figure 7(d) shows the variation of the plasmon energy with the electron number density for different electron gas temperatures. It is remarked that for any given temperature the plasmon energy maximizes at some electron number density, which sharply shifts to higher density values, and its peak value increases significantly with increase of the electron gas temperature. Figure 8(a) shows the variation of the plasmon heat capacity with electron density for various electron temperature values using the relativistic quantum electron gas model. Figure 8: (a) The relativistic heat capacity of collective excitations in terms of electron density for different temperature values. (b) The relativistic quantum pressure of collective excitations in terms of electron temperature for different electron number density values. (c) The relativistic quantum pressure collapse region in electron density-temperature regimes. (d) The relativistic quantum pressure collapse region in electron density-temperature regimes at a wide angle. The increase in the thickness of curves indicates an increase in the value of the parameter varied above each panel. It is seen that the plasmon heat capacity drops to zero at a given electron density. The latter feature is due to the plasmon black-out effect referred to earlier. However, the heat capacity cut-off takes place at relatively higher electron densities for higher electron temperatures. The plasmon pressure of the relativistic electron gas is shown in Fig. 8(b). The interesting feature in this plot is the plasmon pressure collapse at a given electron density for a given temperature. This is analogous to the free electron gas relativistic degeneracy pressure collapse at the Chandrasekhar mass limit, which is caused by the gravitational crunch [40]. The region of plasmon pressure collapse is shown in Fig. 8(c). It is remarkable that the collapse parameter in the current study coincides with that of the Chandrasekhar coordinates, that is, \(T=3.94323\times 10^{6}\)K at the Compton electron number density \(n_{0}=5.86478\times 10^{29}\)cm\({}^{-3}\), which is the typical electron number density of white dwarf stars. Note that the quantum jumping feature is also present in this plot. The plasmon pressure phase region is shown at a larger scale in Fig. 8(d).
It is revealed that the pressure collapse border undergoes a polytropic phase change at a very high critical electron number density and temperature. The nature of such critical behavior is not revealed in the present work and needs further investigation. Figure 9(a) shows the plasmon number density EoS using the non-relativistic model. It is remarked that a similar feature as in the relativistic electron gas is present, in which the plasmon density cut-off occurs beyond a critical density for a given electron temperature. For instance, the plasmon cut-off electron density at room temperature is approximately \(n_{0}=9.18765\times 10^{20}\)cm\({}^{-3}\), which is relatively lower than the typical metallic electron density. Therefore, most elemental metals reside in the plasmon black-out region. A few years ago, Glenzer et al. [96] reported observations of electron plasma oscillations in a solid density plasma, with a peak electron number density around \(n_{0}=3\times 10^{23}\)cm\({}^{-3}\) and equilibrium electron temperatures of 12eV (\(1.4\times 10^{5}\)K, which is relatively higher than the Fermi electron temperature for metals), by using collective X-ray scattering techniques. This shows that metals conduct plasmons at relatively higher temperatures. Our findings on the plasmon black-out and plasmon pressure collapse may have profound implications for inertial confinement fusion (ICF), the EoS of warm dense matter (WDM), and the evolution of stellar and other cosmological structures such as the mysterious dark matter. The variation of the plasmon wavelength in the low temperature regime is depicted in Fig. 9(b) using the non-relativistic model. The minimum plasmon wavelength at room temperature takes place at an electron density of \(n_{0}=1.9034\times 10^{19}\)cm\({}^{-3}\), corresponding to the value \(\lambda_{p}\simeq 3.91254\)nm, which resides in the ultraviolet radiation spectrum. The latter aspect of the electron gas is closely related to the surface plasmon resonance effect. The plasmon black-out and pressure collapse regions are shown respectively in Figs. 9(c) and 9(d) for the non-relativistic quantum electron gas model. The quantum jumping feature is pronounced and does not depend on relativistic considerations. At room temperature, the plasmon black-out takes place at the critical density \(n_{0}=9.18765\times 10^{20}\)cm\({}^{-3}\). On the other hand, for the metallic electron density \(n_{0}\simeq 10^{22}\)cm\({}^{-3}\), the plasmon black-out takes place below the critical temperature of \(T\simeq 3110\)K. Figure 9: (a) The variation of non-relativistic plasmon number density as a function of total electron number density for various low temperature values. (b) The variations of non-relativistic plasmon wavelength with electron concentration for different values of electron temperature. (c) The non-relativistic plasmon black-out region in the low density-temperature regime. (d) The non-relativistic plasmon pressure collapse region in the low density-temperature regime. The increase in the thickness of curves indicates an increase in the value of the parameter varied above each panel. The non-relativistic plasmon pressure collapse for an electron number density typical of metals, \(n_{0}\simeq 10^{22}\)cm\({}^{-3}\), occurs below the temperature \(T\simeq 628.251\)K. Note that the degeneracy pressure is quite different from the plasmon pressure and acts effectively under very strong external forces such as gravity in extreme environments.
In Fig. 10 we compare the predictions of the non-relativistic and relativistic models. The dashed/solid curves correspond to the non-relativistic/relativistic model. Figure 10: (a) Comparison of plasmon EoS curves for relativistic (solid curve) and non-relativistic (dashed curve) models for different electron temperatures. (b) Comparison of plasmon wavelengths for relativistic (solid curve) and non-relativistic (dashed curve) models for different electron temperatures. (c) Comparison of plasmon black-out regions for relativistic (solid border) and non-relativistic (dashed border) models. (d) Comparison of plasmon pressure collapse regions for relativistic (solid border) and non-relativistic (dashed border) models. The increase in the thickness of curves indicates an increase in the value of the parameter varied above each panel. Figure 10(a) shows the EoS for two different electron temperatures. For the lower temperature \(T=10^{8}\)K the two models do not show a significant difference, whereas for \(T=10^{9}\)K there is a slight difference. The plasmon wavelength simulations using both models predict almost the same result, as shown in Fig. 10(b). It is remarked from Fig. 10(c) that the non-relativistic model of the plasmon black-out region only starts to deviate from the relativistic one at a density-temperature range much above that of white dwarf stars. However, for much higher electron densities, the non-relativistic plasmon black-out region is larger than that of the relativistic model. Figures 10(c) and 10(d) suggest that the non-relativistic model predicts relatively accurate results for the plasmon black-out and pressure collapse up to the temperature-density regime of white dwarf stars. ## IV Electron-positron energy band structure The relativistic quantum model can be extended to include the positron species. We then have the following normalized coupled linear system \[i\hbar\frac{d\psi(t)}{dt}=\varepsilon\psi(t), \tag{17a}\] \[\left(\sqrt{1-\Delta}\right)\Psi_{e}(\mathbf{r})+\Phi(\mathbf{r} )+E\Psi_{e}(\mathbf{r})=0,\] (17b) \[\left(\sqrt{1-\Delta}\right)\Psi_{p}(\mathbf{r})-\Phi(\mathbf{r} )+(E+2)\Psi_{p}(\mathbf{r})=0,\] (17c) \[\Delta\Phi(\mathbf{r})-\Gamma\Psi_{e}(\mathbf{r})+\sigma\Gamma \Psi_{p}(\mathbf{r})=0, \tag{17d}\] where \(\sigma=n_{p0}/n_{e0}\) is the fractional positron-to-electron density ratio, which depends on the pair production rate and the ambient plasma temperature. The system (17) admits the following dispersion relation \[\mathrm{E}_{-} =\frac{2k^{2}\left(\sqrt{1+k^{2}}-1\right)+\left(1+\sigma\right) \Gamma-\sqrt{4k^{4}+4k^{2}\left(1-\sigma\right)\Gamma+\left(1+\sigma\right)^{2 }\Gamma^{2}}}{2k^{2}}, \tag{18a}\] \[\mathrm{E}_{+} =\frac{2k^{2}\left(\sqrt{1+k^{2}}-1\right)+\left(1+\sigma\right) \Gamma+\sqrt{4k^{4}+4k^{2}\left(1-\sigma\right)\Gamma+\left(1+\sigma\right)^{2 }\Gamma^{2}}}{2k^{2}}. \tag{18b}\] Unless the exact dependence of the parameter \(\sigma\) on the temperature is known, the current model cannot predict realistic results. Figure 11 shows the band structure of relativistic quantum electron-positron pair plasmon excitations in terms of different parameters. The dashed curve indicates a single branch in the absence of collective interactions. It is seen in Fig. 11(a) that another dispersion branch appears which falls below the Fermi level, due to the presence of the positron species.
The positron band is analogous to the hole band in the semiconductor band structure, and the negative energies of the positron band indicate the presence of pair production far below the Fermi sea. There are stable pair plasmon excitations below the Fermi level due to the presence of positrons. However, these excitations are strongly damped at higher wavenumber values due to the collisionless Landau effect. Figure 11(b) shows the band structure profile at elevated electron density. It is remarked that the positron band departs from the bottom of the Fermi sea, leading to increased positron energies. Moreover, Fig. 11(c) reveals that an increase in the fractional positron density strongly affects the positron energies in the large wavelength limit, increasing the energy of collective positron oscillations. Finally, Fig. 11(d) shows that a further increase of the electron density leads to elevation of both the negative and positive dispersion branch energies, shifting the positron plasmon conduction valley to higher wavenumber values. ## V Conclusion We studied the effect of the collective energy band structure on various thermodynamic parameters in the framework of non-relativistic and relativistic quantum models. The effective Schrodinger-Poisson and square-root Klein-Gordon-Poisson models are Fourier analyzed, and the energy band structure describing the collective oscillations in an electron gas of arbitrary degeneracy is obtained. Both models predict the novel features of plasmon black-out and pressure collapse due to the plasmon electron density cut-off at large densities and low temperatures. The latter is because the probability for electrons to be excited beyond the plasmon band gap is reduced in high density and low temperature regimes. Using the same model we studied the influence of the positron species on the energy band structure, which shows the appearance of a low-lying distinct positron band below the Fermi electron sea. The current findings may have direct consequences for the inertial confinement scheme and the equation of state (EoS) of warm dense matter (WDM). ## VI Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
2305.00969
CryCeleb: A Speaker Verification Dataset Based on Infant Cry Sounds
This paper describes the Ubenwa CryCeleb dataset - a labeled collection of infant cries - and the accompanying CryCeleb 2023 task, which is a public speaker verification challenge based on cry sounds. We released more than 6 hours of manually segmented cry sounds from 786 newborns for academic use, aiming to encourage research in infant cry analysis. The inaugural public competition attracted 59 participants, 11 of whom improved the baseline performance. The top-performing system achieved a significant improvement scoring 25.8% equal error rate, which is still far from the performance of state-of-the-art adult speaker verification systems. Therefore, we believe there is room for further research on this dataset, potentially extending beyond the verification task.
David Budaghyan, Charles C. Onu, Arsenii Gorin, Cem Subakan, Doina Precup
2023-05-01T17:56:32Z
http://arxiv.org/abs/2305.00969v7
# CryCeleb: A Speaker Verification Dataset Based on Infant Cry Sounds ###### Abstract This paper describes the Ubenwa CryCeleb dataset - a labeled collection of infant cries - and the accompanying CryCeleb 2023 task - a public speaker verification challenge based on infant cry sounds. We release for academic usage more than 6 hours of manually segmented cry sounds from 786 newborns to encourage research in infant cry analysis. Infant Cry Analysis, Speaker Verification ## I Introduction Clinical research on the analysis of infant cries goes back to the 1960s [1]. These days, machine learning techniques are demonstrating promising results in cry-based detection of the reasons for crying (hunger, pain, etc.) and, more importantly, health pathologies such as neurological injury [2, 3, 4]. In many practical applications, a cry analysis system should be able to accurately identify the infant associated with the cry, e.g., monitoring solutions in hospitals or households with multiple babies. Training such a model requires infant cry data with multiple recordings per infant, labeled with infant identities. Given the complexity of data collection from newborns, such resources are extremely scarce. In this work, we present the Ubenwa CryCeleb dataset, a first-of-its-kind collection of infant cries labeled with individual infant identities. Comprising 786 infants and 6.5 hours of cry expirations, the dataset aims to foster research in cry verification and, more broadly, to advance the field of infant cry analysis. The dataset is available online1 under the Creative Commons Attribution NonCommercial NoDerivatives 4.0 International license. Footnote 1: [https://huggingface.co/datasets/Ubenwa/CryCeleb2023](https://huggingface.co/datasets/Ubenwa/CryCeleb2023) ## II Data Preparation The original recordings were made either within an hour of birth or upon discharge from the hospital (typically within 24 hours of birth, up to a few days). The cries were collected by medical personnel using the Ubenwa study application [5] on an Android mobile phone provided for this task. The samples were collected between 2020 and 2022. Each recording was then manually segmented by a human annotator into 'expiration', 'inspiration' or 'no cry present' segments. The CryCeleb dataset consists solely of the expiration segments, which we refer to as cry sounds. Inspirations (breaths) are excluded as they are generally too short, hard to detect, and less likely to convey information about the vocal tract. We also manually removed any cry sounds containing personally identifiable information (PII), such as background human speech. ## III Metadata and Descriptive Statistics This section summarizes the information about the audio files included in the dataset and the associated metadata. Table I provides general statistics of the dataset. The audio folder is accompanied by a metadata.csv file with fields summarized in Table II. The 26093 rows of this file provide complete information about the cry audio files. Figures 1 and 2 provide some statistics about the cry sounds and infants included in the database. Most of the cry recordings are quite short (0.5 - 1.0 seconds), with only about 0.3% of cry sounds longer than 4 seconds. At the same time, there are multiple cry sounds corresponding to each infant. It should be noted, however, that cry sounds (expirations) collected within one recording period tend to have similar acoustic characteristics.
\begin{table} \begin{tabular}{c|c} \hline Number of cry sounds (expirations) & 26093 \\ Number of original recordings & 1372 \\ Number of infants & 786 \\ Total cry time (minutes) & 391 \\ \hline \end{tabular} \end{table} TABLE I: Summary statistics. \begin{table} \begin{tabular}{c|c} \hline **Field** & **Description** \\ \hline baby\_id & Unique infant identifier. \\ period & Time of recording \\ & (’B’ for birth or ’D’ for discharge). \\ duration & Length of cry sound in seconds. \\ split & Split for the CryCeleb2023 challenge. \\ chronological\_index & Chronological ordering of cry sounds \\ & in the original recording. \\ file\_name & Path to cry sound. \\ file\_id & Cry sound unique identifier. \\ \hline \end{tabular} \end{table} TABLE II: Metadata fields. ## IV CryCeleb 2023 Challenge CryCeleb 2023 is a machine learning competition where contestants aim to develop a system capable of determining whether two distinct cry recordings originate from the same infant (see Figure 3). The verification system should be capable of analyzing any pair of cries and assigning a similarity score to determine if the two sounds belong to the same baby, with an ideal system always assigning higher scores to positive pairs (two cry sounds from the same infant) than to negative pairs. The advantage of developing this verification system, as opposed to a classifier for cries, lies in its open-set nature, meaning it's not confined to the infants encountered during training. To enable decision-making, a threshold can be applied to the scores generated by the system. If a score is greater than the threshold, it will indicate that the system accepts the two cries as belonging to the same infant. Submissions are ranked using the Equal Error Rate (EER). The EER is the point on the ROC curve at which the false acceptance rate (FAR) equals the false rejection rate (FRR). Visually, it's where the ROC curve intersects the \(y{=}1{-}x\) diagonal. Given a list of scores and the corresponding true labels, one finds the EER by sliding the threshold across the sorted scores until the FAR equals the FRR. The lower the EER, the better. For the CryCeleb2023 challenge, we have partitioned all infants into three sets: train (586 infants), dev (40 infants), and test (160 infants). All infants in the dev and test sets have recordings from both the birth (B) and discharge (D) periods. This is not true for all infants in the train set. dev_pairs.csv is the cross-product of the birth and discharge recordings of the dev infants, meaning all possible combinations of birth and discharge recordings are paired together. Participants can validate their verification systems using these labeled pairs. Similarly, test_pairs.csv is the \(B\times D\) cross product for infants in the test set. These infants are anonymized and the pairs will be used for evaluating submissions. Each verification pair in both dev and test sets comprises one birth and one discharge recording. For instance, dev pair XB_YD represents infant X's birth recording and Y's discharge recording. To calculate the score for this pair, participants must use cry sounds from the folders audio/dev/X/B and audio/dev/Y/D. No other cry sounds are allowed for calculating the score for this pair. Pairing different recordings rather than cry sounds from the same recording is more representative of real-world applications for such a verification system, which may involve verifying an infant over multiple days. 
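As a concrete illustration of the evaluation metric described above, the snippet below computes the EER from pair scores and labels by sweeping thresholds along the ROC curve; this is an illustrative implementation, not the official challenge scorer.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: point where the false acceptance rate (FAR, i.e. FPR)
    equals the false rejection rate (FRR = 1 - TPR)."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    frr = 1 - tpr
    idx = np.nanargmin(np.abs(frr - fpr))  # closest crossing point
    return (fpr[idx] + frr[idx]) / 2, thresholds[idx]

# Toy usage: 1 = same infant (positive pair), 0 = different infants
labels = np.array([1, 1, 0, 0, 1, 0, 0, 0])
scores = np.array([0.9, 0.6, 0.55, 0.4, 0.7, 0.3, 0.65, 0.2])
eer, thr = equal_error_rate(labels, scores)
print(f"EER = {eer:.3f} at threshold {thr:.2f}")
```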
Additionally, we observed that verifying separate segments from the same recording is easier, possibly because an infant exhibits consistent traits within a single crying "episode" but not across different episodes. This indicates that factors beyond the infant's identity can also influence the cry's sound. It is important to emphasize that the dev and test infants were not chosen randomly. Instead, they were randomly sampled from the top 200 infants with the highest cosine similarities between their birth and discharge embeddings, as calculated using the initial non-fine-tuned baseline model described in Section V and the first row of Table V. We opted for these relatively easier pairs due to the difficulty of recognizing an infant in an unseen recording within this dataset. By selecting easier-to-verify pairs, our aim is to add variance to the leaderboard and make the challenge more engaging. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **Split** & **\# of +ive pairs** & **\# of -ive pairs** & **Total \# of pairs** \\ \hline dev & 40 & 1540 & 1580 \\ test & 160 & 25440 & 25600 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Number of pairs in dev and test. Fig. 1: Histogram of cry sound durations. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{c|}{**Split**} \\ **Recording period(s)** & _train_ & _dev_ & _test_ \\ \hline Both birth and discharge & 348 & 40 & 160 \\ \hline Only birth & 183 & 0 & 0 \\ \hline Only discharge & 55 & 0 & 0 \\ \cline{2-4} & 586 & 40 & 160 \\ \hline \end{tabular} \end{table} TABLE III: Number of infants per recording period(s) and split. Fig. 3: Cry verification system. Fig. 2: Number of infants per number of cry sounds. ## V Baseline We consider two baselines based on ECAPA-TDNN [6]. Table V summarizes the performance of the two baselines on the development and test sets, and the following section provides more details about these systems. First, the "naive" baseline is the model pre-trained on a large adult speaker verification corpus - VoxCeleb [7] - without any adaptation on cry data. We refer to the open-source SpeechBrain implementation [8] for further details, with the model available on Hugging Face [9]. This model yields 37.92% and 38.12% EER on the dev and test pairs, respectively. Second, the VoxCeleb model is fine-tuned on CryCeleb's training data, specifically focusing on the 348 infants with both birth and discharge recordings (Table III, top left). By limiting the dataset to this subset, we can train the model on all birth recordings while reserving the discharge recordings for validation. This approach enables us to assess the model's ability to generalize patterns learned from birth recordings to discharge recordings, in some sense simulating the verification setting. Alternatively, we could have fine-tuned the model on both birth and discharge recordings from the 348 infants, or even expanded it to all recordings from the 586 train infants. The former option introduces more data but removes the ability to validate the classification performance. The latter allows for even more data; however, it also increases the number of classes, which could hinder the model's learning. The model is trained for 950 epochs, using 3-5 second random chunks from concatenated cry sounds at each iteration. The best 5 epochs, determined by validation accuracy, are saved, and these 5 checkpoints are then evaluated on the dev pairs using the EER (verification task). The checkpoint with the lowest EER is chosen as our final fine-tuned model.
The fine-tuned model achieves an EER of 22.50% on the dev set and 29.37% on the test set. It is open-sourced\({}^{2}\) along with code\({}^{3}\) that can be used to reproduce these results.

Footnote 2: [https://huggingface.co/Ubenwa/ecapa-voxceleb-ft-cryceleb](https://huggingface.co/Ubenwa/ecapa-voxceleb-ft-cryceleb)

Footnote 3: [https://github.com/Ubenwa/cryceleb2023](https://github.com/Ubenwa/cryceleb2023)

Figures 4 and 5 present histograms of scores for positive pairs (orange) and negative pairs (blue), with the y-axis normalized separately for each color. The red vertical line indicates the threshold at which the EER is achieved. First, we observe that fine-tuning the ECAPA model leads to improved verification performance, as evidenced by the lower EER and more visually distinct distributions. Second, we notice that the scores for negative pairs in the tuned model form a bell-shaped distribution centered around zero. This is intuitively more reasonable compared to the naive ECAPA model, where the most common score for a negative pair is 0.7.
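A pair can be scored along the following lines; this sketch assumes the SpeechBrain `EncoderClassifier` interface for the released checkpoint, and the file paths are placeholders (in practice one would pool embeddings over every cry sound in the pair's two folders):

```python
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

# Load the fine-tuned ECAPA-TDNN encoder released with the challenge
# (model id as published on Hugging Face; see footnote 2).
encoder = EncoderClassifier.from_hparams(source="Ubenwa/ecapa-voxceleb-ft-cryceleb")

def embed(path):
    """Return an utterance-level ECAPA embedding for one cry sound."""
    waveform, _ = torchaudio.load(path)
    return encoder.encode_batch(waveform).squeeze()

# Score one dev pair XB_YD: a birth sound of infant X vs. a discharge sound of Y.
emb_b = embed("audio/dev/X/B/cry.wav")   # hypothetical file names
emb_d = embed("audio/dev/Y/D/cry.wav")
score = torch.nn.functional.cosine_similarity(emb_b, emb_d, dim=0)
print(float(score))
```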
2305.09795
The Value of Competing Energy Storage in Decarbonized Power Systems
As the world seeks to transition to a sustainable energy future, energy storage technologies are increasingly recognized as critical enablers. However, the macro-energy system assessment of energy storage has often focused on isolated storage technologies and neglected competition between them, thus leaving open which energy storage options to prioritise. The article applies a systematic deployment analysis method that enables system-value evaluation in perfectly competitive markets and demonstrates its application to 20 different energy storage technologies across 40 distinct scenarios for a representative future power system in Africa. Here, each storage solution is explored alone and in competition with others, examining specific total system costs, deployment configuration, and cost synergies between the storage technologies. The results demonstrate the significant benefits of optimizing energy storage with competition compared to without (10% cost savings), and highlight the relevance of several energy storage technologies in different scenarios. This work provides insights into the role of energy storage in decarbonizing power systems and informs future research and policy decisions. There is no one-size-fits-all energy storage, but rather an ideal combination of multiple energy storage options designed and operated in symbiosis.
Maximilian Parzen, Davide Fioriti, Aristides Kiprakis
2023-05-16T20:35:54Z
http://arxiv.org/abs/2305.09795v2
# The Value of Competing Energy Storage in Decarbonized Power Systems

###### Abstract

As the world seeks to transition to a sustainable energy future, energy storage technologies are increasingly recognized as critical enablers. However, the macro-energy system assessment of energy storage has often focused on isolated storage technologies and neglected competition between them, thus leaving open which energy storage options to prioritise. The article applies a systematic deployment analysis method that enables system-value evaluation in perfectly competitive markets and demonstrates its application to 20 different energy storage technologies across 40 distinct scenarios for a representative future power system in Africa. Here, each storage solution is explored alone and in competition with others, examining specific total system costs, deployment configuration, and cost synergies between the storage technologies. The results demonstrate the significant benefits of optimizing energy storage with competition compared to without (10% cost savings), and highlight the relevance of several energy storage technologies in different scenarios. This work provides insights into the role of energy storage in decarbonizing power systems and informs future research and policy decisions. There is no one-size-fits-all energy storage, but rather an ideal combination of multiple energy storage options that are designed and operated in symbiosis.

keywords: Energy storage, Energy modelling, Technology evaluation, Variable renewable energy

Footnote †: journal: arXiv. Not peer-reviewed

Nomenclature: BAU, Business as Usual; EP, Energy to Power; LCOS, Levelized Cost of Storage.

## 1 Introduction

As the world looks to decarbonise its power systems in order to mitigate the impacts of climate change, power modeling scenarios have made it increasingly clear that energy storage will play a critical role in the transition to a more sustainable future [1; 2; 3; 4; 5]. The rise of renewable energy sources such as solar and wind power has presented a significant challenge for the electricity grid, which must balance the variable and intermittent nature of these sources with the electricity demand. Energy storage technologies provide a solution to this challenge by allowing excess renewable energy to be stored and used when needed, effectively decoupling the generation and consumption of electricity while adding system-value. Here, as in [6], the system-value of energy storage refers to the broader economic benefits that storage can provide to the power system beyond its immediate application. These benefits include the displacement of firm generation and network infrastructure, greater renewable energy utilisation, and the reduction of transmission and distribution losses, which often reduces the reliance on fossil fuels and lowers carbon emissions. In this context, energy storage, with its system-value provision, is a key enabler of the transition to a cleaner, more sustainable energy system worldwide.

According to [7], well-known Levelized Cost of Storage (LCOS) methods, as applied in [8; 9; 10; 11], are less suitable for assessing the competitiveness or suitability of energy storage in larger power systems than system-value assessment methods, as applied in [1; 4; 12; 13; 14]. However, all these system-value assessments explore isolated storage technologies and do not consider any competition with other storage technologies.
For instance, the inspiring work in [1] assessed a single generic energy storage in two representative decarbonised power systems. Through a design-space exploration of the generic storage, they identified what energy capacity costs are required to replace firm generation, and which storage performance parameters and sizing characteristics contribute most to the system-value. However, it is also known that adding more technology options to models often results in synergies. These synergies significantly reduce the total system costs, which are defined as the sum of all operational and investment costs, raising questions about the validity of the previously found results of single energy storage scenarios [7]. Expanding on this knowledge, [7] introduces and demonstrates a systematic deployment analysis method that enables system-value evaluation in perfectly competitive markets, but demonstrates the method while ignoring uncertainty and considering only a limited number of storage technologies, namely hydrogen and lithium-ion energy storage.

In this article, we assess multiple energy storage technologies with the newly suggested systematic deployment analysis, also addressing uncertainty. In total, we assess the system-value of 20 energy storage technologies (see Figure 2), with and without competition, across 40 distinct scenarios for a representative future power system in Africa. We use a global-coverage open energy system model suitable for investment and operational co-optimization, including grid infrastructure and detailed operating decisions and constraints [15]. Further, we apply this model to its already validated Nigerian power system [15], configure it with high temporal resolution (8760h) and a spatially interconnected 10-node system to keep some of the underlying grid and environmental information within the simplification. Within this model, we integrate for the first time 20 storage technologies, whose data are collected and expanded from the Pacific Northwest National Laboratory (PNNL) (see Table 2 and Methods 6.1). We explore two unanswered questions: how significant is the system benefit from optimizing energy storage with competition compared to without, and which energy storage technologies are optimization-relevant under uncertainty.

To answer the research questions, we focused on two scenario trees, as illustrated in Figure 1. The first scenario tree, defined as the 'single storage' scenario, involves optimizing each of the energy storage solutions in isolation, assuming business-as-usual costs. This scenario set includes 20 optimization runs and excludes any competition between the different storage solutions. By doing so, it is possible to investigate the specific total system costs (€/MWh) and deployment configuration (GW for charger/discharger or GWh for store). The second scenario tree also involves 20 optimization runs, but in this case, all energy storage solutions can be optimized within each scenario. This approach allows for perfect competition and cost synergies between the different technologies. To facilitate this, this article uses the 'lonely optimist' approach coined here, where one storage option has optimistic capital costs while the others have pessimistic assumptions. This extreme parameterization enables us to suggest which energy storage solutions provide system-value and which can potentially be neglected, at least within the modelled power system conditions.
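To make the two scenario trees concrete, the sketch below enumerates the 40 runs as plain configuration dictionaries. The technology abbreviations follow Figure 2, and the dictionary layout is illustrative rather than the actual model configuration:

```python
STORAGE_TECHS = ["h2cavern", "sand", "gravity", "gravitywa", "phes",
                 "pair", "lfp", "nmc", "lead", "vanadium"]  # excerpt of the 20

def single_storage_scenarios(techs):
    """One run per technology: only that storage is available,
    at business-as-usual (BAU) capital costs."""
    return [{"available": [t], "cost_factor": {t: 1.0}} for t in techs]

def lonely_optimist_scenarios(techs):
    """One run per technology: all storage options compete, the 'lonely
    optimist' gets -30% capital costs, every other option +30%."""
    runs = []
    for optimist in techs:
        factors = {t: 0.7 if t == optimist else 1.3 for t in techs}
        runs.append({"available": list(techs), "cost_factor": factors})
    return runs

# Both trees contain one optimization run per storage technology.
assert len(single_storage_scenarios(STORAGE_TECHS)) == len(STORAGE_TECHS)
assert len(lonely_optimist_scenarios(STORAGE_TECHS)) == len(STORAGE_TECHS)
```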
By applying the new systematic deployment analysis from [7], this manuscript suggests that optimizing scenarios with multiple energy storage technologies, compared to scenarios with a single energy storage technology, can lead to significant system benefits of \(3-29\%\). Considering the extreme parameterization, it was also found that 9 out of 20 storage technologies are optimization-relevant, often providing system benefits due to synergies in storage design and operation. The often praised lithium energy storage [8; 9] was only found highly competitive in a single scenario with optimistic cost assumptions (€112/kWh and €24/kW). In contrast, the often studied hydrogen storage was indeed consistently optimization-relevant, as was the sand-based thermal energy storage. However, other technologies added competitive pressure, including gravity-brick, underground and above-ground water-based gravity and pump-heat energy storage, compressed-air and nickel-based electricity storage. Therefore, system-value technology assessments with multiple energy storage technologies can be considered an advanced conceptual approach that could find more application in research and industry than approaches that ignore competition through isolated technology considerations.

## 2 Modelling single vs multiple energy storage

The 'system-value' of technologies can be defined as their market potential resulting from possible and probable least-cost scenarios in capacity expansion models (see Section 6). Figure 3 presents a range of optimised market potentials for various single-optimised energy storage technologies. Because only one storage is included in each optimization run, these scenarios represent a case that ignores any competition. It can be observed that the most and least optimised charger technologies range between \(25-54GW\) for the gravity-brick (gravity) and compressed air energy storage (pair), the most and least optimised dischargers range between \(26-54GW\) for the pump-heat (phes) and compressed air energy storage (pair), and the most and least optimised stores range between \(0.18-1.46\)\(TWh\) for the lead battery (lead) and hydrogen cavern (h2cavern) storage systems, respectively. These results are unrealistic, as there are always multiple options available; nevertheless, they reveal that every energy storage technology can serve the energy system or, in other words, contains system-value.

Further, observing Figure 3, one can see that the least-cost system model optimizes various ratios between charger, store and discharger depending on the technology. As in most models [16], storage technologies are constrained such that perfect balancing is guaranteed, ensuring no mismatch between electricity supply and demand. However, there is a general trade-off between storage, grid, and supply expansion, allowing for significantly different storage designs. Theoretically, creating a power system without any storage or grid is feasible if renewables can be massively overbuilt and curtailed. The addition of energy storage to power systems allows for smoothing out mismatches in time, while grid infrastructure helps reduce mismatches in space and better exploit resource potentials [17; 18]. Since every storage technology has different capital costs and efficiencies in the component chain (charger, store, discharger), the design of storage technologies changes to exploit its role in the power system and achieve the minimum total system costs.
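As an illustration of the unconstrained charger-store-discharger design, the fragment below sketches how such a three-component storage is typically represented in PyPSA, the framework underlying the model used here (see Section 6.2). The numbers are the 2050 sand-storage assumptions from Table 2, converted to EUR/MW and EUR/MWh; the bus names are ours:

```python
import pypsa

n = pypsa.Network()
n.add("Bus", "elec")    # electricity bus
n.add("Bus", "medium")  # storage-medium bus (here: hot sand)

# Charger and discharger are independent, extendable Links, so the
# optimiser may size them asymmetrically (capital costs in EUR/MW).
n.add("Link", "sand charger", bus0="elec", bus1="medium",
      p_nom_extendable=True, efficiency=0.99, capital_cost=137e3)
n.add("Link", "sand discharger", bus0="medium", bus1="elec",
      p_nom_extendable=True, efficiency=0.53, capital_cost=548e3)

# The energy reservoir is an extendable Store (capital cost in EUR/MWh).
n.add("Store", "sand store", bus="medium",
      e_nom_extendable=True, capital_cost=5e3)
```

Design-constrained technologies such as batteries would instead tie charger and discharger to a single shared "bicharger" component, as described in Section 6.2.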
It is important to note that the scenarios shown in the figure are only one possible outcome of the least-cost power system optimization model, and there may be many other factors that could influence the market potential of a technology. One important factor, making scenarios not only possible but also more probable, is the competition between storage technologies, which is discussed next.

Figure 4 shows that power systems with perfect storage competition (lonely optimist scenario) are, on average, significantly cheaper than those without storage competition (single storage scenario). First and most apparent, the competitive scenarios are \(29\%\) cheaper than the single storage optimization scenarios. While the competitive scenario tree shows a few initial cost increases for some technologies, the cost gradient becomes relatively flat after the optimistic 'phes' scenario, with changes of less than \(0.1\%\). In contrast, one can observe continuous, significant cost increases for the single storage scenarios. Surprisingly, comparing both x-axes, which are sorted according to total system costs, it was found that the order of cost-optimal storage systems is identical. This identical order implies that the storage that leads to the lowest total system costs in the single-optimized storage scenarios is likely also the most valuable and important storage in terms of contributing to system benefits in the other scenario tree.

Second and most important, the power system benefits from the synergies provided by a perfectly competing storage market even under the worst cost assumptions. The total system cost of the most expensive storage scenario in the competitive situation is \(3\%\) (6.295/6.108) lower than that of the best storage scenario in the non-competitive situation. This is remarkable because the 'single storage' scenario assumes business-as-usual costs for the most favourable technology, while the most expensive 'lonely optimist' scenario considers optimistic costs for the least favourable storage technology (\(-30\%\) of BAU) and, simultaneously, pessimistic costs for all other storage technologies (\(+30\%\) of BAU).

Thirdly, comparing the 'gravitywa' storage from both scenario trees, one can find significant cost savings when considering perfectly competitive storage markets. When considering 'gravitywa' with Business as Usual (BAU) assumptions in both scenario trees, one can observe \(8\%\) cost savings, or €500 million in absolute terms. Interestingly, though less of an apples-to-apples comparison, setting the 'gravitywa' technology as optimistic, as given in the original lonely optimist scenario, the savings add up to \(13\%\).

These results suggest that studies that assess the system-value with single optimized energy storage, such as [1; 4; 12], miss significant benefits from the synergies created by co-optimizing multiple energy storage technologies. It is also likely that power systems with two or three modelled energy storage technologies, such as [19; 20], could benefit from system cost reductions when including more of the technologies that were found highly optimization-relevant, for instance, the gravity or sand-based thermal energy storage. When assuming power system conditions similar to those in this study, the system cost savings can be up to \(5-13\%\) for fully decarbonized power systems.

Figure 1: Illustration of the scenario concept in this study.

Figure 2: Illustration of energy storage technologies with abbreviations used in this study.
## 3 Assessing technology importance

The results presented in Figure 5 illustrate the market potential of 20 lonely optimist scenario optimizations with the 2050 techno-economic assumptions explained in Section 6. Each scenario given on the x-axis requires a single optimization run, with all technologies listed on the y-axis treated as variables. The colour gradient, normalised to the maximum value across all runs, indicates the extent to which the technologies are deployed in each optimization result. For example, the concrete lonely optimist scenario assumes optimistic capital costs for concrete-based energy storage, while the others possess pessimistic values. The following paragraphs discuss the frequency and magnitude with which storage technologies are deployed in the optimization scenarios. These results provide a more comprehensive understanding of the relevance and importance of each technology in the least-cost power system.

Beginning with the frequency with which technologies are optimized, it is observed that 9 out of 20 technologies are optimized to a significant degree, implying that not all technologies are relevant for least-cost power systems. These optimization-irrelevant technologies, here defined as being optimized below \(1\%\) of the maximally optimized technology, include concrete, lead, liquid-air, vanadium, both salt-based, pumped-hydro, and any of the four zinc-based energy storage technologies. They can likely be excluded without consequence from further parameter studies such as global sensitivity analysis. Conversely, sand-based thermal and hydrogen cavern-based energy storage consistently provide system benefits with high certainty across all 20 scenarios examined. Further, several technologies, such as gravity-brick, underground water-based gravity, lithium ferrous phosphate (LFP), lithium nickel manganese cobalt (NMC), and pump-heat energy storage, could only compete under optimistic capital cost assumptions while all other technologies were simultaneously attributed pessimistic values (see scenario design in Section 6.3). On the other hand, compressed-air and above-ground water-based gravity technologies can generally compete unless specific technologies are assumed to have optimistic capital cost assumptions. For instance, the above-ground gravity storage is not optimized in a power system where gravity-brick storage or lithium LFP batteries possess optimistic capital cost assumptions. Similarly, compressed-air energy storage is not optimized when hydrogen cavern or sand energy storage is assumed to be optimistic.

While analysing the frequency of energy storage optimization in various extreme parameterised scenarios is useful in determining its relevance, evaluating the magnitude is also crucial in understanding a technology's significance. Analysing each scenario's optimised magnitude, one can observe a wide range of deployed amounts per scenario and technology. In particular, gravity and sand-based energy storage have, on average, the highest deployed amounts, indicating their potentially essential role in the power system.

Figure 3: Optimization results for single energy storage scenarios. The y-axis, x-axis, and marker size show the deployment required for a least-cost 2050 power system in Nigeria. The colour indicates the total system costs.
Exploring some extremes of the scenario tree, the most optimized charger is the thermal electrode charger of the sand energy storage with \(24GW\); across scenarios, the average and minimum deployment relative to this value are \(79\%\) and \(37\%\). Similarly, the compressed air charger has minimum, average, and maximum values of \(0\%\), \(1\%\) and \(5\%\), while the hydrogen cavern charger's values are \(1\%\), \(15\%\) and \(27\%\), respectively. The most optimized discharger is the lithium LFP battery inverter with \(25GW\). Note that the lithium battery was not the maximum charger component due to its round-trip efficiency of \(0.92\), which reduces the optimized amount from \(25GW\) to \(23GW\) such that charger and discharger are of equivalent size. Here, the relative minimum, average, and maximum values for the compressed air discharger are \(0\%\), \(2\%\) and \(7\%\), and for the hydrogen cavern-based fuel cell \(1\%\), \(7\%\) and \(14\%\), respectively. The most optimized store is the thermal store of the sand storage with \(877GWh\), with minimum and average relative values of \(16\%\) and \(27\%\). Similarly, the relative minimum, average, and maximum values for the compressed air store are \(0\%\), \(16\%\) and \(65\%\), and for the hydrogen cavern store \(1\%\), \(7\%\) and \(67\%\), respectively. As a result, while sand and gravity storage play an important role in the power system, the other relevant technologies also sometimes contribute significantly to the least-cost power systems.

Comparing the results to other studies, one can confirm the observation from [13] that pumped-heat energy storage can provide system benefits. However, unlike their study, liquid-air energy storage is likely not optimization-relevant even when optimizing multiple energy storage technologies. Interestingly, this article discovers that lithium energy storage might not be optimization-relevant in many cases due to competition from other technologies, which challenges its previously overstated role in power system decarbonisation for energy-to-power ratios above one hour [8; 21]. Finally, the results reveal that gravity and sand thermal energy storage are promising technologies that warrant further investigation and inclusion in system planning.

Figure 4: Total system cost for energy storage scenarios with (left) and without (right) competition. Scenarios are sorted according to the total system costs.

Figure 5: Optimized charger, store, and discharger capacity for the lonely optimist scenario in Nigeria. All technologies on the y-axis are available for the optimization scenario in each run. One column refers to one scenario run. The x-axis shows the lonely optimist scenarios, which assume optimistic capital cost assumptions (-30%) for the mentioned technology while the other technologies on the y-axis are assumed to have pessimistic capital cost assumptions (+30%).

## 4 Technology design variation

By analysing the sizing ratios between store and discharger components, also known as the energy-to-power (EP) ratio, it was found that energy storage technologies span vast sizing designs. One can observe EP ratios for the nine relevant storage technologies between \(4-7h\) for any gravity storage, \(6h\) for lithium LFP, \(8-21h\) for hydrogen cavern, \(9-36h\) for sand-based, \(3-19\) days for compressed-air and \(36\) days for pumped-heat energy storage.
The results imply that, for the given power system, the most critical storage categories are peak shifters (roughly \(<8h\)), storage that can balance mismatches over one or multiple nights (roughly \(9-36h\)), and energy storage that balances seasonal effects (roughly \(7-36\) days). Different to [18], the results suggest different sizing patterns for hydrogen energy storage: compared to EP ratios of roughly \(14-21\) days, this article finds that hydrogen storage is mostly sized to balance mismatches over one or two nights. The weekly-storage role was taken by the compressed-air and pumped-heat storage, which were generally not primarily optimized, reflecting that synoptic or seasonal mismatches are not as significant as predicted for the modelled power system close to the equator.

While there is an extensive design space for energy storage [1; 14], the resulting charger-to-discharger ratios suggest a general tendency for the power system to benefit from oversizing the discharger components. Figure 2 shows that some technologies are sizing-constrained because their charger and discharger are the same component. Moreover, for sizing-unconstrained technologies such as sand-based and hydrogen cavern energy storage, the results suggest charger-to-discharger ratios between \(0.28-0.61\). Only in the single case in which the pumped-heat energy storage is significantly optimized is this sizing tendency reversed, such that the pumped-heat storage is sized with a charger-to-discharger ratio of \(2.33\), meaning that oversizing the charger is beneficial to the power system. Similar results were found in [7] for a European transmission system optimization that, where possible, always oversizes the discharger component by, on average, a factor of two or three.

## 5 Discussion

The importance of system-value analysis for energy storage is increasing, as it allows decision-makers to evaluate the overall impact and value of multiple energy storage options within a power system. As our study shows, traditional system-value analysis that considers only single energy storage options in power system models may overlook significant benefits that can be obtained by designing and operating multiple energy storage options in symbiosis. Our analysis shows that scenarios with multiple energy storage options can result in total power system cost savings of \(3-29\%\) compared to those with only a single energy storage option. However, it is worth noting that not all energy storage options contribute equally to achieving these system benefits. Of the 20 energy storage options analysed, 11 are neither significantly nor frequently deployed in scenarios that consider extreme cost uncertainty, making them non-competitive.

The implications of our study for decision-makers are significant. By applying system-value analysis, investment decision-makers in industry, research, and governments can better evaluate and prioritise energy storage technologies based on their overall value to the system rather than solely on cost reduction as approached in [21]. Our findings suggest that certain energy storage technologies may not be worth investing in, while others provide good investment opportunities since they consistently provide system benefits even under high cost uncertainty.
Understanding which energy storage options provide the most frequent and significant benefits to a given power system can help decision-makers focus their limited research and deployment funds on the most viable options, potentially saving society billions in hidden costs. Moreover, our analysis can help manufacturers and project developers design energy storage systems that are most valuable to a power system. Different to [1; 14; 12], by taking into account the benefits of multiple energy storage options, one can derive more realistic and practical design recommendations that consider the competition and synergies between different storage options. Our study finds that energy storage technologies should be heterogeneously sized to exploit the individual system conditions and that energy-to-power ratios can vary significantly between technologies. Manufacturers can use this information to prioritise designing the technologies most likely to be valuable in future system configurations, while project developers can apply the methods in this study to make more informed decisions about where and how to deploy energy storage systems.

However, there are limitations to our study, and it is vital to continue to improve methods, models, and data to ensure more informed decision-making in the future. Incorporating technology readiness levels, implementing realistic technology restrictions considering environmental and social limits, expanding the list of energy storage technologies, analysing various other power systems, and considering competing flexibilities from other sectors such as transport and industry load-shifting potentials are some of the areas, discussed further in the limitations (Section 6.5), that can be explored. Another critical point is that technology assessments should ideally be performed globally, which requires global bottom-up model efforts such as those provided in [15; 22]. Nonetheless, our study provides valuable insights into energy storage technologies. By incorporating these insights into decision-making processes, one can improve the overall system-value of energy storage and accelerate the transition to a cleaner, more sustainable and affordable energy future. To achieve this, actors in the field should avoid creating new models and instead focus on improving existing ones. An open and inclusive community that promotes open research, software and data can help us progress step by step towards more informed decision-making for energy storage.

## 6 Methods

### Storage data collection

We extracted 20 energy storage technologies from a 2022 Pacific Northwest National Laboratory (PNNL) study and prepared them for reuse in any model [23]. All included technologies are listed in Figure 2. The report provides techno-economic information for various storage reference sizes for 2021 and 2030. We focused on assumptions for the largest-scale applications, which range between \(10-1000MW\) and a \(10-24h\) energy-to-power ratio. A linear extrapolation was not applicable for the compiled 2050 data (see Table 2), as values would turn negative. To cover more of the existing non-linearity in cost developments [24], we created a piecewise linear approximation based on a geometric series for the years between \(2034-2059\), with data points in 5-year steps. To explore the data, we built an interactive web application available at [https://pz-max-energy-storage-data-explorer-app-o5iwg.streamlit.app/](https://pz-max-energy-storage-data-explorer-app-o5iwg.streamlit.app/).
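The extrapolation beyond 2030 can be sketched as follows. This is one plausible reading of the geometric-series construction (the published technology-data pull request is authoritative), with the annual learning factor inferred from the 2021 and 2030 PNNL points; the example values are the hydrogen charger costs from Tables 3 and 4:

```python
def geometric_point(c2021, c2030, year):
    """Cost data point for year > 2030, assuming the 2021->2030 annual
    cost ratio continues geometrically, so values decay towards zero
    instead of turning negative as a linear extrapolation would."""
    r = (c2030 / c2021) ** (1.0 / 9.0)   # annual cost ratio over 9 years
    return c2030 * r ** (year - 2030)

# Geometric anchor points in 5-year steps (2034-2059), to be joined
# piecewise linearly as described above.
points = {y: geometric_point(1208.632, 312.0, y) for y in range(2034, 2060, 5)}
```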
The original data, as well as the processing to clean and extrapolate it, is integrated into the open-source tool _technology-data_ with [https://github.com/PyPSA/technology-data/pull/67](https://github.com/PyPSA/technology-data/pull/67).

### Model and parameters

This study used the PyPSA-Earth model described and formulated in [15]. The model is limited to the geographical scope of the representative power system in Nigeria because its 2021 representation is already validated and described in [15]. We apply the techno-economic assumptions given in Table 1 to represent a 2050 decarbonised power system. Further, to reduce the computational requirements, we clustered the original, openly available transmission network to 10 nodes (see Figure 6). Each node captures an area for calculating renewable potential and demand. We contributed open-source code in [https://github.com/pypsa-meets-earth/pypsa-earth/pull/567](https://github.com/pypsa-meets-earth/pypsa-earth/pull/567) that makes adding new energy storage technologies to energy models simple. For instance, instead of adding new code for each added technology in several Python scripts, it is now possible to only add new data, and the model will automatically add these technologies. As illustrated in Figure 2, energy storage with an unconstrained design is modelled such that the model can independently optimize any functional component (charger, discharger, store). In contrast, design-constrained technologies such as the lithium battery are modelled so that the charger and discharger are constrained to be equal and share costs, representing the battery inverter. Moreover, this article excludes self-discharge losses, which were found to have a negligible impact on the model outputs, as our optimised technologies predominantly store energy for less than 18 days [14]. All required data for the energy storage technologies is described in Table 2.

### Explored scenarios

Two scenario trees are explored for the 2050 fully decarbonized power system with cost-optimal grid expansion in the model region, Nigeria. All scenarios consider a 2013-based weather year and demand profile; the latter is scaled to align with 2050 predictions as in [15]. ERA5 reanalysis data is used to derive the renewable potential calculation, and 'Shared Socioeconomic Pathways' [25] are used for the 2050 hourly demand prediction (more details in [15]). Further, the scenarios include minimal variable operating and maintenance costs of €0.5/MWh to avoid unintended storage cycling [26] and to reduce the risk of model distortions. The 'single storage' scenario tree optimises each energy storage solution in isolation, assuming the 2050 BAU cost assumptions given in Table 2. In contrast, the 'lonely optimist' scenario tree can optimize all storage technologies simultaneously, assuming that one storage technology is always optimistic (\(70\%\) of BAU capital costs) while the others are pessimistic (\(130\%\) of BAU capital costs).

### System-value measurement

According to Section 2, the concept of 'system-value' for technologies is defined as the market potential arising from the possible and probable least-cost scenarios in capacity expansion models. This definition originates from [7], which introduces the 'market potential method', comprising two distinct components. The first component, the 'market potential indicator', evaluates the total optimized technology size, such as an energy storage system's energy or power capacity.
The second component, the 'market potential criteria', seeks to support the decision-making process in the design of storage technologies by examining possible and probable scenarios. As per the criteria, only an optimized energy storage system can provide system-value in a least-cost power system optimization. The importance of a technology according to the system-value increases with its optimized capacity, and its provision of system-value is reinforced, with greater confidence, by its repeated optimization across multiple probable and possible scenarios. Notably, the total system costs, including any operation and investment costs, only indirectly impact the system-value assessment for technologies; decision-makers might use them to define probable scenarios. What is likely most interesting for technology innovators, manufacturers, and regulators is the amount of a particular technology required to be deployed to benefit the power system.

### Limitations

While our study provides valuable insights into the energy storage technologies that are most likely to provide system benefits, several limitations concerning the model and the data should be considered when interpreting our results and making decisions based on them. First, our analysis assumed that all energy storage technologies have the same uncertainty range, which may be unrealistic. Incorporating a sense of technology readiness level could provide better signals on uncertainty ranges, as suggested in previous research [27]. Second, our analysis could benefit from considering the feasibility of implementing certain energy storage technologies in all regions. For example, hydrogen cavern-based energy storage may not be feasible in some areas. Future research could include technology restrictions that account for such limitations, similar to renewable energy limitations [28]. Third, our study included only some possible energy storage technologies. Additional research could identify and evaluate other technologies that could potentially provide system benefits. Fourth, our analysis did not consider competing flexibilities, such as those introduced by the transport sector or industry load-shifting potentials, which could challenge the system-value of energy storage [20]. Finally, to better consider uncertainty, one could consider multiple weather years and multi-year energy storage [19], apply global sensitivity analysis with Monte Carlo methods [29], and perform near-optimal solution explorations [16].

## Acknowledgements

This research was supported by UK Engineering and Physical Sciences Research Council (EPSRC) grants EP/P007805/1 and EP/V042955/1. We would like to thank the PyPSA.org and PyPSA meets Earth teams.

## CRediT Authorship Contribution Statement

M.P., D.F. conceptualised the study; M.P., A.K. administrated the project; M.P., D.F. contributed to the software development; M.P. performed the validation and figure production; A.K. acquired the funding; M.P. contributed to writing and revising the manuscript.

## Declaration of Interests

The authors declare no competing interests.

## Supplemental material I

Energy storage and power system assumptions.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Technology**} & \multirow{2}{*}{ \begin{tabular}{c} **Investment** \\ **(€/kW) or** \\ **(€/kWh)** \\ \end{tabular} } & **Fixed O&M** & **Lifetime** & **Efficiency** & **Source** \\ & & **(€/kWh)** & **(\%/year)** & **(years)** & **(\%)** & **Source** \\ \hline Onshore Wind & 963 & 1.2 & 30 & & [30] \\ Offshore Wind & 1487 & 2.0 & 30 & & [30] \\ Solar PV (utility-scale) & 265 & 2.5 & 40 & & [30] \\ Solar PV (rooftop) & 475 & 1.6 & 40 & & [30] \\ Reservoir hydro & 2208 & 1.0 & 80 & 0.9 & [31] \\ Run of river & 3312 & 2.0 & 80 & 0.9 & [31] \\ HVDC overhead & 432 & 2.0 & 40 & & [32] \\ \hline \hline \end{tabular} \end{table} Table 1: Infrastructure investment cost assumptions per technology for 2050. All costs are given in real 2015 money. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \begin{tabular}{c} **Investment** \\ **(\(\in\)/kW) or** \\ **(\(\in\)/kWh)** \\ \end{tabular} & \begin{tabular}{c} **Fixed O@M** \\ **(\%/year)** \\ \end{tabular} & \begin{tabular}{c} **Lifetime** \\ **(years)** \\ \end{tabular} & \begin{tabular}{c} **Efficiency** \\ **(\%)** \\ \end{tabular} & \begin{tabular}{c} **Source** \\ **Source** \\ \end{tabular} \\ \hline Compressed-Air-Adiabatic-bicharger & 946 & 0.9 & 60 & 0.72 & [23] \\ Compressed-Air-Adiabatic-store & 5 & 0.4 & 60 & & [23] \\ Concrete-charger & 106 & 1.1 & 35 & 0.99 & [23] \\ Concrete-discharge & 427 & 0.3 & 35 & 0.43 & [23] \\ Concrete-store & 19 & 0.3 & 35 & & [23] \\ Gravity-Brick-bicharger & 415 & 1.5 & 41 & 0.93 & [23] \\ Gravity-Brick-store & 131 & & 41 & & [23] \\ Gravity-Water-Aboveground-bicharger & 365 & 1.5 & 60 & 0.9 & [23] \\ Gravity-Water-Aboveground-store & 102 & & 60 & & [23] \\ Gravity-Water-Underground-bicharger & 905 & 1.5 & 60 & 0.9 & [23] \\ Gravity-Water-Underground-store & 80 & & 60 & & [23] \\ HighT-Molten-Salt-charger & 107 & 1.1 & 35 & 0.99 & [23] \\ HighT-Molten-Salt-discharge & 428 & 0.3 & 35 & 0.44 & [23] \\ HighT-Molten-Salt-store & 78 & 0.3 & 35 & & [23] \\ Hydrogen-charger & 190 & 0.7 & 30 & 0.7 & [23] \\ Hydrogen-discharge & 179 & 0.6 & 30 & 0.49 & [23] \\ Hydrogen-store & 4 & 0.4 & 30 & & [23] \\ Lead-Acid-bicharger & 111 & 2.5 & 12 & 0.88 & [23] \\ Lead-Acid-store & 282 & 0.3 & 12 & & [23] \\ Liquid-Air-charger & 451 & 0.4 & 35 & 0.99 & [23] \\ Liquid-Air-discharge & 317 & 0.5 & 35 & 0.55 & [23] \\ Liquid-Air-store & 135 & 0.3 & 35 & & [23] \\ Lithium-Ion-LFB-bicharger & 69 & 2.2 & 16 & 0.92 & [23] \\ Lithium-Ion-LFP-store & 160 & 0.0 & 16 & & [23] \\ Lithium-Ion-NMC-bicharger & 69 & 2.2 & 13 & 0.92 & [23] \\ Lithium-Ion-NMC-store & 182 & 0.0 & 13 & & [23] \\ LowT-Molten-Salt-charger & 139 & 1.1 & 35 & 0.99 & [23] \\ LowT-Molten-Salt-discharge & 559 & 0.3 & 35 & 0.54 & [23] \\ LowT-Molten-Salt-store & 48 & 0.3 & 35 & & [23] \\ Ni-Zn-bicharger & 69 & 2.2 & 15 & 0.9 & [23] \\ Ni-Zn-store & 202 & 0.2 & 15 & & [23] \\ Pumped-Heat-charger & 723 & 0.4 & 33 & 0.99 & [23] \\ Pumped-Heat-discharge & 507 & 0.5 & 33 & 0.63 & [23] \\ Pumped-Heat-store & 7 & 0.2 & 33 & & [23] \\ Pumped-Storage-Hydro-bicharger & 1397 & 1.0 & 60 & 0.89 & [23] \\ Pumped-Storage-Hydro-store & 57 & 0.4 & 60 & & [23] \\ Sand-charger & 137 & 1.1 & 35 & 0.99 & [23] \\ Sand-discharge & 548 & 0.3 & 35 & 0.53 & [23] \\ Sand-store & 5 & 0.3 & 35 & & [23] \\ Vanadium-Redox-Flow-bicharger & 111 & 2.5 & 12 & 0.81 & [23] \\ Vanadium-Redox-Flow-store & 207 & 0.2 & 12 & & [23] \\ Zn-Air-bicharger & 129 & 2.4 & 25 & 0.79 & [23] \\ Zn-Air-store & 156 & 0.2 & 25 & & [23] \\ 
Zn-Br-Flow-bicharger & 36 & 1.8 & 10 & 0.83 & [23] \\ Zn-Br-Flow-store & 357 & 0.2 & 10 & & [23] \\ Zn-Br-Nonflow-bicharger & 129 & 2.4 & 15 & 0.89 & [23] \\ Zn-Br-Nonflow-store & 207 & 0.2 & 15 & & [23] \\ \hline \hline \end{tabular} \end{table} Table 2: Electricity storage overnight investment cost assumptions per technology for 2050. Derived with geometric series applied on 2021 and 2030 PNNL data. All costs are given in real 2015 money. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \begin{tabular}{c} **Investment** \\ **(\(\in\)/kW) or** \\ **(\(\in\)/kWh)** \\ \end{tabular} & \begin{tabular}{c} **Fixed O@M** \\ **(\%/year)** \\ \end{tabular} & \begin{tabular}{c} **Lifetime** \\ **(years)** \\ \end{tabular} & \begin{tabular}{c} **Efficiency** \\ **(\%)** \\ \end{tabular} & \begin{tabular}{c} **Source** \\ **Source** \\ \end{tabular} \\ \hline Compressed-Air-Adiabatic-bicharger & 946.18 & 0.9 & 60 & 0.72 & [23] \\ Compressed-Air-Adiabatic-store & 5.448 & 0.4 & 60 & & [23] \\ Concrete-charger & 183.635 & 1.1 & 35 & 0.99 & [23] \\ Concrete-discharge & 734.543 & 0.3 & 35 & 0.41 & [23] \\ Concrete-store & 28.893 & 0.3 & 35 & & [23] \\ Gravity-Brick-bicharger & 415.57 & 1.5 & 41 & 0.93 & [23] \\ Gravity-Brick-store & 184.331 & & 41 & & [23] \\ Gravity-Water-Aboveground-bicharger & 365.63 & 1.5 & 60 & 0.9 & [23] \\ Gravity-Water-Aboveground-store & 142.417 & & 60 & & [23] \\ Gravity-Water-Underground-bicharger & 905.158 & 1.5 & 60 & 0.9 & [23] \\ Gravity-Water-Underground-store & 112.097 & & 60 & & [23] \\ HighT-Molten-Salt-charger & 183.528 & 1.1 & 35 & 0.99 & [23] \\ HighT-Molten-Salt-discharge & 734.115 & 0.3 & 35 & 0.42 & [23] \\ HighT-Molten-Salt-store & 110.714 & 0.3 & 35 & & [23] \\ Hydrogen-charger & 1208.632 & 0.5 & 30 & 0.7 & [23] \\ Hydrogen-discharge & 1177.152 & 0.5 & 30 & 0.49 & [23] \\ Hydrogen-store & 4.779 & 0.4 & 30 & & [23] \\ Lead-Acid-bicharger & 147.643 & 2.4 & 12 & 0.88 & [23] \\ Lead-Acid-store & 360.824 & 0.2 & 12 & & [23] \\ Liquid-Air-charger & 500.869 & 0.4 & 35 & 0.99 & [23] \\ Liquid-Air-discharge & 351.674 & 0.5 & 35 & 0.52 & [23] \\ Liquid-Air-store & 183.974 & 0.3 & 35 & & [23] \\ Lithium-Ion-LFB-bicharger & 94.181 & 2.1 & 16 & 0.91 & [23] \\ Lithium-Ion-LFP-store & 316.769 & 0.0 & 16 & & [23] \\ Lithium-Ion-NMC-bicharger & 94.181 & 2.1 & 16 & 0.91 & [23] \\ Lithium-Ion-NMC-store & 361.858 & 0.0 & 16 & & [23] \\ LowT-Molten-Salt-charger & 148.856 & 1.1 & 35 & 0.99 & [23] \\ LowT-Molten-Salt-discharge & 595.425 & 0.3 & 35 & 0.52 & [23] \\ LowT-Molten-Salt-store & 68.283 & 0.3 & 35 & & [23] \\ Ni-Zn-bicharger & 94.181 & 2.1 & 15 & 0.89 & [23] \\ Ni-Zn-store & 337.129 & 0.2 & 15 & & [23] \\ Pumped-Heat-charger & 802.648 & 0.4 & 30 & 0.99 & [23] \\ Pumped-Heat-discharge & 563.561 & 0.5 & 30 & 0.61 & [23] \\ Pumped-Heat-store & 29.319 & 0.1 & 30 & & [23] \\ Pumped-Storage-Hydro-bicharger & 1397.128 & 1.0 & 60 & 0.89 & [23] \\ Pumped-Storage-Hydro-store & 57.074 & 0.4 & 60 & & [23] \\ Sand-charger & 151.781 & 1.1 & 35 & 0.99 & [23] \\ Sand-discharge & 607.125 & 0.3 & 35 & 0.51 & [23] \\ Sand-store & 7.883 & 0.3 & 35 & & [23] \\ Vanadium-Redox-Flow-bicharger & 147.857 & 2.4 & 12 & 0.81 & [23] \\ Vanadium-Redox-Flow-store & 311.66 & 0.2 & 12 & & [23] \\ Zn-Air-bicharger & 129.023 & 2.4 & 25 & 0.77 & [23] \\ Zn-Air-store & 192.847 & 0.2 & 25 & & [23] \\ Zn-Br-Flow-bicharger & 129.023 & 2.4 & 10 & 0.81 & [23] \\ Zn-Br-Flow-store & 470.192 & 0.3 & 10 & & [23] \\ Zn-Br-Nonflow-bicharger & 129.023 & 2.4 & 15 & 0.87 & [23] \\ Zn-Br-Nonflow-store & 273.108 & 0.2 
& 15 & & [23] \\ \hline \hline \end{tabular} \end{table} Table 3: Electricity storage overnight investment cost assumptions per technology for 2021. Derived from original PNNL data. All costs are given in real 2015 money. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \begin{tabular}{c} **Investment** \\ **(\(\in\)/kW) or** \\ **(\(\in\)/kWh)** \\ \end{tabular} & \begin{tabular}{c} **Fixed O@M** \\ **(\%/year)** \\ \end{tabular} & \begin{tabular}{c} **Lifetime** \\ **(years)** \\ \end{tabular} & \begin{tabular}{c} **Efficiency** \\ **(\%)** \\ \end{tabular} & \begin{tabular}{c} **Source** \\ **Source** \\ \end{tabular} \\ \hline Compressed-Air-Adiabatic-bicharger & 946 & 0.9 & 60 & 0.72 & [23] \\ Compressed-Air-Adiabatic-store & 5 & 0.4 & 60 & & [23] \\ Concrete-charger & 144 & 1.1 & 35 & 0.99 & [23] \\ Concrete-discharge & 576 & 0.3 & 35 & 0.43 & [23] \\ Concrete-store & 24 & 0.3 & 35 & & [23] \\ Gravity-Brick-bicharger & 415 & 1.5 & 41 & 0.93 & [23] \\ Gravity-Brick-store & 157 & & 41 & & [23] \\ Gravity-Water-Aboveground-bicharger & 365 & 1.5 & 60 & 0.9 & [23] \\ Gravity-Water-Aboveground-store & 121 & & 60 & & [23] \\ Gravity-Water-Underground-bicharger & 905 & 1.5 & 60 & 0.9 & [23] \\ Gravity-Water-Underground-store & 95 & & 60 & & [23] \\ HighT-Molten-Salt-charger & 144 & 1.1 & 35 & 0.99 & [23] \\ HighT-Molten-Salt-discharge & 576 & 0.3 & 35 & 0.44 & [23] \\ HighT-Molten-Salt-store & 94 & 0.3 & 35 & & [23] \\ Hydrogen-charger & 312 & 0.7 & 30 & 0.49 & [23] \\ Hydrogen-discharge & 414 & 0.5 & 30 & 0.7 & [23] \\ Hydrogen-store & 4 & 0.4 & 30 & & [23] \\ Lead-Acid-bicharger & 128 & 2.4 & 12 & 0.88 & [23] \\ Lead-Acid-store & 320 & 0.2 & 12 & & [23] \\ Liquid-Air-charger & 475 & 0.4 & 35 & 0.99 & [23] \\ Liquid-Air-discharge & 334 & 0.5 & 35 & 0.55 & [23] \\ Liquid-Air-store & 159 & 0.3 & 35 & & [23] \\ Lithium-Ion-LFB-bicharger & 81 & 2.1 & 16 & 0.92 & [23] \\ Lithium-Ion-LFP-store & 236 & 0.0 & 16 & & [23] \\ Lithium-Ion-NMC-bicharger & 81 & 2.1 & 13 & 0.92 & [23] \\ Lithium-Ion-NMC-store & 269 & 0.0 & 13 & & [23] \\ LowT-Molten-Salt-charger & 144 & 1.1 & 35 & 0.99 & [23] \\ LowT-Molten-Salt-discharge & 576 & 0.3 & 35 & 0.54 & [23] \\ LowT-Molten-Salt-store & 58 & 0.3 & 35 & & [23] \\ Ni-Zn-bicharger & 81 & 2.1 & 15 & 0.9 & [23] \\ Ni-Zn-store & 267 & 0.2 & 15 & & [23] \\ Pumped-Heat-charger & 761 & 0.4 & 33 & 0.99 & [23] \\ Pumped-Heat-discharge & 534 & 0.5 & 33 & 0.63 & [23] \\ Pumped-Heat-store & 11 & 0.2 & 33 & & [23] \\ Pumped-Storage-Hydro-bicharger & 1397 & 1.0 & 60 & 0.89 & [23] \\ Pumped-Storage-Hydro-store & 57 & 0.4 & 60 & & [23] \\ Sand-charger & 144 & 1.1 & 35 & 0.99 & [23] \\ Sand-discharge & 576 & 0.3 & 35 & 0.53 & [23] \\ Sand-store & 6 & 0.3 & 35 & & [23] \\ Vanadium- Redox-Flow-bicharger & 129 & 2.4 & 12 & 0.81 & [23] \\ Vanadium- Redox-Flow-store & 258 & 0.2 & 12 & & [23] \\ Zn-Air-bicharger & 129 & 2.4 & 25 & 0.79 & [23] \\ Zn-Air-store & 174 & 0.2 & 25 & & [23] \\ Zn-Br-Flow-bicharger & 81 & 2.1 & 10 & 0.83 & [23] \\ Zn-Br-Flow-store & 412 & 0.3 & 10 & & [23] \\ Zn-Br-Nonflow-bicharger & 129 & 2.4 & 15 & 0.89 & [23] \\ Zn-Br-Nonflow-store & 239 & 0.2 & 15 & & [23] \\ \hline \hline \end{tabular} \end{table} Table 4: Electricity storage overnight investment cost assumptions per technology for 2030. Derived from original PNNL data. All costs are given in real 2015 money. ## Supplemental material II Examples of other energy storage operations for selected scenarios. 
Figure 8: Storage operation in the lonely optimist scenario with changing optimistic storage scenarios. The time series is smoothed by a 12-hour rolling aggregation and shows only a selected set of technologies.
2302.02667
A Scalable and Efficient Iterative Method for Copying Machine Learning Classifiers
Differential replication through copying refers to the process of replicating the decision behavior of a machine learning model using another model that possesses enhanced features and attributes. This process is relevant when external constraints limit the performance of an industrial predictive system. Under such circumstances, copying enables the retention of original prediction capabilities while adapting to new demands. Previous research has focused on the single-pass implementation for copying. This paper introduces a novel sequential approach that significantly reduces the amount of computational resources needed to train or maintain a copy, leading to reduced maintenance costs for companies using machine learning models in production. The effectiveness of the sequential approach is demonstrated through experiments with synthetic and real-world datasets, showing significant reductions in time and resources, while maintaining or improving accuracy.
Nahuel Statuto, Irene Unceta, Jordi Nin, Oriol Pujol
2023-02-06T10:07:41Z
http://arxiv.org/abs/2302.02667v2
# A Scalable and Efficient Iterative Method for Copying Machine Learning Classifiers

###### Abstract

Differential replication through copying refers to the process of replicating the decision behavior of a machine learning model using another model that possesses enhanced features and attributes. This process is relevant when external constraints limit the performance of an industrial predictive system. Under such circumstances, copying enables the retention of original prediction capabilities while adapting to new demands. Previous research has focused on the single-pass implementation for copying. This paper introduces a novel sequential approach that significantly reduces the amount of computational resources needed to train or maintain a copy, leading to reduced maintenance costs for companies using machine learning models in production. The effectiveness of the sequential approach is demonstrated through experiments with synthetic and real-world datasets, showing significant reductions in time and resources, while maintaining or improving accuracy.

Keywords: Sustainable AI, transfer learning, environmental adaptation, optimization, and model enhancement.

## 1 Introduction

Machine learning has become widespread in many industries, with supervised algorithms automating tasks with higher precision and lower costs (Gomez et al., 2018; Gharibshah and Zhu, 2021; Shehab et al., 2022; Feizabadi, 2022). However, maintaining the performance of industrial machine learning models requires constant monitoring, as the environment where they are deployed can change due to internal and external factors, such as new production needs, technological updates, novel market trends, or regulatory changes. Neglecting these changes can lead to model degradation and decreased performance. To prevent this, regular model monitoring is essential in any industrial machine learning application. Once deployed in production, models are frequently checked for any signs of performance deviation, which can occur just a few months after deployment. In case of deviation, models are either fully or partially retrained and substituted (Wu et al., 2020). However, this process can be time-consuming and costly, especially given the complex nature of modern model architectures that consume significant computational resources (Chen, 2019). Managing and updating multiple models is therefore a challenge for companies, and long-term sustainability is one of the main difficulties faced by industrial machine learning practitioners today (Paleyes et al., 2022). To address this, differential replication through copying (Unceta et al., 2020a) has been proposed as a more efficient and effective solution to adapt models. This approach builds upon previous ideas on knowledge distillation (Hinton et al., 2015; Bucilua et al., 2006b), and allows for reusing a model's knowledge to train a new one that is better suited to the changing environment (Unceta et al., 2020b). It can therefore bring numerous benefits in terms of cost and optimization of resources.

Differential replication allows for model adaptation by projecting an existing decision function onto a new hypothesis space that meets new requirements. Typically, this process involves using the label probabilities produced by the given decision function as soft targets to train a new model in the new hypothesis space (Liu et al., 2018; Wang et al., 2020).
In the case of differential replication through copying, information about the original model's behavior is acquired via a hard-membership query interface, and training is done using synthetic samples labeled by the original model. Theoretically, the copying problem can be viewed as a dual optimization problem where both the copy model parameters and the synthetic samples are optimized simultaneously. Previous practical implementations of this problem have simplified it, generating and labeling a large set of synthetic data points in a single pass and then using them to optimize the parameters. This approach has been successful in validating copying on several datasets, but it is memory-intensive and computationally expensive and requires pre-setting several hyperparameters.

This article presents a novel approach to differential replication through copying that is based on an iterative scheme. The goal of performing a copy is to find the simplest model in the copy hypothesis space that attains the maximum fidelity with respect to the target model being copied. In the absence of data, this requires finding the best synthetic set for optimizing the model and the simplest model that guarantees perfect fidelity within the capacity of the copy model space. The proposed iterative formulation performs two steps at each iteration: (1) generating and selecting data for copying based on a compression measure of uncertainty, and (2) learning the target copy model using the optimized dataset while controlling its complexity to achieve close-to-optimal results. This process allows for control over the amount of data/memory needed and the convergence speed to reach a steady performance. Results show that the proposed model requires \(85-93\%\) fewer data samples, and correspondingly less memory, and achieves an average \(80\%\) improvement in convergence speed compared to the single-pass approach, with no significant degradation in performance.

The main contributions of this article are: (1) to the best of our knowledge, this is the first algorithm to specifically address the problem of copying as described in Unceta et al. (2020a); (2) the proposed formulation and algorithms allow for explicit control of memory requirements and convergence speed while maintaining accuracy; and (3) the resulting algorithm is accurate, fast, and memory-efficient, as validated by successful results. The proposed algorithm has two hyperparameters, but this article also presents an algorithm that automatically sets one of them and dynamically adapts it during the learning process, ensuring fast convergence to an accurate solution. The resulting algorithm overcomes some of the limitations of current copying methods and opens up opportunities for its use in various real-life applications.

The rest of this paper is organized as follows. Section 2 addresses the issue of differential replication through copying, provides an overview of relevant methods, and introduces the single-pass approach as the simplest solution. Section 3 presents the sequential approach to copying. It starts by demonstrating the convergence of this approach to the optimal solution. It then introduces a sample removal policy based on an uncertainty measure. It ends by proposing a regularization term to prevent model forgetting. Section 4 empirically demonstrates the validity of the sequential approach through experiments on a large set of 58 UCI problems. The performance of the sequential copying framework is measured in terms of accuracy, convergence speed, and efficiency.
The section ends with a discussion of the results. Finally, Section 5 summarizes the findings and outlines future research directions.

## 2 Background on copying

The problem of _environmental adaptation_ introduced in (Unceta et al., 2020) refers to situations where a machine learning model that was designed under a set of constraints must fulfill new requirements imposed by changes in its environment. The model needs to adapt from a _source scenario_, \(s\), where it was trained, to a _target scenario_, \(t\), where it is being deployed.

### The problem of environmental adaptation

Formally speaking, environmental adaptation is defined as follows: consider a task \(\mathcal{T}\) and a domain \(\mathcal{D}\). A trained model \(h\in\mathcal{H}_{s}\) is designed to solve \(\mathcal{T}\) in \(\mathcal{D}\) under a set of constraints \(\mathcal{C}_{s}\) and a compatible hypothesis space \(\mathcal{H}_{s}\). The problem of environmental adaptation arises when the original set of constraints \(\mathcal{C}_{s}\) evolves into a new set \(\mathcal{C}_{t}\). Under these circumstances, a potentially different hypothesis space \(\mathcal{H}_{t}\) has to be defined in the same domain \(\mathcal{D}\) and for the same task \(\mathcal{T}\), which is compatible with the new constraint set \(\mathcal{C}_{t}\)\({}^{1}\). Unless the considered model \(h\) is compatible with the new set of constraints, it will be rendered obsolete, i.e., \(h\) will no longer be a feasible solution. Hence, environmental adaptation refers to the need to adapt \(h\) to the constraints introduced by \(\mathcal{C}_{t}\), as shown in the equations of Table 1.

Footnote 1: In some cases the source and target hypothesis spaces \(\mathcal{H}_{s}\) and \(\mathcal{H}_{t}\) may be the same. However, in the most general case, where the new set of constraints defines a new set of feasible solutions, they are not.

This problem is different from _domain adaptation_ and _transfer learning_ (Chen and Buhlmann, 2021; Chen et al., 2022; Raffel et al., 2020). Domain adaptation refers to adapting a model from one source domain \(\mathcal{D}_{s}\) to a related target domain \(\mathcal{D}_{t}\), due to a change in the data distributions. Environmental adaptation preserves the domain, but there is a change in constraints. Transfer learning requires reusing knowledge from solving one task \(\mathcal{T}_{s}\) to solve a related task \(\mathcal{T}_{t}\) (Pan and Yang, 2010). Environmental adaptation preserves the task.

The environmental adaptation problem can be addressed through various methods, including re-training the existing model under the new set of constraints (Barque et al., 2018), using wrappers (Mena et al., 2019, 2020), edited or augmented data subsets (Song et al., 2008; Chen et al., 2020; Duan et al., 2018), teacher-student networks and distillation mechanisms (Bucilua et al., 2006; Hinton et al., 2015; Szegedy et al., 2016; Yang et al., 2019), label regularization (Muller et al., 2019; Yuan et al., 2020), label refinement (Bagherinezhad et al., 2018), or synthetic data generation (Bucilua et al., 2006; Zeng and Martinez, 2000). A comprehensive overview of all the different methods is available in (Unceta et al., 2020).

Here, we focus on differential replication, a general solution to the environmental adaptation problem. Differential replication projects the decision boundary of an existing model onto a new hypothesis space that is compatible with the target scenario.
In the absence of access to the training dataset or model internals, differential replication through copying can be used to solve the environmental adaptation problem (Unceta et al., 2020). In classification settings, copying involves obtaining a new classifier that displays the same performance and decision behavior as the original, without necessarily belonging to the same model family. In the following sections, we introduce the problem of differential replication through copying and explore potential approaches for implementation.

### Differential replication through copying

Consider a classifier \(f_{\mathcal{O}}\in\mathcal{H}_{s}\) trained on an unknown dataset with input space dimensionality \(d\) and output space cardinality \(n_{c}\). Thus, \(f_{\mathcal{O}}:\mathbb{R}^{d}\to\{0,1\}^{n_{c}}\). In the most restrictive case, \(f_{\mathcal{O}}\) is a hard decision classifier that outputs one-hot encoded label predictions, meaning that for any data point, it returns an \(n_{c}\)-output vector with all elements as zeros except for the target label position \(i\), which has a value of 1. The goal of copying is to obtain a new classifier \(f_{\mathcal{C}}\in\mathcal{H}_{t}\) parameterized by \(\theta\in\Theta\) that mimics \(f_{\mathcal{O}}\) across the sample space. In the empirical risk minimization framework, we can consider an empirical risk function that measures the discrepancy between two classifiers. We can then formulate the copying problem as a dual optimization of \(\theta\) and a set of synthetic data points \(S\) over which the empirical risk is evaluated, since we do not have access to any training data. This problem can be written as:

\begin{table}
\begin{tabular}{l l l l}
Source Scenario & & Target Scenario & \\
for \(\mathcal{T}\) in \(\mathcal{D}\) & & for \(\mathcal{T}\) in \(\mathcal{D}\) & \\
\(\underset{\text{for }h\in\mathcal{H}_{s}}{\text{maximize}}\) & \(\mathsf{P}(y|x;h)\) & \(\underset{\text{for }h\in\mathcal{H}_{t}}{\text{maximize}}\) & \(\mathsf{P}(y|x;h)\) \\
subject to & \(\mathcal{C}_{s}\) & subject to & \(\mathcal{C}_{t}\) \\
\end{tabular}
\end{table}
Table 1: The problem of environmental adaptation looks for the solution in the same domain and task when the feasibility constraints change.

\[\underset{\theta,S}{\text{minimize}}\quad\Omega(\theta)\tag{1}\]
\[\text{subject to}\quad\|R^{\mathcal{F}}_{emp}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}})-R^{\mathcal{F}}_{emp}(f_{\mathcal{C}}(\theta^{\dagger}),f_{\mathcal{O}})\|<\varepsilon,\]

for a defined tolerance \(\varepsilon\) and a complexity measure \(\Omega(\theta)\) (e.g. the \(\ell_{p}\)-norm of the parameters). The empirical risk function \(R^{\mathcal{F}}_{emp}\) measures the difference between the original model \(f_{\mathcal{O}}\) and the optimized copy model \(f_{\mathcal{C}}\) and is referred to as the _empirical fidelity error_. \(\theta^{\dagger}\) is the solution to the following unconstrained optimization problem:

\[\theta^{\dagger}=\underset{\theta,S}{\text{argmin}}\quad R^{\mathcal{F}}_{emp}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}}). \tag{2}\]

The solution to Equation 1 is a combination of synthetic data and copy model parameters that minimize capacity and empirical risk. If \(f_{\mathcal{O}}\in\mathcal{H}_{t}\), the solution of the unconstrained problem (Equation 2) will always result in \(R^{\mathcal{F}}_{emp}(f_{\mathcal{C}}(\theta^{\dagger}),f_{\mathcal{O}})=0\), as the labeling problem for any set of data points using the original hard-label classifier is separable in \(\mathcal{H}_{t}\).
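To make the quantities above concrete, the following minimal Python sketch shows how the empirical fidelity error could be estimated in practice when \(f_{\mathcal{O}}\) is only reachable through a hard-label query interface. The function name and the use of NumPy are illustrative assumptions made here, not part of the original formulation.

```python
import numpy as np

def empirical_fidelity_error(f_O, f_C, S):
    """Disagreement rate between the original model f_O and the
    copy f_C over a synthetic set S (an estimate of R^F_emp)."""
    y_original = f_O(S)  # hard labels obtained via the query interface
    y_copy = f_C(S)      # hard labels predicted by the copy
    return np.mean(y_original != y_copy)
```

Under a zero-one loss, driving this quantity to zero over \(S\) corresponds to the separable case discussed above.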
### The single-pass approach

The _single-pass_ approach (Unceta et al., 2020b) aims to find a sub-optimal solution to the copying problem by dividing it into two separate steps. Firstly, a synthetic set \(S^{*}\) is found and then the copy parameters \(\theta^{*}\) are optimized using this set. The process is as follows:

* **Step 1: Synthetic sample generation**. An exhaustive synthetic set \(S^{*}\) is created by randomly sampling from a probability density function \(P_{\mathcal{S}}\) that covers the operational space of the copy. The operational space is the region of the input space where the copy is expected to mimic the behavior of the original model. The synthetic set can be expressed as: \[S^{*}=\{z_{j}|z_{j}\sim P_{\mathcal{S}},\;j=1,\ldots,N\}.\] (3) The simplest choice for \(P_{\mathcal{S}}\) is a uniform distribution, or a normal (Gaussian) distribution if the original data is normalized. (See (Unceta et al., 2020c) for more information on different options for \(P_{\mathcal{S}}\).)
* **Step 2: Building the copy**. The optimal parameter set for the copy is obtained by minimizing the empirical risk of the copied model over the synthetic set \(S^{*}\) obtained in Step 1: \[\theta^{*}=\underset{\theta}{\text{argmin}}\quad R^{\mathcal{F}}_{emp}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}})\Bigg{|}_{S=S^{*}}.\] (4)

The single-pass approach is a simplified solution for the problem modeled in Equation 1. Step 1 generates data without any optimization, but it requires a large dataset. Step 2 focuses on solving the unconstrained version of the copying problem defined in Equation 2. This approach can be used when the classifier complexity can be directly modeled. However, in the general case, it requires setting many critical parameters and selecting an appropriate model to ensure that the unconstrained problem is a good approximation of the general setting described in Equation 1. To guarantee good performance, a sufficiently large synthetic dataset must be generated.

The implementation of the single-pass approach has limitations. Firstly, the learning process using a one-shot approach with a single model may be limited by the available memory and unable to handle the full dataset. Secondly, keeping a large number of data samples in memory during training is resource-intensive and does not guarantee performance. In addition, blindly learning the entire synthetic dataset using a single model can result in inefficiencies. On the other hand, using an online learning strategy can reduce memory usage, but leads to slow convergence to the optimal solution, making the process time-consuming. To overcome these limitations, a new approach, using an alternating optimization scheme, is introduced. This approach provides a fast and memory-efficient solution to the copying problem, solving Equations 1 and 2.

## 3 The sequential approach

We introduce two theorems in this section to show that the sequential approach converges to the single-pass approach when conditions are optimal in terms of both parameters and behavior. Next, we provide a practical implementation of the sequential approach and various optimizations to achieve low memory usage and fast convergence. Specifically, we propose that a perfect copy should be able to compress the synthetic data points in its parameters and suggest epistemic uncertainty as a reliable metric for data compression. Based on this, we develop a data selection strategy that filters samples based on their level of compression by the copy model.
This leads to a reduced number of data points needed for each learning step. Finally, we introduce an automatic hyper-parameter tuning policy to ensure optimal implementation of the sequential approach in practice. We refer to the resulting algorithm as the _sequential approach_ to copying.

### An alternating optimization algorithm for copying

We start by introducing a preliminary alternating optimization algorithm for solving Equation 1. This algorithm alternates between two optimization steps at each iteration \(t\):

* **Step 1: Sample Optimization**. The optimal synthetic set at iteration \(t\) is selected by maximizing the empirical fidelity error for the previous model solution \(\theta_{t-1}^{*}\). That is, \[S_{t}^{*}=\arg\max_{S}\quad R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}})\Bigg{|}_{\theta=\theta_{t-1}^{*}}\] (5)
* **Step 2: Copy Parameter Optimization**. The optimal copy parameters \(\theta_{t}^{*}\) at iteration \(t\) over samples \(S_{t}^{*}\) are obtained by: \[\underset{\theta}{\text{minimize}}\quad\Omega(\theta)\] (6) \[\text{subject to}\quad\left\|R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}})-R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta^{\dagger}),f_{\mathcal{O}})\right\|\Bigg{|}_{S=S_{t}^{*}}<\varepsilon.\]

The algorithm starts each iteration \(t\) by selecting a set of synthetic data points that maximize the empirical risk. These are the points that the copy model from the previous iteration \(t-1\) did not model correctly. By reducing their empirical risk to zero, we can minimize the loss function. In Step 2, we minimize the empirical loss over \(S_{t}\) while keeping the copy model complexity as low as possible. The rest of this section focuses on solving Equations 5 and 6 under various assumptions.

### Step 1: Sample optimization

We start by introducing a formal sequential sample generation scheme and examining its convergence properties. This will serve as the basis for constructing the final algorithm that solves Equation 5. To achieve this, we recast Equation 2 in a probabilistic context and prove two theorems showing that the sequential copying process converges in both parameters and behavior to the single-pass approach under optimal conditions.

#### 3.2.1 The sequential framework

Consider a sequence of finite sets \(S_{t}\) such that

\[S_{t}\subseteq S_{t+1}\subseteq\cdots\subseteq S \tag{7}\]

for \(t\in\mathbb{N}\), where \(|S|=\aleph_{0}\), so that the sequence approaches the set \(S\) as \(t\) increases towards infinity. The sequential framework is based on the notion that, if \(S_{t}\) converges to \(S\), then the optimal copy parameters \(\theta_{t}^{*}\) obtained by optimizing over \(S_{t}\) will converge to \(\theta^{*}\), the optimal parameters over \(S\). This approximation can be iteratively obtained by drawing samples from a given probability density function. To prove this, we cast the unconstrained copying problem in probabilistic terms2.

Footnote 2: Observe that we can easily recover the empirical risk minimization framework considering probability density functions of the exponential family.
Consider the empirical loss defined as \(\frac{1}{n}\sum_{i=1}^{n}\ell(a,b;\theta)\); then

\[\arg\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\ell(a,b;\theta)=\arg\max_{\theta}\frac{1}{n}\sum_{i=1}^{n}e^{-\gamma\cdot\ell(a,b;\theta)},\]

and express it as the solution to the following empirical distributional problem:

\[\theta^{*}=\arg\,\max_{\theta}\sum_{z\in S}\mathcal{P}(\theta|f_{\mathcal{O}}(z),f_{\mathcal{C}}(z))=\arg\,\max_{\theta}F(\theta), \tag{8}\]

where \(S=\{z\,|\,z\sim P_{\mathcal{S}}\}\) is a synthetic dataset of size \(|S|=\aleph_{0}\). Next, we present Theorem 1, which shows that the solution to Equation 8 for \(S\) can be approximated using the sequence of iterative values \(S_{t}\).

**Theorem 1**: _Let \(S_{t}\subseteq S_{t+1}\subseteq\cdots\subseteq S\) be a sequence of subsets converging to \(S\). Then, the sequence of functions \(\left\{F_{t}\right\}_{t}\) defined as \(F_{t}(\theta)=\sum_{z\in S_{t}}\mathcal{P}(\theta|f_{\mathcal{C}}(z,\theta),f_{\mathcal{O}}(z))\), converges uniformly to \(F(\theta)=\sum_{z\in S}\mathcal{P}(\theta|f_{\mathcal{C}}(z,\theta),f_{\mathcal{O}}(z))\)._

Theorem 2 builds on Theorem 1 to prove the corresponding convergence of the parameters.

**Theorem 2**: _Under the conditions of Theorem 1, the sequence of parameters \(\left\{\theta_{t}^{*}\right\}_{t}\) defined as \(\theta_{t}^{*}=\arg\,\max_{\theta\in\Theta}F_{t}(\theta)\), converges to \(\theta^{*}=\arg\,\max_{\theta\in\Theta}F(\theta)\), where \(\Theta\) is the complete set of parameters._

The full mathematical proof of convergence of the copy parameter sequence can be found in Appendix A.

**Definition:** The _sequential approach_ learning algorithm is an iterative process that optimizes the copy parameters \(\theta_{t}\) incrementally over sets of synthetic data points \(S_{t},\;t=1\ldots T\).

The preliminary version of the sequential approach is outlined in Algorithm 1. We begin by generating an initial synthetic set \(S_{0}\) of size \(n\) and optimizing the copy parameters accordingly. As stated in line 4, at each iteration \(t\) the new synthetic set \(S_{t}\) adds \(n\) samples to the previous set \(S_{t-1}\). The cardinality of \(S_{t}\) is therefore given by \(\left|S_{t}\right|=t\cdot n\) and grows linearly with \(t\). This linear growth policy was chosen as the simplest strategy to build the subsets in Equation 7. Other strategies can also be used. Line 5 describes the optimization of the algorithm considering the solution of the previous step.

For demonstration, we test the sequential approach on a toy binary classification dataset, the _spirals_ problem. We first train a Gaussian kernel SVM on this dataset, achieving perfect accuracy. Then, we copy the learned boundary using a fully-connected neural network with three hidden ReLU layers of 64, 32, and 10 neurons, and a SoftMax output. We train it for 1000 epochs with a batch size of 32 samples. Figure 1a) shows the original decision boundary and data points. We compare the results of Algorithm 1 with those of the single-pass approach in both its unique-step and online implementations. Figure 1b) shows the average accuracy computed at different iterations for copies trained using different methods. The online and sequential approaches are trained by generating \(n=100\) new samples at each iteration. The grey dashed lines represent the accuracy achieved by single-pass copies trained in one shot for different values of \(n\). The reported values represent accuracy measured over the original test data points.
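Before turning to the results, the following sketch illustrates the preliminary sequential loop of Algorithm 1 on a setup like the one above. It is an illustrative Python rendering, not the authors' code: the scikit-learn copy model, the `warm_start` re-fitting, and the `f_O` labeling function are assumptions made here for compactness.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def sequential_copy(f_O, d, n=100, T=30, seed=0):
    """Preliminary sequential approach: append n fresh synthetic
    samples per iteration (so |S_t| = t*n grows linearly) and
    re-optimize the copy starting from the previous solution."""
    rng = np.random.default_rng(seed)
    copy = MLPClassifier(hidden_layer_sizes=(64, 32, 10),
                         max_iter=1000, warm_start=True)
    S = rng.standard_normal((n, d))          # initial set S_0
    copy.fit(S, f_O(S))
    for t in range(1, T):
        S = np.vstack([S, rng.standard_normal((n, d))])  # S_t ⊇ S_{t-1}
        copy.fit(S, f_O(S))  # warm_start reuses the previous weights
    return copy
```

One caveat of this sketch is that, with `warm_start`, every class should already appear in the first labeled batch for the copy's output layer to be well defined across iterations.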
Figure 1: a) Decision boundary learned by a Gaussian kernel SVM trained on the _spirals_ dataset. b) Copy accuracy averaged over 30 independent runs for the sequential approach (orange) and the online single-pass approach (blue) with \(n=100\). The grey dashed lines depict the accuracy obtained using a one-shot single-pass approach for various values of \(n\). The shaded areas indicate a standard deviation from the mean.

The plot highlights differences in convergence speeds among the three approaches. The one-shot single-pass approach with \(n=500\) has an accuracy of \(0.85\), while the purely online single-pass model reaches an accuracy of around \(0.85\) after \(30\) iterations and \(3000\) samples. In comparison, the sequential approach has an accuracy of \(0.94\) at \(t=5\) with \(500\) samples, indicating faster convergence. However, both the single-pass and sequential approaches reach an accuracy of \(\approx 1\) after being exposed to \(1000\) samples.

Table 2 compares the three approaches in terms of memory, accuracy, and computational time. The one-shot single-pass approach requires a large amount of data to achieve high accuracy, and estimating the number of training data points can lead to under- or over-training. The online single-pass approach alleviates memory allocation issues but has limited accuracy and is dependent on the value of \(n\). While it is guaranteed to converge to the optimal solution, this is an asymptotic guarantee. The sequential approach combines the benefits of both: it increases the number of points over time, so there is no need to estimate this parameter upfront, and stops copying once a desired accuracy is reached, as proven by Theorem 1's monotonic accuracy increase with iterations. The main drawback of this method is its computational cost. The training cost of the sequential approach grows quadratically with the number of data points, unlike the linear growth of both the single-pass and pure online approaches. This makes the sequential approach highly time-consuming in its current implementation. This comes as no surprise given the construction of the sequence of sets \(\{S_{t}\}\). We discuss potential improvements to address this issue in what follows.

#### 3.2.2 A model compression measure

The purpose of a parametric machine learning model is to compress the data patterns relevant to a specific task. In the copying framework, the original model is considered the ground truth, and the goal is to imitate its decision behavior. Because this model produces hard classification labels, its decision boundary creates a partition of the feature space that results in a separable problem. Hence, any uncertainty measured during the copying process should only come from the model, not the data: a perfect copy should have no aleatoric uncertainty while compressing all the data patterns.

**Assumption:** Any uncertainty measured during the copying process is epistemic3.

Footnote 3: Epistemic uncertainty corresponds to the uncertainty coming from the distribution of the model parameters. It can be reduced as the number of training samples increases.

This allows us to single out the optimal model parameters. Thus, any measurement of uncertainty comes from the mismatch between the model parameters and the desired parameters, i.e. a perfect copy will have zero epistemic uncertainty. It follows from this assumption that when the epistemic uncertainty over a given data point is minimum, then the data point is perfectly compressed by the model.
That is, we can assume that a data point is perfectly learned by the copy when its uncertainty is zero. Estimating uncertainty in practice requires that we measure the difference between the predictive distribution and the target label distribution. This can be done using various ratios or predictive entropy. We refer the reader to (DeVries and Taylor, 2018; Senge et al., 2014; Nguyen et al., 2018) for full coverage of this topic.

\begin{table}
\begin{tabular}{c|c|c|c}
 & One-shot single-pass & Online single-pass & Sequential \\
\hline
Memory & Fixed large value; needs to be estimated & Fixed small value & Increases monotonically \\
\hline
Accuracy & High but \(N\)-dependent & Increases with iterations; upper bound is \(N\)-dependent & Increases with iterations \\
\hline
Time & \(T\propto O(t\cdot N)\) & \(T\propto O(t\cdot N)\) & \(T\propto O(t^{2}\cdot N)\) \\
\hline
\end{tabular}
\end{table}
Table 2: Comparison table that summarizes the principal differences between the three learning approaches: one-shot _single-pass_, online _single-pass_ and _sequential_.

For the sake of simplicity, here we consider the simplest approach to measuring uncertainty. We relax the hard-output constraint for the copy. Instead, we impose that, for any given data point, the copy returns an \(n_{c}\)-output vector of class probability predictions, such that

\[f_{\mathcal{C}}:\mathbb{R}^{d}\times\Theta\longrightarrow\left[0,1\right]^{n_{c}}.\]

Specifically, we model the _uncertainty_, \(\rho\), of the copy for a given data point \(z\) as the normalized Euclidean norm of the distance between the \(n_{c}\)-vectors output by the original model and the copy, such that

\[\rho(z,\theta)=\frac{\|f_{\mathcal{C}}(z,\theta)-f_{\mathcal{O}}(z)\|_{2}}{\sqrt{n_{c}}}\in[0,1]. \tag{9}\]

A small \(\rho\) value indicates that the copy has low uncertainty (or strong confidence) in the class prediction output for \(z\) and that \(z\) is properly classified. Conversely, a large \(\rho\) value indicates that, despite the copy model having strong confidence in the class predicted for \(z\), this prediction is incorrect; or, alternatively, that despite the prediction being correct, there is a large dispersion among the output class probabilities. This uncertainty measure can be used to evaluate how well individual data points are compressed by the copy model in the sequential framework. In the limit, this measure can also be used to assess how well the copy model replicates the original decision behavior. Hence, we can introduce a new loss function such that for a given set of data points \(S_{t}\) we define the empirical risk of the copy as

\[R^{\mathcal{F}}_{emp}(f_{\mathcal{C}}(\theta_{t}),f_{\mathcal{O}})=\frac{1}{\left|S_{t}\right|}\sum_{z\in S_{t}}\rho^{2}(z,\theta). \tag{10}\]

The results of using this loss function are displayed in Figure 2. The plot illustrates the average value of \(\rho_{t}\) per iteration over all data points and runs for the different copying approaches introduced in Figure 1b). The value of \(\rho_{t}\) is already very low for the sequential learning approach by iteration \(t=15\). Conversely, the single-pass and online learning methods exhibit higher uncertainty levels throughout the process. This difference can be attributed to the fact that the sequential approach continuously exposes the model to the same data points, reducing uncertainty iteration by iteration.
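As a small illustration, the uncertainty of Equation 9 and the empirical risk of Equation 10 could be computed as follows in NumPy. Here `proba_copy` and `onehot_original` stand for the copy's class probabilities and the original model's one-hot labels over a batch; both names are placeholders introduced for this sketch.

```python
import numpy as np

def rho(proba_copy, onehot_original):
    """Per-sample uncertainty (Equation 9): normalized Euclidean
    distance between copy probabilities and one-hot labels."""
    n_c = proba_copy.shape[1]
    return np.linalg.norm(proba_copy - onehot_original, axis=1) / np.sqrt(n_c)

def copy_empirical_risk(proba_copy, onehot_original):
    """Empirical risk of Equation 10: mean squared uncertainty."""
    return np.mean(rho(proba_copy, onehot_original) ** 2)
```

A point with \(\rho\) close to zero is then treated as already compressed by the copy, which is precisely the criterion exploited by the sample selection policy introduced below.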
In contrast, the one-shot and online methods expose the model to each data point only once, which leads to higher uncertainty levels: the one-shot single-pass approach sees the whole dataset in a single pass, while the online method never revisits the same data points. As a result, the sequential approach benefits from higher redundancy and reduces uncertainty faster than the online method.

Figure 2: Average uncertainty across the entire dataset for both the sequential (orange) and online single-pass (blue) copying approaches, averaged over 30 independent runs. The uncertainty achieved by the one-shot single-pass approach, with different values of \(n\), is represented by dashed lines. The shaded area indicates one standard deviation from the mean.

#### 3.2.3 A sample selection policy for copying

Including an uncertainty measure in the training algorithm enables us to assess the degree of compression for each data point. The higher the compression, the better the copy model is at capturing the pattern encoded by each data point for the given task. This estimation can be used for selecting those points that contribute the most to the learning process. By filtering out the rest of the samples, we can reduce the number of resources consumed when copying. Hence, we enforce a policy that uses uncertainty as a criterion for sample selection. At each iteration, data points with an uncertainty lower than a threshold \(\delta\) are removed from the learning process (refer to Algorithm 2). The procedure starts with building a new sequential set by randomly sampling \(n\) points and adding them to \(S_{t-1}\) in line 1. Then, in line 2, the uncertainty measure is used to discard points with \(\rho_{j}\leq\delta\), forming the filtered set \(S_{t}\) that is used to optimize the copy parameters.

Figure 3 presents the results for sequential training with \(n=100\) and different values of the sample selection parameter \(\delta\) on the _spirals_ dataset. For comparison, results for the online single-pass approach and the pure sequential approach are also shown. Figure 3a) displays the average accuracy at each iteration. The results demonstrate that sequential training with sample selection performs better than online training, but falls short of the pure sequential setting. Figures 3b) and c) show the change in \(\rho\) and the number of synthetic data points used for training over increasing iterations, respectively. The online single-pass approach shows a constant uncertainty, while that of the pure sequential approach tends to 0. Contrary to what one might expect, the sample selection policy leads to an overall increase in uncertainty, while also reducing the number of data points used for training. Eventually, the number of samples \(N_{n}\) converges to a fixed value after a few iterations.

We consider all three plots in Figure 3 at once. During the first iterations, when copies are still not sufficiently tuned to the data, there is no filtering effect. The number of points \(N_{n}\) grows almost equally for all the settings for a few iterations, as shown in Figure 3c). During this time, accuracy increases gradually, as shown in Figure 3a), and \(\rho\) decreases, as displayed in Figure 3b), as expected for a model that is compressing information.
The differences arise after a few iterations, when the filtering effect begins and the number of data points decreases in settings where the sample selection policy is enforced. At this stage, the copy models have a confidence level close to \(\delta\), which allows the removal of samples. As a result, accuracy stops growing and becomes flat, while average uncertainty starts to rise. With regard to the number of points, the curves quickly stabilize by \(t=18\). The model with the lowest threshold, \(\delta=10^{-10}\), accumulates the largest number of points (\(\approx 250\)). This is less than \(10\%\) of the number of points required by the pure sequential method, yet it still reaches a reasonable accuracy. On the other hand, models trained with \(\delta=10^{-8}\) and \(\delta=10^{-6}\) have lower accuracy compared to the online single-pass setting.

This algorithm addresses the problem in Equation 5, but with limitations. The uncertainty metric \(\rho\) is linked to the empirical fidelity error \(R_{emp}^{\mathcal{F}}\): the points with high uncertainty also have the highest empirical fidelity error. However, removing points from the set \(S_{t}\) violates the assumptions in Theorems 1 and 2. When the copy model has high confidence in the synthetic data, sample removal changes its behavior from sequential (adding points incrementally) to online (fixed number of points per iteration). For parametric models, this shift results in _catastrophic forgetting_, as shown in Figure 3, where sequential models start to act like online models. The removal of samples increases uncertainty in the copy models, causing them to forget prior information. To address this, we introduce several improvements in the implementation of Step 2 in the alternating optimization algorithm.

### Step 2: Optimizing copy model parameters

Let us briefly turn our attention to Equation 6. This equation models the challenge of controlling the capacity of the copy model while minimizing the loss function. To tackle this issue, we introduce a capacity-enhancement strategy. For this purpose, it is worth noting that, if we assume that the copy model has enough capacity, then Equation 6 can be simplified for a given iteration \(t\) as follows:

\[\underset{\theta}{\text{minimize}}\quad\Omega(\theta)\tag{11}\]
\[\text{subject to}\quad\left\|R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}})\right\|\Bigg{|}_{S=S_{t}^{*}}<\varepsilon.\]

With the above simplification in mind, our proposed scheme, outlined in Algorithm 3, aims to control the copy model's capacity while minimizing the loss function. It iteratively solves the parameter optimization problem in stages, ensuring that the empirical risk decreases at each step. At iteration \(t\), the copy model starts with a small capacity \(\Omega\) and solves the optimization problem for successively tighter upper bounds \(\epsilon_{k}\) of the target value \(\varepsilon\). For increasing values of \(k\), we increase the capacity and reduce the upper bound. The larger the capacity, the further the empirical risk can decrease, and the closer \(\epsilon_{k}\) approximates the target value.

Figure 3: a) Accuracy and b) uncertainty per iteration for the sequential approach with different uncertainty thresholds, averaged over 30 independent runs. Results for the online setting are also shown in blue for comparative purposes at every iteration. c) Number of data points used at each iteration. The \(y\)-axis is restricted to 400 in order to make curves observable.
The number of points of the pure sequential setting (orange line) grows linearly up to \(30\cdot 100=3000\).

In practice, we train copies using the loss function in Equation 10. Using stochastic subgradient optimization models (neural networks trained with stochastic gradient descent), we control model complexity with an early stopping criterion. Thus, delaying early stopping increases model complexity. To improve convergence, hyperparameters are updated at each iteration.

#### 3.3.1 Forcing the model to remember

With the previous scheme in mind, we revisit the argument in Section 3.2.3. As noted, the sample removal policy conflicts with some of the assumptions made in the theorems introduced in Section 3.2, leading to a forgetting effect in Step 2. To restore the convergence properties, we first point out that, as shown by Theorem 2, parameters converge to their optimal value in the sequential approach. This implies that the difference between two consecutive terms in the sequence also converges to zero, such that

\[\big{|}\big{|}\theta_{t+1}^{*}-\theta_{t}^{*}\big{|}\big{|}\longrightarrow 0. \tag{12}\]

The asymptotic invariant in Equation 12 can be enforced to preserve the compressed model obtained from previous iterations even after filtering the data points. We add a regularization term to the loss function at iteration \(t\), which penalizes this difference:

\[\mathcal{L}_{t}=R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta_{t}),f_{\mathcal{O}};\Omega)+\lambda\cdot\|\theta_{t}-\theta_{t-1}^{*}\|. \tag{13}\]

This regularization term originates from the derived theorems and can also be found in the literature under the name of Elastic Weight Consolidation (EWC), though derived heuristically from a different set of assumptions (Kirkpatrick et al., 2017). Algorithm 4 outlines the implementation of this strategy.

```
1: Input: Sample set \(S\), float \(\varepsilon\), classifier \(f_{\mathcal{O}}\)
2: Output: Copy parameters at step \(k\)
3: \(k\gets 0\), \(\epsilon_{k}\gets C\), \(C>>\varepsilon\)   \(\triangleright\) \(C\) takes an arbitrarily large value.
4: while \(R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta_{k}),f_{\mathcal{O}};\Omega)\geq\varepsilon\) do
5:   \(\theta_{k}^{*}\leftarrow\underset{\theta}{\arg\min}\;R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}};\Omega)\) subject to \(R_{emp}^{\mathcal{F}}(f_{\mathcal{C}}(\theta),f_{\mathcal{O}};\Omega)\geq\epsilon_{k}\bigg{|}_{\theta_{k}^{0}=\theta_{k-1}^{*}}\)
6:   \(k\gets k+1\)
7:   \(\epsilon_{k}\leftarrow\max(\epsilon_{k-1}/2,\varepsilon)\)   \(\triangleright\) Reduce the value of epsilon.
8:   Increase \(\Omega\)
9: end while
10: return \(\theta_{k}^{*}\)
```
**Algorithm 3** Empirical risk minimization implementation

Continuing with the _spirals_ example, the experimental results for the sequential copying process with four different \(\lambda\) values and a threshold \(\delta=10^{-8}\) are shown in Figure 4 (see also Figure 3 for comparison). As before, Figure 4a) reports the change in accuracy for increasing iterations, while Figures 4b) and c) present the results for the uncertainty \(\rho\) and the number of data points used \(N_{n}\). Again, we can combine the information from all three panels to better understand the behavior displayed by the copy model when _forced to remember_. Initially, the model has not been trained, so the number of points increases while the uncertainty decreases. Once the removal threshold \(\delta\) is reached, the regularization term becomes relevant.
For low \(\lambda\) values, the \(\rho\)-term of Equation 13 dominates the cost function, and the optimization process learns the new data points. As a result, the copy parameters adapt to new data points and the model exhibits forgetting. Conversely, for _large_ \(\lambda\) values, the regularization term will dominate the optimization. The model is therefore forced to retain more data points to ensure the \(\rho\) term can compete with the regularization term. In this setting, the number of data points still grows, but in a sub-linear way. The most desirable behavior is a constant number of points during the training process. For this particular example, we observe this behavior for \(\lambda=0.05\) and \(\lambda=0.1\).

#### 3.3.2 Automatic Lambda

In the previous section, we observed an increment in the number of data points when using large lambda values, to compensate for the memorization effect introduced by the regularization term. This is to consolidate the previous knowledge acquired by the model. In contrast, small lambda values promote short-lived data compression. This is desirable at the beginning of the learning process or when the data distribution suffers a shift. To retain the best of both regimes, we propose a heuristic to dynamically adapt the \(\lambda\) value to the needs of the learning process. Our underlying intuition is that, whenever the amount of data required increases, the memory term prevents the copy from adapting to new data. This signals that the value of \(\lambda\) must decrease. Equivalently, when we observe a decrement in the number of data points, this means that the model can classify most of them correctly. This indicates that we must stabilize it to avoid unnecessary model drift due to future disturbances in the data. Hence, we must increase the \(\lambda\) parameter. Thus, considering the set of data at iteration \(t\), \(S_{t}\), we force the described behavior by automatically updating the value of the \(\lambda\) parameter as follows:

\[\lambda=\left\{\begin{array}{ll}\lambda/2&\mbox{if }\quad|S_{t}|\geq|S_{t-1}|,\\ 1.5\cdot\lambda&\mbox{otherwise.}\end{array}\right.\]

The updated optimization process, including the modifications discussed, is presented in Algorithm 5, the final algorithm proposed in this article. Our implementation models data trends by computing the difference between the number of points in the previous iteration and the number of points after data filtering. The filtering process occurs at the beginning of each iteration, after generating a new set of \(n\) samples.

Figure 4: a) Copy accuracy averaged over 30 independent runs for four different lambda values: \(\lambda=0.005,0.05,0.1,0.25\), with the same dropping threshold \(\delta=10^{-8}\). b) Average uncertainty at each iteration. c) Number of data points used at each iteration. Shaded regions show \(\pm\sigma\).

As before, we show how this improvement works for the _spirals_ example. We repeat the experiments using Algorithm 5 with \(n=100\) and three different dropping thresholds: \(\delta=10^{-6},10^{-8}\) and \(10^{-10}\), as we did in Subsection 3.2.3, where the dropping procedure was first introduced. Recall that, as discussed in Figure 3, only the setting corresponding to the smallest \(\delta\) value managed to perform better than the online approach, yet it still performed worse than the pure sequential implementation. The remaining thresholds lost too many data points, and their performance was worse than the online approach when the number of data points used was similar.
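Before examining the results of the full algorithm, its control flow (sample generation, \(\delta\)-filtering, the memory regularizer of Equation 13, and the automatic \(\lambda\) update) can be sketched as follows. The helpers `sample`, `train` and `rho` are hypothetical stand-ins for the synthetic sampler, the regularized optimizer and the uncertainty of Equation 9; this is an illustrative outline of Algorithm 5, not the authors' implementation.

```python
import numpy as np

def sequential_copy_auto_lambda(f_O, sample, train, rho,
                                T=30, delta=1e-8, lam=0.5):
    """Sketch of the full sequential loop: add n fresh samples, drop
    points the copy already compresses (rho <= delta), re-fit with the
    memory term lam * ||theta - theta_prev||, and adapt lam from the
    trend in the number of retained points."""
    S = sample()                                   # initial synthetic set
    theta = train(S, f_O(S), theta_prev=None, lam=0.0)
    n_prev = len(S)
    for t in range(1, T):
        S = np.vstack([S, sample()])               # generate n new samples
        S = S[rho(S, theta) > delta]               # keep uncertain points only
        theta = train(S, f_O(S), theta_prev=theta, lam=lam)
        lam = lam / 2 if len(S) >= n_prev else 1.5 * lam  # automatic lambda
        n_prev = len(S)
    return theta
```

The initial value `lam=0.5` follows the setting reported in the experiments; the update rule mirrors the piecewise expression above, halving \(\lambda\) when the retained set grows and increasing it by a factor of 1.5 otherwise.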
The results obtained when implementing the full Algorithm 5 are depicted in Figure 5. The accuracy level obtained using the automatic regularization term is comparable with the desired optimal _pure sequential_ case, where the model keeps all data points. Even for the most conservative approach, where we use a very small dropping threshold, the number of points used after 30 iterations is \(1/3\) smaller than that required for the pure sequential setting. Moreover, even when deliberately forcing a very volatile setup by using a large value of delta, \(\delta=10^{-6}\), the obtained results exhibit an accuracy larger than \(0.95\) for an almost constant number of data points equal to \(200\).

Figure 5: a) Copy accuracy averaged over 30 independent runs for three different dropping thresholds: \(10^{-6}\), \(10^{-8}\), and \(10^{-10}\). b) Average uncertainty at each iteration. c) Number of data points used at each iteration. Shaded regions show \(\pm\sigma\).

## 4 Experiments

We present the results of our experimental study on the sequential copy approach applied to a set of heterogeneous problems. Our results are analyzed using various performance metrics and compared to the single-pass approach in both one-shot and online settings. Before presenting the results, we provide a clear and reproducible description of the data and experimental setup.

### Experimental set-up

**Data.** We use 58 datasets from the UCI Machine Learning Repository database (Dheeru and Karra Taniskidou, 2017) and follow the experimental methodology outlined in (Unceta et al., 2020a). For a more detailed explanation of the problem selection and data preparation process, we refer the reader to the mentioned article.

**Pre-processing.** We convert all nominal attributes to numerical and rescale variables to have zero mean and unit variance. We split the pre-processed data into stratified 80/20 training and test sets. We sort the datasets in alphabetical order, group them in sets of 10, and assign each group one of the following classifiers: AdaBoost (_adaboost_), artificial neural network (_ann_), random forest (_random_forest_), linear SVM (_linear_svm_), radial basis kernel SVM (_rbf_svm_) and gradient-boosted tree (_xgboost_). We train all models using a 3-fold cross-validation over a fixed parameter grid. A full description of the 58 datasets, including general data attributes and their assigned classifier, can be found in Table 3.

**Models.** We copy the resulting original classifiers using a fully-connected neural network with three hidden layers, consisting of 64, 32, and 10 _ReLU_ neurons, and a SoftMax output layer. We use no pre-training or dropout and initialize the weights randomly. We implement the sequential copy process as described in Algorithm 5 above.

**Parameter setting.** We train these models sequentially for 30 iterations. At each iteration \(t\), we generate \(n=100\) new data points by randomly sampling a standard normal distribution \(\mathcal{N}(0,1)\). We use Algorithm 5, discarding any data points for which the instantaneous copy model has an uncertainty \(\rho\) below a defined threshold \(\delta\). We adjust the weights at each iteration using the _Adam_ optimizer with a learning rate of \(5\cdot 10^{-4}\). For each value of \(t\), we use 1000 epochs with balanced batches of 32 data points.
We use the previously defined normalized uncertainty average as the loss function and evaluate the impact of the \(\delta\) parameter by running independent trials for \(\delta\in\{5\cdot 10^{-4},10^{-4},5\cdot 10^{-5},10^{-5},5\cdot 10^{-6},10^{-6},5\cdot 10^{-7},10^{-7},5\cdot 10^{-8},10^{-8},10^{-9},10^{-10}\}\). Additionally, we allow the \(\lambda\) parameter to be updated automatically, starting from a value of 0.5.

**Hardware.** We perform all experiments on a server with two 28-core AMD EPYC 7453 processors at 2.75 GHz, equipped with 768 GB of RDIMM/3200 RAM. The server runs on Linux 5.4.0. We implement all experiments in Python 3.10.4 and train copies using TensorFlow 2.8. For validation purposes, we repeat each experiment 30 times and present the average results of all repetitions.

### Metrics and performance plots

We evaluate the copy performance using three metrics, the first being copy accuracy (\(\mathcal{A}_{\mathcal{C}}\)). The copy accuracy is calculated as the accuracy of the copy on the original test set4. Given a model \(f\), we define the copy accuracy as the fraction of correct predictions made by this model on a data set of labelled pairs, \(\mathcal{D}=\{(x_{j},y_{j})\}\), as:

Footnote 4: Computing this value requires the original test data to be known and accessible. We consider this assumption to be reasonable, as it provides a lower bound for all the other metrics.

\[\mathcal{A}_{\mathcal{C}}^{f}=\frac{1}{N}\sum_{j=1}^{N}\mathbb{I}[f(x_{j})=y_{j}], \tag{14}\]

where \(N\) is the number of samples and \(\mathbb{I}[\text{cond}]\) is the indicator function that returns 1 if the condition is true, i.e., when the model predicts the right outcome. We use the definition above to compare the results obtained when using the sequential approach, \(\mathcal{A}_{\mathcal{C}}^{\text{seq}}\), with those obtained when training copies based on the single-pass approach, \(\mathcal{A}_{\mathcal{C}}^{\text{single}}\).

Another performance metric is the area under the normalized convergence accuracy curve (\(conv\)). This metric measures the convergence speed of the sequential approach and is defined as the area under the curve of the copy accuracy per iteration, normalized by the maximum copy accuracy. The \(conv\) value is in the range of 0 to 1 and represents the fraction of time required for the system to reach a steady convergence state. For \(T\) iterations, the value of \(conv\) is defined as follows:

\[conv=\frac{1}{T}\frac{\int_{0}^{T}\mathcal{A}_{\mathcal{C}}^{\text{seq}}(t)\;dt}{\max_{t\in[0,T]}\mathcal{A}_{\mathcal{C}}^{\text{seq}}(t)}, \tag{15}\]

where \(\mathcal{A}_{\mathcal{C}}^{\text{seq}}(t)\) corresponds to the copy accuracy at iteration \(t\). Intuitively, the _conv_ metric measures the time required for the system to reach a steady convergence state. Given \(T\) iterations of the sequential approach, a convergence speed of _conv_ means that the algorithm reaches the steady state at step \(2(1-\textit{conv})T\). A _conv_ value of 90% tells us that we can reach convergence as fast as \(0.2T\); that is, the system requires only 20% of the allocated time to converge.

Finally, we also introduce the efficiency metric, _eff_. This metric evaluates the computational cost of the copying process in terms of the number of synthetic data points used for training.
\begin{table}
\begin{tabular}{l c c c c c}
\hline\hline
Dataset & Classes & Samples & Features & Original & \(\mathcal{A}_{\mathcal{O}}\) \\
\hline
abalone & 3 & 4177 & 8 & _adaboost_ & 0.545 \\
acute-inflammation & 2 & 120 & 6 & _adaboost_ & 1.0 \\
acute-nephritis & 2 & 120 & 6 & _adaboost_ & 1.0 \\
bank & 2 & 4521 & 16 & _adaboost_ & 0.872 \\
breast-cancer-wisc-diag & 2 & 569 & 30 & _adaboost_ & 0.921 \\
breast-cancer-wisc-prog & 2 & 198 & 33 & _adaboost_ & 0.7 \\
breast-cancer-wisc & 2 & 699 & 9 & _adaboost_ & 0.914 \\
breast-cancer & 2 & 286 & 9 & _adaboost_ & 0.69 \\
breast-tissue & 6 & 106 & 9 & _adaboost_ & 0.545 \\
chess-krvkp & 2 & 3196 & 36 & _ann_ & 0.995 \\
congressional-voting & 2 & 435 & 16 & _ann_ & 0.609 \\
conn-bench-sonar-mines-rocks & 2 & 208 & 60 & _ann_ & 0.833 \\
connect-4 & 2 & 67557 & 42 & _ann_ & 0.875 \\
contrac & 3 & 1473 & 9 & _ann_ & 0.573 \\
credit-approval & 2 & 690 & 15 & _ann_ & 0.79 \\
cylinder-bands & 2 & 512 & 35 & _ann_ & 0.777 \\
echocardiogram & 2 & 131 & 10 & _ann_ & 0.815 \\
energy-y1 & 3 & 768 & 8 & _ann_ & 0.974 \\
energy-y2 & 3 & 768 & 8 & _ann_ & 0.922 \\
fertility & 2 & 100 & 9 & _random_forest_ & 0.9 \\
haberman-survival & 2 & 306 & 3 & _random_forest_ & 0.613 \\
heart-hungarian & 2 & 294 & 12 & _random_forest_ & 0.763 \\
hepatitis & 2 & 155 & 19 & _random_forest_ & 0.742 \\
ilpd-indian-liver & 2 & 583 & 9 & _random_forest_ & 0.615 \\
ionosphere & 2 & 351 & 33 & _random_forest_ & 0.944 \\
iris & 3 & 150 & 4 & _random_forest_ & 0.933 \\
magic & 2 & 19020 & 10 & _linear_svm_ & 0.801 \\
mammographic & 2 & 961 & 5 & _random_forest_ & 0.803 \\
miniboone & 2 & 130064 & 50 & _random_forest_ & 0.936 \\
molec-biol-splice & 3 & 3190 & 60 & _random_forest_ & 0.944 \\
mushroom & 2 & 8124 & 21 & _linear_svm_ & 0.979 \\
musk-1 & 2 & 476 & 166 & _linear_svm_ & 0.812 \\
musk-2 & 2 & 6598 & 166 & _linear_svm_ & 0.958 \\
oocytes_merluccius_nucleus_4d & 2 & 1022 & 41 & _linear_svm_ & 0.771 \\
oocytes_trisopterus_nucleus_2f & 2 & 912 & 25 & _linear_svm_ & 0.803 \\
parkinsons & 2 & 195 & 22 & _linear_svm_ & 0.923 \\
pima & 2 & 768 & 8 & _linear_svm_ & 0.721 \\
pittsburg-bridges-MATERIAL & 3 & 106 & 7 & _linear_svm_ & 0.909 \\
pittsburg-bridges-REL-L & 3 & 103 & 7 & _linear_svm_ & 0.667 \\
pittsburg-bridges-T-OR-D & 2 & 102 & 7 & _rbf_svm_ & 0.857 \\
planning & 2 & 182 & 12 & _rbf_svm_ & 0.703 \\
ringnorm & 2 & 7400 & 18 & _rbf_svm_ & 0.983 \\
seeds & 3 & 210 & 7 & _rbf_svm_ & 0.881 \\
spambase & 2 & 4601 & 57 & _rbf_svm_ & 0.926 \\
statlog-australian-credit & 2 & 690 & 14 & _rbf_svm_ & 0.681 \\
statlog-german-credit & 2 & 1000 & 24 & _rbf_svm_ & 0.765 \\
statlog-heart & 2 & 270 & 13 & _rbf_svm_ & 0.852 \\
statlog-image & 7 & 2310 & 18 & _rbf_svm_ & 0.952 \\
statlog-vehicle & 4 & 846 & 18 & _xgboost_ & 0.765 \\
synthetic-control & 6 & 600 & 60 & _rbf_svm_ & 1.0 \\
teaching & 3 & 151 & 5 & _xgboost_ & 0.548 \\
tic-tac-toe & 2 & 958 & 9 & _xgboost_ & 0.974 \\
titanic & 2 & 2021 & 3 & _xgboost_ & 0.778 \\
twonorm & 2 & 7400 & 20 & _xgboost_ & 0.976 \\
vertebral-column-2clases & 2 & 310 & 6 & _xgboost_ & 0.839 \\
vertebral-column-3clases & 3 & 310 & 6 & _xgboost_ & 0.806 \\
waveform-noise & 3 & 5000 & 40 & _xgboost_ & 0.843 \\
waveform & 3 & 5000 & 21 & _xgboost_ & 0.843 \\
wine & 3 & 178 & 11 & _xgboost_ & 0.944 \\
\hline\hline
\end{tabular}
\end{table}
Table 3: Description of datasets.

We compute it by comparing the actual number of points used in the sequential approach with the theoretical number of points that would be used if no sample removal
policy was applied. We define _eff_ as:

\[\textit{eff}=1-\frac{\int_{0}^{T}\eta(t)\;dt}{\int_{0}^{T}n\cdot t\;dt}, \tag{16}\]

where \(\eta(t)\) is the number of points used in the sequential approach at each iteration \(t\), as shown in Figure 5c). The _eff_ metric models the expected fraction of samples required for copying. This value can be roughly approximated by \((1-\textit{eff})/2\). An _eff_ value of 90% indicates that on average, only 5% of the available data points are used in the process. Note that both the pure sequential approach, where no sample removal policy is used, and the single-pass approach have 0 efficiency, because they both use all the available data points for training.

We evaluate the performance of the sequential approach by combining various metrics into a single representation. We consider models with copy accuracy within 5% of the single-pass result (\(\mathcal{A}_{\mathcal{C}}^{\text{seq}}/\mathcal{A}_{\mathcal{C}}^{\text{single}}\big{|}_{nT}>0.95\)). Out of these configurations, we select the ones with the highest copy accuracy (\(\mathcal{A}_{\mathcal{C}}^{\text{seq}}\)), best efficiency (_eff_), and fastest convergence (_conv_), referred to as the _Best accuracy_, _Best efficiency_, and _Best convergence_, respectively. To visualize the results, we present four plots: (1) a comparison of the copy accuracy between the single-pass approach and the _Best accuracy_ model, (2) a comparison of the copy accuracy and efficiency between the _Best accuracy_ and _Best efficiency_ results, (3) a comparison of the copy accuracy and convergence rate between the _Best accuracy_ and _Best convergence_ configurations, and (4) a demonstration of the relationship between convergence and efficiency.

As an example, Figure 6 shows these plots for three toy datasets: _spirals_, _moons_, and _yin-yang_. In Figure 6a), we can compare the performance degradation or improvement of the sequential approach with respect to the single-pass approach. In Figure 6b), a comparison between the gain in efficiency (square marker) and the best-performing configuration can be seen for each dataset. The most efficient configuration typically displays a significant efficiency gain. Figure 6c) compares the _Best accuracy_ and _Best convergence_ operational points. We observe that both configurations are nearly indistinguishable in terms of both accuracy and convergence, i.e. the most accurate model is also the one which converges faster. Finally, in Figure 6d) we observe that the _Best efficiency_ configurations yield a significant improvement in efficiency, while the loss in convergence is not too big. The biggest gains are therefore obtained in terms of efficiency. This is a relevant result, because it shows that the sequential approach to copying can provide significant advantages for memory usage and computational resource allocation.

### Results

We report the metrics for each UCI dataset as introduced above. Table 4 lists the values of efficiency (_eff_), convergence rate (_conv_), and accuracy \(\mathcal{A}_{\mathcal{C}}^{\text{seq}}\) for the three operational points: _Best accuracy_, _Best efficiency_, and _Best convergence_. These results are compared to the original accuracy (\(\mathcal{A}_{\mathcal{O}}\)) and the one-shot single-pass copy accuracy (\(\mathcal{A}_{\mathcal{C}}^{\text{single}}\)). We observe that not all copies can perfectly reproduce the original accuracy. This effect is observed in 19 datasets for both single-pass and sequential models.
Possible reasons include a mismatch between the copy's capacity and the boundary complexity, or a misalignment between the sampling region and the original data distribution. This observation is in line with results in the literature, but it is outside of the scope of this article. Conversely, in 8 datasets, copies outperform the original classifier, significantly so in 4 cases. This unexpected result may be due to statistical noise and requires further investigation.

Figure 6: a) Comparison of accuracy between single-pass and sequential approaches on the _spirals_, _yin-yang_, and _moons_ datasets. The dashed line marks equal accuracy between the two methods. Points above (below) the line indicate better accuracy for the sequential (single-pass) approach. b), c), and d) Comparison of _Best accuracy_ (circles), _Best efficiency_ (squares) and _Best convergence_ (stars) operational points for each dataset. The gray dashed lines connect the points of the same dataset and display the linear fit of intermediate solutions.

The relevant results for our proposal show that the sequential copy process at the _Best accuracy_ operational point matches the single-pass approach in 53 of the 58 problems, performs worse in 4 datasets and better in 1. On average, the copy process converges in 11.5% of the allotted time, with a convergence metric of _conv_ \(=0.942\) and an efficiency of _eff_ \(=0.716\). This requires an average of 14% of the samples used by the single-pass approach. A graphic display of the results comparing both approaches is shown in Figure 7. The most notable results have been highlighted in darker colors to ease interpretation. Values plotted on the diagonal correspond to cases where both approaches yield comparable results. Most of the sequential copies recover most of the single-pass accuracy, even when training on smaller synthetic datasets. In some datasets, however, copies based on the sequential approach fall below the diagonal. This effect is observed when the amount of memory (\(n\)) is not enough to describe the decision boundary entirely. Finally, we observe some cases where points lie above the diagonal, signaling that sequential copies improve over the single-pass results.

The results of the _Best efficiency_ and _Best convergence_ operational points have an average accuracy degradation of 5%, which is not statistically significant. For the _Best efficiency_ operational point, 51 datasets show a degradation in performance, but there is a statistically significant improvement in efficiency in 47 datasets (_eff_ \(=0.882\)), using only 6% of the data points. The time to convergence increases to 17.4% of the allotted time, meaning 3% more time compared to the _Best accuracy_ configuration. For the _Best convergence_ operational point, 20 out of 58 datasets show a performance degradation, which is not statistically significant. There is a statistically significant efficiency loss in 10 datasets, and it requires an average of 15% of the allotted time. This operational point barely improves the convergence speed compared to _Best accuracy_. It differs from it on average in 20 datasets, with a required time to reach the steady state of 11.2% of the allotted time. This small time difference is due to the automatic lambda setting. Large \(\lambda\) values and small \(\delta\) values ensure fast convergence speeds. The automatic lambda algorithm starts with a large lambda value for fast convergence at the initial steps, then reduces it in subsequent iterations to improve accuracy.
This results in similar metrics for both the _Best accuracy_ and _Best convergence_ configurations.

The operational points discussed are graphically represented in Figure 8. Figures 8a), b) and c) mark the _Best accuracy_ operational point with a circle, the _Best efficiency_ operational point with a square and the _Best convergence_ operational point with a star. For each dataset, the two points are linked by a line corresponding to a linear fit of the intermediate solutions. Triangles mark the datasets where there is no statistically significant difference between the considered operational points. Figure 8a) shows that, in most cases, the method's efficiency can be largely increased with only a small degradation in accuracy, as indicated by the relatively flat slopes. Figure 8b) confirms these results, with the _Best accuracy_ and _Best convergence_ operational points tending to be very similar, as evidenced by the high number of triangles. The method thus showcases fast convergence speed and high accuracy simultaneously. Finally, Figure 8c) displays convergence against efficiency, with the _Best convergence_ operational point marked with a circle. A dashed line links this point to the _Best efficiency_ operational point in the same dataset. We observe a larger slope in the lines, which indicates a trade-off between the number of data points used and the speed of convergence. This is to be expected, because the larger the number of points used in training, the fewer iterations the system will probably require. However, the method still shows fast convergence, ranging between 85% and 98%. This indicates that the method is not only fast, but also requires a very small number of points.

## 5 Conclusions

In this paper, we proposed a sequential approach for replicating the decision behavior of a machine learning model through copying. Our approach offers a unique solution to the problem of balancing memory requirements and convergence speed and is the first to tackle the problem modeled in Equation 1 in the context of copying. To this aim, we moved the copying problem to a probabilistic setting and introduced two theorems to demonstrate that the sequential approach converges to the single-pass approach as the number of samples used for copying increases. We also studied the duality of compression and memorization in the copy model and showed that a perfect copy can compress all the data in the model parameters. To evaluate this effect, we used epistemic uncertainty as a reliable data compression measure for copying. This measure is only valid for copies and cannot be extrapolated to standard learning procedures. This is because, contrary to the standard learning case, there is no aleatoric uncertainty when copying. Therefore, all uncertainty measured corresponds to that coming from the model itself. As such, we can devise copy models that effectively compress all the data with guarantees. With this in mind, we have identified the phenomenon of catastrophic forgetting in copies, a well-known effect that appears in online learning processes. To mitigate this effect, we have introduced a regularization term, derived from an invariant in one of the theorems, that enables the process to become more stable. To reduce the computational time and memory used, we also introduced a sample selection policy. This policy controls the compression level that each data sample undertakes.
If new data points are already well compressed and represented by the copy at the considered iteration, it is unnecessary to feed them back to the model. As a result of this process, very little data is required during learning. Moreover, we observed that the number of samples required to converge to an optimal solution stabilizes at a certain amount. Additionally, we introduced a regularization term for the copy loss function to prevent noise during the learning process. This term prevents copies from diverging from one iteration to another. To control the hyper-parameter governing this regularization term, we devised an automatic adjustment policy. This policy resorts to a simple but stable meta-learning algorithm that dynamically adjusts the weight of the regularization term. As a result, there is no need for hyperparameter tuning. Our empirical validation on 58 UCI datasets and six different machine learning architectures showed that the sequential approach can create a copy with the same accuracy as the single-pass approach while offering faster convergence and more efficient use of computational resources. The sequential approach provides a flexible solution for companies to reduce the maintenance costs of machine learning models in production by choosing the most suitable copying setting based on the computational resources available for memory and execution time.

## Acknowledgments

This work was funded by Huawei Technologies Duesseldorf GmbH under project TC20210924032, and partially supported by MCIN/AEI/10.13039/501100011033 under project PID2019-105093GB-I00.
2305.18840
Learning Perturbations to Explain Time Series Predictions
Explaining predictions based on multivariate time series data carries the additional difficulty of handling not only multiple features, but also time dependencies. It matters not only what happened, but also when, and the same feature could have a very different impact on a prediction depending on this time information. Previous work has used perturbation-based saliency methods to tackle this issue, perturbing an input using a trainable mask to discover which features at which times are driving the predictions. However these methods introduce fixed perturbations, inspired from similar methods on static data, while there seems to be little motivation to do so on temporal data. In this work, we aim to explain predictions by learning not only masks, but also associated perturbations. We empirically show that learning these perturbations significantly improves the quality of these explanations on time series data.
Joseph Enguehard
2023-05-30T08:33:50Z
http://arxiv.org/abs/2305.18840v1
# Learning Perturbations to Explain Time Series Predictions ###### Abstract Explaining predictions based on multivariate time series data carries the additional difficulty of handling not only multiple features, but also time dependencies. It matters not only what happened, but also when, and the same feature could have a very different impact on a prediction depending on this time information. Previous work has used perturbation-based saliency methods to tackle this issue, perturbing an input using a trainable mask to discover which features at which times are driving the predictions. However these methods introduce fixed perturbations, inspired from similar methods on static data, while there seems to be little motivation to do so on temporal data. In this work, we aim to explain predictions by learning not only masks, but also associated perturbations. We empirically show that learning these perturbations significantly improves the quality of these explanations on time series data.
## 1 Introduction Widely used explanation methods include gradient-based approaches such as Integrated Gradients and DeepLift (Sundararajan et al., 2017; Shrikumar et al., 2017). Another important class of explanation methods is called _perturbation-based_. These methods consist in perturbing a feature or a group of features, and measuring how the resulting prediction changes. A greater change indicates a higher importance of the perturbed features.
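To make this idea concrete, here is a minimal, illustrative sketch of occlusion-style importance scoring (our own example, not code from any of the papers cited below); the `model` callable, the input shape and the baseline value are assumed placeholders:

```python
import torch

def occlusion_importance(model, x, baseline=0.0):
    """x: 1-D feature tensor; model: a callable returning a scalar prediction.
    Each feature is replaced in turn by `baseline`; the absolute change in
    the prediction is taken as that feature's importance."""
    with torch.no_grad():
        original = model(x)
        scores = torch.empty_like(x)
        for i in range(x.numel()):
            perturbed = x.clone()
            perturbed[i] = baseline      # occlude feature i
            scores[i] = (original - model(perturbed)).abs()
    return scores
```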
Such methods include Occlusion (Zeiler and Fergus, 2014), which masks features to estimate their importance. Extremal Masks (Fong and Vedaldi, 2017; Fong et al., 2019) is another perturbation-based method, which learns a mask used to perturb the input. We present this method in more detail in the next section. However, while many explanation methods have been proposed to explain a neural network, few have been developed to handle multivariate time series data. Yet, this type of data is especially important in the medical field, where the data can be a list of timestamped medical events, or of vitals measurements. There is therefore a need to adapt explanation methods to handle this temporal element. These adaptations currently include RETAIN (Choi et al., 2016), an attention-based model which learns this attention over features and time, or FIT (Tonekaboni et al., 2020), which estimates the importance of features over time by quantifying the shift in the predictive distribution. Another method, DynaMask (Crabbe and Van Der Schaar, 2021), adapts perturbation-based methods to multivariate time series. We will present and discuss this method further in the next section. In this work, we aim to further adapt perturbation-based methods to multivariate time series, driven by the following insight. In the works of Fong and Vedaldi (2017) and Crabbe and Van Der Schaar (2021), while the mask is learned, the perturbation induced by this mask is fixed. For instance, Fong and Vedaldi (2017) replace a feature with a Gaussian blur (a weighted average of data around the feature) depending on the value of the feature's mask: the lower this value, the higher the amount of blur. Crabbe and Van Der Schaar (2021) adapt this method by blurring the data temporally. This approach seems reasonable for images, where information can be assumed to be local, which explains why convolutional neural networks (CNNs), which have a limited filter size, still perform very well on such data. However, multivariate time series can have long-term dependencies, which makes it less obvious that a temporal Gaussian blur is an appropriate perturbation. Instead of replacing a masked feature with a local average, we might want to replace it using data further away in time. But then, how should we choose the correct perturbation formula? This calls for replacing fixed perturbations with learnable ones. In this work, we present such a method1 and empirically show that it significantly improves the quality of the explanations, evaluated on both synthetic and real-world data. This study is organised as follows. We first present in more detail the methods of Fong and Vedaldi (2017) and Crabbe and Van Der Schaar (2021) in the next section. We then present our method in the following one. We conduct several experiments in the next section, designed to compare our method with several baselines, and we provide elements of discussion in the last section. Footnote 1: An implementation of this work can be found at [https://github.com/josephenguehard/time_interpret](https://github.com/josephenguehard/time_interpret) ## 2 Background Work In this section, we describe in more detail two methods: one developed by Fong and Vedaldi (2017) and its adaptation to time series by Crabbe and Van Der Schaar (2021). Fong and Vedaldi (2017) propose a perturbation-based method which is defined as follows. 
A trainable mask, with values restricted between 0 and 1, is used to generate perturbed data, which is then passed to the neural network to be explained in order to compute predictions. This mask can be trained in two different manners, which the authors call the _deletion game_ and the _preservation game_. In the deletion game, we aim to mask as little data as possible while reducing as much as possible the prediction of the perturbed data on the targeted class, compared with the original prediction. This objective can be defined as, for a model \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}\), a mask \(\textbf{m}\in[0,1]^{n}\), an input \(\textbf{x}\in\mathbb{R}^{n}\) and a perturbation \(\Phi(\textbf{x},\textbf{m}):\mathbb{R}^{n}\times[0,1]^{n}\rightarrow\mathbb{R}^{n}\): \[\operatorname*{arg\,min}_{\textbf{m}\in[0,1]^{n}}\lambda||\textbf{1}-\textbf{m}||_{1}-\mathcal{L}(f(\textbf{x}),f(\Phi(\textbf{x},\textbf{m}))) \tag{1}\] The value \(n\) represents the input dimension, and \(\lambda\) is a hyperparameter balancing both goals. Secondly, in the preservation game, we aim to retain the least amount of data that preserves predictions as close as possible to the original ones on the targeted class. This objective can be defined as: \[\operatorname*{arg\,min}_{\textbf{m}\in[0,1]^{n}}\lambda||\textbf{m}||_{1}+\mathcal{L}(f(\textbf{x}),f(\Phi(\textbf{x},\textbf{m}))) \tag{2}\] Moreover, the perturbation \(\Phi(\textbf{x},\textbf{m})\) is fixed given an input and a mask. Fong and Vedaldi (2017) define several strategies2: Footnote 2: Unintuitively, the original data is masked when **m** = 0. We kept this notation as it is used in both Fong and Vedaldi (2017) and Crabbe and Van Der Schaar (2021). \[\Phi(\textbf{x},\textbf{m})=\begin{cases}\textbf{m}\times\textbf{x}+(\textbf{1}-\textbf{m})\times\mu_{0}\\ \textbf{m}\times\textbf{x}+(\textbf{1}-\textbf{m})\times\nu\\ \int g_{\sigma_{0}\times(1-\textbf{m})}(\textbf{y}-\textbf{x})\,d\textbf{y}\end{cases} \tag{3}\] The first strategy corresponds to replacing the original masked value **x** with an average \(\mu_{0}\), the second to replacing this value with Gaussian noise: \(\nu\sim\mathcal{N}(0,1)\), and the last to replacing it with a Gaussian blur \(g_{\sigma}\) around **x**, given a maximum std \(\sigma_{0}\). Fong and Vedaldi (2017) also add some regularisation to make the perturbation more natural in the context of computer vision, but we leave it out, as it is further away from our topic. 
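As an illustration, the following minimal sketch (our own, not code from either paper) runs the preservation game of Equation 2 with the first, mean-replacement perturbation of Equation 3; the model `f`, the value of \(\lambda\), the step count and the learning rate are illustrative assumptions:

```python
import torch

def preservation_game(f, x, lam=0.1, steps=500, lr=0.1):
    """Learn a mask m in [0,1]^n (Eq. 2): retain as little data as possible
    while preserving f's original prediction. The perturbation is the
    mean-replacement strategy of Eq. 3: Phi(x, m) = m*x + (1 - m)*mu0."""
    mu0 = x.mean()                        # fixed reference value
    target = f(x).detach()                # original prediction to preserve
    m = torch.full_like(x, 0.5, requires_grad=True)
    opt = torch.optim.Adam([m], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        m_c = m.clamp(0, 1)               # keep the mask in [0, 1]
        phi = m_c * x + (1 - m_c) * mu0   # perturbed input (Eq. 3, case 1)
        # L1 penalty on the mask + squared error between predictions
        loss = lam * m_c.abs().sum() + (f(phi) - target).pow(2).sum()
        loss.backward()
        opt.step()
    return m.detach().clamp(0, 1)
```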
While this method was developed to explain predictions based on images, Crabbe and Van Der Schaar (2021) adapted it to multivariate time series. They propose as a result a method they call DynaMask, as the learned mask contains in this case a time dimension. The input space is now \(\mathbb{R}^{\mathrm{T}\times n}\), and we consider similarly a neural network \(f\) and a target class \(c\) such that: \(f_{c}(\textbf{x}):\mathbb{R}^{\mathrm{T}\times n}\rightarrow\mathbb{R}\). Therefore, the mask \(\textbf{m}\in\mathbb{R}^{\mathrm{T}\times n}\) and the input \(\textbf{x}\in\mathbb{R}^{\mathrm{T}\times n}\) are also defined on this input space. The main contribution of Crabbe and Van Der Schaar (2021) is then to adapt the perturbation operator \(\Phi\) to account for this temporal information. They also introduce three strategies: \[\Phi(\textbf{x},\textbf{m})_{t,i}=\begin{cases}m_{t,i}\times x_{t,i}+(1-m_{t,i})\times\mu_{t,i}\\ m_{t,i}\times x_{t,i}+(1-m_{t,i})\times\mu_{t,i}^{p}\\ \frac{\sum_{t^{\prime}=1}^{\mathrm{T}}x_{t^{\prime},i}\times g_{\sigma(m_{t,i})}(t-t^{\prime})}{\sum_{t^{\prime}=1}^{\mathrm{T}}g_{\sigma(m_{t,i})}(t-t^{\prime})}\end{cases} \tag{4}\] where \(\mu_{t,i}\) is an average of \(\textbf{x}_{,i}\) over a window W around \(t\): \[\mu_{t,i}=\frac{1}{2\textrm{W}+1}\sum_{t^{\prime}=t-\textrm{W}}^{t+\textrm{W}}x_{t^{\prime},i} \tag{5}\] and \(\mu_{t,i}^{p}\) is an average of \(\textbf{x}_{,i}\) over past elements up to \(t\): \[\mu_{t,i}^{p}=\frac{1}{\textrm{W}+1}\sum_{t^{\prime}=t-\textrm{W}}^{t}x_{t^{\prime},i} \tag{6}\] Finally, the last perturbation is a temporal Gaussian blur: \[g_{\sigma(m_{t,i})}(t)=\exp(-\frac{t^{2}}{2\sigma^{2}});\,\sigma(\textbf{m})=\sigma_{\textrm{max}}(\textbf{1}-\textbf{m}) \tag{7}\] Crabbe and Van Der Schaar (2021) use these perturbations in a preservation game, which aims to mask the maximum amount of data while keeping predictions close to the original ones. They also leverage a follow-up work of Fong and Vedaldi (2017), namely Fong et al. (2019), which replaces Equations 1 and 2 with an area constraint. In the preservation mode (the deletion mode can be adapted similarly), the regularisation \(\lambda||\textbf{m}||_{1}\) in Equation 2 is replaced with: \(\lambda_{a}(\textbf{m})=||\textrm{vecsort}(\textbf{m})-\textbf{r}_{a}||^{2}\), where \(a\) is a number between 0 and 1, \(\textrm{vecsort}(\textbf{m})\) sorts the values of **m** from lowest to largest, and \(\textbf{r}_{a}\) is a vector containing \((1-a)\times T\times n\) zeros followed by \(a\times T\times n\) ones. As a result, this constraint allows the user to define how much of the data should be masked. In practice, Crabbe and Van Der Schaar (2021) use \(a\) as a hyperparameter, which is tuned for each data point to be explained. 
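As an illustration only, the window-average and temporal Gaussian blur operators of Equations 4-7 could be sketched as follows (a simple re-implementation under assumed shapes, not DynaMask's actual code):

```python
import torch

def moving_average(x, window):
    """mu_{t,i}: average of x[:, i] over a symmetric window around t (Eq. 5).
    x has shape (T, n); the window is truncated at the sequence borders."""
    T, n = x.shape
    out = torch.empty_like(x)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        out[t] = x[lo:hi].mean(dim=0)
    return out

def gaussian_blur(x, m, sigma_max):
    """Temporal Gaussian blur (Eqs. 4 and 7): each x[t, i] is replaced by a
    Gaussian-weighted average over time, with std sigma_max * (1 - m[t, i])."""
    T, _ = x.shape
    t_idx = torch.arange(T, dtype=x.dtype)
    sigma = sigma_max * (1 - m) + 1e-8              # (T, n); avoid div by 0
    diff = (t_idx[:, None] - t_idx[None, :]) ** 2   # (T, T): (t - t')^2
    w = torch.exp(-diff[:, :, None] / (2 * sigma[:, None, :] ** 2))
    return (w * x[None, :, :]).sum(dim=1) / w.sum(dim=1)
```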
## 3 Method While Crabbe and Van Der Schaar (2021) propose temporal perturbations as adaptations of the ones defined by Fong and Vedaldi (2017) in a computer vision context, these perturbations are kept fixed and local. They are indeed defined either as a moving-average perturbation or as a temporal Gaussian blur. However, temporal data is often characterised by long-term dependencies, and local information can therefore be insufficient to determine the importance of a feature at a particular time. For instance, temporal data can include repetitive patterns, as illustrated in Figure 1, which cannot be taken into account using only temporally local information. Moreover, while the perturbations proposed by Crabbe and Van Der Schaar (2021) do allow including data further away in time, by tuning the size of the window W or the parameter \(\sigma_{\textrm{max}}\) for the Gaussian blur, it is not clear how to choose such parameters, nor how this would solve the issue of long-term patterns. This insight calls for a generalised perturbation, which can be tuned to the data we are aiming to explain. A first idea would be to directly learn this perturbation \(\Phi(\textbf{x})\), without needing a mask, by optimizing a function similar to Equation 2. However, this method is problematic, as it gives too much liberty to the perturbation model. Indeed, such a model, incentivised to output sparse explanations, could compress the data information into a small part of the input space, stating that this part is important while the rest is uninformative. On the contrary, we need to constrain the perturbation operator to explain each part of the input data without changing or moving it. To overcome this difficulty, we take inspiration from the perturbation operators of Crabbe and Van Der Schaar (2021) in Equation 4. These perturbations are generally defined as \(\textbf{m}\times\textbf{x}+(1-\textbf{m})\times\mu(\textbf{x})\), where \(\mu(\textbf{x})\) is a function of the input. In this work, we propose to replace these fixed functions with a neural network (NN), and to train it in combination with the mask. Our perturbation is therefore defined as: \[\Phi(\textbf{x},\textbf{m})=\textbf{m}\times\textbf{x}+(\textbf{1}-\textbf{m})\times\textrm{NN}(\textbf{x}) \tag{8}\] \[\textbf{0}\leq\textbf{m}\leq\textbf{1}\] By keeping **m** between 0 and 1, we constrain the mask to only learn how important each feature is. Moreover, this equation can be interpreted as a generalisation of the perturbations from Crabbe and Van Der Schaar (2021) defined in Equation 4. The neural network in the second component of Equation 8 can indeed, after training, output a Gaussian blur or an average of \(\mathbf{x}\) over a window. In practice, we want to model \(\text{NN}(\mathbf{x})\) as a weighted sum of \(x_{t,i}\), \(t\in\{1,...,\mathbf{T}\}\). As a result, we choose this model to be a bidirectional GRU (Cho et al., 2014). This would correspond to a general form of a Gaussian blur or a window around each element \(x_{t,i}\). In the experiment section, we also compare this choice with a unidirectional GRU, which would be closer to the \(\mu_{t,i}^{p}\) average in Crabbe and Van Der Schaar (2021). As in Crabbe and Van Der Schaar (2021), we define the objective of the mask and the GRU combined as a preservation game, aiming to mask as much data as possible while keeping the predictions as close as possible to the original ones. Our objective is therefore: \[\operatorname*{arg\,min}_{\mathbf{m},\Theta\in\text{NN}}\lambda||\mathbf{m}||_{1}+\mathcal{L}(f(\mathbf{x}),f(\Phi(\mathbf{x},\mathbf{m}))) \tag{9}\] where \(\Theta\) represents the parameters of the neural network, and \(\mathcal{L}\) represents a loss between the original and the perturbed predictions. This loss can be, for instance, a mean squared error for regression tasks, or a cross-entropy loss for classification tasks. One issue that can arise from this objective is that the neural network can be rewarded for mimicking the original data \(\mathbf{x}\). Indeed, we can see from Equation 8 that, if \(\mathbf{m}=\mathbf{0}\), then \(\Phi(\mathbf{x},\mathbf{m})=\text{NN}(\mathbf{x})\). Moreover, if \(\text{NN}(\mathbf{x})\approx\mathbf{x}\), the objective defined in Equation 9 is approximately zero. To prevent this behavior, we replace Equation 9 with the following one: \[\operatorname*{arg\,min}_{\mathbf{m},\Theta\in\text{NN}}\lambda_{1}||\mathbf{m}||_{1}+\lambda_{2}||\text{NN}(\mathbf{x})||_{1}+\mathcal{L}(f(\mathbf{x}),f(\Phi(\mathbf{x},\mathbf{m}))) \tag{10}\] In Equation 10, we therefore force the perturbations to be minimal, being nonzero only when there is an incentive to be so. Indeed, in Equation 2, there is a balance on \(\Phi\): \(||\mathbf{m}||_{1}\) tends to make \(\Phi\) uninformative, while \(\mathcal{L}\) does the opposite. Equation 10 differs in that sense from Equation 2, as \(||\mathbf{m}||_{1}\) tends to make \(\Phi\) close to NN(\(\mathbf{x}\)), which is not necessarily uninformative. To entice NN(\(\mathbf{x}\)) to be uninformative, we add the loss \(||\text{NN}(\mathbf{x})||_{1}\), using zero as a prior. Therefore, breaking down the objective of Equation 10, we have: * \(||\mathbf{m}||_{1}\) induces \(\Phi(\mathbf{x})\) to be close to NN(\(\mathbf{x}\)) * \(||\text{NN}(\mathbf{x})||_{1}\) induces \(\Phi(\mathbf{x})\) to be close to \(\mathbf{0}\) (uninformative) * \(\mathcal{L}\) induces \(f(\Phi(\mathbf{x},\mathbf{m}))\) to be close to \(f(\mathbf{x})\) (informative) We set \(\lambda_{1}=\lambda_{2}=1\) in our experiments, while an ablation study on the choice of these hyperparameters can be found in Section 4 and Appendix A. 
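The following is a minimal sketch of this approach, combining Equation 8 with the objective of Equation 10 (using a mean squared error as \(\mathcal{L}\); the hidden size, learning rate and step count are illustrative assumptions, and the authors' version is in the time_interpret repository linked above):

```python
import torch
import torch.nn as nn

class LearnedPerturbation(nn.Module):
    """Sketch of Eq. 8: a mask m in [0,1]^{T x n} is learned jointly with a
    perturbation generator NN(x), here a bidirectional GRU."""
    def __init__(self, T, n, hidden=32):
        super().__init__()
        self.mask = nn.Parameter(torch.full((T, n), 0.5))
        self.gru = nn.GRU(n, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n)  # map GRU states back to n features

    def forward(self, x):                      # x: (T, n)
        m = self.mask.clamp(0, 1)
        nn_x = self.proj(self.gru(x.unsqueeze(0))[0]).squeeze(0)
        return m * x + (1 - m) * nn_x, m, nn_x  # Eq. 8

def explain(f, x, epochs=500, lr=0.01, lam1=1.0, lam2=1.0):
    """Preservation objective of Eq. 10, with lambda_1 = lambda_2 = 1."""
    pert = LearnedPerturbation(*x.shape)
    opt = torch.optim.Adam(pert.parameters(), lr=lr)
    target = f(x).detach()                 # original prediction to preserve
    for _ in range(epochs):
        opt.zero_grad()
        phi, m, nn_x = pert(x)
        loss = (lam1 * m.abs().sum() + lam2 * nn_x.abs().sum()
                + (f(phi) - target).pow(2).sum())
        loss.backward()
        opt.step()
    return pert.mask.detach().clamp(0, 1)  # saliency mask
```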
Moreover, contrary to Crabbe and Van Der Schaar (2021), we do not use an area constraint \(||\text{vecsort}(\mathbf{m})-\mathbf{r}_{a}||^{2}\), as it is not clear how to choose the hyperparameter \(a\) on typically complex data. In practice, Crabbe and Van Der Schaar (2021) tune this hyperparameter, which is computationally expensive, as it requires training multiple masks. We propose instead to directly train our model using Equation 10. Figure 2: **Illustration of our method.** The input is passed through a neural network NN to create a perturbation. A mask \(\mathbf{m}\) is then used to balance the amount of perturbed data, NN(\(\mathbf{x}\)), and unperturbed data, \(\mathbf{x}\), resulting in \(\Phi(\mathbf{x},\mathbf{m})\). Both \(\mathbf{x}\) and \(\Phi(\mathbf{x},\mathbf{m})\) are then passed through the model to be explained, f. Learnable parameters (\(\mathbf{m}\) and NN(\(\mathbf{x}\))) are presented in solid boxes, while fixed parameters (the model f) are presented in dashed boxes. The objective of this method is to keep the predictions of the perturbed data as close as possible to the original ones, while masking as much data as possible and keeping the perturbations NN(\(\mathbf{x}\)) as sparse as possible. The overall goal is therefore to identify which features are salient enough to be sufficient to recover the original predictions when all other features are masked. ## 4 Experiments Following Tonekaboni et al. (2020) and Crabbe & Van Der Schaar (2021), we perform experiments on two datasets: a synthetic one, generated using a hidden Markov model, and a real-world one, MIMIC-III (Johnson et al., 2016). ### Hidden Markov model experiment We generate data using a 2-state hidden Markov model (HMM), closely following Crabbe & Van Der Schaar (2021). The state \(s_{t}\) can therefore be either 0 or 1, and we generate 200 time steps: \(t\in[1:200]\). Moreover, the input vector has three features, generated according to the current state: \(\mathbf{x}_{t}\sim\mathcal{N}(\boldsymbol{\mu}_{s_{t}},\boldsymbol{\Sigma}_{s_{t}})\). The label \(y_{t}\) is generated using only the last two features, the first one being irrelevant. The choice of which feature is used to generate the label depends on the state: \[\begin{split} y_{t}&\sim\big(1+\exp(x_{2,t})\big)^{-1}\;\;\text{if}\;\;s_{t}=0\\ y_{t}&\sim\big(1+\exp(x_{3,t})\big)^{-1}\;\;\text{if}\;\;s_{t}=1\end{split} \tag{11}\] Please refer to Crabbe & Van Der Schaar (2021) for more details on this dataset, in particular on the choice of \(\boldsymbol{\mu}_{s_{t}}\) and \(\boldsymbol{\Sigma}_{s_{t}}\). We generate 1000 time series using this method, and train a one-layer GRU (Cho et al., 2014) neural network, which we aim to explain, to predict \(y_{t}\) from \(\mathbf{x}_{t}\). 
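For concreteness, a sketch of this data-generating process follows; the state means, covariances and transition probability are assumed placeholder values, the actual ones being given in Crabbe & Van Der Schaar (2021):

```python
import numpy as np

def generate_hmm_series(T=200, p_switch=0.1, seed=0):
    """Sketch of the 2-state HMM data of Eq. 11. mu/cov/p_switch are
    illustrative assumptions, not the values used in the paper."""
    rng = np.random.default_rng(seed)
    mu = np.array([[0.1, 1.6, 0.5], [-0.1, -0.4, -1.5]])  # assumed means
    cov = np.stack([np.eye(3) * 0.8, np.eye(3) * 0.8])    # assumed covariances
    s = int(rng.integers(0, 2))                            # random initial state
    x, y = np.empty((T, 3)), np.empty(T)
    for t in range(T):
        if rng.random() < p_switch:                        # state transition
            s = 1 - s
        x[t] = rng.multivariate_normal(mu[s], cov[s])
        # Only feature 2 (state 0) or feature 3 (state 1) drives the label.
        val = x[t, 1] if s == 0 else x[t, 2]
        y[t] = rng.random() < 1.0 / (1.0 + np.exp(val))    # Eq. 11
    return x, y
```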
As we know the true salient features in this dataset, we evaluate our explanation methods by comparing the salient features produced by each method with the ground truth. To do so, we use standard classification metrics: area under recall (AUR) and area under precision (AUP). We also use two metrics introduced by Crabbe & Van Der Schaar (2021). The first is the mask information: \(I_{\mathbf{m}}(\mathbf{a})=-\sum_{(t,i)\in\mathbf{a}}\ln(1-m_{t,i})\), which is analogous to the Shannon information content; a higher value indicates a more informative mask. The second is the mask entropy: \(S_{\mathbf{m}}(\mathbf{a})=-\sum_{(t,i)\in\mathbf{a}}m_{t,i}\ln m_{t,i}+(1-m_{t,i})\ln(1-m_{t,i})\), which is analogous to the Shannon entropy. In both metrics, \(\mathbf{a}\) corresponds to the set of true salient features. We compare our method with the following ones: DeepLift (Shrikumar et al., 2017), DynaMask (Crabbe & Van Der Schaar, 2021), Integrated Gradients (IG) (Sundararajan et al., 2017), GradientShap (Lundberg & Lee, 2017), Fit (Tonekaboni et al., 2020), Lime (Ribeiro et al., 2016), Augmented Occlusion (Tonekaboni et al., 2020), Occlusion (Zeiler & Fergus, 2014) and Retain (Choi et al., 2016). Furthermore, our method uses a bidirectional GRU for the perturbation model. We present our results in Table 1. These results3 show that, although our method performs slightly lower than some baselines in terms of AUP, it significantly outperforms all other methods on every other metric. In particular, while it only slightly outperforms DynaMask in terms of AUR, it yields substantially better results in terms of AUP, Information and Entropy. These results therefore indicate that learnable perturbations should be preferred over fixed ones when explaining predictions based on multivariate time series data. Footnote 3: In Tables 1 and 4, some results differ from Crabbe & Van Der Schaar (2021) due to a few issues in their original implementation. Please refer to issues 4, 8 and 9 in [https://github.com/JonathanCrabbe/Dynamask/issues](https://github.com/JonathanCrabbe/Dynamask/issues). Ablation study on the lambdas. We perform here an ablation study to determine which values of \(\lambda_{1}\) and \(\lambda_{2}\) should be used in Equation 10. We therefore run our experiment using various values of \(\lambda_{1}\) and \(\lambda_{2}\), and report our results in Table 2. This table shows that, first, \(\lambda_{1}\) needs to be close to 1 to yield good results. Indeed, a low value means weaker regularisation, and therefore retains many unimportant features. A high value, on the other hand, forces **m** to be mostly 0, causing most features to be considered unimportant. Moreover, \(\lambda_{2}\) needs to be at least 1 to force NN(**x**) to learn uninformative perturbations. Otherwise, there is only a weak mechanism to prevent NN from producing an output similar to **x**. 
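The two mask metrics defined above reduce to a few lines of code; a minimal sketch, where `m` is the learned mask as an array and `a` is a boolean array (same shape as `m`) marking the truly salient (t, i) positions:

```python
import numpy as np

def mask_information(m, a, eps=1e-8):
    """I_m(a) = -sum over true-salient (t, i) of ln(1 - m[t, i])."""
    return -np.sum(np.log(1 - m[a] + eps))

def mask_entropy(m, a, eps=1e-8):
    """S_m(a) = -sum over true-salient (t, i) of the binary entropy
    of m[t, i]; lower values indicate a more decisive mask."""
    p = np.clip(m[a], eps, 1 - eps)
    return -np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
```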
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & AUP \(\uparrow\) & AUR \(\uparrow\) & I \(\uparrow\) & S \(\downarrow\) \\ \hline DeepLift & 0.92 (0.019) & 0.454 (0.011) & 359 (0.55) & 145 (0.949) \\ DynaMask & 0.711 (0.020) & 0.763 (0.026) & 954 (50.0) & 454 (0.781) \\ IG & 0.918 (0.019) & 0.454 (0.011) & 359 (11.6) & 146 (0.871) \\ GradientShap & 0.849 (0.030) & 0.414 (0.015) & 335 (14.8) & 138 (2.44) \\ Fit & 0.421 (0.013) & 0.549 (0.017) & 346 (22.7) & 164 (27.29) \\ Lime & **0.932** (0.017) & 0.438 (0.008) & 347 (8.46) & 143 (1.47) \\ Occlusion & 0.866 (0.032) & 0.933 (0.006) & 322 (14.6) & 137 (1.90) \\ Aug Occlusion & 0.755 (0.043) & 0.388 (0.025) & 364 (90.02) & 165 (1.42) \\ Retain & 0.645 (0.088) & 0.334 (0.013) & 206 (21.2) & 138 (5.85) \\ \hline Ours & 0.885 (0.030) & **0.781** (0.013) & **1536** (79.0) & **34.1** (3.70) \\ \hline \hline \end{tabular} \end{table} Table 1: Results of each explanation method compared with ours. For each metric, \(\uparrow\) indicates that higher is better, and \(\downarrow\) that lower is better. Mean and std are reported over 5 folds. \begin{table} \begin{tabular}{l c|c c c c c} \hline \hline & & \multicolumn{3}{c}{\(\lambda_{1}\)} & \\ \cline{3-8} & & 0.01 & 0.1 & 1 & 10 & 100 \\ \cline{3-8} & 0.01 & 0.51 - 0.81 & 0.76 - 0.44 & 0.78 - 0.09 & 0.35 - 0.17 & 0.39 - 0.18 \\ \cline{3-8} & 0.1 & 0.51 - 0.91 & 0.65 - 0.83 & 0.95 - 0.08 & 0.32 - 0.16 & 0.37 - 0.20 \\ \(\lambda_{2}\) & 1 & 0.51 - 0.89 & 0.63 - 0.83 & 0.89 - 0.75 & 0.30 - 0.16 & 0.35 - 0.18 \\ \cline{3-8} & 10 & 0.48 - 0.90 & 0.65 - 0.83 & 0.89 - 0.74 & 0.99 - 0.26 & 0.41 - 0.19 \\ \cline{3-8} & 100 & 0.49 - 0.90 & 0.65 - 0.84 & 0.90 - 0.74 & 0.99 - 0.27 & 0.37 - 0.17 \\ \hline \hline \end{tabular} \end{table} Table 2: Influence of \(\lambda_{1}\) and \(\lambda_{2}\) from Equation 10 on the results of the HMM experiment. For each pair of parameters, two values are reported: AUP - AUR. The average result over 5 runs is reported. Learning perturbations as a deletion game. We also explore learning perturbations using Equation 1, masking as little data as possible while changing the model's predictions as much as possible. However, we cannot directly use Equation 1, for two reasons. First, the term \(-\mathcal{L}(f(\mathbf{x}),f(\Phi(\mathbf{x},\mathbf{m})))\) is hard to optimize, as it entices \(f(\Phi(\mathbf{x},\mathbf{m}))\) to be "far" from \(f(\mathbf{x})\), while it is unclear what "far" should mean here. For this reason, we replace this objective with \(\mathcal{L}(f(\mathbf{0}),f(\Phi(\mathbf{x},\mathbf{m})))\), enticing the predictions to be close to those made using an uninformative input, \(\mathbf{0}\). Second, we need to add the term \(||\text{NN}(\mathbf{x})||_{1}\) to the loss. This results in the following objective: \[\operatorname*{arg\,min}_{\mathbf{m},\Theta\in\text{NN}}\lambda_{1}||\mathbf{1}-\mathbf{m}||_{1}+\lambda_{2}||\text{NN}(\mathbf{x})||_{1}+\mathcal{L}(f(\mathbf{0}),f(\Phi(\mathbf{x},\mathbf{m}))) \tag{12}\] We present our results in Table 3, comparing the preservation and deletion modes. While the second setting outperforms the first in terms of AUR, it performs poorly according to every other metric. This might be due to the use of \(\mathcal{L}(f(\mathbf{0}),f(\Phi(\mathbf{x},\mathbf{m})))\), which amounts to learning a "change" in the predictions. This is a less straightforward objective than the preservation mode, which aims to retain the original predictions. 
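Relative to the preservation sketch given earlier, only the loss changes; a sketch of the deletion-mode objective of Equation 12, under the same assumptions:

```python
import torch

def deletion_loss(f, x, m, nn_x, lam1=1.0, lam2=1.0):
    """Deletion-mode objective of Eq. 12 (sketch). Compared with the
    preservation objective, the mask regulariser becomes ||1 - m||_1 and
    the perturbed prediction is pulled towards f(0), the prediction on an
    uninformative input, rather than towards f(x)."""
    phi = m * x + (1 - m) * nn_x               # Eq. 8
    target = f(torch.zeros_like(x)).detach()   # prediction on the zero input
    return (lam1 * (1 - m).abs().sum() + lam2 * nn_x.abs().sum()
            + (f(phi) - target).pow(2).sum())
```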
### MIMIC-III experiment We evaluate our method on the real-world MIMIC-III dataset, following the works of Tonekaboni et al. (2020) and Crabbe & Van Der Schaar (2021). MIMIC-III consists of patients in intensive-care units (ICU), for whom a number of vital signs and lab test results have been regularly measured. The task here is to predict the in-hospital mortality of each patient based on 48 hours of data, discretised per hour. Missing values are imputed using the previous available ones; if there is no previous value, a standard value is imputed. We train a one-layer GRU with a hidden size of 200 to predict this in-hospital mortality, and we aim to explain this model. In this dataset, the true salient features are unknown, so we need different metrics to evaluate our method. Following Crabbe & Van Der Schaar (2021), we compare the original predictions to ones where a certain proportion of the features has been masked. We replace masked features either with an average over time of this feature: \(\overline{x}_{t,i}=\frac{1}{T}\sum_{t}x_{t,i}\), where T = 48 (hours), or with zeros: \(\overline{x}_{t,i}=0\). We use two metrics proposed by Crabbe & Van Der Schaar (2021), and we also draw from the work of Shrikumar et al. (2017) and DeYoung et al. (2019) to propose two additional metrics. The resulting four metrics are: * **Accuracy** (Acc): We mask the most salient features and compute the resulting accuracy using this masked data. A lower accuracy means that features important for making accurate predictions have been removed. Therefore, lower is better with this metric. * **Cross-Entropy** (CE): We mask the most salient features and compute the cross-entropy between the predictions made with this masked data and the original ones. A higher value indicates that the predictions have changed more significantly and that important features have been removed. Higher is better with this metric. * **Comprehensiveness** (Comp): We mask the most salient features and compute the average change of the predicted class probability compared with the original one. Higher is better with this metric. * **Sufficiency** (Suff): We keep only the most salient features, and compute the average change of the predicted class probability compared with the original one. Lower is better with this metric. Similar to our previous experiment, we use a bidirectional GRU as our perturbation model. We compare our method against DeepLift (Shrikumar et al., 2017), DynaMask (Crabbe & Van Der Schaar, 2021), Integrated Gradients (IG) (Sundararajan et al., 2017), GradientShap (Lundberg & Lee, 2017), Lime (Ribeiro et al., 2016), Augmented Occlusion (Tonekaboni et al., 2020), Occlusion (Zeiler & Fergus, 2014) and Retain (Choi et al., 2016). 
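A sketch of how these four metrics could be computed for a single patient, assuming `f` returns the predicted mortality probability and `saliency` holds per-hour, per-feature importance scores (the shapes, top-k selection and baseline handling are our assumptions):

```python
import torch

def mask_top_fraction(x, saliency, frac, baseline):
    """Replace the `frac` most salient entries of x with a baseline value."""
    k = int(frac * saliency.numel())
    idx = saliency.flatten().topk(k).indices
    masked = x.clone().flatten()
    masked[idx] = baseline.flatten()[idx]
    return masked.view_as(x)

def evaluate(f, x, y, saliency, frac=0.2):
    """x: (T, n) patient data; y: label; returns (Acc, CE, Comp, Suff)."""
    baseline = x.mean(dim=0, keepdim=True).expand_as(x)  # average over time
    x_del = mask_top_fraction(x, saliency, frac, baseline)        # drop salient
    x_keep = mask_top_fraction(x, -saliency, 1 - frac, baseline)  # keep salient
    p, p_del, p_keep = f(x), f(x_del), f(x_keep)
    acc = (p_del.round() == y).float().mean()            # lower is better
    ce = -(p * torch.log(p_del + 1e-8)                   # higher is better
           + (1 - p) * torch.log(1 - p_del + 1e-8)).mean()
    comp = (p - p_del).mean()                            # higher is better
    suff = (p - p_keep).mean()                           # lower is better
    return acc, ce, comp, suff
```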
We present in Tables 4 and 5 the results of our method compared with different baselines, computing our metrics by masking 20% of the data and replacing these features with either an average over time (Table 4) or zeros (Table 5). \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & Acc \(\downarrow\) & Comp \(\uparrow\) & CE \(\uparrow\) & Suff \(\downarrow\) \\ \hline DeepLift & 0.988 (0.002) & -4.36E-4 (0.001) & 0.097 (0.006) & 2.86E-3 (0.001) \\ DynaMask & 0.990 (0.001) & 2.21E-4 (0.001) & 0.097 (0.005) & 2.99E-3 (0.001) \\ IG & 0.988 (0.003) & 2.24E-4 (0.002) & 0.098 (0.006) & 2.21E-3 (0.001) \\ GradientShap & 0.988 (0.004) & -2.21E-3 (0.001) & 0.095 (0.006) & 3.99E-3 (0.001) \\ Lime & 0.996 (0.001) & -7.36E-4 (0.001) & 0.094 (0.005) & 3.39E-3 (0.001) \\ Occlusion & 0.988 (0.001) & -1.93E-4 (0.001) & 0.095 (0.005) & 4.57E-3 (0.001) \\ Aug Occlusion & 0.989 (0.001) & 4.59E-4 (0.001) & 0.098 (0.005) & 1.90E-3 (0.002) \\ Retain & 0.989 (0.001) & -3.79E-3 (0.001) & 0.093 (0.005) & 7.70E-3 (0.001) \\ \hline Ours & **0.981** (0.004) & **1.53E-2** (0.004) & **0.118** (0.008) & **-1.19E-2** (0.004) \\ \hline \hline \end{tabular} \end{table} Table 4: Results of each explanation method compared with ours, by masking 20% of the data and replacing masked features with an average over time: \(\overline{x}_{t,i}=\frac{1}{T}\sum_{t}x_{t,i}\). For each metric, \(\uparrow\) indicates that higher is better, and \(\downarrow\) that lower is better. Mean and std are reported over 5 folds. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Mode** & AUP \(\uparrow\) & AUR \(\uparrow\) & I \(\uparrow\) & S \(\downarrow\) \\ \hline Preservation & **0.885** (0.030) & 0.781 (0.013) & **1536** (79.0) & **34.1** (3.70) \\ Deletion & 0.346 (0.0034) & **0.863** (0.012) & 1079 (41.5) & 68.0 (5.07) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of using the preservation mode vs deletion mode on the HMM experiment. The average result over 5 runs is reported. We also plot in Figures 3 and 4 the cross-entropy (CE) metric obtained by masking different proportions of the data, and replacing masked data with either an average over time (Figure 3) or zeros (Figure 4). We also perform ablation studies in Appendix A and provide more results in Appendix B. Our results show that our method significantly outperforms every other method on every metric, both when using the average over time and when using zeros as masked data. This also indicates that learned perturbations are preferable to fixed ones when explaining predictions on multivariate time series data. Choice of the perturbation generator. While our method performs well compared with existing baselines, we study here the impact of the choice of NN in Equation 8. In this study, we propose the following models, sketched in the code below: * **Zeros**: NN(\(\mathbf{x}\)) is set to zero everywhere. Equation 8 then reduces to: \(\Phi(\mathbf{x},\mathbf{m})=\mathbf{m}\times\mathbf{x}\). * **GRU**: We use a one-layer GRU model: \(\mathrm{NN}(\mathbf{x})=\mathrm{GRU}(\mathbf{x})\), which corresponds to a generalisation of the fixed perturbation \(\mu_{t,i}^{p}\) in Crabbe and Van Der Schaar (2021). * **Bi-GRU**: Finally, we use a one-layer bidirectional GRU: \(\mathrm{NN}(\mathbf{x})=\mathrm{bi}\)-\(\mathrm{GRU}(\mathbf{x})\), which corresponds to a generalisation of the fixed perturbation \(\mu_{t,i}\) in Crabbe and Van Der Schaar (2021). 
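A minimal sketch of these three perturbation generators (our illustration; the hidden size and the projection layer are assumptions):

```python
import torch.nn as nn

class ZeroGenerator(nn.Module):
    """'Zeros': NN(x) = 0, so Phi(x, m) = m * x."""
    def forward(self, x):
        return x * 0

class GRUGenerator(nn.Module):
    """GRU-based generator. With bidirectional=False it only sees past
    values, a learned analogue of the past-window average mu^p of Eq. 6;
    with bidirectional=True it generalises the symmetric window of Eq. 5."""
    def __init__(self, n, hidden=32, bidirectional=False):
        super().__init__()
        self.gru = nn.GRU(n, hidden, batch_first=True,
                          bidirectional=bidirectional)
        self.proj = nn.Linear(hidden * (2 if bidirectional else 1), n)

    def forward(self, x):                      # x: (T, n)
        return self.proj(self.gru(x.unsqueeze(0))[0]).squeeze(0)
```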
We present our results on MIMIC-III on Tables 6 and 7, replacing 20% of the data with either an overall average of each feature over time (Table 6), or with zeros (Table 7). We use the same metrics as in our main MIMIC-III experiments. As with the main experiment, we provide more results, masking different proportions of the data, in Appendix B. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & Acc \(\downarrow\) & Comp \(\uparrow\) & CE \(\uparrow\) & Suff \(\downarrow\) \\ \hline Zeros & 0.981 (0.003) & 1.36E-2 (0.001) & 0.116 (0.004) & -1.02E-2 (0.002) \\ GRU & **0.980** (0.004) & **1.76E-2** (0.001) & **0.122** (0.004) & **-1.37E-2** (0.002) \\ Bi-GRU & 0.981 (0.004) & 1.53E-2 (0.004) & 0.118 (0.008) & -1.19E-2 (0.004) \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of different perturbation models, masking 20% of the data and replacing masked features with an average over time: \(\overline{x}_{t,i}=\frac{1}{T}\sum_{t}x_{t,i}\). For each metric, \(\uparrow\) indicates that higher is better, and \(\downarrow\) that lower is better. Mean and std are reported over 5 folds. Figure 4: **Cross Entropy replacing masked data with zeros.** We present here the results in terms of cross-entropy by masking between 10% and 60% of the data for each patient, and replacing the masked data with zeros: \(\overline{x}_{t,i}=0\). For clarity, we only plot a subset of the baselines. Higher is better with this metric. Figure 3: **Cross Entropy replacing masked data with an average.** We present here the results in terms of cross-entropy by masking between 10% and 60% of the data for each patient, and replacing the masked data with the overall average over time for each feature: \(\overline{x}_{t,i}=\frac{1}{T}\sum_{t}x_{t,i}\). For clarity, we only plot a subset of the baselines. Higher is better with this metric. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & Acc \(\downarrow\) & Comp \(\uparrow\) & CE \(\uparrow\) & Suff \(\downarrow\) \\ \hline DeepLift & 0.972 (0.003) & -1.19E-3 (0.007) & 0.125 (0.014) & -6.92E-3 (0.006) \\ DynaMask & 0.975 (0.002) & -1.27E-3 (0.004) & 0.106 (0.009) & 6.57E-3 (0.012) \\ IG & 0.972 (0.003) & 1.248E-4 (0.007) & 0.127 (0.015) & -7.61E-3 (0.006) \\ GradientShap & 0.968 (0.006) & -6.28E-3 (0.004) & 0.128 (0.017) & 6.61E-4 (0.005) \\ Lime & 0.983 (0.003) & -5.22E-3 (0.004) & 0.093 (0.008) & -2.23E-3 (0.019) \\ Occlusion & 0.971 (0.003) & -4.03E-3 (0.003) & 0.122 (0.008) & -4.97E-3 (0.008) \\ Aug Occlusion & 0.972 (0.003) & -6.82E-4 (0.004) & 0.121 (0.009) & -4.62E-3 (0.011) \\ Retain & 0.971 (0.003) & -8.01E-3 (0.006) & 0.123 (0.009) & 4.90E-4 (0.007) \\ \hline Ours & **0.943** (0.008) & **1.09E-1** (0.023) & **0.318** (0.057) & **-6.94E-2** (0.006) \\ \hline \hline \end{tabular} \end{table} Table 5: Results of each explanation method compared with ours, by masking 20% of the data and replacing masked features with zeros: \(\overline{x}_{t,i}=0\). For each metric, \(\uparrow\) indicates that higher is better, and \(\downarrow\) that lower is better. Mean and std are reported over 5 folds. Our results are interesting on several accounts. Firstly, the Zeros method, which simply perturbs the data by masking non-salient features: \(\Phi(\mathbf{x},\mathbf{m})=\mathbf{m}\times\mathbf{x}\), performs significantly better than all other baselines, including DynaMask with fixed perturbations.
As each measure in our dataset is normalised, masking one measure with the Zeros method amounts to replacing it with its average over the entire dataset. On the other hand, DynaMask replaces masked data with its average over time _for each individual patient_. The good performance of Zeros could therefore be explained by the fact that many measures do not vary much over time. As a result, replacing masked data with an overall average would be much more informative than replacing it with an average over time for each patient. Secondly, while using the bidirectional GRU perturbation yields better results than Zeros, it is itself outperformed by our method with the unidirectional GRU perturbation. Moreover, using this unidirectional GRU also yields more stable results with a lower standard deviation. Our intuition was that a bidirectional GRU would yield better results, as it would be able to produce outputs based on past and future events. However, modelling perturbations while ignoring future events seems to yield better and more stable results. We used a Bi-GRU to produce our results in Tables 4 and 5, as it corresponds to our original intuition, but we also recommend testing different types of neural networks for best performance when applying our method. **Analysis of salient features.** We present on Figure 5 the most salient features, averaged over every positive patient, to identify which factors matter most when predicting in-hospital mortality. This averaged feature importance indicates a few salient features: anion gap, bicarbonate level, platelet count, systolic blood pressure and respiratory rate. This seems to be consistent with the literature, which has highlighted the importance of these features, conducting studies on the saliency of bicarbonate levels (Lim et al., 2019), platelet count (Zhang et al., 2014) and systolic blood pressure (Kondo et al., 2011). The influence of anion gap on in-hospital mortality is less clear, with conflicting studies on this subject (Glasmacher and Stones, 2015). On the other hand, the respiratory rate is often neglected despite being an important predictor of serious events (Cretikos et al., 2008). However, although the 95% confidence interval associated with these feature importances is small due to the large number of patients, there remains a large corresponding standard deviation. We can therefore infer that the importance of each feature greatly depends on each patient. It is indeed possible that a measure such as systolic blood pressure only matters when it is outside of a normal range. As a result, its importance will greatly vary depending on each patient's condition. This demonstrates the superiority of perturbation-based methods compared to directly using a simpler interpretable model, such as a decision tree, instead of a neural network to predict in-hospital mortality. Indeed, such simpler models can only infer feature importance on average, and cannot explain each prediction individually. In addition to determining which feature is salient, our method can also infer **when** it is salient. As a result, we present on Figure 6 the average over positive patients of the importance of all features at each hour. This figure shows that later measurements have a larger impact on the outcome compared with earlier data. To evaluate the accuracy of this finding, we also plot on Figure 7 the positive rate over (true or false) positive patients, when masking earlier measures on one hand, and later measures on the other hand.
We can see that masking early features has a minimal impact on the predictions, while masking late features has, on the contrary, a dramatic impact on the outcome. As a result, it seems that, when predicting in-hospital mortality, the last measurements of each patient are more important for making a prediction than the overall evolution of the patient. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & Acc \(\downarrow\) & CE \(\uparrow\) & Comp \(\uparrow\) & Suff \(\downarrow\) \\ \hline Zeros & 0.951 (0.005) & 9.64E-2 (0.013) & 0.305 (0.015) & -6.79E-2 (0.002) \\ GRU & **0.943** (0.007) & **1.22E-1** (0.008) & **0.344** (0.017) & **-7.40E-2** (0.001) \\ Bi-GRU & **0.943** (0.008) & 1.09E-1 (0.023) & 0.318 (0.057) & -6.94E-2 (0.006) \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison of different perturbation models, masking 20% of the data and replacing masked features with zeros: \(\overline{x}_{t,i}=0\). For each metric, \(\uparrow\) indicates that higher is better, and \(\downarrow\) that lower is better. Mean and std are reported over 5 folds. Figure 5: **Importance of each feature to predict in-hospital mortality.** For each feature, we present its average importance over time and over multiple patients, using our method with a GRU perturbation network. We infer from these results that anion gap, bicarbonate level, platelet count, systolic blood pressure and respiratory rate are most important for our model when making a prediction. We also plot a 95% confidence interval around these averages. ## 5 Conclusion In this work, we have presented an extension of Fong and Vedaldi (2017) and Crabbe and Van Der Schaar (2021) to better explain multivariate time series predictions using a perturbation-based saliency method. Our main intuition is that the choice made by Crabbe and Van Der Schaar (2021) of fixed perturbations is less adapted to temporal data due to the possibility of long-term dependencies. Our results show that using learned perturbations yields better explanations compared with existing methods, including the DynaMask one with fixed perturbations. We have also studied the choice of the neural network used to model the perturbation and found that, on the in-hospital mortality task of MIMIC-III, a unidirectional GRU yields better and more stable results than the bidirectional one. Using our method, we have also been able to derive some insights about the neural network predicting in-hospital mortality: which features are on average most important, as well as which measurements in time. Precise temporal attributions could be derived similarly for each patient, giving further insight into this model's behavior. Moreover, an inherent limitation of perturbation-based methods such as ours is that they are not able to specify the direction of an explanation. As such, they can measure whether a specific feature is important, but cannot distinguish between features having a positive or a negative influence on the prediction. Adapting our method to tackle this issue would prove very beneficial for applications in healthcare. ## 6 Acknowledgements The author would like to thank Vitalii Zhelezniak for his insightful remarks and recommendations during the elaboration of this work. We also thank Anthony Hu and Thomas Uriot for their detailed initial reviews of this paper.
2309.00302
Arithmetic properties of overpartitions
The primary focus of this paper is overpartitions, a type of partition that plays a significant role in $q$-series theory. In 2006, Treneer discovered an explicit infinite family of congruences of overpartitions modulo $5$. In our research, we have identified explicit infinite families of congruences of overpartitions modulo $3,7,11$. This work reveals the connection between overpartitions and half-integral modular forms.
Qi-Yang Zheng
2023-09-01T07:26:27Z
http://arxiv.org/abs/2309.00302v1
# Arithmetic properties of overpartitions ###### Abstract. The primary focus of this paper is overpartitions, a type of partition that plays a significant role in \(q\)-series theory. In 2006, Treneer discovered an explicit infinite family of congruences of overpartitions modulo \(5\). In our research, we have identified explicit infinite families of congruences of overpartitions modulo \(3,7,11\). This work reveals the connection between overpartitions and half-integral modular forms. ## 1. Introduction An overpartition of \(n\) is an ordered sequence of non-increasing positive integers that sum to \(n\), where the first occurrence of each integer may be overlined. For example, \(3\) has eight overpartitions: \[3,\bar{3},2+1,\bar{2}+1,2+\bar{1},\bar{2}+\bar{1},1+1+1,\bar{1}+1+1.\] Obtaining the generating function of the overpartition function is straightforward. \[\sum_{n=0}^{\infty}\bar{p}(n)q^{n}=\prod_{n=1}^{\infty}\frac{1+q^{n}}{1-q^{n}}.\] The results in [4] show that several finite products appearing in \(q\)-series possess natural interpretations in terms of overpartitions. Furthermore, overpartitions have been found to play a central role in bijective proofs of Ramanujan's \({}_{1}\psi_{1}\) summation and the \(q\)-Gauss summation. Our paper shows that overpartitions yield results of another type. First, we introduce Ramanujan-type congruences. Let \(p(n)\) denote the number of unrestricted partitions of \(n\). Ramanujan discovered that, \[p(5n+4) \equiv 0\ (\text{mod }5),\] \[p(7n+5) \equiv 0\ (\text{mod }7),\] \[p(11n+6) \equiv 0\ (\text{mod }11).\] Such congruences also appear in the context of overpartitions. In 2006, Treneer discovered an explicit infinite family of congruences modulo \(5\)[17, Prop. 1.4]. **Theorem 1.1** (Treneer).: _Let \(Q\equiv 4\pmod{5}\) be prime. Then_ \[\bar{p}(5Q^{3}n)\equiv 0\pmod{5}\] _for all \(n\) coprime to \(Q\)._ It is natural to ask whether \(5\) is the only such special prime. Surprisingly, a similar phenomenon occurs for moduli \(3,7,\) and \(11\). In the case of modulus \(3\), we discover two infinite families of congruences. **Theorem 1.2**.: _Let \(Q\equiv 5\pmod{6}\) be prime. Then_ 1. \(\bar{p}(3Q^{3}n)\equiv 0\pmod{3}\) _for all_ \(n\) _coprime to_ \(Q\)_._ 2. \(\bar{p}(Q^{3}n)\equiv 0\pmod{3}\) _for all_ \(n\) _coprime to_ \(Q\)_, provided_ \(n\equiv-1\pmod{3}\)_._ Note that the two types of congruences mentioned above are completely disjoint. For modulus \(7\), we identify three infinite families of congruences. **Theorem 1.3**.: _Let \(Q\equiv 3,5,6\pmod{7}\) be prime. Then_ \[\bar{p}(7Q^{3}n)\equiv 0\pmod{7}\] _for all \(n\) coprime to \(Q\)._ Moving on to modulus \(11\), we find one infinite family of congruences. **Theorem 1.4**.: _Let \(Q\equiv 10\pmod{11}\) be prime. Then_ \[\bar{p}(11Q^{3}n)\equiv 0\pmod{11}\] _for all \(n\) coprime to \(Q\)._ Congruences modulo primes \(m\geq 13\) do exist. However, there seems to be no explicit form for \(Q\) that satisfies the condition \[\bar{p}(mQ^{3}n)\equiv 0\pmod{m}\] for all \(n\) coprime to \(Q\). In fact, we can state the following theorem: **Theorem 1.5**.: _Let \(m\geq 13\) be prime. Then there are positive proportion of primes \(Q\) such that_ \[\bar{p}(mQ^{3}n)\equiv 0\pmod{m}\] _for all \(n\) coprime to \(mQ\)._ In the proofs of these theorems, we discover a close relationship between overpartitions and half-integral weight modular forms. The key to proving these congruences lies in identifying the corresponding modular form. 
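Before developing the modular-forms machinery, the congruences above can be checked numerically straight from the generating function. The following plain-Python sketch, which is our own illustration and not part of the paper, computes \(\bar{p}(n)\) by multiplying out \(\prod_{n\geq 1}(1+q^{n})/(1-q^{n})\) one factor at a time:

```python
def overpartition_numbers(N):
    """Return the list pbar(0..N): coefficients of prod (1+q^n)/(1-q^n)."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        for k in range(N, n - 1, -1):   # multiply the series by (1 + q^n)
            p[k] += p[k - n]
        for k in range(n, N + 1):       # divide the series by (1 - q^n)
            p[k] += p[k - n]
    return p

pbar = overpartition_numbers(250)
assert pbar[3] == 8          # the eight overpartitions of 3 listed above
# Theorem 1.2 (2) with Q = 5 and n = 2 (coprime to 5, congruent to -1 mod 3):
assert pbar[5**3 * 2] % 3 == 0
```

Such checks corroborate the congruences numerically; proving them, as done below, requires identifying the corresponding modular form.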
However, finding such a modular form is not a straightforward task. ## 2. Preliminaries First, we introduce the \(U\) and \(V\) operators for formal series. If \(j\) is a positive integer, these operators are defined as follows: \[\left(\sum_{n=0}^{\infty}a(n)q^{n}\right)\ |\ U(j):=\sum_{n=0}^{\infty}a(jn)q^{n},\] \[\left(\sum_{n=0}^{\infty}a(n)q^{n}\right)\ |\ V(j):=\sum_{n=0}^{\infty}a(n)q^{jn}.\] The behavior of these operators is described in the following proposition [13, Proposition 2.22]. **Proposition 2.1**.: _Let \(f(z)\in M_{k}(\Gamma_{0}(N),\chi)\) with integral weight._ 1. _If \(j\ |\ N\), then \(f(z)\ |\ U(j)\in M_{k}(\Gamma_{0}(N),\chi)\)._ 2. \(f(z)\ |\ V(j)\in M_{k}(\Gamma_{0}(jN),\chi)\). Now, we introduce the Hecke operator on modular forms of integral weight. Let \(Q\) be a prime. The Hecke operator \(T(Q)\) is defined as follows: \[\left(\sum_{n=0}^{\infty}a(n)q^{n}\right)\ |\ T(Q):=\sum_{n=0}^{\infty}\left(a(Qn)+\chi(Q)Q^{k-1}a\left(\frac{n}{Q}\right)\right)q^{n},\] where \(a(n/Q)=0\) if \(Q\nmid n\). Let \(f(z)\in M_{k}(\Gamma_{0}(N),\chi)\). Then \(f(z)\ |\ T(Q)\in M_{k}(\Gamma_{0}(N),\chi)\). If \(f(z)\) is a cusp form, then so are \(f(z)\ |\ U(j)\), \(f(z)\ |\ V(j)\), and \(f(z)\ |\ T(Q)\). There are also \(U\) and \(V\) operators for modular forms of half-integral weight. The following proposition describes the behavior of these operators on half-integral weight modular forms [13, Proposition 3.7]. **Proposition 2.2**.: _Let \(f(z)\in M_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\chi)\)._ 1. _If \(j\ |\ N\), then \(f(z)\ |\ U(j)\in M_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\left(\frac{4j}{\bullet}\right)\chi)\)._ 2. \(f(z)\ |\ V(j)\in M_{\lambda+\frac{1}{2}}(\Gamma_{0}(4jN),\left(\frac{4j}{\bullet}\right)\chi)\). For primes \(Q\), the half-integral weight Hecke operator \(T(Q^{2})\) for \(f(z)\in M_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\chi)\) is defined as follows: \[\left(\sum_{n=0}^{\infty}a(n)q^{n}\right)\ |\ T(Q^{2})\] \[:=\sum_{n=0}^{\infty}\left(a(Q^{2}n)+\left(\frac{(-1)^{\lambda}n}{Q}\right)\chi(Q)Q^{\lambda-1}a(n)+\left(\frac{(-1)^{\lambda}}{Q^{2}}\right)\chi(Q^{2})Q^{2\lambda-1}a\left(\frac{n}{Q^{2}}\right)\right)q^{n}.\] Then \(f(z)\ |\ T(Q^{2})\in M_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\chi)\). Moreover, if \(f(z)\in M_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\chi)\) is a cusp form, then so are \(f(z)\ |\ U(j)\), \(f(z)\ |\ V(j)\), and \(f(z)\ |\ T(Q^{2})\). It is worth recalling that Dedekind's eta function is defined as \[\eta(z)=q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n}),\] where \(q=e^{2\pi iz}\). Thus \[\sum_{n=0}^{\infty}\bar{p}(n)q^{n}=\frac{\eta(2z)}{\eta^{2}(z)}.\] If \(m\) is a prime, we denote by \(M_{\frac{k}{2}}(\Gamma_{0}(N),\chi)_{m}\) (respectively, \(S_{\frac{k}{2}}(\Gamma_{0}(N),\chi)_{m}\)) the \(\mathbb{F}_{m}\)-vector space obtained by reducing the \(q\)-expansions of modular forms (resp. cusp forms) in \(M_{\frac{k}{2}}(\Gamma_{0}(N),\chi)\) (resp. \(S_{\frac{k}{2}}(\Gamma_{0}(N),\chi)\)) with integer coefficients modulo \(m\). At times, for convenience, we will use the notation \(a\equiv_{m}b\) instead of \(a\equiv b\ (\mathrm{mod}\ m)\).
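On \(q\)-expansions truncated to finitely many coefficients, the \(U\) and \(V\) operators are simple index manipulations. The small Python sketch below (again our own illustration) makes this explicit on coefficient lists:

```python
def U(coeffs, j):
    """U(j): send sum a(n) q^n to sum a(jn) q^n."""
    return coeffs[::j]

def V(coeffs, j):
    """V(j): send sum a(n) q^n to sum a(n) q^{jn}."""
    out = [0] * (j * (len(coeffs) - 1) + 1)
    for n, a in enumerate(coeffs):
        out[j * n] = a
    return out

series = [1, 2, 4, 8, 14, 24, 40]    # pbar(0..6) from the earlier sketch
assert U(series, 3) == [1, 8, 40]    # the series sum pbar(3n) q^n of Section 3
assert V([1, 2, 4], 2) == [1, 0, 2, 0, 4]
```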
The construction of modular forms requires the following theorem [6, Theorem 3]: **Theorem 2.3** (Gordon-Hughes).: _Let_ \[f(z)=\prod_{\delta\mid N}\eta^{r_{\delta}}(\delta z)\] _be an \(\eta\)-quotient satisfying_ (i) \[\sum_{\delta\mid N}\delta r_{\delta}\equiv 0\ (\mathrm{mod}\ 24);\] (ii) \[\sum_{\delta\mid N}\frac{Nr_{\delta}}{\delta}\equiv 0\ (\mathrm{mod}\ 24);\] (iii) \[k:=\frac{1}{2}\sum_{\delta\mid N}r_{\delta}\in\mathbb{Z}.\] _Then_ \[f\left(\frac{az+b}{cz+d}\right)=\chi(d)(cz+d)^{k}f(z)\] _for each \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\Gamma_{0}(N)\), where \(\chi\) is the Dirichlet character \((\mathrm{mod}\ N)\) defined by_ \[\chi(n):=\left(\frac{(-1)^{k}\prod_{\delta\mid N}\delta^{r_{\delta}}}{n}\right)\quad\text{if }n>0\text{ and }(n,6)=1.\] If \(f(z)\) is holomorphic (resp. vanishes) at all cusps of \(\Gamma_{0}(N)\), then \(f(z)\in M_{k}(\Gamma_{0}(N),\chi)\) (resp. \(S_{k}(\Gamma_{0}(N),\chi)\)), as \(\eta(z)\) never vanishes on \(\mathcal{H}\). The following theorem (cf. [9]) provides a useful criterion for computing the orders of an \(\eta\)-quotient at all cusps of \(\Gamma_{0}(N)\). **Theorem 2.4** (Martin).: _Let \(c\), \(d\), and \(N\) be positive integers with \(d\mid N\) and \((c,d)=1\). If \(f(z)\) is an \(\eta\)-quotient that satisfies the conditions of Theorem 2.3, then the order of vanishing of \(f(z)\) at the cusp \(c/d\) is_ \[\frac{N}{24}\sum_{\delta\mid N}\frac{r_{\delta}(d^{2},\delta^{2})}{\delta(d^{2},N)}.\] ## 3. Modulo 3 First, we introduce the Ramanujan theta functions: \[\phi(q)=\sum_{n=-\infty}^{\infty}q^{n^{2}},\] \[\psi(q)=\sum_{n=0}^{\infty}q^{\frac{n(n+1)}{2}}.\] We will use the notation of [1]: \[a(q)=\phi(-q^{3}),\] \[b(q)=\frac{(q;q)_{\infty}(q^{6};q^{6})_{\infty}^{2}}{(q^{2};q^{2})_{\infty}(q^{3};q^{3})_{\infty}},\] where \(\left(a;q\right)_{\infty}=\prod_{n=0}^{\infty}(1-aq^{n})\). The following lemma is useful for obtaining some surprising identities [1, Lemma 2.6 and 2.7]. **Lemma 3.1** (Andrews-Hirschhorn-Sellers).: \[\phi(-q)=a(q^{3})-2qb(q^{3}),\] \[a(q)^{3}-8qb(q)^{3}=\frac{\phi(-q)^{4}}{\phi(-q^{3})}.\] We will begin by proving the following theorem, which provides important dissections. **Theorem 3.2**.: \[\sum_{n=0}^{\infty}\bar{p}(3n)q^{n}=\frac{(q^{2};q^{2})_{\infty}^{4}(q^{3};q^{3})_{\infty}^{6}}{(q;q)_{\infty}^{8}(q^{6};q^{6})_{\infty}^{3}},\] \[\sum_{n=0}^{\infty}\bar{p}(3n+1)q^{n}=2\frac{(q^{2};q^{2})_{\infty}^{3}(q^{3};q^{3})_{\infty}^{3}}{(q;q)_{\infty}^{7}},\] \[\sum_{n=0}^{\infty}\bar{p}(3n+2)q^{n}=4\frac{(q^{2};q^{2})_{\infty}^{2}(q^{6};q^{6})_{\infty}^{3}}{(q;q)_{\infty}^{6}}.\] **Corollary 3.3**.: \[\bar{p}(3n+2)\equiv 0\pmod{4}.\] The corollary is immediate from the third identity of Theorem 3.2. Proof of Theorem 3.2.: By [11, Theorem 1.2.]
we have \[\sum_{n=0}^{\infty}\bar{p}(n)q^{n}=\frac{\eta(2z)}{\eta^{2}(z)}=\frac{1}{\phi (-q)}.\] Thus by Lemma 3.1 we have \[\sum_{n=0}^{\infty}\bar{p}(n)q^{\frac{n}{3}}=\frac{1}{\phi(-q^{1/3})}=\frac{1 }{a(q)-2q^{1/3}b(q)}=\frac{a(q)^{2}+2q^{1/3}a(q)b(q)+4q^{2/3}b(q)^{2}}{a(q)^{3} -8qb(q)^{3}}.\] By comparing the coefficients on both sides, we obtain \[\sum_{n=0}^{\infty}\bar{p}(3n)q^{n}=\frac{a(q)^{2}}{a(q)^{3}-8qb(q)^{3}}=\frac{(q ^{2};q^{2})_{\infty}^{4}(q^{3};q^{3})_{\infty}^{6}}{(q;q)_{\infty}^{8}(q^{6};q^{ 6})_{\infty}^{3}},\] \[\sum_{n=0}^{\infty}\bar{p}(3n+1)q^{n}=\frac{2a(q)b(q)}{a(q)^{3}-8qb(q)^{3}}=2 \frac{(q^{2};q^{2})_{\infty}^{3}(q^{3};q^{3})_{\infty}^{3}}{(q;q)_{\infty}^{7}},\] \[\sum_{n=0}^{\infty}\bar{p}(3n+2)q^{n}=\frac{4b(q)^{2}}{a(q)^{3}-8qb(q)^{3}}=4 \frac{(q^{2};q^{2})_{\infty}^{2}(q^{6};q^{6})_{\infty}^{3}}{(q;q)_{\infty}^{6}}.\] Now we are able to prove Theorem 1.2 (1). Proof of Theorem 1.2 (1).: Note that \[\sum_{n=0}^{\infty}\bar{p}(3n)q^{n}=\frac{(q^{2};q^{2})_{\infty}^{4}(q^{3};q^{ 3})_{\infty}^{6}}{(q;q)_{\infty}^{8}(q^{6};q^{6})_{\infty}^{3}}\equiv_{3}\frac {(q;q)_{\infty}^{10}}{(q^{2};q^{2})_{\infty}^{5}}=\phi^{5}(-q).\] It is well known that \(\phi(q)\in M_{\frac{1}{2}}(\Gamma_{0}(4))\) (for example, see [13, Proposition 1.41]). Since \(\phi(-q)=2\phi(q)\mid U(2)\mid V(2)-\phi(q)\), by Proposition 2.2, we obtain \[\phi(-q)\in M_{\frac{1}{2}}(\Gamma_{0}(16)).\] So \[\phi^{5}(q)=\sum_{n=0}^{\infty}r_{5}(n)q^{n}\in M_{\frac{5}{2}}(\Gamma_{0}(4)),\] where \(r_{s}(n)\) denotes the number of ways to express \(n\) as the sum of \(s\) squares. By [3, Lemma 5.1.] we know that for each odd prime \(Q\), \[r_{5}(Q^{2}n)+Q\left(\frac{n}{Q}\right)r_{5}(n)+Q^{3}r_{5}\left(\frac{n}{Q^{2} }\right)=(Q^{3}+1)r_{5}(n), \tag{3.1}\] thus \[(-1)^{Q^{2}n}r_{5}(Q^{2}n)+(-1)^{n}Q\left(\frac{n}{Q}\right)r_{5}(n)+(-1)^{n/Q ^{2}}Q^{3}r_{5}\left(\frac{n}{Q^{2}}\right)=(-1)^{n}(Q^{3}+1)r_{5}(n),\] where \((-1)^{n/Q^{2}}=0\) if \(Q^{2}\nmid n\). The equation above is equivalent to \[\phi^{5}(-q)\mid T(Q^{2})=(Q^{3}+1)\phi^{5}(-q).\] So if \(Q\equiv 5\pmod{6}\), we have \(\phi^{5}(-q)\mid T(Q^{2})\equiv 0\pmod{3}\), hence \[\sum_{n=0}^{\infty}\bar{p}(3n)q^{n}\mid T(Q^{2})\equiv 0\pmod{3}\,,\] i.e. \[\bar{p}(3Q^{2}n)+Q\left(\frac{n}{Q}\right)\bar{p}(n)+Q^{3}\bar{p}\left(\frac{n}{Q ^{2}}\right)\equiv 0\pmod{3}\] for all \(n\). Let \(n=Qn\) with \((n,Q)=1\) to eliminate the latter two parts, obtaining \[\bar{p}(3Q^{3}n)\equiv 0\pmod{3}\] for all \(n\) coprime to \(Q\). The proof for Theorem 1.2 (2) is more complicated. First, we need a theorem by Sturm [16, Theorem 1], which offers a useful criterion for determining when modular forms with integer coefficients become congruent to zero modulo a prime through finite computation. We will use the refined version of Sturm's Theorem, as presented in [13, Theorem 2.58], which also applies to half-integral weight modular forms. **Theorem 3.4** (Sturm).: _Suppose \(f(z)=\sum_{n=0}^{\infty}a(n)q^{n}\in M_{\frac{k}{2}}(\Gamma_{0}(N),\chi)_{m}\) such that_ \[a(n)\equiv 0\pmod{m}\] _for all \(n\leq\frac{kN}{24}\prod_{p\mid N}\left(1+\frac{1}{p}\right)\). Then \(a(n)\equiv 0\pmod{m}\) for all \(n\in\mathbb{Z}\)._ Now, let us briefly introduce the famous Shimura lift [15]. The result we use is from [10] (also referred to in [13, Theorem 3.14]). **Theorem 3.5** (Niwa-Shimura).: _Suppose that \(g(z)=\sum_{n=1}^{\infty}b(n)q^{n}\in S_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\chi)\) with \(\lambda\geq 1\). 
Let \(t\) be a positive square-free integer, and define the Dirichlet character \(\psi_{t}\) by \(\psi_{t}(n)=\chi(n)\left(\frac{-1}{n}\right)^{\lambda}\left(\frac{t}{n}\right)\). If complex numbers \(A_{t}(n)\) are defined by_ \[\sum_{n=1}^{\infty}\frac{A_{t}(n)}{n^{s}}:=\sum_{n=1}^{\infty}\frac{\psi_{t}(n)}{n^{s-\lambda+1}}\cdot\sum_{n=1}^{\infty}\frac{b(tn^{2})}{n^{s}},\] _then_ \[\operatorname{Sh}_{t}(g(z)):=\sum_{n=1}^{\infty}A_{t}(n)q^{n}\] _is a modular form in \(M_{2\lambda}(\Gamma_{0}(2N),\chi^{2})\). If \(\lambda\geq 2\), then \(\operatorname{Sh}_{t}(g(z))\) is a cusp form._ With the tools above, we are able to prove Theorem 1.2 (2). Proof of Theorem 1.2 (2).: By Theorem 3.2 we have \[\sum_{n=0}^{\infty}\bar{p}(3n+1)q^{3n+1}=2q\frac{(q^{6};q^{6})_{\infty}^{3}(q^{9};q^{9})_{\infty}^{3}}{(q^{3};q^{3})_{\infty}^{7}}\equiv_{3}2q(q;q)_{\infty}^{6}(q^{2};q^{2})_{\infty}^{9}=2\eta^{6}(z)\eta^{9}(2z).\] By a table listed in [2], we know that for each odd prime \(Q\), \(\eta^{6}(z)\eta^{9}(2z)\in S_{\frac{15}{2}}(\Gamma_{0}(16))\) is a Hecke eigenform for the operator \(T(Q^{2})\). Let \[\eta^{6}(z)\eta^{9}(2z)=\sum_{n=1}^{\infty}b(n)q^{n};\] then for each odd prime \(Q\), \[\sum_{n=1}^{\infty}\left(b(Q^{2}n)+\left(\frac{-n}{Q}\right)Q^{6}b(n)+Q^{13}b\left(\frac{n}{Q^{2}}\right)\right)q^{n}=\mu(Q)\sum_{n=1}^{\infty}b(n)q^{n}\] for some constant \(\mu(Q)\). By comparing the coefficient of the term \(q^{1}\), we obtain \[b(Q^{2})+\left(\frac{-1}{Q}\right)Q^{6}b(1)=\mu(Q)b(1).\] Since \(b(1)=1\), we have \[\mu(Q)=b(Q^{2})+\left(\frac{-1}{Q}\right)Q^{6}.\] Unfortunately, we have little information about \(b(Q^{2})\). Next, we will use the Shimura lift to study the properties of \(b(Q^{2})\). Let \[\operatorname{Sh}_{1}(\eta^{6}(z)\eta^{9}(2z))=\sum_{n=1}^{\infty}A_{1}(n)q^{n}\in S_{14}(\Gamma_{0}(8)).\] By definition, we know that \[A_{1}(n) =\sum_{d|n}\left(\frac{-1}{d}\right)\chi_{0}(d)d^{6}b\left(\frac{n^{2}}{d^{2}}\right)\ \ \text{(where $\chi_{0}$ is the trivial character mod $16$)}\] \[=\sum_{d|n\atop d\text{ odd}}\left(\frac{-1}{d}\right)d^{6}b\left(\frac{n^{2}}{d^{2}}\right).\] Note that \[A_{1}(Q)=b(Q^{2})+\left(\frac{-1}{Q}\right)Q^{6}.\] Since our purpose is to study \(A_{1}(Q)\), we do not need information about \(A_{1}(n)\) with \((n,6)>1\), so we are going to cancel these terms. Precisely, we will study \[\sum_{n=1\atop n\equiv 1,5\ (\text{mod }6)}^{\infty}A_{1}(n)q^{n}.\] Note that \[\sum_{n=1\atop n\equiv 1,5\ (\text{mod }6)}^{\infty}A_{1}(n)q^{n}\] \[=\sum_{n=1}^{\infty}A_{1}(n)q^{n}-\sum_{n=1\atop n\equiv 0\ (\text{mod }2)}^{\infty}A_{1}(n)q^{n}-\sum_{n=1\atop n\equiv 0\ (\text{mod }3)}^{\infty}A_{1}(n)q^{n}+\sum_{n=1\atop n\equiv 0\ (\text{mod }6)}^{\infty}A_{1}(n)q^{n}, \tag{3.2}\] where \[\sum_{n=1\atop n\equiv 0\pmod{6}}^{\infty}A_{1}(n)q^{n}=\sum_{n=1}^{\infty}A_{1}(6n)q^{6n}=\sum_{n=1}^{\infty}A_{1}(n)q^{n}\ |\ U(6)\ |\ V(6).\] If we view \(\sum_{n=1}^{\infty}A_{1}(n)q^{n}\) as an element in \(S_{14}(\Gamma_{0}(24))\), then \[\sum_{n=1}^{\infty}A_{1}(6n)q^{n}=\sum_{n=1}^{\infty}A_{1}(n)q^{n}\ |\ U(6)\in S_{14}(\Gamma_{0}(24)).\] Hence \[\sum_{n=1\atop n\equiv 0\pmod{6}}^{\infty}A_{1}(n)q^{n}=\sum_{n=1}^{\infty}A_{1}(n)q^{n}\ |\ U(6)\ |\ V(6)\in S_{14}(\Gamma_{0}(144)).\] A similar argument can be applied to the other terms of (3.2). Finally, we obtain \[\sum_{n=1\atop n\equiv 1,5\pmod{6}}^{\infty}A_{1}(n)q^{n}\in S_{14}(\Gamma_{0}(144)).\] Using Theorems 2.3 and 2.4, we find that \(\eta^{4}(6z)\in S_{2}(\Gamma_{0}(36))\).
Since the Eisenstein series \[E_{4}(z)=1+240\sum_{n=1}^{\infty}\sigma_{3}(n)q^{n}\in M_{4}(\Gamma_{0}(1))\] and \(E_{4}(z)\equiv 1\pmod{3}\), we have \(\eta^{4}(6z)E_{4}^{3}(z)\in S_{14}(\Gamma_{0}(144))\). Thus \(\eta^{4}(6z)\in S_{14}(\Gamma_{0}(144))_{3}\). Now we can apply Theorem 3.4 to show that \[\sum_{n=1\atop n\equiv 1,5\pmod{6}}^{\infty}A_{1}(n)q^{n}\equiv\eta^{4}(6z)\pmod{3}, \tag{3.3}\] by checking coefficients well beyond the Sturm bound of \(336\). Note that \(\eta^{4}(6z)\) has nonzero coefficients only at \(q^{6n+1}\) terms, so we obtain \(A_{1}(n)\equiv 0\pmod{3}\) for all \(n\equiv 5\pmod{6}\). Thus if an odd prime \(Q\equiv 5\pmod{6}\), we have \[\mu(Q)=b(Q^{2})+\left(\frac{-1}{Q}\right)Q^{6}=A_{1}(Q)\equiv_{3}0.\] Thus for primes \(Q\equiv 5\pmod{6}\), \[\sum_{n=0\atop n\equiv 1\pmod{3}}^{\infty}\bar{p}(n)q^{n}\ |\ T(Q^{2})\equiv 0\pmod{3}\,.\] Or equivalently, \[\bar{p}(Q^{2}n)+\left(\frac{-n}{Q}\right)Q^{6}\bar{p}(n)+Q^{13}\bar{p}\left(\frac{n}{Q^{2}}\right)\equiv 0\pmod{3}\] holds for all \(n\equiv 1\pmod{3}\). Let \(n=Qn\) with \((n,Q)=1\) and \(n\equiv-1\pmod{3}\) to eliminate the latter two parts, obtaining \[\bar{p}(Q^{3}n)\equiv 0\pmod{3}\] for all \((n,Q)=1\) and \(n\equiv-1\pmod{3}\). _Remark_.: We can compute that \(\dim S_{14}(\Gamma_{0}(8))=11\) (see, for example, [5, Section 3.9]). Using Theorems 2.3 and 2.4, we can find a basis \[\{\beta_{1},\cdots,\beta_{11}\}=\] \[\left\{\frac{\eta^{32}(z)}{\eta^{4}(2z)},\frac{\eta^{44}(4z)}{\eta^{16}(8z)},\eta^{20}(2z)\eta^{8}(4z),\eta^{8}(2z)\eta^{20}(4z),\frac{\eta^{32}(4z)}{\eta^{4}(2z)},\eta^{20}(4z)\eta^{8}(8z),\right.\] \[\left.\eta^{4}(2z)\eta^{8}(4z)\eta^{16}(8z),\frac{\eta^{20}(4z)\eta^{16}(8z)}{\eta^{8}(2z)},\frac{\eta^{8}(4z)\eta^{24}(8z)}{\eta^{4}(2z)},\frac{\eta^{32}(8z)}{\eta^{4}(4z)},\frac{\eta^{4}(2z)\eta^{40}(8z)}{\eta^{16}(4z)}\right\}.\] A straightforward computation reveals that \[\sum_{n=1}^{\infty}A_{1}(n)q^{n} =\beta_{1}+96\beta_{2}-2304\beta_{3}-65536\beta_{5}-24576\beta_{6}+393216\beta_{8}-6291456\beta_{10}\] \[\equiv_{3}\beta_{1}-\beta_{5}.\] Using the above congruence to prove (3.3) is simpler than using the definition of the Shimura lift. ## 4. Modulo 7 The case modulo 7 is quite different from the case modulo 3, since the explicit expression for \(\sum\bar{p}(7n)q^{n}\) is very complicated and hard to analyse. First we need to show that \(\sum\bar{p}(7n)q^{n}\) is a modular form; then we can use Sturm's Theorem to find a modular form which equals \(\sum\bar{p}(7n)q^{n}\) modulo 7, so that we can avoid analysing the explicit expression for \(\sum\bar{p}(7n)q^{n}\). **Theorem 4.1**.: _For each prime \(m\geq 3\), let \(m^{\prime}=(m\bmod 8)\). Then_ \[\sum_{n=0}^{\infty}\bar{p}(mn)q^{n}\in M_{(8+m^{\prime})\frac{m-1}{2}-\frac{1}{2}}(\Gamma_{0}(16))_{m}.\] Proof.: We define an \(\eta\)-quotient by \[f(m;z)=\frac{\eta(2z)}{\eta^{2}(z)}\eta^{a}(mz)\eta^{b}(2mz)\equiv_{m}\eta^{am-2}(z)\eta^{bm+1}(2z),\] where \(a=2m^{\prime}-8\) and \(b=16-m^{\prime}\). One can use Theorems 2.3 and 2.4 to show that \[\eta^{am-2}(z)\eta^{bm+1}(2z)\in S_{4m+\frac{mm^{\prime}-1}{2}}(\Gamma_{0}(2)).\] In fact, it has order \(m\) at the cusp \(\infty\) and order \((mm^{\prime}-1)/8\) at the cusp \(0\).
On the other hand, \[\frac{\eta(2z)}{\eta^{2}(z)}\eta^{a}(mz)\eta^{b}(2mz)=\sum_{n=0}^{\infty}\bar{p}(n)q^{n+m}\cdot(q^{m};q^{m})_{\infty}^{a}(q^{2m};q^{2m})_{\infty}^{b}.\] Since \(U(m)\equiv T(m)\pmod{m}\), we have \[\frac{\eta(2z)}{\eta^{2}(z)}\eta^{a}(mz)\eta^{b}(2mz)\ |\ U(m)\equiv_{m}\eta^{am-2}(z)\eta^{bm+1}(2z)\ |\ T(m). \tag{4.1}\] As for the left hand side of (4.1), we have \[\frac{\eta(2z)}{\eta^{2}(z)}\eta^{a}(mz)\eta^{b}(2mz)\ |\ U(m)=\sum_{\begin{subarray}{c}n=0\\ n\equiv 0\pmod{m}\end{subarray}}^{\infty}\bar{p}(n)q^{\frac{n+m}{m}}\cdot(q;q)_{\infty}^{a}(q^{2};q^{2})_{\infty}^{b}.\] One can use Theorems 2.3 and 2.4 to show that \(\eta^{8}(z)\eta^{8}(2z)\in S_{8}(\Gamma_{0}(2))\) has order \(1\) at all cusps. So as for the right hand side of (4.1), we have \[\eta^{am-2}(z)\eta^{bm+1}(2z)\ |\ T(m)=\eta^{8}(z)\eta^{8}(2z)g(m;z),\] where \(g(m;z)\in M_{4m-8+\frac{mm^{\prime}-1}{2}}(\Gamma_{0}(2))\). Combining the above two equations shows that \[\sum_{\begin{subarray}{c}n=0\\ n\equiv 0\pmod{m}\end{subarray}}^{\infty}\bar{p}(n)q^{\frac{n}{m}}\equiv_{m}\eta^{8-a}(z)\eta^{8-b}(2z)g(m;z)=\frac{\eta^{2(8-m^{\prime})}(z)}{\eta^{8-m^{\prime}}(2z)}g(m;z),\] where \[\frac{\eta^{2(8-m^{\prime})}(z)}{\eta^{8-m^{\prime}}(2z)}=\phi^{8-m^{\prime}}(-q)\in M_{\frac{8-m^{\prime}}{2}}(\Gamma_{0}(16)).\] Hence \[\sum_{n=0}^{\infty}\bar{p}(mn)q^{n}\in M_{(8+m^{\prime})\frac{m-1}{2}-\frac{1}{2}}(\Gamma_{0}(16))_{m}.\] We need more tools to proceed. Let \[\phi_{s,t}(q)=\phi(q)^{s}(2q^{\frac{1}{4}}\psi(q^{2}))^{t}.\] If \(4\mid t\), we may write \[\phi_{s,t}(q)=\sum_{n=0}^{\infty}\phi_{s,t}(n)q^{n}.\] Let \(r_{s,t}(n)\) denote the number of representations of \(n\) as a sum of \(s\) even squares and \(t\) odd squares, considering both negative numbers and order. Then \[\sum_{n=0}^{\infty}r_{s,t}(n)q^{n}=\binom{s+t}{s}\phi_{s,t}(q^{4}). \tag{4.2}\] We will require the following fact about modular forms of half-integral weight. **Theorem 4.2** ([8, page 184, Prop. 4.]).: \(M_{\frac{2k+1}{2}}(\Gamma_{0}(4))\) _is the vector space consisting of all linear combinations of \(\phi_{2k+1-4j,4j}(q),j=0,1,\cdots,\lfloor k/2\rfloor\)._ Now we are going to prove an identity of divisor functions. **Theorem 4.3**.: \[\sum_{i=1}^{n}\sigma(2i-1)\sigma(2n-2i+1)=\sum_{\begin{subarray}{c}d\mid n\\ n/d\text{ odd}\end{subarray}}d^{3}.\] Proof.: By [12, page 6] we have \[q\psi^{4}(q^{2})=\sum_{n=0}^{\infty}\sigma(2n+1)q^{2n+1},\] and by [12, page 8], \[q^{2}\psi^{8}(q^{2})=\sum_{n=1}^{\infty}\sum_{\begin{subarray}{c}d\mid n\\ n/d\text{ odd}\end{subarray}}d^{3}q^{2n}.\] Since the second series is the square of the first, comparing the coefficients of \(q^{2n}\) on both sides gives the desired result. Proof of Theorem 1.3.: By Theorem 4.1 we have \[\sum_{n=0}^{\infty}\bar{p}(7n)q^{n}\in M_{\frac{89}{2}}(\Gamma_{0}(16))_{7}.\] Thus \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(7n)q^{n}=2\sum_{n=0}^{\infty}\bar{p}(7n)q^{n}\ |\ U(2)\ |\ V(2)-\sum_{n=0}^{\infty}\bar{p}(7n)q^{n}\in M_{\frac{89}{2}}(\Gamma_{0}(32))_{7}.\] On the other hand, since the Eisenstein series \[E_{6}(z)=1-504\sum_{n=1}^{\infty}\sigma_{5}(n)q^{n}\in M_{6}(\Gamma_{0}(1))\] and \(E_{6}(z)\equiv 1\pmod{7}\), we have \[E_{6}^{7}(z)(\phi^{5}(q)-2\phi_{1,4}(q))\in M_{\frac{89}{2}}(\Gamma_{0}(32)).\] One can use Theorem 3.4 to show that \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(7n)q^{n}\equiv\phi^{5}(q)-2\phi_{1,4}(q)\pmod{7},\] by checking coefficients well beyond the Sturm bound of 178. So we can view \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(7n)q^{n}\in M_{\frac{5}{2}}(\Gamma_{0}(4))_{7}.\] Now we are going to analyse \((\phi^{5}(q)-2\phi_{1,4}(q))\mid T(Q^{2})\) for an odd prime \(Q\).
Since \(M_{\frac{5}{2}}(\Gamma_{0}(4))\) is of dimension \(2\), we can write \[(\phi^{5}(q)-2\phi_{1,4}(q))\mid T(Q^{2})=c_{0}\phi^{5}(q)+c_{1}\phi_{1,4}(q) \tag{4.3}\] for some constants \(c_{0},c_{1}\). Since \(\phi^{5}(q)\mid T(Q^{2})=(Q^{3}+1)\phi^{5}(q)\) by (3.1), the above equation is equivalent to \[2\phi_{1,4}(q)\mid T(Q^{2})=(Q^{3}+1-c_{0})\phi^{5}(q)-c_{1}\phi_{1,4}(q). \tag{4.4}\] Comparing the constant terms on both sides of (4.4), we obtain \(c_{0}=Q^{3}+1\). To obtain \(c_{1}\), we may rewrite (4.4) via the definition of the Hecke operator to obtain \[2\phi_{1,4}(Q^{2}n)+2Q\left(\frac{n}{Q}\right)\phi_{1,4}(n)+2Q^{3}\phi_{1,4}\left(\frac{n}{Q^{2}}\right)=-c_{1}\phi_{1,4}(n).\] Thus \[2\phi_{1,4}(Q^{2})+2Q\phi_{1,4}(1)=-c_{1}\phi_{1,4}(1).\] Since \(\phi_{1,4}(1)=16\), we have \(\phi_{1,4}(Q^{2})+16Q=-8c_{1}\). So next we need to compute \(\phi_{1,4}(Q^{2})\). By (4.2) we have \[\sum_{n=0}^{\infty}r_{1,4}(n)q^{n}=5\sum_{n=0}^{\infty}\phi_{1,4}(n)q^{4n}.\] Thus \(5\phi_{1,4}(Q^{2})=r_{1,4}(4Q^{2})\). Since \(4Q^{2}\equiv 4\pmod{8}\), we notice that if \[4Q^{2}=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}+x_{5}^{2},\] then one of the \(x_{i}\) must be a multiple of \(4\) and the others odd. So \[r_{1,4}(4Q^{2}) =5r_{0,4}(4Q^{2})+10\sum_{\begin{subarray}{c}i\geq 1\\ 4\mid i\end{subarray}}r_{0,4}(4Q^{2}-i^{2})\] \[=5r_{0,4}(4Q^{2})+10\sum_{i=1}^{\lfloor Q/2\rfloor}r_{0,4}(4Q^{2}-16i^{2}).\] Note that the number of representations of \(8n+4\) as a sum of \(4\) positive odd squares equals the number of representations of \(n\) as a sum of \(4\) triangle numbers. Moreover, the number of representations of \(n\) as a sum of \(4\) triangle numbers is equal to \(\sigma(2n+1)\)[7, (1.15)]. Thus \(r_{0,4}(8n+4)=16\sigma(2n+1)\); the coefficient \(16\) occurs since we also count squares of negative odd numbers. Consequently, \[\phi_{1,4}(Q^{2}) =16\sigma(Q^{2})+32\sum_{i=1}^{(Q-1)/2}\sigma(Q^{2}-4i^{2})\] \[=16\sigma(Q^{2})+32\sum_{i=1}^{(Q-1)/2}\sigma((2i-1)(2Q-2i+1))\] \[=16\sigma(Q^{2})+16\sum_{i=1}^{(Q-1)/2}\sigma((2i-1)(2Q-2i+1))\] \[\qquad\qquad\qquad+16\sum_{i=(Q+3)/2}^{Q}\sigma((2i-1)(2Q-2i+1))\] \[=16\sum_{i=1}^{Q}\sigma((2i-1)(2Q-2i+1)).\] Since \((2i-1,2Q-2i+1)=(2i-1,2Q)=(2i-1,Q)=1\) when \(i\neq(Q+1)/2\), we have \[\phi_{1,4}(Q^{2})=16\sigma(Q^{2})-16\sigma^{2}(Q)+16\sum_{i=1}^{Q}\sigma(2i-1)\sigma(2Q-2i+1).\] By Theorem 4.3 we have \[\phi_{1,4}(Q^{2}) =16(1+Q+Q^{2})-16(1+Q)^{2}+16(1+Q^{3})\] \[=16(Q^{3}-Q+1).\] Recalling that \(\phi_{1,4}(Q^{2})+16Q=-8c_{1}\), we get \(c_{1}=-2(Q^{3}+1)\). Now (4.3) becomes \[(\phi^{5}(q)-2\phi_{1,4}(q))\ |\ T(Q^{2})=(Q^{3}+1)(\phi^{5}(q)-2\phi_{1,4}(q)). \tag{4.5}\] Note that \(Q^{3}+1\equiv 0\pmod{7}\) for \(Q\equiv 3,5,6\pmod{7}\), so for these primes \(Q\) we have \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(7n)q^{n}\ |\ T(Q^{2})\equiv 0\pmod{7}\,,\] i.e. \[(-1)^{Q^{2}n}\bar{p}(7Q^{2}n)+(-1)^{n}Q\left(\frac{n}{Q}\right)\bar{p}(7n)+(-1)^{n/Q^{2}}Q^{3}\bar{p}\left(\frac{7n}{Q^{2}}\right)\equiv 0\pmod{7}\] holds for all \(n\). Let \(n=Qn\) with \((n,Q)=1\) to eliminate the latter two parts, obtaining \[\bar{p}(7Q^{3}n)\equiv 0\pmod{7}\] for all \((n,Q)=1\). **Corollary 4.4**.: _For each odd prime \(Q\), \(\phi_{1,4}(q)\) is an eigenform for the Hecke operator \(T(Q^{2})\), and the eigenvalue is \(Q^{3}+1\)._ Proof.: It immediately follows from (4.5). ## 5. Modulo 11 By Theorem 4.2 we know that a basis of \(M_{\frac{9}{2}}(\Gamma_{0}(4))\) is given by \(\phi^{9}(q)\), \(\phi_{5,4}(q)\), and \(\phi_{1,8}(q)\). The following theorem describes how Hecke operators act on this basis [3, Theorem 6.5.].
**Theorem 5.1** (Cooper).: _Let \(Q\) be an odd prime, \(\alpha=Q^{7}+1\), \(\beta=\theta(Q)\). Then_ \[\begin{pmatrix}\phi^{9}(q)\ |\ T(Q^{2})\\ \phi_{5,4}(q)\ |\ T(Q^{2})\\ \phi_{1,8}(q)\ |\ T(Q^{2})\end{pmatrix}=\frac{1}{17}\begin{pmatrix}17\alpha&-2\alpha+2\beta&2\alpha-2\beta\\ 0&16\alpha+\beta&\alpha-\beta\\ 0&16\alpha-16\beta&\alpha+16\beta\end{pmatrix}\begin{pmatrix}\phi^{9}(q)\\ \phi_{5,4}(q)\\ \phi_{1,8}(q)\end{pmatrix};\] _see [3, (2.1)] for the definition of \(\theta(Q)\). We will not use the value of \(\theta(Q)\) in this paper._ Proof of Theorem 1.4.: By Theorem 4.1 we have \[\sum_{n=0}^{\infty}\bar{p}(11n)q^{n}\in M_{\frac{109}{2}}(\Gamma_{0}(16))_{11}.\] Thus \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(11n)q^{n}=2\sum_{n=0}^{\infty}\bar{p}(11n)q^{n}\ |\ U(2)\ |\ V(2)-\sum_{n=0}^{\infty}\bar{p}(11n)q^{n}\in M_{\frac{109}{2}}(\Gamma_{0}(32))_{11}.\] On the other hand, since the Eisenstein series \[E_{10}(z)=1-264\sum_{n=1}^{\infty}\sigma_{9}(n)q^{n}\in M_{10}(\Gamma_{0}(1))\] and \(E_{10}(z)\equiv 1\pmod{11}\), we have \[E_{10}^{5}(z)(\phi^{9}(q)-2\phi_{5,4}(q))\in M_{\frac{109}{2}}(\Gamma_{0}(32)).\] One can use Theorem 3.4 to show that \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(11n)q^{n}\equiv\phi^{9}(q)-2\phi_{5,4}(q)\pmod{11}\,,\] by checking coefficients well beyond the Sturm bound of 218. So we can view \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(11n)q^{n}\in M_{\frac{9}{2}}(\Gamma_{0}(4))_{11}.\] For an odd prime \(Q\), by Theorem 5.1 we obtain \[(\phi^{9}(q)-2\phi_{5,4}(q))\ |\ T(Q^{2}) =\alpha\phi^{9}(q)-2\alpha\phi_{5,4}(q)\] \[=(Q^{7}+1)(\phi^{9}(q)-2\phi_{5,4}(q)).\] Since \(Q^{7}+1\equiv 0\pmod{11}\) for \(Q\equiv 10\pmod{11}\), for these primes \(Q\) we have \[(\phi^{9}(q)-2\phi_{5,4}(q))\ |\ T(Q^{2})\equiv 0\pmod{11},\] and hence \[\sum_{n=0}^{\infty}(-1)^{n}\bar{p}(11n)q^{n}\ |\ T(Q^{2})\equiv 0\pmod{11},\] i.e. \[(-1)^{Q^{2}n}\bar{p}(11Q^{2}n)+(-1)^{n}Q^{3}\left(\frac{n}{Q}\right)\bar{p}(11n)+(-1)^{n/Q^{2}}Q^{7}\bar{p}\left(\frac{11n}{Q^{2}}\right)\equiv 0\pmod{11}\] holds for all \(n\). Let \(n=Qn\) with \((n,Q)=1\) to eliminate the latter two parts, obtaining \[\bar{p}(11Q^{3}n)\equiv 0\pmod{11}\] for all \((n,Q)=1\). ## 6. Modulo other primes For odd primes \(m\leq 11\), we have already proved that for each odd prime \(Q\equiv-1\pmod{m}\), \[\bar{p}(mQ^{3}n)\equiv 0\pmod{m} \tag{6.1}\] for all \((n,Q)=1\). However, this fails for \(m=13\), since \(\bar{p}(13\cdot 103^{3}\cdot 3)\equiv 12\pmod{13}\). Moreover, (6.1) seems to fail for primes \(\geq 13\). However, for primes \(\geq 13\), we can still prove that there are infinitely many primes \(Q\) such that \[\bar{p}(mQ^{3}n)\equiv 0\pmod{m}\] for all \(n\) coprime to \(mQ\). First we need a theorem due to Serre. **Theorem 6.1** (Serre).: _The set of primes \(Q\equiv-1\pmod{Nm}\) such that_ \[f\ |\ T(Q)\equiv 0\pmod{m}\] _for each \(f(z)\in M_{k}(\Gamma_{0}(N),\psi)_{m}\) has positive density, where \(T(Q)\) denotes the usual Hecke operator acting on \(M_{k}(\Gamma_{0}(N),\psi)\)._ Some applications of the Shimura lift generalize this result (see [13, 3.30]).
**Theorem 6.2**.: _The set of primes \(Q\equiv-1\pmod{4Nm}\) such that_ \[f\ |\ T(Q^{2})\equiv 0\ (\mathrm{mod}\ m)\] _for each \(f(z)\in S_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\psi)_{m}\) has positive density, where \(T(Q^{2})\) denotes the usual Hecke operator acting on \(S_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\psi)\)._ Although the theorem states that such primes have positive density within the arithmetic progression \(4Nmn-1\), when we actually search for examples of congruences, we shall test all primes. By Theorem 4.1, \(\sum\bar{p}(mn)q^{n}\) may be a non-cusp form. Unfortunately, we cannot directly use Theorem 6.2 to prove the existence of congruences modulo primes \(\geq 13\). However, we can still test whether \(\sum\bar{p}(mn)q^{n}\ |\ T(Q^{2})\equiv_{m}0\), but there is less chance of finding such an example. The next theorem shows that we can transform a non-cusp form into a cusp form without changing many coefficients. **Theorem 6.3**.: _Suppose that \(m\geq 3\) and \(f=\sum_{n=0}^{\infty}a(n)q^{n}\in M_{\lambda+\frac{1}{2}}(\Gamma_{0}(4N),\psi)_{m}\), then_ 1. _If_ \(m\geq 5\)_,_ \[\sum_{\begin{subarray}{c}n=0\\ n\not\equiv 0\pmod{m}\end{subarray}}^{\infty}a(n)q^{n}\in S_{\lambda+\frac{m^{2}}{2}}\left(\Gamma_{0}\left(\frac{4Nm^{2}}{(N,m)}\right),\psi\right)_{m}.\] 2. _If_ \(m=3\)_,_ \[\sum_{\begin{subarray}{c}n=0\\ n\not\equiv 0\pmod{3}\end{subarray}}^{\infty}a(n)q^{n}\in S_{\lambda+\frac{25}{2}}\left(\Gamma_{0}\left(\frac{36N}{(3,N)}\right),\psi\right)_{3}.\] Proof.: According to the proof of [17, Prop. 3.5.], we know that if there is an integer \(t\) such that \[\sum_{n=0}^{\infty}a(n)q^{n}\ |\ U(m^{t})\] is holomorphic at all cusps \(\frac{a}{cm^{2}}\), then \[\sum_{n=0}^{\infty}a(n)q^{n}\ |\ U(m^{t})-\sum_{n=0}^{\infty}a(n)q^{n}\ |\ U(m^{t+1})\ |\ V(m)\] vanishes at all cusps \(\frac{a}{cm^{2}}\). Since \(f\) itself is holomorphic at all cusps, we can simply take \(t=0\). Now \[\begin{split}\sum_{n=0\atop n\not\equiv 0\pmod{m}}^{\infty}a(n)q^{n}&=\sum_{n=0}^{\infty}a(n)q^{n}-\sum_{n=0}^{\infty}a(n)q^{n}\ |\ U(m)\ |\ V(m)\\ &\in M_{\lambda+\frac{1}{2}}\left(\Gamma_{0}\left(\frac{4Nm^{2}}{(N,m)}\right),\psi\right)_{m}\end{split} \tag{6.2}\] vanishes at all cusps \(\frac{a}{cm^{2}}\). By [17, page 13] we know that \[\begin{cases}1\equiv_{m}\frac{\eta^{m^{2}}(z)}{\eta(m^{2}z)}\in M_{\frac{m^{2}-1}{2}}(\Gamma_{0}(m^{2}))&\text{ if }m\geq 5,\\ 1\equiv_{3}\frac{\eta^{27}(z)}{\eta^{3}(9z)}\in M_{12}(\Gamma_{0}(9))&\text{ if }m=3,\end{cases} \tag{6.3}\] vanishes at all cusps \(\frac{a}{c}\) of \(\Gamma_{0}(Nm^{2})\) with \(m^{2}\nmid c\). Multiplying the two modular forms in (6.2) and (6.3) gives the desired result. Proof of Theorem 1.5.: It is a corollary of Theorems 4.1, 6.2, and 6.3. While Ramanujan-type congruences modulo all primes \(m\) do exist, it is important to note that discovering them may require extensive computations. We encourage interested readers to explore and seek examples of congruences modulo primes \(\geq 13\). It is natural to ask whether only the primes \(3,5,7,11\) are special. In fact, we may conjecture that **Conjecture 6.4**.: _Let \(m\) be an odd prime. If for all odd primes \(Q\equiv-1\pmod{m}\), we have_ \[\bar{p}(mQ^{3}n)\equiv 0\pmod{m}\] _for all \(n\) coprime to \(Q\), then \(m=3,5,7,11\)._ ## Acknowledgement We would like to express our gratitude to OEIS for providing many valuable references.
2301.03029
Topic Modelling of Swedish Newspaper Articles about Coronavirus: a Case Study using Latent Dirichlet Allocation Method
Topic Modelling (TM) is from the research branches of natural language understanding (NLU) and natural language processing (NLP) that is to facilitate insightful analysis from large documents and datasets, such as a summarisation of main topics and the topic changes. This kind of discovery is getting more popular in real-life applications due to its impact on big data analytics. In this study, from the social-media and healthcare domain, we apply popular Latent Dirichlet Allocation (LDA) methods to model the topic changes in Swedish newspaper articles about Coronavirus. We describe the corpus we created including 6515 articles, methods applied, and statistics on topic changes over approximately 1 year and two months period of time from 17th January 2020 to 13th March 2021. We hope this work can be an asset for grounding applications of topic modelling and can be inspiring for similar case studies in an era with pandemics, to support socio-economic impact research as well as clinical and healthcare analytics. Our data and source code are openly available at https://github.com/poethan/Swed_Covid_TM Keywords: Latent Dirichlet Allocation (LDA); Topic Modelling; Coronavirus; Pandemics; Natural Language Understanding; BERT-topic
Bernadeta Griciūtė, Lifeng Han, Goran Nenadic
2023-01-08T12:33:58Z
http://arxiv.org/abs/2301.03029v6
# Topic Modelling of Swedish Newspaper Articles about Coronavirus: a Case Study using Latent Dirichlet Allocation Method ###### Abstract Topic Modelling (TM) is a research branch of natural language understanding (NLU) and natural language processing (NLP) that facilitates insightful analysis of large documents and datasets, such as a summarisation of the main topics and the topic changes. This kind of discovery is getting more popular in real-life applications due to its impact on big data analytics. In this study, from the social-media and healthcare domain, we apply the popular Latent Dirichlet Allocation (LDA) method to model the topic changes in Swedish newspaper articles about _Coronavirus_. We describe the corpus we created, including 6515 articles, the methods applied, and statistics on topic changes over a period of approximately one year and two months, from 17th January 2020 to 13th March 2021. We hope this work can be an asset for grounding applications of topic modelling and can be inspiring for similar case studies in an era with pandemics, to support socio-economic impact research as well as clinical and healthcare analytics. _Our data is openly available at [https://github.com/poethan/Swed_Covid_TM](https://github.com/poethan/Swed_Covid_TM)._ **Keywords:** Latent Dirichlet Allocation (LDA); Topic Modelling; Coronavirus; Pandemics; Natural Language Understanding ## 1 Introduction During the Coronavirus (COVID-19) pandemic 1, when the majority of countries imposed strict lockdowns, the Swedish government instead adopted a different and even controversial approach towards Coronavirus compared to other countries, especially during the "first wave", such as keeping many sectors of society open (Creutz et al., 2021; Hedman et al., 2022; Kubai, 2022). 2 Footnote 1: World Health Organisation (WHO) COVID-19 Dashboard [https://covid19.who.int/](https://covid19.who.int/): 6,656,601 deaths and 651,918,402 confirmed cases till 23 December 2022. Footnote 2: [https://www.government.se/search/](https://www.government.se/search/) There have been some studies on the impact of the Swedish COVID-19 policy, such as its effect on overall infection rates, death rates, and the vulnerability of older people (Pashakhanlou, 2022); the criticism and argument about the lack of scientific guidance from the government (Brusselaers et al., 2022); the potential influence on other kinds of disease at national and regional levels (Saarentausta et al., 2022); and the spread among different personnel, e.g. the dental sector (Fredriksson et al., 2022). Researchers with a statistical background have also carried out mathematical modelling to predict spatiotemporal risks of incidence, intensive care (IC) admission, and death, e.g. the Bayesian analysis by Jaya et al. (2022). However, there has not been enough comprehensive research on the socio-economic impact from the computational social science field. In addition, for future healthcare text analytics studies, many investigations remain to be carried out on the debates and discussions in society around this topic and policy. In this work, to further facilitate this research direction, we leverage methodologies from natural language processing (NLP) and carry out an experimental investigation on topic modelling using Swedish newspaper articles about Coronavirus. This aims at gaining more insight into the topic focuses and changes around Coronavirus in Swedish society over more than a year, from 17th January 2020 to 13th March 2021.
The method we applied in this study is an unsupervised statistical generative model called Latent Dirichlet Allocation (LDA), which was first designed by Blei et al. (2003) to address text modelling, classification, and collaborative filtering tasks. The LDA method has since proven to be very effective for topic discovery and similar-text identification in the topic modelling field (Rehurek and Sojka, 2010; Tong and Zhang, 2016; Asmussen and Moller, 2019). The rest of the paper is organised as follows: Section 2 surveys work related to ours on topic modelling in NLP, healthcare text analytics, and social media mining; Section 3 presents the LDA method; Section 4 follows up with our experimental work on the Coronavirus topic using LDA; and Section 5 concludes the paper with discussion and future work. ## 2 Related Work We briefly present related work on topic modelling (TM) methods in NLP, Healthcare Text Analytics (HTA), and TM in journalism and social media research. TM research started in the early 2000s as a breakthrough in machine learning (ML) techniques to address the automatic analysis of large amounts of digital (electronic) archived documents. These methods used hierarchical probabilistic models, including LDA by Blei et al. (2003), Markov Topic Models (MTM) by Wang et al. (2009), and their variations, such as the inclusion of authorship features by Rosen-Zvi et al. (2004), and Dynamic Topic Models (DTM) by Blei and Lafferty (2006); Blei (2012), which incorporate the evolution of topics on top of LDA, often using a corpus with a sequence of time stamps, for instance journal articles. Despite the popularity of LDA, Gerlach et al. (2018) designed a new approach to carry out TM by looking into community networks using a "stochastic block model" (SBM). The SBM can automatically detect the number of topics, a parameter that has to be set manually in LDA algorithms. With recent developments in NLP methodologies, especially the continuous vector space representation of lexical semantics (Vaswani et al., 2017; Devlin et al., 2019), researchers have also tried to approach the TM task along this track. Representative work includes BERTopic (Grootendorst, 2022) and Top2Vec (Angelov, 2020). TM in continuous vector space is still an emerging direction. In Healthcare Text Analytics (HTA), Kovacevic et al. (2012) applied both rule-based and machine learning-based methods to investigate the topic categorisation of statements from suicide notes. Spasic et al. (2014) carried out survey work on text mining methods from NLP applied to cancer research. Noble et al. (2021) used electronic health records (EHRs) to identify disease outbreaks in dogs in the United Kingdom. Some recent advances in HTA using clinical discharge summary letters are reported by Wu et al. (2022), who compare different pre-trained language models under data-constrained fine-tuning. In the journalism, newspaper, and social media mining field, Jacobi et al. (2016) applied LDA algorithms to New York Times articles covering nuclear technology from 1945 to the 2010s for topic trend and pattern analysis. There are also research projects that have made efforts to create datasets and shared tasks to facilitate the advancement of TM in social-media healthcare. For instance, the organisers of the "Social Media Mining for Health (SMM4H)-2017 shared task" (Sarker et al., 2018) prepared 15,717 annotated tweets for classifying adverse drug reactions, and 10,260 tweets for classifying medication consumption.
Some earlier work on social media web crawling for terminological and lexical research, and topic modelling can be found in (Greenwood and Nenadic, 2008, 2009). Recently, Piksina and Vernholmen (2020) carried out an experimental investigation on the Swedish stock market change due to Coronavirus using social media data and the LDA method via the "Konstanz information miner" Analytics Platform (Berthold et al., 2009). However, to the best of our knowledge, there is no published research work comprehensively investigating and analysing the socio-economic impact of the Swedish COVID-19 policy using LDA methods. In our investigations, we will report the topic modelling results on several different categories, including public opinions on the origins of the virus, governmental health recommendation policy, the scientific research sector, and the economy. ## 3 Revisiting LDA Method The generative model of LDA can be described as below (Blei et al., 2003; Blei, 2012): \[p(\beta_{1:K},\theta_{1:D},z_{1:D},w_{1:D})=\prod_{i=1}^{K}p(\beta_{i})\prod_{d=1}^{D}p(\theta_{d})\left(\prod_{n=1}^{N}p(z_{d,n}|\theta_{d})\,p(w_{d,n}|\beta_{1:K},z_{d,n})\right),\] where the four main parameters \(\beta\), \(\theta\), \(z\), and \(w\) represent, respectively, the "topic distribution", "topic proportion of document", "topic assignment of document", and the "observed words of document". For instance, 1) \(\beta_{k}\) from \(\beta_{1:K}\) can be a vocabulary distribution of words, i.e., their statistical probabilities; 2) \(\theta_{d}\) from \(\theta_{1:D}\) is the topic proportion of the \(d\)th document, e.g. \(\theta_{d,k}\) can be the "topic proportion" of \(\beta_{k}\) reflected in the \(d\)th document; 3) \(z_{d}\) from \(z_{1:D}\) is the "topic assignment" to the \(d\)th document, e.g. \(z_{d,n}\) can be the "topic assignment" of the \(n\)th word in the \(d\)th document; and 4) \(w_{d}\) from \(w_{1:D}\) is the set of observed words in the \(d\)th document, e.g. \(w_{d,n}\) can be the \(n\)th word observed in the \(d\)th document. The calculation is based on conditional probability: the "topic assignment" event \(z_{d,n}\) is conditioned on the "topic proportion" \(\theta_{d}\), and the "word observation" \(w_{d,n}\) is conditioned on all the topics \(\beta_{1:K}\) and the topic assignment \(z_{d,n}\). The ideal computation is to sum the joint distribution across all the possibilities among the potential topics. However, this computation is so huge that alternative solutions are often used to approximate it. There are two popular families of methods: sampling and variational inference (Blei, 2012). The difference is that sampling-based methods approximate the posterior using collected samples that form an empirical distribution (Gladkoff et al., 2022), while variational methods structure the task as an optimisation problem (Blei et al., 2003). ## 4 Topic Modelling on Coronavirus ### Corpus Preparation Since the first wave of COVID-19 in Europe, the term "Coronavirus" has stayed in the main headlines of the media. How has the focus of Coronavirus-related articles shifted with time, regarding the different stages of the lockdown and the exponentially rising infection numbers? The Swedish government's decision to avoid a harsh lockdown and rather appeal to the common sense of people while seeking herd immunity made it stand out from the nearby countries. But how has this decision been depicted and commented on in local media?
To investigate these questions, we chose to create a corpus from articles published by Sveriges Television (SVT) 3, Sweden's national public television broadcaster funded by a public service tax. The SVT is not the biggest news site in Sweden and is less popular than the commercial "Expressen" or "Aftonbladet". However, we expect it to be more neutral and have fewer click-bait articles since it is publicly funded. Footnote 3: [https://www.svt.se](https://www.svt.se) We created our corpus by scraping an SVT webpage in which Coronavirus-related articles were collected 4. As of the 13th of March 2021, when this research project was done, there were 6,515 COVID-19 related articles, with the first one coming from the 17th of January 2020, covering a period of more than one year. Footnote 4: [https://www.svt.se/nyheter/utrikes/25393539](https://www.svt.se/nyheter/utrikes/25393539) To visualise the distribution of these articles over time, Figure 1 depicts the number of COVID-related articles published each day from 2020/01/17 to 2021/03/13, which is 422 days overall. As shown in the figure, the number of published articles peaked in the first two months, then steadily decreased towards the summer of 2020 and rose again in autumn when the second wave of the virus arrived. This visualisation can be further improved in future work by investigating what percentage of all published articles, both COVID and non-COVID related, contained the Corona term. ### Text Pre-Processing For our experiments, we have chosen a period of exactly one year, dating from the publication of the first COVID article on 2020/01/17 to 2021/01/17. To ensure a higher percentage of articles with Coronavirus as the topic and the diversity of content, we filtered out local news ("lokalt"), since it contains much repeated content, and only kept the nationwide ("inrikes") and foreign ("utrikes") categories. The corpus consisted of 2,251 articles after filtering. For experimental purposes, we divided the corpus into 12 time frames counting from the 17th day of each month. The number of articles per time frame, following the tendency seen in the graph (Figure 1), was very uneven. The largest number of articles among these time frames was from 17th March 2020 to 17th April 2020, with 569 articles, while the smallest came from 17th August to 17th September, with only 23 articles. While we initially considered taking the same number of articles for every month as an alternative solution, keeping the real monthly distribution made the analysis more objective. As mentioned by Asmussen and Moller (2019), the LDA method requires several setup steps, including text pre-processing, selection of the number of topics to generate, and expert analysis of the outputs. Text pre-processing was first carried out for topic modelling purposes using the Gensim toolkit by Rehurek and Sojka (2010), such as removing "stop words" and very rare words which might appear only once. Gensim used a vectorised dictionary including bigrams as part of this processing. 5 Footnote 5: [https://radimrehurek.com/gensim/intro.html](https://radimrehurek.com/gensim/intro.html) To perform tokenisation and normalise the words, we tried the Snowball stemmer 6, but the output was not ideal for our task. Words kept their suffixes, e.g. "virus-et" (meaning "the virus"), or were difficult to recognise. The spaCy library 7 also has references to a Swedish lemmatiser, but none of the implementations seemed to be available for use.
Finally, we lemmatised the corpus using the Stanza lemmatiser (Qi et al., 2020) developed by the Stanford NLP group. 8

Footnote 6: [https://snowballstem.org/](https://snowballstem.org/)

Footnote 7: [https://spacy.io/](https://spacy.io/)

Footnote 8: [https://stanfordnlp.github.io/stanza/](https://stanfordnlp.github.io/stanza/)

### More Parameters: number of topics

Using LDA algorithms, the execution time of Gensim is relatively short for a small number of iterations. We ran it several times with different values for the "number of topics" parameter, trying to inspect what the optimal case could be, which is not a straightforward task. While topic coherence tests can be helpful to compute the optimal topic number, we chose a more "human-in-the-loop" method to ensure better quality output: we manually inspected the resulting topic representations under different parameter settings to see how much the output topics overlapped. We tried various options between 10 and 50 topics, and it seemed that 15 to 25 topics did not overlap too much while keeping a good balance between being too general and too specific. The visualisation of the topic distribution for 10 and 30 topics is displayed in Figure 2.

Figure 1: Number of Published Articles with Timeline

Figure 2: Distribution Representation of 10 (up) vs 30 (down) Topics

To carry out dynamic topic modelling (DTM), i.e. the change of topics over time, the Gensim packages have two different implementations available: their own "ldaseqmodel" and a Python wrapper for "dtmmodel". 9 We tried both of them, and the biggest difference we observed was the running time. The "ldaseq" model ran for almost 8 hours while the "dtm" model only needed 12 minutes. This time difference is not fully comparable though, since the first model we ran used an un-lemmatised corpus, which might result in more tokens to process. We tried the "ldaseq" model with 15 topics and the "dtm" model with 15, 20, and 25 topics. The output topics seemed quite similar between these models. For visualisation, we used the pyLDAvis package plus the "dtmvisual" toolkit by Brousseau et al. (2019). 10

Footnote 10: [https://github.com/GSukr/dtmvisual](https://github.com/GSukr/dtmvisual)

### Outputs of LDA using Gensim

Using the LDA algorithms integrated with Gensim generally gave us well-grouped topics summarising what the country was preoccupied with during the examined time period. Among these topics, there are word groups related to the most vulnerable part of society, e.g. the "old people", and "infections and deaths" in old people's homes. Other topics include "children and school" and "home-teaching", etc. The most prevalent topics include the "Health Ministry", "state epidemiologist A. Tegnell", and "deaths per day". There are also some economy-related topics, as well as talk about the "origin of the virus" or the situation in other countries.

### Classified Outputs using DTM

We list some example outputs using dynamic topic modelling (DTM) below with 20 trained topics from the following categories: 1) the origin of the virus and WHO strategies, 2) public health recommendations, 3) antibody scientific research, and 4) economy. In Figure 3, the graph shows how Coronavirus-related topics shifted from talking about China ("kina") and "Wuhan" to the World Health Organisation (WHO). This reflects a shift of people's attention from discussing the virus's origin to how the WHO was addressing the issue and with what strategies.
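The following is a minimal sketch of how such an LDA run and topic inspection can look in Gensim, continuing from the `corpus` and `dictionary` built above; the parameter values shown (e.g. `num_topics=20`, `passes=10`) and most of the time-slice counts are illustrative choices, not our exact settings.

```python
# Illustrative LDA run in Gensim; parameter values are examples only.
from gensim.models import LdaModel, LdaSeqModel

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=20, passes=10, random_state=0)

# Human-in-the-loop inspection: print the top words per topic and judge
# overlap manually, as done in our experiments.
for topic_id, words in lda.show_topics(num_topics=20, num_words=8, formatted=False):
    print(topic_id, [w for w, _ in words])

# For dynamic topic modelling, ldaseqmodel takes the corpus split into time
# slices (number of documents per monthly time frame; only 569 and 23 are
# documented counts here, the rest are placeholders).
time_slices = [569, 120, 80, 23, 50, 60, 70, 90, 110, 130, 150, 170]
# ldaseq = LdaSeqModel(corpus=corpus, id2word=dictionary,
#                      time_slice=time_slices, num_topics=15)  # slow to run
```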
Figure 4 demonstrates the topics on public recommendations, which indicate a drastic change in the frequency of face masks ("munskydd"), reflecting the change in people's attitude towards this suggestion. There is a decrease in mentions of the Public Health Agency ("folkhälsomyndighet") but an increase in spread and infection ("smittspridning"), which is coherent with people's growing concern about the social impact of the virus. We also observed a decrease in "recommendation" (_rekommendation_) alongside an increase in "advice" (_råd_).

Figure 5 shows a different perspective from the "researcher" (_forskare_), "antibody" (_antikropp_), and "drugs" (_läkemedel_) related discussion. The blue curve on "antibodies" shows a sharp increase in the beginning months of the studied period, then stays an influential topic. This reflects either the trust or the debates in society regarding biomedical scientific research. From a socio-economic perspective, Figure 6 shows how concerns about the economy increased over time, including the keywords "stock exchange" (_börs_) and "sales" (_försäljning_), both of whose curves grew steadily.

## 5 Discussion and Future Work

In this work, to understand more about the societal impact of the Swedish government's policies towards Coronavirus, we carried out an experimental investigation using topic modelling (TM) methodology on Swedish newspaper articles covering around a one-year period. We first introduced the pandemic background of our work and related scientific research on this topic. Then we explained the corpus we collected and the mathematical models of latent Dirichlet allocation (LDA) we applied for this study, including the toolkits we used. We finally presented the topic modelling outputs using LDA and dynamic topic modelling (DTM) by classifying them into several categories, from topics on the origin of the virus and WHO strategies, to public health recommendations, scientific biomedical research, and the economic discussion. The outcomes demonstrated the successful application of the LDA method to our research task.

In future work, there are many aspects that can directly improve this study. Firstly, it is worthwhile to collect articles published by different newspaper outlets and forums. For instance, some outlets might have more to say about the opinions of pop stars on the lockdown, e.g. "Aftonbladet.se", while others might have more radical criticism of the governmental strategies. Secondly, data collected from different countries can also enrich the experimental outputs, especially from the nearby Scandinavian countries. For instance, Denmark and Norway both chose a stricter lockdown during the pandemic, shutting down the borders with Sweden and criticising the tactic of the latter. Very often, Scandinavian countries talk about each other in their news articles, sharing some related words in the topics under concern, which gives a chance to build a comparable corpus. Thirdly, in the technical components, some further steps can be taken to optimise the model, e.g. extending the stop word list and removing the least and most common words, optimising the number of topics via the criterion of topic coherence, and investigating the performance of different models available for LDA, e.g. the Mallet toolkit 11 and "infinite DTM".
Footnote 11: [https://rare-technologies.com/tutorial-on-malllet-in-python/](https://rare-technologies.com/tutorial-on-malllet-in-python/)

Figure 3: Topics on China (kina) and WHO with Timeline

Figure 4: Topics Related to Public Recommendations with Timeline

Figure 5: Topics Related to Antibody Research with Timeline

Figure 6: Topics Related to Economy with Timeline

Extending from this case study, there are also many research directions to be explored. Firstly, ambiguous word analysis and cross-lingual TM (Boyd-Graber, 2010) can be carried out to enhance model performance by borrowing corresponding techniques from NLP fields. Secondly, the "multi-word token (MWT)" expansion features in the Stanza tool (Qi et al., 2020) can be further investigated using multi-word expression (MWE) related advanced studies (Han et al., 2020, 2021). Thirdly, as a departure from statistical probabilistic models, we plan to explore the word embedding space and neural models. For instance, the "Word2Vec" and "FastText" options in the Gensim packages can be a bridge from word vectors to neural structures such as BERTopic. 12

Footnote 12: some of our initial outputs from the BERTopic model are displayed in the paper Appendix

Fourthly, it will be interesting to carry out an experimental investigation of "exchangeable topic modelling" in a multinomial situation (Blei and Lafferty, 2006), and see how different topics interact with each other. Finally, from the evaluation perspective, qualitative evaluation of LDA and topic modelling research has always been demanded in the field (Nikolenko et al., 2017; Blei, 2012), e.g. there is space for expert-based human-in-the-loop evaluations (Han, 2022; Han and Gladkoff, 2022) to be carried out on the coherence levels within extracted topics. How to better evaluate model confidence levels (Gladkoff et al., 2022) and how to interpret the algorithms are other research directions.

## Acknowledgements

The authors LH and GN thank the project support from HIPS (R126035 task A05) and from JigSaw (R124782 task A07) at The University of Manchester; BG thanks Prof. Dr. Alexander Koller for advising the experimental work.
2301.04746
Switchable Lightweight Anti-symmetric Processing (SLAP) with CNN Outspeeds Data Augmentation by Smaller Sample -- Application in Gomoku Reinforcement Learning
To replace data augmentation, this paper proposed a method called SLAP to intensify experience to speed up machine learning and reduce the sample size. SLAP is a model-independent protocol/function to produce the same output given different transformation variants. SLAP improved the convergence speed of convolutional neural network learning by 83% in the experiments with Gomoku game states, with only one eighth of the sample size compared with data augmentation. In reinforcement learning for Gomoku, using AlphaGo Zero/AlphaZero algorithm with data augmentation as baseline, SLAP reduced the number of training samples by a factor of 8 and achieved similar winning rate against the same evaluator, but it was not yet evident that it could speed up reinforcement learning. The benefits should at least apply to domains that are invariant to symmetry or certain transformations. As future work, SLAP may aid more explainable learning and transfer learning for domains that are not invariant to symmetry, as a small step towards artificial general intelligence.
Chi-Hang Suen, Eduardo Alonso
2023-01-11T22:55:05Z
http://arxiv.org/abs/2301.04746v5
Switchable Lightweight Anti-symmetric Processing (SLAP) with CNN Outspeeds Data Augmentation by Smaller Sample - Application in Gomoku Reinforcement Learning

###### Abstract

To replace data augmentation, this paper proposed a method called SLAP to intensify experience to speed up machine learning and reduce the sample size. SLAP is a model-independent protocol/function to produce the same output given different transformation variants. SLAP improved the convergence speed of convolutional neural network learning by 83% in the experiments with Gomoku game states, with only one eighth of the sample size compared with data augmentation. In reinforcement learning for Gomoku, using the AlphaGo Zero/AlphaZero algorithm with data augmentation as baseline, SLAP reduced the number of training samples by a factor of 8 and achieved a similar winning rate against the same evaluator, but it was not yet evident that it could speed up reinforcement learning. The benefits should at least apply to domains that are invariant to symmetry or certain transformations. As future work, SLAP may aid more explainable learning and transfer learning for domains that are not invariant to symmetry, as a small step towards artificial general intelligence.

## 1 Introduction

### Problem

Convolutional neural network (CNN) is now the mainstream family of models for computer vision, thanks to its weight-sharing mechanism that efficiently shares learning across the same plane via so-called kernels, achieving local translational invariance. But CNN is not reflection and rotation invariant. Typically this can be addressed by data augmentation of the inputs by reflection and rotation if necessary, but the sample size then increases substantially. [1] criticised CNNs for being unable to learn spatial relationships such as orientation, position and hierarchy, and advocated their novel capsules as a replacement for CNN. [2] improved the capsule using the routing-by-agreement mechanism and outperformed CNN at recognising overlapping images, but they also admitted that the capsule tended to account for everything in the structure. This implies the capsule is computationally too heavy. Inspired by the idea of capturing orientation information in the capsule network [2], this paper proposed a novel method called Switchable Lightweight Anti-symmetric Process (SLAP), a protocol to produce the same output given different transformation variants, with the main research question: can symmetry variants be exploited directly by SLAP to improve and combine with CNN for machine learning?

Very often, we know in advance if a certain machine learning task is invariant to certain types of transformation, such as rotation and reflection. E.g. in Gomoku, the state is rotation (perpendicularly) and reflection (horizontally and vertically) invariant in terms of winning probability, and "partially" translation invariant. Symmetry is often exploited by data augmentation for deep learning. But this greatly increases the dataset size if all symmetry variants are included - e.g. there are 8 such variants for each Gomoku state. SLAP was invented in this paper to avoid such expansion (see 1.2). On the other hand, reinforcement learning is notorious for lengthy training time and the large sample size required. Data augmentation may help improve performance in reinforcement learning, but it would increase the sample size.
This research tried to kill two birds with one stone, SLAP, by applying it with CNN in reinforcement learning (of Gomoku), challenging the widely used practice of data augmentation, aiming at reducing the sample size and improving the learning speed.

### Switchable Lightweight Anti-symmetric Process (SLAP)

SLAP is a model-independent protocol and function to always produce or choose the same variant regardless of which transformation variant (by specified symmetry) is given, and if required also output the corresponding transformation. It can be used upon any function or model to produce outputs that are invariant with regard to specified symmetric properties of the inputs. If some (types) of the outputs are not invariant but follow the same transformation, the corresponding transformation information from SLAP may be used to transform these outputs back. It can be viewed as standardization of symmetry, as opposed to standardization of scale. After processing, symmetric variants are filtered out - that is why it is named 'anti-symmetric process'. Ironically, with this anti-symmetric process, the function or model (e.g. CNN) to be fed would look as if it were symmetric: whichever symmetry variant is the input, the same output is produced. It is a novel method to exploit symmetry variants in machine learning without increasing the number of training samples by data augmentation. The motivation is to concentrate experience to speed up learning, without enlarging the sample size by data augmentation. See details in 3.1.

### Gomoku

Gomoku, or Five in a Row, is a 2-player board game, traditionally played with Go pieces (black and white stones) on a Go board (19x19), nowadays on a 15x15 board. For experiments in this research, a mini 8x8 board was used instead to save computation, and the rules of the freestyle version were adopted:

* Black and white place stones of their colour alternately at an unoccupied intersection point of the board. Black goes first.
* Winner: the one who first forms an unbroken chain of 5 stones of his colour in a straight line (horizontal, vertical or diagonal).
* A draw happens if there is no winner when the board is full.

Gomoku was chosen to demonstrate the benefit of SLAP because:

* Gomoku has a huge number of state representations (\(3^{225}\approx 2\times 10^{107}\) on the standard 15x15 board), which justifies the use of a neural network for learning.
* Gomoku is rotation and reflection invariant, but only "partially" translation invariant, so it is ideal for testing different transformations.
* Gomoku is a Markov Decision Process, meeting the basic mathematical assumption of reinforcement learning.
* [4] and [5] showed a generally effective reinforcement learning algorithm for board games, and Gomoku is simple to implement.

## 2 Background

### CNN

CNN (convolutional neural network) has been widely used for computer vision, but it is known that CNN is weak at dealing with changes in rotation/orientation unless given a much larger sample size by data augmentation. To address this problem, [1] proposed that neural networks should make use of their then-novel capsule, learning to recognize an implicitly defined visual entity and output the probability of its existence and instantiation parameters such as pose; they showed that a transforming auto-encoder could be learnt to force the output (which is a vector instead of a scalar) of a capsule to represent an image property that one might want to manipulate.
[2] showed that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and was considerably better than CNN at recognizing highly overlapping digits, using the so-called routing-by-agreement mechanism; yet [2] admitted that one drawback was the tendency of the capsule to account for everything in an image. This implies that the capsule might be too "heavy" for computation, so a lightweight method is required. But the capsule network with the routing-by-agreement algorithm has been proved not to be a universal approximator [3], i.e. not suited to all kinds of problems. As such, this research did not attempt to replace CNN by capsules, but simply created SLAP to combine with CNN. Instead of forcing the output to represent certain transformation information (e.g. orientation angle), SLAP forces the inputs of different variants (e.g. different rotation angles) to give the same output variant (and output the transformation information, e.g. the angle, if needed). Nevertheless, the invention of SLAP was inspired by [1] & [2] trying to address the weakness of CNN.

### Groupoid in Gomoku

There are different Gomoku states of the same groupoid (see Fig. 1), which means having local symmetry but not necessarily global symmetry of the whole structure [6]. Groupoid is more challenging than symmetry or group, as some groupoids may not have the same status, e.g. see Fig. 1. But the potential for learning is huge as there are many more variants, e.g. 156 variants by translation in Fig. 1.

### AlphaGo Zero / AlphaZero

For reinforcement learning of Gomoku in this research, the baseline algorithm was chosen to follow that of the AlphaGo Zero [4] and AlphaZero [5] papers because domain knowledge was not required. The algorithm was concisely summarized by [7] as follows:

_Neural network_

The neural network feature extractor is a type of CNN. It takes state \(s\) as input and yields the value of the state \(v_{\theta}(s)\in[-1,1]\) and the policy \(\overline{p}_{\theta}(s)\) as a probability vector over all possible actions. It has the following loss function (excl. regularization terms):

\[\mathrm{loss}=\sum_{t}\left(v_{\theta}(s_{t})-z_{t}\right)^{2}-\overline{\mu}_{t}\cdot\log\left(\overline{p}_{\theta}(s_{t})\right)\]

where \(z_{t}\in\{-1,0,1\}\) and \(\overline{\mu}_{t}\) are the final outcome and the (MCTS-improved, discussed below) estimate of the policy from state \(s_{t}\) respectively, with 1, 0, -1 representing win, draw, lose respectively for the current player.

_Monte Carlo Tree Search (MCTS) as policy improvement operator_

At each node, the action is chosen by maximizing \(U(s,a)\), the upper confidence bound of the Q-value \(Q(s,a)\), calculated by:

\[U(s,a)=Q(s,a)+C\cdot P(s,a)\cdot\frac{\sqrt{\sum_{b}N(s,b)}}{1+N(s,a)}\]

where \(N(s,a)\) is the number of times action \(a\) was taken from state \(s\) in MCTS simulation, \(P(s,\cdot)=\overline{p}_{\theta}(s)\), and the policy estimate of probability is improved by using \(\overline{\mu}=N(s,\cdot)/\sum_{b}N(s,b)\).

When a new node (not visited before from the parent node) is reached, instead of a rollout, the value of the new node is obtained from the neural network and propagated up the search path. Unless the new node is terminal, the new node is expanded to have child nodes.
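To make the selection rule concrete, here is a minimal sketch of PUCT-style action selection at a single MCTS node; the node structure (attributes `Q`, `N`, `P`) and the constant value are illustrative assumptions, not the exact implementation used in this work.

```python
# Minimal PUCT-style selection sketch for one MCTS node. `children` maps
# each action to a child node carrying Q (mean value), N (visit count) and
# P (network prior). Names and the constant C are illustrative.
import math

C = 5.0  # exploration constant, an illustrative value

def select_action(children):
    sqrt_total = math.sqrt(sum(child.N for child in children.values()))
    def u(a):
        child = children[a]
        return child.Q + C * child.P * sqrt_total / (1 + child.N)
    return max(children, key=u)

def improved_policy(children):
    # MCTS-improved policy: normalised visit counts, used as sampling
    # probabilities for the next move in self-play.
    total = sum(child.N for child in children.values())
    return {a: child.N / total for a, child in children.items()}
```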
_Self-play training as policy evaluation operator_

In each turn, a fixed number of MCTS simulations are conducted from the state \(s\), and the action is selected by sampling from the policy estimate of probabilities improved by MCTS, thus generating training sample data. At the end of an iteration, the neural network is updated by learning from the training sample data. The evaluation metric would be based on the winning and drawing percentages of the AI against an independent evaluation agent.

There are differences between AlphaGo Zero and AlphaZero, see Fig. 2:

| | AlphaGo Zero [4] | AlphaZero [5] |
| --- | --- | --- |
| Pitting models | Yes, the model with new weights plays against the previous one; the new weights are adopted only if it wins 55% or above | No, always use the new weights after each iteration of neural network learning |
| Symmetry | Data augmentation (rotation and reflection) to increase the sample size by 8 times for training; transform to one of 8 variants randomly in self-play for inference | No symmetry exploitation |
| Action in self-play | Sampled proportional to visit count in MCTS in the first 30 moves, then selected greedily by max visit count | Sampled proportional to visit count in MCTS |
| Outcome prediction | Assume binary win/loss, estimate & optimise winning probability | Also consider draw or other outcomes, estimate & optimise expected outcome |

Figure 1: Gomoku groupoid. Black can stop white winning in C, but not in A or B.

Figure 2: Differences between AlphaGo Zero and AlphaZero.

learning from separating out (disentangling) the underlying structure of the world into disjoint parts of its representation. Upon this work, [13] showed by theory and experiments that Symmetry-Based Disentangled Representation Learning (SBDRL) could not only be based on static observations: agents should interact with the environment to discover its symmetries. They emphasized that the representation should use transitions rather than still observations for SBDRL. This was taken into account for designing the Gomoku representation for reinforcement learning in this research.

One may expect that an artificial general intelligence (AGI) system, if invented, should be able to learn unknown symmetry. Researchers have worked on this; for example, [14] proposed learning unknown symmetries by different principles of a family of methods. But it is equally important to learn by exploiting symmetry more effectively. For example, if an AGI system can interpret the rules of Gomoku and realize from the rules that Gomoku is reflection and rotation invariant, it should directly exploit such symmetry instead of assuming symmetry is unknown. Ideally, such exploitation should be switched on easily if one wishes, hence the term 'switchable' in SLAP, which can be used upon any function or model.
If transfer learning in CNN is analogous to reusing a chair by cutting off the legs and installing new legs to fit another person, such 'switchable learning' in SLAP is analogous to turning the switch of an adjustable chair to fit certain symmetries. Such a 'switch' in design can also help AI be more explainable and transparent, and more easily reused or transferred, while an AGI system should be able to link and switch to different sub-systems easily to solve a problem.

SLAP can also reduce the memory required. For example, AlphaGo Zero used a transposition table [4], a cache of previously seen positions and associated evaluations. Had SLAP been used instead of data augmentation, that memory size could be reduced by a factor of 8, or alternatively 8 times more positions or states could be stored. Indeed, memory plays an important role in reinforcement learning as well via episodic memory, an explicit record of past events to be taken as reference for making decisions, improving both sample efficiency and speed in reinforcement learning, as experience can be used immediately for making decisions [15]. It is likely that an AGI system would, just like a human, use memory to solve some problems rather than always resort to learning from scratch. And in the real world, a continuous space, there can be many more than 8 equivalent variants.

Recently, [16] suggested that symmetry should be an important general framework that determines the structure of the universe, constrains the nature of natural tasks and consequently shapes both biological and artificial general intelligence; they argued that symmetry transformations should be a fundamental principle in the search for a good representation in learning. Perhaps SLAP may contribute a tiny step towards AGI, by shaping input representations directly by symmetry transformation. Note that SLAP can be used upon any function or model, and even if some (types) of the outputs are not invariant but follow the same transformation, these may be broken down to use the transformation information output from SLAP to make the appropriate transformation back later for these parts only. A little kid often mistakes \(b\) for \(d\) at the beginning of learning the alphabet, and it appears that humans learning types of objects by vision might naturally assume symmetry first and then learn non-symmetry later. If a machine learning problem is to be split into stages or parts by specified symmetry as a guide, SLAP might help by wrapping certain parts of a function or neural network model.

## 3 Methods

### SLAP

SLAP forces the inputs of different variants (e.g. different rotation angles) to give the same output variant (and output the corresponding transformation information if required). If the image/state has multiple input channels or planes in one sample, the first channel/plane is compared first by list comparison. SLAP was implemented in numpy instead of torch tensors for faster speed, because numpy uses views for rotation and reflection. The output variant replaced the input state when SLAP was applied in training. At inference time, output action probabilities from the neural network would be transformed back using the transformation information (rotation & reflection) from SLAP.
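A minimal numpy sketch of this canonicalisation is given below. It enumerates the 8 rotation/reflection variants of a multi-plane state and picks one canonical variant by comparing the first plane, as described above; the specific tie-break rule ("lexicographically largest first plane wins") is an illustrative assumption, not necessarily the exact rule used in this work.

```python
# Minimal SLAP sketch in numpy: canonicalise a (C, H, W) state over the 8
# rotation/reflection variants by comparing the first plane lexicographically.
# The "largest first plane wins" rule is an illustrative assumption.
import numpy as np

def slap(state):
    """Return (canonical_variant, (k, flip)) for a (C, H, W) numpy array."""
    best, best_t = None, None
    for flip in (False, True):
        base = state[:, :, ::-1] if flip else state
        for k in range(4):
            cand = np.rot90(base, k, axes=(1, 2))
            key = tuple(cand[0].ravel())          # compare first plane only
            if best is None or key > tuple(best[0].ravel()):
                best, best_t = cand, (k, flip)
    return best, best_t

def untransform(probs_2d, t):
    """Map an (H, W) probability map back to the original orientation."""
    k, flip = t
    out = np.rot90(probs_2d, -k)                  # undo the rotation
    return out[:, ::-1] if flip else out          # then undo the reflection
```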
#### 3.1.1 Invariance

Let \(x_{1},x_{2},\ldots,x_{n}\) be the \(n\) symmetry variants (under a certain group \(G\)) of a state, and denote \(s,t=slap(x_{i})\), where \(slap\) is the SLAP function in pythonic style, \(s\) is the chosen symmetry variant and \(t\) is the corresponding transformation information. By the defining property of \(slap\), for all \(i\in\{1,\ldots,n\}\):

\[slap(x_{1})=slap(x_{2})=\cdots=slap(x_{n})\]

Denote \(s=slap(x_{i})[0]\) and \(t=slap(x_{i})[1]\), the pythonic expressions capturing the first and second return variables of a function respectively, and denote \(h(slap(x_{i})[0])\) as \(h^{slap}(x_{i})\) for any function \(h\). Given an arbitrary function \(y=f(x)\):

\[f^{slap}(x_{i})=f(slap(x_{i})[0])=f(s)\quad\text{for all }i\]

Therefore \(y=f^{slap}(x_{i})\) is invariant with respect to \(i\) (i.e. to the symmetry of group \(G\)). When \(f\) is the neural network, the composite function resulting from the neural network, \(f^{slap}\), is invariant to the symmetry (of group \(G\)).

#### 3.1.2 Differentiability

SLAP was not applied to intermediate layers of neural networks for Gomoku, so its differentiability was not required in this research. Approximation would be required to make it differentiable.

#### 3.1.3 Groupoid and SLAP-CC

As Gomoku is only 'partially' invariant to translation, it is also interesting to experiment with translation variants, which are considered to be a groupoid instead of a group, as they are symmetric locally but not necessarily symmetric globally. There can be many more translation variants than rotation and reflection variants, see 2.2. To save computation, a different algorithm (crop and centre) was used to 'standardize' translation variants, denoted as SLAP-CC below to emphasize that it shares the same general idea as SLAP, just with a different implementation; it is denoted as _cc_ in the code. The algorithm of SLAP-CC, shown in Fig. 4, concentrates experience around the centre, as the input variant is centred to become the output variant. If it cannot be exactly centred, the algorithm makes it lean slightly to the top left.

Figure 3: SLAP algorithm. Positive large data cluster towards top left.

Figure 4: SLAP-CC algorithm. Data cluster towards centre.

Note that since Gomoku is not completely invariant to translation, SLAP-CC was used to add information as additional planes instead, as opposed to replacing the input state as when SLAP was applied.
2 planes representing stones of different colours (current and opponent players respectively) centred together by SLAP-CC, followed by 2 planes representing the original indices of vertical and horizontal positions respectively (scaled linearly to [1, -1]), were added along with the original 4 planes in the Gomoku state representation (see 3.2). The scaled position indices over the whole plane were to give the neural network a sense of the original positioning.

### Representation of Gomoku

In this research, the representation of Gomoku followed the style of AlphaGo Zero / AlphaZero, with simplification and taking [13] into account for the representation design. For each Gomoku state, there were 4 planes representing current player stones, opponent stones, the last action and the current colour respectively by one-hot encoding. See Fig. 5 for a typical Gomoku state in this research, which used a simplified board size of 8x8 instead. For labels, the probabilities of a move over all positions were represented by a flattened 8x8 vector. The final outcome (value) for the current player was represented by 1, 0, -1 respectively for win, draw, lose.

### SLAP in Gomoku Reinforcement Learning

SLAP was used to pre-process states for network training and inference. Transformation information from SLAP was only used in network inference to convert probabilities (not the estimated outcome) back to the corresponding game board positions for MCTS to improve the probabilities of actions, which were used as sampling probabilities to make a move in self-play (but greedily in evaluation). See Fig. 6. SLAP-CC was applied at the same place as SLAP in the above flow chart, but data augmentation was kept instead of being replaced, and no transformation information was used to transform the probabilities output of the network. See methods in 3.1.3.

### Testing Benefits for Neural Network Learning

To decouple from reinforcement learning dynamics, synthetic states of Gomoku were created for testing neural network learning with SLAP vs with typical data augmentation (by rotation and reflection), the latter of which had 8 times the number of training samples. Self-play was not involved in this testing. Synthetic states were generated by first creating states each with only 5 stones connected in a straight line (i.e. win status) for all combinations for the current black player, then removing one stone (repeated with another stone 5 times to create 5 different states) and randomly adding 4 opponent stones to become one about-to-win state. Together these formed one set of 480 about-to-win states. Different sets could be created since the white stones were merely random. Each set was mixed with 1000 purely random states, also with 4 stones for each player. 8 mixed sets were created, i.e. 11,840 states; 15%, i.e. 1,776, were reserved for the validation test. Labels were assigned as follows: if there were one or more choices to win immediately (including some purely random states, though the chance would be very remote), the value of the state was labelled as 1 and the winning position(s) were labelled with probability of move = 1/no. of winning positions, while the others were labelled 0; otherwise the value of the state was labelled as 0 and the probability of move for each available position was random by uniform distribution, normalized and summing to 1. Neural networks (see A1) with SLAP vs with data augmentation would learn from training samples of states and labels to predict the labels of the validation data given the input states.
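The synthetic states above use the 4-plane representation from Section 3.2; a minimal encoding sketch follows, where the function and variable names are illustrative, and the convention that the colour plane is all-ones for black is an assumption.

```python
# Minimal sketch of the 4-plane Gomoku state described in Section 3.2
# (8x8 board): plane 0 = current player's stones, plane 1 = opponent's
# stones, plane 2 = last action (one-hot), plane 3 = current colour.
import numpy as np

def encode_state(own_moves, opp_moves, last_move, current_is_black, size=8):
    state = np.zeros((4, size, size), dtype=np.float32)
    for r, c in own_moves:
        state[0, r, c] = 1.0
    for r, c in opp_moves:
        state[1, r, c] = 1.0
    if last_move is not None:
        state[2, last_move[0], last_move[1]] = 1.0
    if current_is_black:                 # assumed convention for plane 3
        state[3, :, :] = 1.0
    return state
```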
Validation loss and its speed of convergence were the key metrics. First, at the preliminary stage, for each set of hyperparameters the neural network ran 1000 iterations, each with batch size 512 sampled from training samples of size 10,064 and 80,512 respectively for neural networks with SLAP and neural networks with data augmentation. Sampling was with replacement, the same as during reinforcement learning. There were 2400 combinations of hyperparameters by grid search, shown in Fig. 7. If Num_ResBlock > 0, the residual blocks replaced the common CNN layers, and a convolutional layer of 256 filters (3x3 kernel, stride 1, padding 1, no bias, ReLU activation) was added as the first layer. No autoclip [17] was used in the optimizer, unlike in reinforcement learning. At stage 2, selected models from the previous stage ran for 10,000 iterations instead of 1,000 iterations, with losses recorded every 10 iterations.

### Testing Benefits for Reinforcement Learning

The baseline algorithm of Gomoku reinforcement learning followed AlphaGo Zero/AlphaZero (see 2.3). Among their differences, the baseline algorithm in this research followed the better version, and thus followed AlphaZero except on symmetry exploitation. Like AlphaGo Zero, the baseline exploited symmetry by data augmentation to increase the number of training samples by 8 times, but random transformation was not done in self-play. Autoclip of gradients [17] was added in the optimizer for stable learning. Reinforcement learning required much more computation than neural network learning, so to save computation, the same neural network was used and the testing of hyperparameters was based on the best models from neural network learning on synthetic Gomoku states, with some deviations tested by grid search.

Stage 1: each of 240 models was trained by self-play of 250 games. Data buffer size: 1,250 and 10,000 for SLAP and non-SLAP models respectively, both roughly equivalent to storing the latest 60 games.

Stage 2: selected models were trained by self-play of 5000 games. With more games arranged for training, a larger data buffer size could be used. So the data buffer size was increased to 5,000 and 40,000 respectively for SLAP and non-SLAP models, roughly equivalent to storing the latest 250 games. To align with stage 1 testing initially, the initial data buffer size was kept the same as in stage 1 for the first 1000 games. This also got rid of the initial poor-quality game state data quickly. A learning rate multiplier was used to adaptively decrease the learning rate by half if the validation loss increased beyond the 3-sigma limit, measured every 100 games.

Figure 6: SLAP used in Gomoku reinforcement learning.

Figure 7: Hyperparameters tested at preliminary stage of CNN learning.

Evaluation: independent agent(s), also called evaluation agents or evaluators, were built by pure Monte Carlo Tree Search (MCTS) with a random policy to play against the trained AI. The strength of a pure MCTS agent depends on the number of playouts (aka simulations) in each move. To facilitate observation of growing strength, a multi-tier evaluation was built by playing 10 games against each of 3 pure MCTS agents (30 games total), with 1000, 3000, 5000 playouts respectively. The overall winning rate (a tie counted as half a win) against them was the key metric for reinforcement learning. It was often either a win or a loss, and seldom a tie.
Assuming that a tie could be neglected, especially after counting a tie as half a win, this simplifies to a Bernoulli distribution with standard deviation approximated by \(\sqrt{p(1-p)/30}\) for calculating confidence intervals, where 30 is the number of trials in each evaluation.

### Code Implementation

The part regarding AlphaZero was upgraded from [18]. Details of implementation and code repository: [https://github.com/chihangs](https://github.com/chihangs).

## 4 Results

### Impact on Neural Network Learning

#### 4.1.1 SLAP vs Baseline (Data Augmentation)

The best few SLAP and baseline models converged to a loss around 2.81 (difference < 0.01), all without residual blocks. 3 SLAP models (denoted as 80,...) and 3 baseline models (denoted as 00,...) were selected and their losses were plotted in Fig. 8, where each model had the Adam optimizer, the same learning rate of 0.001, no dropout, and no residual blocks, but different values of L2 (\(10^{-3}\), \(10^{-4}\), \(10^{-5}\)). The above 6 models were repeated 3 more times to calculate the average time (by number of iterations) to convergence. SLAP sped up the convergence by 95.1% and 71.2% measured by the validation loss reaching 3.0 and 2.9 respectively, 83.2% on average.

#### 4.1.2 Testing Sample Size

Holding the validation dataset unchanged, the training data sample size was reduced by holding out some samples to match the required size, using the models with L2=\(10^{-4}\) from Fig. 8. SLAP models converged when the sample size was 5,032 or above, but they were more vulnerable to a decreasing number of training samples and failed to converge when the sample size decreased to 2,516 or below, while their baseline counterpart models (8 times the sample size) still converged.

#### 4.1.3 SLAP-CC vs Baseline (Data Augmentation)

SLAP-CC (see 3.1.3) was added to the 3 best baseline models from Fig. 8. Validation losses of SLAP-CC converged to around 2.8 for all 3 values of L2, similar to their baseline counterparts. Experiments were repeated 3 more times to calculate the average time (by number of iterations) to converge. The times for the validation loss to reach 3.0 and 2.9 both worsened by 30.7% on average for SLAP-CC.

### Impact on Reinforcement Learning

#### 4.2.1 SLAP vs Baseline (Data Augmentation)

The best SLAP model had the highest winning rate of 86.7%, equivalent to winning 26 games out of 30; the 95% confidence interval was 86.7% +/- 12.2%, i.e. (74.5%, 98.9%). The best baseline model had the highest winning rate of 93.3%, equivalent to winning 28 games out of 30; 95% confidence interval = 93.3% +/- 8.9%, i.e. (84.4%, 100%). The best SLAP and baseline models had similar winning rates, by confidence intervals. If a winning rate of two thirds (66.6%) is used as the benchmark for this three-tier evaluation, both took 1000 games to achieve or surpass it. However, non-SLAP took only 1250 games to first achieve a winning rate of 86.7%, while SLAP took 3000 games. SLAP spent 0.761 seconds per move in self-play, 10.8% more time than the baseline (only 5% more in a separate speed-optimized version). SLAP tended to decrease the learning rate multiplier more frequently, implying more frequent significant increases of the validation loss.

#### 4.2.2 Testing Buffer Size

The best models of SLAP and non-SLAP were repeated but with a smaller data buffer size of only 1,250 and 10,000 respectively throughout the whole reinforcement learning. Similar to stage 2, the above models were trained by self-play of 5000 games. With less data in the buffer, the highest winning rate achieved by the SLAP model was only 73.3%, below the corresponding confidence interval.
The highest winning rate achieved by the non-SLAP model was only 83.3%, also below the corresponding confidence interval. So reinforcement learning was harmed when the data buffer was too small, and it was a good decision to use a larger data buffer at stage 2.

#### 4.2.3 SLAP-CC vs Baseline (Data Augmentation)

SLAP-CC was tested with the same configuration as the best baseline model from 4.2.1, but adding information from SLAP-CC and the scaled position indices as extra input feature planes. The new model also ran for 5000 games. See methods in 3.1.3 and 3.3. The best winning rate achieved by the SLAP-CC model was 96.7%, slightly higher than the baseline, but within the confidence interval. NB: the learning rate multiplier did not change throughout training.

## 5 Discussion

Despite the wide use of data augmentation to increase the variety of transformation variants in samples to improve machine learning, this paper demonstrated that using SLAP to decrease the variety could achieve the same performance as typical data augmentation with the sample size reduced by 87.5% and convergence faster by 83.2% in convolutional neural network learning, and statistically the same performance for reinforcement learning with the sample size reduced by 87.5%. The success can be explained by concentrating learning experience in certain regions when different variants are standardized, implicitly sharing weights among variants. The proof of invariance (see 3.1.1) after applying SLAP did not require the network to be a CNN (it could be an arbitrary function), so the applicability of SLAP should not be restricted to CNN. While SLAP exploited only reflection and rotation symmetries in learning Gomoku, the general concept of SLAP and the proof of invariance could apply to other symmetries.

Figure 8: Validation losses of SLAP and baseline models.

As no domain-specific features or knowledge (except symmetry) were used in SLAP, the benefits shown in the experiments should apply generally to domains that are symmetry invariant. Shortcomings: in Gomoku reinforcement learning, SLAP tended to decrease the learning rate multiplier more frequently, implying more frequent significant increases of the validation loss. This instability could be caused by faster neural network learning. Note that AlphaGo Zero only dropped the learning rate twice over 1,000,000 training steps in their planned schedule [4]. This might imply that SLAP would need quite different hyperparameters in reinforcement learning (as opposed to sharing the same hyperparameters of the baseline models as in the neural network learning experiment), and more or better searches of hyperparameters for reinforcement learning would be required, though this was constrained by computation resources. Another plausible explanation for not speeding up reinforcement learning is the insignificant portion of neural network learning in the whole reinforcement training, implying that the time saved in neural network learning would be insignificant for the whole of reinforcement learning in our chosen setting.

Limitations: the results only apply to symmetry-invariant domains, and SLAP could be more vulnerable if the sample is too small (see 4.1.2). SLAP required 10.8% more time for self-play in 4.2.1, but the overhead would be insignificant if the simple CNN were replaced by a deep one. It was not yet proved to speed up reinforcement learning. Neither was it proved to be able to exploit groupoid patterns.
## 6 Conclusion and Future Work

SLAP could improve the convergence speed of neural network (CNN in the experiment) learning of synthetic states in Gomoku by 83.2%, with only one eighth of the training sample size of the baseline model (data augmentation). Since no domain-specific features or knowledge were used in SLAP, it should also benefit neural network learning generally for domains that are symmetry invariant, especially for reflection and rotation symmetry. As SLAP is model-independent, the benefits should apply to models beyond CNN. But it was not yet proved to speed up reinforcement learning, though it could achieve similar performance with a smaller training sample size. Neither was it proved to exploit groupoid variants effectively.

As future work, SLAP may be applied in domains that are not fully symmetry invariant, by breaking down the neural network layers into two parts - first learning as if the domain were fully symmetry invariant - or even splitting into stages by type of symmetry. Although SLAP is not directly differentiable, one workaround would be similar to that used in transforming Gomoku action probabilities: given the transformation information as another input, transform the learned output back to the corresponding original position, and then carry out the necessary subsequent computations forward. This helps create more explainable stages and transfer learning. Another piece of future work might be a differentiable approximation of SLAP.
2305.07960
Sound-to-Vibration Transformation for Sensorless Motor Health Monitoring
Automatic sensor-based detection of motor failures such as bearing faults is crucial for predictive maintenance in various industries. Numerous methodologies have been developed over the years to detect bearing faults. Although numerous approaches for diagnosing faults in motors have been proposed, vibration-based methods have become the de facto standard and the most commonly used techniques. However, acquiring reliable vibration signals, especially from rotating machinery, can sometimes be infeasibly difficult due to challenging installation and operational conditions (e.g., variations on accelerometer locations on the motor body), which will not only alter the signal patterns significantly but may also induce severe artifacts. Moreover, sensors are costly and require periodic maintenance to sustain a reliable signal acquisition. To address these drawbacks and void the need for vibration sensors, in this study, we propose a novel sound-to-vibration transformation method that can synthesize realistic vibration signals directly from the sound measurements regardless of the working conditions, fault type, and fault severity. As a result, using this transformation, the data acquired by a simple sound recorder, e.g., a mobile phone, can be transformed into the vibration signal, which can then be used for fault detection by a pre-trained model. The proposed method is extensively evaluated over the benchmark Qatar University Dual-Machine Bearing Fault Benchmark dataset (QU-DMBF), which encapsulates sound and vibration data from two different machines operating under various conditions. Experimental results show that this novel approach can synthesize such realistic vibration signals that can directly be used for reliable and highly accurate motor health monitoring.
Ozer Can Devecioglu, Serkan Kiranyaz, Amer Elhmes, Sadok Sassi, Turker Ince, Onur Avci, Mohammad Hesam Soleimani-Babakamali, Ertugrul Taciroglu, Moncef Gabbouj
2023-05-13T16:37:18Z
http://arxiv.org/abs/2305.07960v1
# Sound-to-Vibration Transformation for Sensorless Motor Health Monitoring

###### Abstract

Automatic sensor-based detection of motor failures such as bearing faults is crucial for predictive maintenance in various industries. Up to 51% of motor failures are attributed to bearing faults alone. Such failures can lead to unexpected downtime, increased maintenance costs, and even catastrophic accidents. Numerous methodologies have been developed over the years to detect bearing faults. Although numerous approaches for diagnosing faults in motors have been proposed, vibration-based methods have become the de facto standard and the most commonly used techniques. However, acquiring reliable vibration signals, especially from rotating machinery, can sometimes be infeasibly difficult due to challenging installation and operational conditions (e.g., variations on accelerometer locations on the motor body), which will not only alter the signal patterns significantly but may also induce severe artifacts. Moreover, sensors are costly and require periodic maintenance to sustain a reliable signal acquisition. To address these drawbacks and void the need for vibration sensors, in this study, we propose a novel sound-to-vibration transformation method that can synthesize realistic vibration signals directly from the sound measurements regardless of the working conditions, fault type, and fault severity. As a result, using this transformation, the data acquired by a simple sound recorder, e.g., a mobile phone, can be transformed into the vibration signal, which can then be used for fault detection by a pre-trained model. The proposed method is extensively evaluated over the benchmark Qatar University Dual-Machine Bearing Fault Benchmark dataset (QU-DMBF), which encapsulates sound and vibration data from two different machines operating under various conditions. Experimental results show that this novel approach can synthesize such realistic vibration signals that can directly be used for reliable and highly accurate motor health monitoring. The benchmark dataset, our results, and the optimized PyTorch implementation of the proposed approach are now publicly available.

Operational Neural Networks; Bearing Fault Detection; 1D Operational U-Nets; Machine Health Monitoring; Signal Transformation.

## I Introduction

Motor fault detection is an essential component of a predictive maintenance pipeline in various industries, such as manufacturing, aerospace, and energy. Bearings are essential components of the rotating machinery equipment commonly used in industry, and their failure can result in unexpected downtime, increased maintenance costs, and even catastrophic accidents. Numerous methods and techniques have been presented over the years to recognize and address bearing problems, which has led to substantial research into bearing fault identification. These methods can be grouped into three main categories: model-based methods [1]-[4], traditional signal-processing approaches [5]-[11], and machine learning (ML) and deep learning (DL) methods [12]-[24].

A multitude of bearing fault detection methods was proposed in the last decade, and the above-mentioned techniques all share the common trait of using vibration data to identify certain potential faults. Due to its capacity to efficiently track changes in the mechanical behavior of the bearing, vibration data is commonly used for fault detection and has thus set the _de facto_ standard in this domain. As bearings begin to degrade, their vibration signature changes.
By examining the variations in the vibration signature, it is possible to identify the early stages of bearing failure and take corrective action before a catastrophic failure takes place. DL approaches have successfully been used to detect faults in rotating machines directly over raw vibration data [12]-[25]. However, certain problems may occur in real-time implementations of such fault detection methods. First, reliable vibration data acquisition, especially from a rotating machine at high speeds, can be a challenging task. Electric motors can produce high levels of background noise, including electrical interference, ambient vibrations, and sensor noise, which frequently alter the vibration signals. Due to this noise, vibration readings may not be accurate, resulting in false alarms or missed detections.

O. Avci is with the Department of Civil, Construction and Environmental Engineering, Iowa State University, Ames, IA, USA (email: [email protected]). MH. Soleimani-Babakamali and E. Taciroglu are with the Department of Civil and Environmental Engineering, University of California, Los Angeles, CA, USA (email: [email protected], [email protected]).

Besides acquisition-induced problems, installing accelerometers with wires at certain locations of rotating machines may also pose certain difficulties and operational drawbacks. Furthermore, vibration sensors, e.g., accelerometers, may not work well or may even break under harsh conditions. On the other hand, wireless accelerometers, especially those with high-resolution data capability, are quite costly and require periodic maintenance for reliable acquisition. Another critical issue is the mounting location of vibration sensors, where slight positional variations cause significant changes in vibration patterns. Figure 1 shows raw vibration signals from the two distinct machines of the QU-DMBF dataset [25]. For each machine, the vibration signals are acquired by distinct accelerometers in close proximity. It is clear in Figure 1 that the signals from different accelerometers are entirely different from one another. For instance, even though sensors 2 and 5 from Machine A and sensors 6 and 3, as well as 2, 5, and 4, from Machine B are placed close to each other, their vibration signals differ substantially. Therefore, a DL model trained over one of these sensors for fault detection will ultimately underperform or may even fail on the data acquired from another sensor placement.

To address all the aforementioned drawbacks, this study proposes a sound-to-vibration transformation with the aim of _sensorless_ fault detection. Consider the following practical use-case scenario: once the fault detector is trained over the vibration data acquired by an accelerometer at any location, the proposed transformer can also be trained over the vibration (from the same sensor) and sound data by the manufacturer of the motor. The proposed transformer thus learns to synthesize realistic vibration data directly from the acquired sound measurements. The two trained models, the fault detector and the transformer, will then be shared with the motor's operator for continuous health monitoring. In this way, realistic vibration signals can be produced directly from the sound (recorded by, e.g., a mobile phone), and the pre-trained detector can then be applied to the synthesized vibration data for fault detection during the operational lifetime of the motor, as illustrated in Figure 2.
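A minimal sketch of this deployment flow is given below; the file names, loading calls and helper function are purely illustrative assumptions standing in for the manufacturer-trained transformer and fault detector described above, not the authors' released interface.

```python
# Illustrative deployment sketch of the use case in Figure 2. File names,
# loading calls and shapes are assumptions, not the released interface.
import torch

transformer = torch.load("sound2vib_transformer.pt")   # trained by manufacturer
detector = torch.load("vibration_fault_detector.pt")   # trained on real vibration

def monitor(sound_segment):
    """sound_segment: 1-second recording, shape (1, 1, num_samples)."""
    with torch.no_grad():
        synth_vibration = transformer(sound_segment)   # sound -> vibration
        fault_score = detector(synth_vibration)        # classify health state
    return fault_score
```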
Therefore, machine operators will no longer need to purchase, install, and maintain accelerometer(s) with the _same_ setup used by the manufacturer (e.g., the _same_ sensor model installed at the _same_ location during the training of the fault detector).

Figure 1: Samples of _simultaneously_ recorded vibration signals over different sensor locations on the QU-DMBF dataset.

Figure 2: A practical use-case illustration of the proposed motor fault detection.

As in the typical use-case scenario shown in Figure 2, besides the cost and energy savings, the proposed approach will also yield robust fault detection, since it can synthesize a vibration signal that is similar to the one used during the training of the classifier by the manufacturer regardless of the aforementioned variations. This makes fault detection far more reliable and accurate with a classifier that was pre-trained over the actual vibration signal. In this study, a 1D Operational U-Net (Op-UNet) is used as the network model for the proposed sound-to-vibration transformer. Op-UNets have the Self-Organized Operational Neural Network (Self-ONN) architecture [27]-[33]; Self-ONNs are heterogeneous network models with generative neurons that can perform optimal non-linear operations for each kernel element. Self-ONNs have significantly outperformed their CNN counterparts in many tasks [24], [25], [34]-[41], even with reduced network complexity and depth compared to CNNs. In this study, we aim to leverage this superiority to synthesize highly realistic vibration signals and thus achieve the same fault detection performance level as the manufacturer's pre-trained model over the original vibration signal.

The rest of the paper is organized as follows: a brief outline of 1D Self-ONNs and the proposed sound-to-vibration transformer with Operational U-Nets are introduced in Section II. A detailed set of experimental results over the two distinct machines from the QU-DMBF dataset is presented in Section III. Finally, Section IV concludes the paper and suggests topics for future research.

## II Methodology

### _1D Self-Organized Operational Neural Networks_

In this section, we briefly review Self-ONNs with some of their key characteristics. Different from the convolution operator of CNNs, the nodal operator of each generative neuron of a Self-ONN can perform any nonlinear transformation, which can be expressed based on its Taylor approximation near the origin:

\[\psi(x)=\sum_{n=0}^{\infty}\frac{\psi^{(n)}(0)}{n!}x^{n} \tag{1}\]

The \(Q^{th}\) order truncated approximation, formally known as the Taylor polynomial, is represented by the following finite summation:

\[\psi(x)^{(Q)}=\sum_{n=0}^{Q}\frac{\psi^{(n)}(0)}{n!}x^{n} \tag{2}\]

The above formulation can approximate any arbitrary function \(\psi(x)\) near 0. When the activation function bounds the neuron's input feature maps in the vicinity of 0 (e.g., _tanh_), the formulation in (2) can be exploited to form a composite nodal operator where the power coefficients, \(\frac{\psi^{(n)}(0)}{n!}\), can be the parameters of the network learned during training.
It was shown in [35]-[37] that the 1D nodal operator of the \(k^{\text{th}}\) generative neuron in the \(l^{\text{th}}\) layer takes the following general form: \[\widehat{\psi}_{k}^{l}\left(w_{ik}^{l(Q)}(r),y_{i}^{l-1}(m+r)\right)=\sum_{q=1}^{Q}w_{ik}^{l(Q)}(r,q)\left(y_{i}^{l-1}(m+r)\right)^{q} \tag{3}\] Let \(\widehat{x}_{ik}^{l}\in\mathbb{R}^{M}\) be the contribution of the \(i^{\text{th}}\) neuron at the \((l-1)^{th}\) layer to the input map of the \(l^{th}\) layer. It can be expressed as \[\widehat{x}_{ik}^{l}(m)=\sum_{r=0}^{K-1}\sum_{q=1}^{Q}w_{ik}^{l(Q)}(r,q)\left(y_{i}^{l-1}(m+r)\right)^{q} \tag{4}\] where \(y_{i}^{l-1}\in\mathbb{R}^{M}\) is the output map of the \(i^{\text{th}}\) neuron at the \((l-1)^{th}\) layer, and \(w_{ik}^{l(Q)}\) is a learnable kernel of the network, a \(K\times Q\) matrix, i.e., \(w_{ik}^{l(Q)}\in\mathbb{R}^{K\times Q}\), formed as \(w_{ik}^{l(Q)}(r)=[w_{ik}^{l(Q)}(r,1),w_{ik}^{l(Q)}(r,2),\ldots,w_{ik}^{l(Q)}(r,Q)]\). By the commutativity of the summation operations in (4), one can alternatively write: \[\widehat{x}_{ik}^{l}(m)=\sum_{q=1}^{Q}\sum_{r=0}^{K-1}w_{ik}^{l(Q)}(r,q)\left(y_{i}^{l-1}(m+r)\right)^{q} \tag{5}\] One can simplify this as follows: \[\widehat{x}_{ik}^{l}=\sum_{q=1}^{Q}\mathrm{Conv1D}\left(w_{ik}^{l(Q)}(:,q),\left(y_{i}^{l-1}\right)^{q}\right) \tag{6}\] Hence, the operation can be accomplished by applying \(Q\) 1D convolutions. Finally, the output of this neuron can be formulated as follows: \[x_{k}^{l}=\ b_{k}^{l}+\sum_{i=0}^{N_{l-1}}\widehat{x}_{ik}^{l} \tag{7}\] where \(b_{k}^{l}\) is the bias associated with this neuron. The \(0^{th}\)-order term, \(q=0\), the DC bias, is ignored, as its additive effect can be compensated by the learnable bias parameter of the neuron. With the \(Q=1\) setting, a _generative_ neuron reduces back to a convolutional neuron. The raw-vectorized formulations of the forward propagation and the detailed formulations of the Back-Propagation (BP) training in raw-vectorized form can be found in [27], [29], and [35]. ### _Sound to Vibration Transformation by 1D Operational U-Net_ The ultimate goal of this study is to synthesize highly realistic vibration signals of a motor from its sound, so that the synthesized vibration data can directly be used for accurate fault detection by the original classifier pre-trained on the actual vibration data. As discussed earlier, this voids the need for any sensor installation, and a simple sound recorder suffices for continuous motor health monitoring. In this study, 1-second paired audio and vibration signals are used to train the Operational U-Net. Each segment is first linearly normalized as follows: \[X_{N}(i)=\frac{2(X(i)-X_{min})}{X_{max}-X_{min}}-1 \tag{8}\] where \(X(i)\) is the original sample amplitude in the segment, \(X_{N}(i)\) is the normalized segment, and \(X_{min}\) and \(X_{max}\) are the minimum and maximum amplitudes within the segment, respectively. This scales the segment linearly into the range \([-1,1]\), where \(X_{min}\rightarrow-1\) and \(X_{max}\to 1\). As illustrated in Figure 3, the proposed network has a total of 15 operational layers. The first 10 layers are organized into a 1D Operational U-Net (Op-UNet) model, which consists of 5 operational layers in the encoder and 5 transposed operational layers in the decoder with skip connections. This is the transformer network, where the input and output of the network are the 1-second sound and the corresponding vibration segments, respectively.
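For concreteness, before turning to the training procedure, the following is a minimal PyTorch sketch of the generative-neuron operation in Eq. (6), realized as \(Q\) parallel 1D convolutions applied to element-wise powers of the input. This is an illustrative sketch, not the FastONN implementation used in this work, and the layer sizes below are arbitrary.

```python
import torch
import torch.nn as nn

class GenerativeConv1d(nn.Module):
    """A 1D generative-neuron layer following Eq. (6): the output is the sum of
    Q convolutions, where the q-th convolution acts on the q-th element-wise
    power of the input. With q_order=1, it reduces to an ordinary Conv1d."""
    def __init__(self, in_ch, out_ch, kernel_size, q_order=3, stride=1):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size, stride=stride,
                      padding=kernel_size // 2,
                      bias=(q == 0))  # a single bias term, as in Eq. (7)
            for q in range(q_order)
        ])

    def forward(self, y):
        # y should be bounded near 0 (e.g., tanh-activated), so the truncated
        # Taylor expansion of Eq. (2) is well behaved.
        return sum(conv(y.pow(q + 1)) for q, conv in enumerate(self.convs))

# Shape check on a batch of tanh-bounded 1-second segments (illustrative length).
x = torch.randn(8, 1, 4096).tanh()
layer = GenerativeConv1d(1, 16, kernel_size=41, q_order=3, stride=4)
print(layer(x).shape)  # torch.Size([8, 16, 1024])
```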
Figure 4 depicts the training process of the Op-UNet. To reinforce the learning by emphasizing the fault status of the given signal, in addition to this 10-layer U-Net, the transformer network is cascaded with a Self-ONN classifier with 5 operational layers and 2 dense layers. By cascading this classifier to the transformer, the training takes advantage of discriminating _fault_ segments from _normal_ segments, which in turn improves the regression (transformation) task. After training, the classifier can be discarded. The objective function used for training consists of a combination of three distinct loss functions. To generate more realistic vibration signals, both temporal and spectral signal representations are taken into consideration by utilizing their corresponding loss functions. The objective is, therefore, to minimize the mean-absolute error (MAE) in both the time and frequency domains. The MAE loss function in the _time_ domain, where \(X_{N}\) is the normalized input _sound_ signal, \(Synth(X_{N})\) is the synthesized _vibration_ signal, and \(Y\) is the corresponding actual vibration signal, can be formulated as follows: \[Loss_{Time}=\|Y-Synth(X_{N})\|_{1} \tag{9}\] For the spectral loss function, the \(N\)-point discrete STFT of the actual and synthesized signals is first computed as expressed in Eq. (10), where \(X\) is the signal and \(W\) is the window function. Eq. (11) formulates the complex-valued \(N\)-point discrete STFT from which the \(N\)-point discrete spectrogram, \(Spec\big{(}X(n,k)\big{)}\), can be computed. We used an \(N=256\)-sample-long _Hanning_ window with a 128-sample overlap. Eqs. (12) and (13) formulate the spectral and classification loss functions, respectively. \[STFT[X,w,n]=X(n,w)=\sum_{m}X[m]W[n-m]e^{-jwm} \tag{10}\] \[X(n,k)=X(n,w)\big{|}_{w=\frac{2\pi k}{N}}\ \rightarrow\ Spec\big{(}X(n,k)\big{)}=|X(n,k)|^{2} \tag{11}\] \[Loss_{STFT}=\|STFT(Y)-STFT(Synth(X_{N}))\|_{1} \tag{12}\]
Figure 3: The network architecture of the Op-UNet model.
Figure 4: The training scheme of the cascaded network model for the proposed sound-to-vibration transformation.
\[Loss_{class}=\frac{1}{N}\sum_{i=0}^{N}\big{(}C(Y)-C(Synth(X_{N}))\big{)}^{2} \tag{13}\] where \(C(Y)\) and \(C(Synth(X_{N}))\) are the class labels of the actual and synthesized vibration signals, respectively. Finally, the overall objective function used for training combines all loss functions and can be expressed as follows: \[Loss_{total}=Loss_{class}+\lambda(Loss_{Time}+Loss_{STFT}) \tag{14}\] where \(\lambda\) is the weight parameter that balances the temporal and spectral losses against the classification loss. ## III Experimental Results This section first introduces the benchmark motor bearing fault dataset used in this study. Then, the experimental setup for evaluating the proposed sound-to-vibration transformer is discussed. In Section III.C, quantitative and qualitative evaluations of the results and discussions, especially for real use-case scenarios, are provided. Finally, in Section III.D, the computational complexity of the proposed approach is examined in depth. ### _Qatar University Dual-Machine Bearing Fault Benchmark Dataset: QU-DMBF_ The benchmark dataset utilized in this study was established by Qatar University researchers using two different electric machines (Machine A and Machine B). The experimental setup is given in Figure 1, which illustrates the orientation of the sensors and the installation of the two machines.
The configuration for Machine A consists of a 3-phase AC motor, two double-row bearings, and a shaft rotating at a maximum speed of 2840 RPM. A spring mechanism applied a 6 kips radial load on the shaft and bearing. PCB accelerometers (352C33 high-sensitivity Quartz ICP) were mounted on the bearing housing. The setup weighs 180 kg and measures 100 x 100 x 40 cm. The working conditions for Machine A are based on the following:
* 19 different bearing configurations: 1 healthy and 18 fault cases: 9 with a defect on the outer ring and 9 with a defect on the inner ring. The defect sizes vary from 0.35 mm to 2.35 mm.
* 5 different accelerometer placements: 3 different positions and 2 different directions (radial and axial).
* 2 different load (force) levels: 0.12 kN and 0.20 kN.
* 3 different speeds: 480, 680, and 1010 RPM.
Data were collected for 270 seconds for each operating condition with a healthy bearing, and for 30 seconds for each faulty bearing case. This results in a total of 30 x 18 x 5 x 2 x 3 = 16,200 seconds of data measurement. The sound was also simultaneously recorded with the same sampling frequency as the vibration data. In contrast, Machine B's design consists of a DC motor, two single-row bearings, and a shaft with a constant rotating speed of 2000 RPM. A spring mechanism applied a 6 kips radial load on the shaft and bearing. PCB accelerometers (353B33 high-sensitivity Quartz ICP) were mounted on the bearing housing. It weighs 3.5 kg, and the configuration measures 165 x 82 x 63 cm. The working conditions for Machine B vary as follows:
* 19 different bearing configurations: 1 healthy, 9 with a defect on the outer ring, and 9 with a defect on the inner ring. The defect sizes vary from 0.35 mm to 2.35 mm.
* 6 different accelerometer positions.
* A fixed load (force) of 0.40 kN.
* 5 different speeds: 240, 360, 480, 700, and 1020 RPM.
270 seconds of vibration/sound data for each operating condition with a healthy bearing are available in this dataset. As a result, the total duration of the healthy-bearing vibration data is 270 x 6 x 1 x 5 = 8,100 seconds. 30 seconds of vibration/sound data for each working condition of each faulty bearing are available. This results in a 2:1 ratio of faulty to healthy data, with a total duration of 30 x 18 x 6 x 1 x 5 = 16,200 seconds. As a result, the dataset for Machine B lasts 24,300 seconds in total (6.75 hours). The sound of each machine was simultaneously recorded with the same sampling frequency as the vibration data. As opposed to the challenges of vibration data collection (i.e., see Figure 1), there is a crucial advantage for sound signal acquisition, as such location sensitivity does not exist. This has been confirmed in a recent study [25], where even a DL classifier trained on the data acquired by one sensor may fail to detect certain faults in another sensor's data. The same study has further shown that the most reliable vibration data for fault detection is acquired from the accelerometer closest to the bearings, i.e., accelerometer 1 for both machines. Therefore, we have selected this accelerometer for training the transformers of both machines and used them to synthesize the corresponding vibration signals, which are then evaluated against the actual vibration signals. The QU-DMBF dataset is now publicly shared in [42] and [43] to serve as the dual-machine bearing fault detection benchmark for the research community.
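Before moving to the experimental setup, the following is a minimal PyTorch sketch of the combined training objective in Eqs. (9)-(14). It is illustrative rather than the exact training code: the spectral term is computed here on STFT magnitudes, and the classifier outputs `cls_synth` and `cls_target` are an assumed interface for the cascaded Self-ONN classifier.

```python
import torch
import torch.nn.functional as F

def stft_mag(x, n_fft=256, hop=128):
    # N = 256-sample Hanning window with 128-sample overlap, as in the text.
    # x is expected to have shape (batch, samples).
    win = torch.hann_window(n_fft, device=x.device)
    return torch.stft(x, n_fft=n_fft, hop_length=hop, window=win,
                      return_complex=True).abs()

def total_loss(synth, target, cls_synth, cls_target, lam=100.0):
    """Eq. (14): classification loss plus lambda-weighted temporal (Eq. (9))
    and spectral (Eq. (12)) L1 losses; lam = 100 follows the setup below."""
    loss_time = F.l1_loss(synth, target)                      # Eq. (9)
    loss_stft = F.l1_loss(stft_mag(synth), stft_mag(target))  # Eq. (12)
    loss_cls = F.mse_loss(cls_synth, cls_target)              # Eq. (13)
    return loss_cls + lam * (loss_time + loss_stft)
```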
### _Experimental Setup_ For the training of the proposed network model, the 1D Operational U-Net and the Self-ONN classifier, a batch size of 8 and a maximum of 1000 Back-Propagation (BP) iterations are used for all experiments. The Adam optimizer is used with an initial learning rate of 10\({}^{-4}\). The parameter \(\lambda\) in Eq. (14) is set to 100. The first 2100 seconds of the sound signals and their vibration counterparts are used for training, and the next 800 seconds of data are used for the validation set. For both machines, the data partition with one of the speed settings is held out for testing. The fault detector is a compact 1D Self-ONN model with 5 operational layers and 2 dense layers. It has 32 neurons in the hidden dense layer and 16 neurons in each of the hidden operational layers. For this binary classification task, the output layer has two neurons. The input layer takes a 1-second vibration segment. In all layers, the "tanh" nonlinear activation function is employed. The operational layers' kernel sizes are set to 81, 41, 21, 7, and 7, with corresponding strides of 8, 4, 2, 2, and 2. The Adam optimizer is used with an initial learning rate of 10\({}^{-4}\) for the Back-Propagation (BP) training. The loss function is the Mean-Squared-Error (MSE), and the maximum number of BP iterations (epochs) is 50. We implemented both the transformer and fault detector networks using the FastONN library [29] based on PyTorch. The benchmark dataset, our results, and the optimized PyTorch implementation of the proposed approach are now publicly shared with the research community [42]. ### _Results_ This section presents quantitative and qualitative (visual) evaluations of the proposed methodology. To compute the quantitative results, several experiments were conducted over both real and synthesized vibration data using the Self-ONN classifier. The results of the experiments were evaluated using several common performance metrics, including accuracy, sensitivity (recall), positive predictivity (precision), and the F1-Score. Four different training and testing scenarios were selected to validate the effectiveness of the proposed approach. In each scenario, the classifier was trained and tested with non-overlapping real and synthesized vibration signals for two independent machines. The quantitative results are presented in Table 1. To provide a clear representation of these scenarios, abbreviations are used in the table for the _synthesized_ and _real_ data of Machine A and Machine B: SA, RA, SB, and RB, respectively, where RA and RB correspond to _real_ vibration signals. The 2nd and 4th rows of the table show the classification performance of the proposed model over the _synthesized_ vibration signals, SA and SB, respectively.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & & & \multicolumn{3}{c|}{**Healthy**} & \multicolumn{3}{c|}{**Faulty**} \\ \hline Train Data & Test Data & Accuracy (\%) & Sensitivity (\%) & Precision (\%) & F1-Score (\%) & Sensitivity (\%) & Precision (\%) & F1-Score (\%) \\ \hline RA & RA & 99.70 & 100 & 99.12 & 99.56 & 99.54 & 100 & 99.77 \\ \hline **RA** & **SA** & 99.76 & 100 & 99.12 & 99.56 & 99.51 & **100** & 99.76 \\ \hline RB & RB & 97.56 & 93.67 & 99.55 & 96.52 & 99.76 & 96.54 & 98.12 \\ \hline **RB** & **SB** & 97.17 & 98.51 & 93.43 & 95.90 & 96.48 & 99.22 & 97.83 \\ \hline \end{tabular} \end{table} Table 1: The fault detection performance over real (RX) and synthesized (SX) vibration data for machine X.
Figure 5: 8 sets of sound-to-vibration transformation results from both machines.
Overall, the fault detection results presented in Table 1 indicate that the models trained over the real vibration data and tested over both real and synthesized data perform equally well, with a high detection performance. When we examine the individual test results, one can observe that, for both Machine A and Machine B, the test results over real and synthesized data differ by at most 0.4%, which is negligible. This is strong evidence that the proposed model can transform sound into vibration signals that are very similar to their real counterparts; thus, the pre-trained detector can achieve a fault detection performance almost identical to the one obtained with the real vibration data. For the qualitative evaluation, Figure 5 presents 8 sets of sound, real, and synthesized signals in both the time and frequency domains, corresponding to both healthy and faulty data from both machines. Once again, the results show that the synthesized vibration signals are quite similar to their real counterparts regardless of the data class (healthy or faulty). In particular, a close inspection of the spectral representation shows that the proposed transformation can indeed synthesize signals that share the same spectral signatures as their real counterparts, i.e., the spectral peak locations representing the major spectral components in both healthy and faulty vibration data match perfectly. It is worth mentioning that the two machines produce entirely different levels of sound as a result of their varied sizes; however, this seems to have no effect on the transformation outputs. Taking a closer look at the signals of Machine A, one can see that the input audio signal patterns resemble those of the vibration signals. The proposed approach, however, suppresses the high-frequency peaks and amplifies the low-frequency peaks in accordance with the actual spectrum. ### _Computational Complexity Analysis_ The network size, total number of parameters (PARs), and inference time for each network configuration are computed in this section. Detailed formulations of the PARs calculations for Self-ONNs are available in [29]. A 2.2 GHz Intel Core i7 computer with 16 GB of RAM and an NVIDIA GeForce RTX 3080 graphics card was used for all experiments. The FastONN library [29] and PyTorch are used to implement the 1D Op-UNet network. The proposed network has a total of 377K parameters. Generating a 1-second vibration segment takes around 6.5 ms with a single-CPU implementation.
This shows that the proposed transformation runs about 150 times faster than real time on a single CPU, indicating the potential of a real-time implementation even on low-cost, low-power hardware. ## IV Conclusions Efficient detection of bearing faults is crucial for predictive maintenance in various industries. In this study, we propose a novel sound-to-vibration transformation method that can synthesize realistic vibration signals regardless of the working conditions or fault status. It has been shown in this paper that a simple audio recorder (e.g., a mobile phone) and the proposed machine learning model are sufficient for accurate fault detection. The quantitative results show that the fault detection accuracy difference between synthesized and real data is less than 0.5%, which is negligible. Regardless of the motor type (AC/DC), size, fault type/severity, and sound level, the results demonstrate that the proposed method can transform the sound signal to synthesize the corresponding (real) vibration signal. Therefore, the proposed approach makes motor health monitoring significantly more practical and accessible, as it eliminates the need for any vibration sensor. It is also highly efficient, inexpensive, and robust, because all of the aforementioned challenges and drawbacks associated with using accelerometers for data acquisition can effectively be eliminated with a simple sound recorder. As a result, the proposed approach has the potential to revolutionize the field of predictive maintenance and make the process more accessible and practical for various other applications, e.g., mechanical fault detection on vehicles or any moving platform in general. Our future study will focus on performing fault recognition to further identify the type of defect, estimate its severity, and pinpoint the defect's location for comprehensive real-time health monitoring in a _sensorless_ fashion. Finally, zero-shot fault detection [25] using only the sound signal will be another crucial objective of our future research.
2308.08477
Detecting Quadratically Coupled Ultra-light Dark Matter with Stimulated Annihilation
Ultra-light Dark Matter (ULDM) is one of the most promising DM candidates. Due to Bose enhancement, we find that the annihilation rate of the ULDM in the presence of background photon radiation can be greatly enhanced, producing a distinctive reflected electromagnetic wave with an angular frequency equal to the ULDM mass. We propose to utilize such stimulated annihilation to probe the ULDM with a quadratic electromagnetic coupling by emitting a beam of radio waves into space. With a 50 MW emitter, we forecast the sensitivity to the quadratic coupling in different local halo models for low-frequency radio telescopes, such as LOFAR, UTR-2 and ngLOBO.
Yuanlin Gong, Xin Liu, Lei Wu, Qiaoli Yang, Bin Zhu
2023-08-16T16:36:28Z
http://arxiv.org/abs/2308.08477v3
# Detecting Ultra-light Dark Matter with Stimulated Annihilation ###### Abstract Ultra-light Dark Matter (ULDM) is one of the most promising DM candidates. We find that the annihilation rate of the ULDM in the presence of background photon radiation can be greatly enhanced due to Bose enhancement. We propose to utilize such stimulated annihilation to probe the ULDM. By emitting a beam of radio waves into space, we can obtain a distinctive reflected electromagnetic wave with an angular frequency equal to the ULDM mass. We show that low-frequency radio telescopes, such as LOFAR, UTR-2 and ngLOBO, can offer a new avenue for detecting this signal, especially for the local DM halo model near the Earth. With a 50 MW emitter, the expected limits could be several orders of magnitude stronger than that from Big Bang nucleosynthesis (BBN) in the ULDM mass range \(2.07\times 10^{-8}\) eV \(\sim 4.5\times 10^{-8}\) eV. ## I Introduction Dark matter, the invisible substance accounting for more than 80% of the matter in the universe, continues to be a compelling mystery in modern physics [1]. The extensively studied canonical cold dark matter models, represented by weakly interacting massive particles (WIMPs), are attractive, as they provide the correct relic abundance and simultaneously solve other modern puzzles, such as the hierarchy problem [2]. However, recent null experimental results put increasingly stringent constraints on various WIMP models [3]. The ultra-light dark matter (ULDM), featuring a spin-0 particle with a mass ranging from \(10^{-24}\) eV to eV, is a competitive alternative, which could be produced through the misalignment mechanism or its variants in the early universe [4; 5; 6; 7; 8]. The wave-like nature of these particles can not only preserve the merits of cold dark matter but may also provide a solution to the small-scale structure problem [9; 10; 11]. Moreover, recent studies on galactic scales, such as those of gravitationally lensed images, indicate the increasing success of wave-like DM versus particle-like DM [12]. Besides, the ULDM may also address the Strong CP problem [13; 14; 15; 16], the aforementioned hierarchy problem [17], and dark energy [18; 19; 20]. The distribution of the ULDM near Earth is crucial for its detection in laboratory experiments. Aside from the Milky Way DM halo [21; 22; 23], the formation of a local ULDM halo [24; 25; 26] bound to other gravitational sources, such as the Sun and Earth, provides an alternative profile possibility. The Earth halo results in much higher dark matter densities in the Earth's vicinity compared to the Milky Way halo over a wide mass range, which could significantly affect the sensitivity of ULDM detection. Note that the approaches to detecting the ULDM strongly depend on its interactions with the Standard Model (SM) particles. Generically, the linear couplings of the ULDM with the SM particles are dominant; however, these have been tightly constrained by atom clocks [27; 28; 29; 30; 31], atomic spectroscopy [33; 34; 35; 36; 37], laser interferometry [38; 39], gravitational-wave detectors [41; 42; 43], astrophysical probes [44; 45; 46; 47; 48; 49] and others [50; 51; 52; 53; 54; 55; 56]. Besides, the enhanced emission of axion clusters or axion-like particles may provide a complementary bound [57; 58; 59].
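As a quick numeric check of the quoted mass window (a minimal, illustrative script; the constant is the standard Planck constant in eV·s), the reflected photon carries an angular frequency equal to the ULDM mass, so its frequency is \(\nu=m_{\phi}/h\):

```python
# Convert a ULDM mass to the expected signal photon frequency, nu = m_phi / h.
h_eV_s = 4.135667696e-15  # Planck constant in eV*s

def mass_to_frequency_MHz(m_phi_eV):
    return m_phi_eV / h_eV_s / 1e6

for m in (2.07e-8, 4.5e-8, 6.2e-8):  # masses quoted in the text, in eV
    print(f"m_phi = {m:.2e} eV  ->  nu = {mass_to_frequency_MHz(m):.1f} MHz")
# Output: ~5.0, ~10.9, and ~15.0 MHz, consistent with the 5-30 MHz band
# of interest discussed below.
```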
On the other hand, the quadratic interactions of the ULDM, which can dominate over the linear ones, have drawn particular interest in recent years, such as in theories with \(\mathbb{Z}_{N}\) symmetry [60; 61] and the relaxion mechanism [62]. Despite the above experimental bounds, the parameter space of the quadratic interactions remains largely unexplored. In this article, we propose to search for the ULDM by detecting the electromagnetic wave signal produced from its stimulated annihilation via the quadratic interaction (as shown in Fig. 1). In contrast with the spontaneous process, the flux of photons from stimulated annihilation can be greatly enhanced by the presence of ambient photons, due to Bose enhancement. By using a radio emitter with a power of 50 MW and low-frequency radio telescopes, we will obtain a new limit on the ULDM in the mass range of \(2.07\times 10^{-8}\) eV \(\sim 4.5\times 10^{-8}\) eV, which could be much stronger than the existing limits for the local DM halo near the Earth. ## II Stimulated Annihilation At low energy, the effective representation of the ULDM is that of a coherently oscillating classical field given by \(\phi(\vec{x},t)=\phi_{0}\sum_{j}\alpha_{j}\sqrt{f(v_{j})\Delta v}\cos(-m_{\phi}t+\vec{p}_{j}\cdot\vec{x}+\phi_{j})\), where the oscillating frequency equals the mass \(m_{\phi}\) of the underlying particle, and \(\phi_{0}=\sqrt{2\rho_{\rm DM}}/m_{\phi}\), depending on the DM density \(\rho_{\rm DM}\) and the mass, is the present-day oscillation amplitude; \(\alpha_{j}\) is a random number drawn from the Rayleigh distribution \(P(\alpha_{j})=\alpha_{j}e^{-\alpha_{j}^{2}/2}\), \(f(v_{j})\) is the local dark matter speed distribution, \(\phi_{j}\) is a phase factor, and \(\Delta v\) is the speed interval. Typically, \(f(v_{j})\) is a narrow peak in most halo models; thus, the ULDM can effectively be considered a mono-frequency field within the coherence time. The quadratic interaction of the ULDM field \(\phi\) with the electromagnetic field \(A_{\mu}\) is given by \[\mathcal{L}=\frac{d_{e}^{(2)}}{4}\frac{2\pi}{M_{\rm Pl}^{2}}\phi^{2}F_{\mu\nu}F^{\mu\nu}, \tag{1}\] where the field strength \(F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}\), \(d_{e}^{(2)}\) is the quadratic coupling constant, and \(M_{\rm Pl}=1.22\times 10^{19}\) GeV is the Planck mass (for the pseudo-scalar case, see [63]). For simplicity, we define \(g^{\prime}=\frac{2\pi}{M_{\rm Pl}^{2}}d_{e}^{(2)}\) in our calculations. Since the DM is non-relativistic in the halo, the angular frequency of the ULDM is approximately equal to its mass, \(\omega_{\phi}=m_{\phi}\). With Eq. 1, we can obtain the cross-section of the spontaneous annihilation process \(\phi\phi\rightarrow\gamma\gamma\), \[\sigma_{0}=\frac{1}{32\pi}\frac{1}{\beta}g^{\prime 2}m_{\phi}^{2}, \tag{2}\] where the factor \(\beta\) is the velocity of the ULDM. Due to the tiny coupling and the small mass of the ULDM, the spontaneous annihilation rate is highly suppressed. We note that an incoming photon can stimulate the annihilation of the ULDM and thus produce an observable electromagnetic signal. To obtain the production rate of the photons from stimulated annihilation, we first consider the number of initial and final states in the phase space of the spontaneous annihilation process \(\phi\phi\rightarrow\gamma\gamma\). In the vacuum, there are \(f_{\phi}\) ULDMs with a certain momentum and zero photons in the phase space of the initial states.
After annihilation, the final state will contain \((f_{\phi}-1)\) ULDMs and two photons, i.e., \(|{\rm i}\rangle_{0}=|f_{\phi},f_{\phi};0,0\rangle,\ |{\rm f}\rangle_{0}=|f_{\phi}-1,f_{\phi}-1;1,1\rangle.\) However, due to the Bose enhancement of identical photons, if the annihilation occurs in the background of \(f_{\gamma}\) photons, this process will be stimulated and leads to \[|{\rm i}\rangle=|f_{\phi},f_{\phi};f_{\gamma},f_{\gamma}\rangle,\ |{\rm f}\rangle=|f_{\phi}-1,f_{\phi}-1;f_{\gamma}+1,f_{\gamma}+1\rangle. \tag{3}\] Then, we have the scattering amplitude of the stimulated annihilation \[\mathcal{M}_{{\rm i}\to{\rm f}}\ =\ \mathcal{M}_{0}^{\dagger}f_{\phi}(f_{\gamma}+1), \tag{4}\] where \(\mathcal{M}_{0}\) is the spontaneous annihilation amplitude. The inverse process of two photons annihilating into two ULDMs in the vicinity of \(f_{\gamma}\) ambient photons corresponds to the following matrix element: \[\mathcal{M}_{{\rm f}\to{\rm i}}=\mathcal{M}_{0}(f_{\phi}+1)f_{\gamma}. \tag{5}\] In the presence of ambient photons, the effective annihilation amplitude for the stimulated annihilation of the ULDMs is determined by the difference between the amplitudes of the annihilation and production processes of the photons: \[\begin{split}|\mathcal{M}_{{\rm i}\to{\rm f}}|^{2}-|\mathcal{M}_{{\rm f}\to{\rm i}}|^{2}&=|\mathcal{M}_{0}|^{2}[f_{\phi}^{2}(f_{\gamma}+1)^{2}-(f_{\phi}+1)^{2}f_{\gamma}^{2}],\\ &=|\mathcal{M}_{0}|^{2}[f_{\phi}^{2}+2f_{\gamma}f_{\phi}^{2}-2f_{\phi}f_{\gamma}^{2}-f_{\gamma}^{2}],\\ &\approx|\mathcal{M}_{0}|^{2}f_{\phi}^{2}(1+2f_{\gamma}).\end{split} \tag{6}\] The four terms in the second line are interpreted as the contributions from spontaneous annihilation, stimulated annihilation, inverse stimulated annihilation, and inverse spontaneous annihilation. It is then clear that the factor \(2f_{\gamma}\) is the enhancement for stimulated annihilation compared to spontaneous annihilation. We have used the approximation \(f_{\phi}\gg f_{\gamma}\) in the last line.
Figure 1: Conceptual design of our proposed experiment. A powerful radio beam (blue wavy line) is sent into space to stimulate the annihilation of the ULDM (red bullet). The reflected radio wave (red wavy line) will be detected by the array telescope.
From the Boltzmann equation, we can obtain the annihilation rate of the ULDMs from the stimulated annihilation process \(\phi(p_{1})\phi(p_{2})\rightarrow\gamma(k_{1})\gamma(k_{2})\), \[\begin{split}\dot{n}_{\phi}&=-\int d\Pi_{\phi}d\Pi_{\phi}d\Pi_{\gamma}d\Pi_{\gamma}(2\pi)^{4}\delta^{4}(p_{1}+p_{2}-k_{1}-k_{2})\\ &\qquad\times\frac{1}{4}\cdot[|\mathcal{M}_{\rm i\to f}|^{2}-|\mathcal{M}_{\rm f\to i}|^{2}]\\ &=-4\beta n_{\phi}^{2}\sigma_{0}(1+2f_{\gamma}),\end{split} \tag{7}\] where \(d\Pi_{i}=g_{i}\,d^{3}p/((2\pi)^{3}\cdot 2E_{i})\) is the usual phase-space volume element. The factor of 4 in the second line is the symmetry factor for identical particles in the initial and final states. \(n_{i}\) is the number density of the ULDM or the photon, which is related to the phase-space distribution by \(n_{i}=\int\frac{g_{i}}{(2\pi)^{3}}f_{i}(\mathbf{p})d^{3}p_{i}\). The production rate of the photons is the negative of the annihilation rate of the ULDMs. It is greatly enhanced by the factor \(2f_{\gamma}\), which arises from the stimulation of the ambient photons with \(\omega_{\gamma}=m_{\phi}\). Moreover, the production rate of the photons depends on the ULDM density as \(\rho_{\rm DM}^{2}\), rather than \(\rho_{\rm DM}\). ## III Signal power The signal power received by the telescope can be obtained by integrating Eq. 7
over time, the solid angle, the frequency, and the area: \[P=-\int dA\,d\nu\,d\Omega\int_{0}^{t_{\rm eff}}dt\,\dot{n}_{\phi}=\frac{1}{8}\frac{g^{\prime 2}}{m_{\phi}^{2}}\frac{P_{0}}{\Delta\nu_{\phi}}\int_{0}^{t_{\rm eff}}\rho_{\phi}^{2}\,dt, \tag{8}\] where the duration of the emission is denoted as \(t_{\rm eff}\). We ignore the interaction between the photons and the electrons in the environment, and factorize out the power of the source as \[P_{0}=\int dA\,d\nu\,d\Omega\,n_{\gamma}. \tag{9}\] Here we assume that \(f_{\gamma}\) is a Gaussian-like function with expected value \(\omega_{\gamma}\), related to \(n_{\gamma}=\frac{2}{(2\pi)^{3}}4\pi\omega_{\gamma}^{2}\Delta\omega_{\gamma}f_{\gamma}\) by averaging it over the bandwidth \(\Delta\omega_{\gamma}=\Delta\omega_{\phi}\equiv 2\pi\Delta\nu_{\phi}\), with the ULDM bandwidth \(\Delta\nu_{\phi}=2\nu_{\phi}\sigma_{v}\) depending on the velocity dispersion \(\sigma_{v}\) of the ULDM [58; 59; 64]. Note that \(P_{0}\) is determined only by the properties of the emitter. In addition, it can be seen that the signal power \(P\) in Eq. 8 is sensitive to the local density of the ULDM halo. The commonly used iso-thermal DM halo of the Milky Way predicts a local energy density of \(\rho_{I}\approx 0.3\) GeV/cm\({}^{3}\) with a velocity dispersion of \(\sigma_{I}=270/\sqrt{3}\) km/s [21; 22; 23]. On the other hand, the Earth halo model, as an extension of the boson star, has been discussed recently in [24; 25; 26]. In the presence of quartic self-interactions and subsequent gravitational focusing, an external gravitational source, such as the Earth, in the background of virialized DM can effectively capture the ULDM to form an over-dense local halo [26]. The maximally allowed value of the energy density \(\rho_{\star}\) of the Earth halo is given by \[\rho_{\star}(r)\propto\left\{\begin{array}{ll}\exp{(-2r/R_{\star})}&\mbox{for }R_{\star}>R_{E},\\ \exp{\left(-r^{2}/R_{\star}^{2}\right)}&\mbox{for }R_{\star}\leq R_{E}.\end{array}\right. \tag{10}\] Here \(R_{\star}\) is the radius of the Earth halo, which is a function of the ULDM mass [24], and \(R_{E}\) is the radius of the Earth. Compared with the iso-thermal DM halo, the Earth halo has a much higher density. On the other hand, as the ULDM mass increases, the Earth halo density decreases exponentially. ## IV Results In this work, our frequency range of interest is \(5-30\) MHz. For lower frequencies, the impact of the ionosphere becomes significant, while for higher frequencies, the signal in the Earth halo will be too weak due to the very low density of the ULDM. To detect such low-frequency radio signals, we use LOFAR [65], UTR-2 [66] and ngLOBO [67] to estimate the sensitivity. LOFAR, as one of the new-generation radio telescopes, is capable of detecting radio signals in the frequency range \(10\) MHz \(\sim 90\) MHz with unparalleled sensitivity, owing to its novel phased-array design, dense core array, and long interferometric baselines [65; 68]. Different from traditional dish telescopes, a number of dipole antenna elements are arranged to compose a circular array with a diameter of \(70\sim 80\) meters, which enables LOFAR to observe in several modes and to detect transient pulse signals with high time and frequency resolution. The minimal frequency resolution of LOFAR is about \(700\) Hz [65].
As for UTR-2, its T-shaped antenna array achieves a lower operating frequency range of \(8-32\) MHz, with a frequency resolution down to \(4\) kHz [66]. The low band of ngLOBO covers the frequency range \(5-150\) MHz [67], with a frequency resolution of at least \(1\) kHz [69; 70]. The relevant parameters of the three telescopes are given in Table 1. The noise power of a radio array telescope with a frequency bandwidth \(\Delta\) during the observing time \(t_{\rm off}\) reads [71] \[P_{\rm n}=\frac{2kT_{\rm sys}}{\eta_{\rm s}}\sqrt{\frac{\Delta}{n_{\rm pol}t_{\rm off}}}, \tag{11}\] where \(k\) is the Boltzmann constant and \(n_{\rm pol}=2,1,2\) is the number of polarizations for LOFAR [65], UTR-2 [66] and ngLOBO [69; 70], respectively. For simplicity, we assume the detection efficiency \(\eta_{s}=1\) in our numerical calculations. The bandwidth is determined by \(\Delta=\max(\Delta\nu_{\phi},\Delta\nu_{\rm res})\), where \(\Delta\nu_{\rm res}\) is the telescope frequency resolution. \(T_{\rm sys}\) is the temperature of the array system. It is caused by several inevitable noise sources, such as the cosmic microwave background, the environment, the galaxy, and the instrument. Among them, galaxy noise is dominant [72]. We therefore approximate the system temperature as \(T_{\rm sys}\approx 1.23\times 10^{8}\) K (MHz/\(\nu\))\({}^{2.55}\), where \(\nu\) is the frequency of the noise photon [65; 72]. Besides, the transverse velocity \(\vec{v}_{\perp}\) of the ULDM perpendicular to the outgoing radio beam will result in a displacement of the reflected radio signal from the location of the outgoing power source. Thus, the duration time \(t_{\rm off}\) in Eq. 8 should be less than the effective time \(t_{\rm eff}=C\frac{R}{\langle|\vec{v}_{\perp}|\rangle}\), where \(R\) is the radius of the array telescope. In the frequency range of our interest, \(\langle|\vec{v}_{\perp}|\rangle\) is about 124 km/s for the iso-thermal halo [63] and about 1.2 km/s for the Earth halo [24]. In reality, the geometry factor \(C\) depends on the specific configuration of the emitter relative to the collector in the experiment. Here we take \(C=0.3\) to estimate the sensitivity. For an emitter with power \(P_{0}\), the total energy consumption is \(E_{0}=NP_{0}t_{\rm off}\), where \(N=m_{\phi}/2\Delta\nu_{\phi}\) is the number of emissions. In the following calculations, we set \(E_{0}=10\) MW\(\cdot\)year, \(P_{0}=50\) MW and \(R=50\) m. By requiring \(P>P_{\rm n}\), we can obtain the exclusion limit on the dimensionless coupling \(d_{e}^{(2)}\), \[d_{e}^{(2)}\ <\ D\cdot\left(\frac{50\ {\rm MW}}{P_{0}}\frac{T_{\rm sys}}{3.5\times 10^{5}\ {\rm K}}\right)^{\frac{1}{2}}\cdot\left(\frac{\Delta}{n_{\rm pol}}\right)^{\frac{1}{4}} \tag{12}\] with \[D=\left\{\begin{array}{ll}1.85\times 10^{28}\cdot\left(\frac{16\ {\rm sec}}{t_{\rm off}}\right)^{\frac{1}{4}}\left(\frac{m_{\phi}^{3}}{{\rm GeV}^{3}}\frac{{\rm GeV}^{7}}{\int_{0}^{t_{\rm eff}}\rho_{\star}^{2}\,dt}\right)^{\frac{1}{2}},&\mbox{Earth halo},\\ 7.43\times 10^{28}\cdot\left(\frac{4219\ {\rm sec}}{t_{\rm off}}\right)^{\frac{1}{4}}\left(\frac{m_{\phi}^{3}}{{\rm GeV}^{3}}\frac{{\rm GeV}^{7}}{\rho_{I}^{2}\,t_{\rm eff}}\right)^{\frac{1}{2}},&\mbox{iso-thermal halo}.\end{array}\right. \tag{13}\] For the Earth halo, since the frequency resolution \(\Delta\nu_{\rm res}\) is larger than the bandwidth \(\Delta\nu_{\phi}\) of the ULDM in our mass range, we take \(\Delta=0.7\), \(4\), and \(1\) kHz in Eq. 12 for LOFAR, UTR-2 and ngLOBO, respectively.
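As a numeric cross-check of Eq. 11 (a minimal, illustrative sketch; the bandwidth and observing time below are illustrative Earth-halo values), the galaxy-dominated system temperature at 10 MHz indeed reproduces the \(3.5\times 10^{5}\) K reference value used in Eq. 12:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K

def T_sys(nu_MHz):
    # Galaxy-noise-dominated system temperature: ~1.23e8 K * (MHz/nu)^2.55
    return 1.23e8 * nu_MHz ** -2.55

def noise_power(nu_MHz, bandwidth_Hz, t_obs_s, n_pol=2, eta=1.0):
    # Radiometer noise power of Eq. 11
    return 2 * k_B * T_sys(nu_MHz) / eta * math.sqrt(bandwidth_Hz / (n_pol * t_obs_s))

print(f"T_sys(10 MHz) = {T_sys(10):.2e} K")        # ~3.47e5 K
# LOFAR-like resolution (0.7 kHz) and an illustrative 16 s observation:
print(f"P_n = {noise_power(10, 700, 16):.2e} W")   # ~4.5e-17 W
```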
While for the iso-thermal halo, we can take \(\Delta=\Delta\nu_{\phi}\). This is because the bandwidth of the ULDM scales as \(15.09\ {\rm kHz}\cdot(m_{\phi}/0.03\ \mu{\rm eV})\), which is larger than the frequency resolutions of LOFAR, UTR-2 and ngLOBO. The resulting expected exclusion limits are shown in Fig. 2. Due to the higher density, we find that the bounds in the Earth halo model can be up to 8 orders of magnitude stronger than the BBN bound in the range \(2.07\times 10^{-8}\ {\rm eV}<m_{\phi}<4.5\times 10^{-8}\ {\rm eV}\) (\(5<f<11\) MHz) and up to 17 orders of magnitude stronger than the Supernova bound in the range \(2.07\times 10^{-8}\ {\rm eV}<m_{\phi}<6.2\times 10^{-8}\ {\rm eV}\) (\(5<f<15\) MHz). Our limits in the iso-thermal halo model are weaker than the BBN bound, but comparable to the Supernova one. Note that the sensitivity can be further enhanced by increasing the power of the emitter and enlarging the radius of the array telescope. In addition, a better frequency resolution of the telescope will improve the results at the same working frequency.
Figure 2: The expected bounds on the plane of the dimensionless coupling \(d_{e}^{(2)}\) versus the ULDM mass \(m_{\phi}\) for the iso-thermal halo and the Earth halo. The array telescopes LOFAR, UTR-2 and ngLOBO are considered here. The grey dashed lines are the limits from the BBN and Supernova [34].
## Acknowledgement YG would like to thank Ariel Arza and Hyungjin Kim for useful discussions. This work is supported by the National Natural Science Foundation of China (NNSFC) under grants No. 12275134, No. 12147228, and No. 12150010.
2310.12665
SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
While advanced machine learning (ML) models are deployed in numerous real-world applications, previous works demonstrate these models have security and privacy vulnerabilities. Various empirical research has been done in this field. However, most of the experiments are performed on target ML models trained by the security researchers themselves. Due to the high computational resource requirement for training advanced models with complex architectures, researchers generally choose to train a few target models using relatively simple architectures on typical experiment datasets. We argue that to understand ML models' vulnerabilities comprehensively, experiments should be performed on a large set of models trained with various purposes (not just the purpose of evaluating ML attacks and defenses). To this end, we propose using publicly available models with weights from the Internet (public models) for evaluating attacks and defenses on ML models. We establish a database, namely SecurityNet, containing 910 annotated image classification models. We then analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on these public models. Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models. We share SecurityNet with the research community and advocate that researchers perform experiments on public models to better demonstrate their proposed methods' effectiveness in the future.
Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang
2023-10-19T11:49:22Z
http://arxiv.org/abs/2310.12665v1
# SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models ###### Abstract While advanced machine learning (ML) models are deployed in numerous real-world applications, previous works demonstrate these models have security and privacy vulnerabilities. Various empirical research has been done in this field. However, most of the experiments are performed on target ML models trained by the security researchers themselves. Due to the high computational resource requirement for training advanced models with complex architectures, researchers generally choose to train a few target models using relatively simple architectures on typical experiment datasets. We argue that to understand ML models' vulnerabilities comprehensively, experiments should be performed on a large set of models trained with various purposes (not just the purpose of evaluating ML attacks and defenses). To this end, we propose using publicly available models with weights from the Internet (public models) for evaluating attacks and defenses on ML models. We establish a database, namely SecurityNet, containing 910 annotated image classification models. We then analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on these public models. Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models. We share SecurityNet with the research community1 and advocate researchers to perform experiments on public models to better demonstrate their proposed methods' effectiveness in the future. Footnote 1: We publish SecurityNet at [https://github.com/SecurityNet-Research/SecurityNet](https://github.com/SecurityNet-Research/SecurityNet). CISPA Helmholtz Center for Information Security ## 1 Introduction Machine learning (ML) has been gaining momentum in multiple fields and achieving success in real-world deployments. However, in recent years, researchers have shown that ML models are vulnerable to various security and privacy risks, such as membership inference [50], model stealing/extraction [53], and backdoor [21]. Quantifying and mitigating ML models' vulnerabilities thus become increasingly important topics. Currently, most of the research in this field focuses on proposing different attacks and countermeasures. To evaluate these methods, the common practice is that the researchers train models by themselves and treat these models as potential victims' models (target models) in the experiments. Based on some of the well-known papers on two popular attacks against ML models, including membership inference and model stealing [10, 25, 38, 32, 36, 44, 46, 55, 61, 28], we find that all of them conduct experiments on target models trained from scratch by the authors. This practice, however, faces several limitations. The behavior of the models can vary greatly on different architectures and different datasets. Since training state-of-the-art ML models is resource-intensive and time-consuming, the target models used in machine learning security and privacy research tend to be limited to the most popular architectures trained on the common and approachable experiment datasets (e.g., CIFAR-10 [1], CIFAR-100 [1], and SVHN [2]). Also, the number of models used in the evaluation is often small. 
Furthermore, even with the same model architecture and dataset, different procedures and hyperparameters used for training can still drastically alter the model's behavior. For state-of-the-art models, huge efforts are dedicated to fine-tuning hyperparameters to find the best training procedures, thus maximizing the chosen model architecture's potential on the target task. Since research in security and privacy tends to focus on developing attacks and countermeasures, it is unrealistic for the researchers to have a similar level of dedication to training their victim models. Consequently, the victim model in the experiments might not be adequately trained, whereby the model's performance on the target task is lower than the given architecture's best result (we show evidence in Section 2). Publishing models with weights on the Internet (_public models_) is becoming a common practice in the machine learning community to increase research reproducibility and provide benchmarks on different ML tasks. These public models cover a wide variety of model architectures and datasets. Moreover, many of these public models are already integrated into companies' products deployed in the real world. For instance, certain transformer models on Hugging Face2 have been integrated into Amazon SageMaker.3 _To fully assess the effectiveness of different attacks and defenses on machine learning models, we argue that the experiments should be conducted on such public models when possible._ Footnote 2: [https://huggingface.co/](https://huggingface.co/). Footnote 3: [https://aws.amazon.com/machine-learning/hugging-face/](https://aws.amazon.com/machine-learning/hugging-face/). ### Our Contributions In this work, we take the first step towards conducting ML models' security and privacy vulnerability evaluation on public models. We collect a large-scale dataset of public models, namely SecurityNet, to evaluate three popular attacks/defenses in this field, including membership inference attack, model stealing attack, and backdoor detection. We omit the popular topic of evasion attacks due to existing benchmarks [16, 24], but we do include baseline results and discussion in Appendix C. Note that we focus on image classification models as they have been extensively studied by the trustworthy machine learning research community. **SecurityNet.** We build a public model database SecurityNet by collecting a large number of public models used for image classification from multiple open-source platforms on the Internet, such as Paper with Code [3], Kaggle [4], and GitHub [5]. Many of our public models come from machine learning libraries that contain models with various architectures trained on multiple datasets. These models are usually trained for performance benchmarks, so they achieve the highest possible prediction accuracy on the target task (for the given architectures). We refer to such models as _benchmark models_. We further manually search for publicly available models from research papers published in top-tier security, machine learning, and computer vision conferences. Among these models from the research papers, we refer to those related to the topic of machine learning security and privacy as _security models_, and the rest are considered as part of the benchmark models. We notice that most of the security models are used for adversarial example research. In general, benchmark and security models will be the main focus of our analysis. SecurityNet comprises 910 models covering 42 different datasets.
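As an illustration of what such a benchmark model looks like in practice, the snippet below loads one representative public model, a pretrained ResNet-18 from the torchvision library; the snippet is illustrative and not part of the SecurityNet tooling itself.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

# Fetch a typical public benchmark model: ResNet-18 with ImageNet-1k weights.
weights = ResNet18_Weights.IMAGENET1K_V1
model = resnet18(weights=weights).eval()

# The bundled transforms reproduce the model's training-time preprocessing.
preprocess = weights.transforms()

with torch.no_grad():
    x = preprocess(torch.zeros(3, 224, 224))        # dummy image tensor
    posteriors = model(x.unsqueeze(0)).softmax(-1)  # black-box-style output
print(posteriors.shape)  # torch.Size([1, 1000])
```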
For each model in SecurityNet, we further annotate its relevant information from three dimensions, namely dataset (e.g., sample size, split ratio, topic category, class fidelity, etc.), model (e.g., number of parameters, FLOPs, architecture type, etc.), and metadata (e.g., publisher type, published year, venue, model purpose, etc.). We hope this information can help security researchers find appropriate public models promptly. Also, we will continue enlarging SecurityNet with newer models to monitor whether ML models become more or less susceptible to the investigated attacks over time. We plan to share SecurityNet with the research community to facilitate the research in machine learning security and privacy. **Evaluation Results.** With the help of SecurityNet, we are able to perform an extensive analysis of model stealing, membership inference, and backdoor detection on a large set of public models. To the best of our knowledge, this has not been done before. Our experiments confirm some known results from the literature (but on a much larger number of models), uncover some new insights, and show that attacks and defenses can behave differently on public models than on researchers' self-trained models. For model stealing on benchmark models from SecurityNet, we find that using a larger and more complex surrogate model architecture to limit the difference between the surrogate and victim models does not improve the attack performance, which differs from previous results [41]. Also, we observe a negative correlation between the attack performance and the victim model's target task performance. Such a negative correlation has been observed in [27] previously; our experiments on a much larger number of (public) models further confirm this. In addition, if the target model is too complex (we test model stealing on a RegNetY-320 model [45] with 145 million parameters for the first time), model stealing is ineffective. The public models trained for security and privacy research (security models) typically perform much worse on their target tasks than the benchmark models. Interestingly, for the low-performing security models, we observe a positive correlation between the attack performance and the target task performance (the opposite of our observation on benchmark models). All these new insights demonstrate the benefits of performing model stealing experiments on public models. For membership inference, we evaluate two types of attack methods, namely metric-based attacks [51] and MLP-based attacks [39, 46, 50]. We empirically confirm some results shown in previous works [50], e.g., that the attack performance positively correlates with the model's overfitting level on the target task, now on large-scale public models. On the other hand, we also discover that the attack methods can be dataset-dependent. For instance, previous works show that the MLP-based attack using full posteriors as its input has the same performance as using the top-k (e.g., top-3) posteriors [46]. However, on datasets with a large number of classes, e.g., 1,000 classes (ImageNet-1k [17]), we show that using top-3 posteriors as inputs achieves much better performance than using full posteriors (on average, the attack AUC increases by 5.1%). Some of the security models with lower performance on their target tasks appear to be less vulnerable to membership inference attacks than benchmark models, even when they share a similar overfitting level.
We do not make the same observation on security models that achieve similar target task performance as benchmark models. Finally, we examine three backdoor detection techniques on the public models from SecurityNet: Neural Cleanse [56], Strong Intentional Perturbation (STRIP) [18], and NEO [54]. Assuming all the benchmark models collected are non-backdoored, we discover that Neural Cleanse has high false positive rates on benchmark models trained on CIFAR-10 and SVHN (20.9% and 13.7%, respectively). By manually checking the generated trigger images, we confirm that the detected triggers are indeed falsely identified. On the other hand, both STRIP and NEO are more robust. They successfully avoid labeling any clean inputs as backdoored samples on CIFAR-10 and SVHN models in our experiments. This again shows the necessity of evaluating attacks and defenses on public models. **Implications.** This work aims to provide a more realistic overview of the landscape of machine learning attacks and defenses. We also want to point out that some of the current evaluation results on researchers' self-trained models might not generalize to public models. We advise researchers to examine their proposed methods on at least a few public models for more comprehensive evaluations in the future. Hence, we will share SecurityNet. We hope our annotations and experiment results on baseline attacks/defenses will greatly minimize the effort for researchers to find appropriate public models for their purposes. ## 2 SecurityNet One of the main contributions of our work is SecurityNet, a database containing publicly available models with weights. We focus on one of the most popular machine learning tasks, image classification, as it is also typically used to demonstrate the effectiveness of attacks and defenses on ML models. ### Model Collection **Datasets to Models.** Our main model collection process, namely datasets to models, consists of two steps: dataset searching and model collection. In a nutshell, we first find a diverse set of datasets and then collect public models trained on these datasets. For dataset searching, we focus on image datasets that are mainly used for classification tasks. Note that we also plan to extend to other types of tasks in the future. The diversity of the datasets is critical. We consider two sources for dataset collection, namely Paper with Code [3] and Kaggle [4]. Paper with Code is a website that provides open-source content, including machine learning papers, code, datasets, methods, and evaluation results. The website's collection of datasets covers the majority of datasets commonly used for machine learning research. Kaggle is a crowd-sourcing platform that is popular in the data science community. It is well-known for hosting data science competitions and challenges in cooperation with many companies and research institutes. The datasets used in these competitions have a huge variety and typically differ from the experiment datasets collected from Paper with Code. After collecting a variety of datasets, we then use these datasets as a starting point to search for publicly available models trained on them. Our search can be summarized in a few directions. First and foremost, pre-trained model libraries, such as PyTorch's official torchvision library,4 are some of the most valuable sources. They typically contain a wide range of popular models that are trained to have high performance on the target tasks. We refer to these models as _benchmark models_.
Models from reputable sources also provide high confidence in their qualities. These qualities, including no additional data used, a proper partition of training and test data, no malicious data (e.g., backdoor triggers), etc., are especially important for our later analysis. These model libraries, however, also suffer from some downsides. Specifically, these libraries typically contain only well-established architectures and benchmark experiment datasets. Footnote 4: [https://github.com/pytorch/vision](https://github.com/pytorch/vision). Furthermore, we extend our benchmark model collection to other sources, such as models from Kaggle competitions and Paper with Code. These sources provide a much wider range of models in several dimensions, such as model variety, purpose, and quality. We emphasize here that, due to the wide range of sources in our model collection, the quality of the models cannot be guaranteed in the same way as with reputable sources (e.g., the PyTorch torchvision library). However, this variation in model quality adds a valuable comparative dimension to our analysis.5 Footnote 5: For simplicity, these models are also considered as benchmark models in SecurityNet. Additionally, there are models that we do not include in the database. We exclude models with corrupted weights that cannot be loaded properly or that have extremely low performance. Moreover, the same model can appear on multiple platforms. We also exclude such duplicates to avoid over-representation of the same model. **Academic Papers to Models.** Furthermore, to obtain an overview of the current models used in research, we manually search for image classification models provided by the authors of papers published in recent top-tier conferences. We consider the following security & privacy, machine learning, and computer vision conferences in the last four years: IEEE S&P, USENIX Security, ACM CCS, NDSS, NeurIPS, ICML, ICLR, CVPR, ICCV, and ECCV. To simplify the process, for machine learning and computer vision conferences, we directly search on GitHub for the corresponding repositories (e.g., using keywords "CVPR 2022"). Considering the popularity of GitHub, we believe we capture the majority of the published models in those venues. Meanwhile, for security conferences, we manually check all papers to obtain the models. We especially focus on models from papers on the topic of trustworthy machine learning from all the conferences considered and refer to them as _security models_. During our collection process, we noticed that the majority of the security models are derived from papers on adversarial example studies. Unfortunately, we cannot find any public models for model stealing and membership inference.6 Overall, the addition of these models allows us to conduct a more comprehensive analysis in Section 4. Note that the models from research papers that are not related to security and privacy are considered benchmark models as well. Footnote 6: Note that for backdoor attacks, our goal is to apply the current backdoor detection methods to the public models. Thus, we do not include the backdoored models published with backdoor-related research papers in our database. ### Annotation Having collected the models, we annotate their relevant information.7 The annotation serves two purposes. First, it provides a guideline for our analysis of the current landscape of security and privacy attacks/defenses. Second, it serves as a valuable source for future research.
The annotation for models in SecurityNet includes quantitative information about their training sets, such as the number of classes, size, image dimensions, etc. Furthermore, we record qualitative information, such as topic categories. Our models' datasets cover a broad range of topics, including natural scenery, medical scans, traffic signs, satellite images, and more. Within each topic category, we further distinguish the dataset's class granularity. For example, while ImageNet-1k and CUB-200-2011 are both categorized as natural scenery, ImageNet-1k includes multiple types of objects ranging from different animals to cars and park benches. We label datasets like ImageNet-1k as coarse-grained datasets. CUB-200-2011 only contains images of different types of birds. Therefore, it is labeled as a fine-grained dataset. Besides models' datasets, we further annotate their intrinsic properties and metadata. Intrinsic model properties include the number of parameters, FLOPs, architecture type, presence of certain elements (e.g., dropout, batch normalization), etc. We also annotate the models' metadata, such as publishing venues (for research-based models), number of authors, etc. Given the large number of models we have collected, we can analyze different attacks and defenses from the metadata dimension, which, to our knowledge, has not been done before. Appendix A provides the complete list of categories annotated by us for SecurityNet. ### Summary In total, SecurityNet contains 910 public models, of which 665 are benchmark models and 245 are security models. These models are trained over 42 different datasets from 13 categories. The models cover an extensive set of 220 architectures (e.g., ResNet-18 [23], ResNet-50, DLA-169 [58], BagNet-33 [7], etc.) based on 60 different model types (ResNet, DLA, BagNet, etc.). The oldest model type included was first introduced in 2012 [30] and the latest one [37] in 2022. Note that for benchmark models, we record the year when the model type was first introduced instead of the trained models' publishing time. Figure 1 shows some general statistics of the models in SecurityNet.
Figure 1: SecurityNet statistics.
Model collection for SecurityNet will be a continuous process. We plan to update SecurityNet on a bi-annual basis to add new publicly available ML models. This allows us to keep tracking ML models' security and privacy vulnerabilities over time. We will also make SecurityNet easily accessible to the research community. **Security Models vs. Benchmark Models.** Based on the models collected, we first observe that the majority of the security models are trained on small experiment datasets, such as CIFAR-10, CIFAR-100, and SVHN. Only a small number of the papers include results on more complex datasets like ImageNet-1k. These four datasets also cover the majority of the papers on security and privacy research that we have found. Besides, the model architectures used in security models are also limited, e.g., the majority of the architectures are the simpler and popular ones, such as ResNet-18, VGG-16, etc. Figure 2 shows the model performance difference between the benchmark models and the security models trained on CIFAR-10, CIFAR-100, and ImageNet-1k. We only use benchmark models that share similar architectures with the security models so that the inherent differences in model architectures do not affect the comparison. For CIFAR-10 and CIFAR-100 (too few models for ImageNet-1k), we notice that there are two distinct clusters
of models, and both have lower performance than benchmark models. From Figure 3, we can observe the overfitting gaps between benchmark models and security models are not too different for CIFAR-10 and ImageNet-1k models, while CIFAR-100 security models have a lower overfitting level in general. Note that the security models here are for adversarial example research, and we cannot find any published target models for model stealing and membership inference. Nevertheless, some of the membership inference and model stealing papers' reported target task accuracy is still lower than the benchmark models' accuracy in SecurityNet. For instance, CIFAR-100 models from the popular papers in Section 1 have an average accuracy of 69.0%, compared to our benchmark models' average accuracy of 78.5%. Additionally, we find the performance gap still exists in recently published papers (CIFAR-10: 79% [13]; CIFAR-10: 77%, CIFAR-100: 20% [57]). We believe these security models are not trained to the architecture's maximum "potential" due to limited hyperparameter tuning efforts. For simplicity, previous works [36, 6] in the security/privacy domain only use one set of batch size, learning rate, optimizer, etc., without further hyperparameter fine-tuning. In conclusion, the above results demonstrate that some security models are not adequately trained compared to benchmark models.
Figure 2: The model's target task performance with respect to benchmark models and security models.
Figure 3: The model's overfitting level with respect to benchmark models and security models.
## 3 Attack Methodology and Evaluation Setup In this paper, we study three types of attacks/defenses on machine learning models, namely model stealing attack, membership inference attack, and backdoor detection. They are among the most well-explored subjects in the field of machine learning security and privacy. In the future, we plan to extend our analyses to other types of attacks/defenses with models from SecurityNet. ### Model Stealing **Threat Model.** In this attack, the adversary aims to build a surrogate model that mimics the target model's behavior. Following one of the most popular attacks [53], we consider an adversary with black-box access to the target model that outputs the full posterior. The adversary also has access to an auxiliary dataset for querying the target model. Note that the auxiliary dataset does not necessarily come from the same distribution as the original training set. **Methodology.** The adversary first initializes a surrogate model, which can adopt a different architecture than the target model [41]. Then, the adversary queries the samples from their auxiliary dataset to the target model and gets the output posteriors. In the end, the adversary trains their surrogate model leveraging the posteriors as ground truth. In our experiments, we adopt ResNet-18 to initialize the surrogate models. We further show in Section 4.1 that a more complex surrogate model does not increase the attack performance. We consider two settings for the auxiliary datasets, i.e., a partial training set8 from the target model's training set (default setting) and a large out-of-distribution dataset (specifically, a subset of ImageNet-1k). We train ResNet-18 for 30 epochs using an SGD optimizer with a learning rate of 0.01. Footnote 8: We use 50% of the total training data. We evaluate how different query budgets affect the attack performance in Appendix B.
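To make the procedure concrete, the following is a minimal sketch of the surrogate-training loop described above (our own simplification with illustrative names, not the paper's exact evaluation code). It assumes a black-box `target_model` returning logits over the full label set and a `query_loader` built from the auxiliary dataset, and trains against the target's soft labels with a KL-divergence objective, one common way of leveraging the posteriors as ground truth:

```python
import torch
import torch.nn.functional as F
from torchvision import models

def steal(target_model, query_loader, epochs=30, lr=0.01, device="cpu"):
    """Train a surrogate that mimics the target model's posteriors."""
    surrogate = models.resnet18(num_classes=10).to(device)
    optimizer = torch.optim.SGD(surrogate.parameters(), lr=lr, momentum=0.9)
    target_model.eval()
    for _ in range(epochs):
        for x, _ in query_loader:  # auxiliary images; true labels unused
            x = x.to(device)
            with torch.no_grad():
                posteriors = F.softmax(target_model(x), dim=1)  # black-box query
            log_probs = F.log_softmax(surrogate(x), dim=1)
            loss = F.kl_div(log_probs, posteriors, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return surrogate
```

Attack agreement is then the fraction of test inputs on which the surrogate and the target predict the same label, and attack accuracy is the surrogate's accuracy on the original task.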
**Metrics.** We adopt the two most widely-used metrics, namely attack accuracy and attack agreement [27], in the evaluation. Accuracy measures the performance of the surrogate model on the original task, while agreement calculates the prediction agreement between the surrogate model and the target model. ### Membership Inference **Threat Model.** The adversary aims to determine whether a given sample is used to train a target model [50]. Following the existing work [36, 39], the adversary is assumed to hold a small subset of the training data which is used as member samples9 and an auxiliary dataset that represents non-member samples. Note that we use the test set of the corresponding dataset as this auxiliary dataset. For instance, when experimenting on models trained on ImageNet-1k, the auxiliary dataset is ImageNet-1k's test set. The adversary also has access to the black-box target model that outputs full posteriors. Footnote 9: In much membership inference research, such as [39, 46, 50], the adversary is assumed to have a shadow dataset to train their shadow model. To conduct experiments, the researchers split a dataset into four equal parts: two are used as the target model's training and test sets, and the other two are for the shadow model. We cannot follow the same setting here as the models in SecurityNet are all trained. Thus, we make the stronger assumption that the adversary has access to a partial training set of the target model [36, 39]. Such a stronger assumption also allows us to assess the worst-case scenario of membership leakage threat. **Methodology.** We use two popular attacks, namely metric-based [51] and MLP-based [46, 50] attacks. The former distinguishes member (training) samples from non-member (auxiliary) samples based on behavioral differences in prediction statistics, such as prediction correctness and modified prediction entropy [51]. For MLP-based attacks, the adversary feeds the member and non-member samples to the target model and gets the output posteriors. The adversary then trains an attack model (i.e., a binary classifier) based on the posteriors. In our experiments, the attack model consists of one layer of 64 neurons and one layer of 32 neurons using the ReLU activation function. We train the attack model for 50 epochs using the Adam optimizer with a learning rate of 0.01. **Metrics.** We use AUC to evaluate the attack performance. The higher the AUC, the better the performance is. Concretely, 0.5 represents random guessing, and 1.0 is a perfect prediction.
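As an illustration of the two attack flavors, the sketch below is our own simplified rendering (all names are illustrative): `modified_entropy` implements the metric-based membership score of Song and Mittal [51], and `attack_model` mirrors the 64/32-neuron MLP described above, with a final one-logit layer added under the assumption of a standard binary classifier:

```python
import torch
import torch.nn as nn

def modified_entropy(posteriors, labels, eps=1e-12):
    """Mentr(p, y) = -(1 - p_y) log p_y - sum_{i != y} p_i log(1 - p_i);
    member samples tend to receive lower scores than non-members."""
    n = posteriors.size(0)
    p_y = posteriors[torch.arange(n), labels]
    term_true = -(1 - p_y) * torch.log(p_y + eps)
    term_rest = -(posteriors * torch.log(1 - posteriors + eps)).sum(dim=1)
    term_rest = term_rest - (-p_y * torch.log(1 - p_y + eps))  # drop i == y
    return term_true + term_rest

# MLP-based attack model taking the full posterior as input.
num_classes = 10
attack_model = nn.Sequential(
    nn.Linear(num_classes, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),  # membership logit, trained with BCEWithLogitsLoss
)
optimizer = torch.optim.Adam(attack_model.parameters(), lr=0.01)
```

The AUC of either score can then be computed over held-out member and non-member samples.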
### Backdoor Detection In backdoor attacks, the adversary injects the backdoor into the target model without degrading its original performance [12]. Generally, there are two types of backdoor attacks, i.e., untargeted and targeted. Untargeted attacks aim to misclassify triggered images, while targeted attacks misclassify triggered images to one specific class. There exist various types of defenses against backdoors [11, 15, 18, 22, 26, 35, 54, 56]. Since the backdoor injection occurs during training and models in SecurityNet are already trained, we choose to evaluate backdoor detection methods on these models. **Methodology.** We evaluate three popular backdoor detection methods: Neural Cleanse [56], Strong Intentional Perturbation (STRIP) [18], and NEO [54]. The three cover two types of approaches: model inspection and input filtering. Neural Cleanse is a model inspection approach that aims to detect targeted backdoor attacks. The key idea of this method is to find the minimal trigger needed to misclassify all samples into each label and leverage an outlier detection method to detect whether any trigger candidate is smaller than all the other candidates. If such an outlier exists, the model is potentially backdoored, and the trigger can be returned for further analysis. We run Neural Cleanse for 50 epochs and use their default threshold value for detecting outliers. STRIP is an input-filtering approach. The key idea is that triggered inputs are less affected by perturbations than normal inputs. By overlaying various images on the incoming input, the detector examines the randomness (prediction entropy) of the model's prediction on the overlaid input. The detection flags the input, and hence the model, as backdoored if the prediction entropy is low. We use 2,000 images for testing and the default number of 10 images for overlaying inputs. NEO is another input-filtering approach. The key idea is that triggers within the input contribute the most to the prediction. The detection first calculates the dominant color of the image, then randomly selects a small region in the input image and replaces it with the dominant color. If the new prediction differs from the original one, NEO assumes the selected region contains a potential trigger and then superimposes it onto the test set. If most of the test images receive different predictions after the potential trigger is added, the current input image is labeled as a backdoored image, and the model is also identified as a backdoored model. We use 200 sampled 4\(\times\)4 regions (for 32\(\times\)32 images) and 3 K-means clusters for calculating the dominant color; 0.8 is adopted as the threshold.10 We use 2,000 images from the test set for evaluation. Footnote 10: If more than 80% of the images are misclassified after adding the trigger, the trigger is then confirmed. **Metrics.** As the models in SecurityNet are supposedly free of backdoors, we adopt the false positive rate to evaluate whether these detection methods falsely recognize clean models as backdoored ones.
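To ground the input-filtering idea, here is a minimal sketch of a STRIP-style entropy check, assuming a `model` that maps image tensors to logits; this is our simplified rendering with illustrative names, and the real detector additionally calibrates its threshold on clean data:

```python
import torch
import torch.nn.functional as F

def strip_entropy(model, x, overlay_images, alpha=0.5):
    """Average prediction entropy of input x blended with clean overlay
    images. Trigger-carrying inputs tend to keep their (backdoor) label
    under such perturbations, yielding abnormally low entropy."""
    entropies = []
    for overlay in overlay_images:  # e.g., 10 held-out clean images
        blended = alpha * x + (1 - alpha) * overlay
        probs = F.softmax(model(blended.unsqueeze(0)), dim=1)
        h = -(probs * torch.log(probs + 1e-12)).sum()
        entropies.append(h)
    return torch.stack(entropies).mean()

# An input is flagged as backdoored if its average entropy falls below
# a threshold calibrated on clean samples (e.g., a low percentile).
```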
## 4 Experiment Results ### Model Stealing We now evaluate the performance of model stealing attacks on public models from SecurityNet. **The Effect of Target Model's Training Set.** For our evaluation, we primarily use 389 benchmark models trained on CIFAR-10, CIFAR-100, SVHN, ImageNet-1k, and CUB-200-2011 datasets. These models cover a wide variety of architectures that allow us to make more comprehensive observations on attack behaviors. First, as seen in Figure 4, we observe a strong positive correlation between the two evaluation metrics (with a 0.991 Pearson correlation coefficient), i.e., the attack agreement (see Section 3) and the attack accuracy on all 389 models. Due to the page limit, we mainly use attack accuracy as the metric for model stealing in the following analysis.
Figure 4: The relationship between the attack agreement and the attack accuracy for model stealing on benchmark models across multiple datasets.
Secondly, Figure 5 shows that the attack achieves exceptionally high performance on datasets with a small number of classes and abundant training data, such as CIFAR-10 and SVHN. For more complex datasets like ImageNet-1k, CIFAR-100, and CUB-200-2011, however, the attack performance significantly deteriorates. For instance, the average attack accuracy for ImageNet-1k models is 36.3% while the average target models' accuracy is 73.1%, i.e., the ratio of the two is 0.496. Meanwhile, the corresponding ratios for CIFAR-10 and SVHN models are 0.796 and 0.953. In addition, we find that while the target task performance of CUB-200-2011 models is similar to that of ImageNet-1k models, the attack accuracy on CUB-200-2011 models is significantly lower.11 Note that ImageNet-1k has more classes in total, but CUB-200-2011 has higher class granularity, which means the two datasets have comparable classification complexity. These results indicate that the model stealing attack is especially ineffective on some outlier datasets, which, to our knowledge, has not been shown previously. Footnote 11: Note that the model stealing performance on CUB-200-2011 in [41] is higher than ours; this is because the authors fine-tune their surrogate model and target model on base models pre-trained with ImageNet-1k.
Figure 5: The relationship between the model stealing performance (attack accuracy) and the target model's task accuracy across various benchmark models when using a partial training set as the auxiliary dataset.
**Out-Of-Distribution Auxiliary Dataset.** While we assume by default that the adversary has access to a partial training set as the auxiliary dataset, we now consider another scenario where the adversary uses an out-of-distribution dataset to initiate their attack. More concretely, we leverage a subset of the large and diverse ImageNet-1k dataset as the auxiliary dataset to steal benchmark models trained on CIFAR-10, SVHN, CIFAR-100, and CUB-200-2011. Figure 6 shows the attack performance deteriorates across a large number of benchmark models when using the out-of-distribution auxiliary dataset. In addition, we find that the attack performance on SVHN models decreases more significantly than on CIFAR-10 models. For instance, the average attack accuracy decreases by 27.8% on SVHN and 19.0% on CIFAR-10, respectively. We suspect this is due to the fact that the images in ImageNet-1k are more similar to the images in CIFAR-10 than to the images in SVHN.12 Further, on a more complex dataset, the attack performance can suffer even more, e.g., with an average degradation of 41.7% on CIFAR-100. For the previously poor-performing CUB-200-2011 models, the attack accuracy also decreases by 21.4%, even though the auxiliary dataset contains many classes similar to the original dataset (e.g., bird species), and overall has more samples than the original partial training set (100k vs. 5k). In summary, we find that using out-of-distribution data as the auxiliary dataset does not benefit model stealing. Footnote 12: Previous work [41] makes similar observations.
Figure 6: The relationship between the model stealing performance (attack accuracy) and the target model's task accuracy across various benchmark models when using an out-of-distribution auxiliary dataset.
**The Effect of Target Model's Performance.** From the perspective of target models' inherent properties, we mainly study the relation between their target task performance and the corresponding model stealing attacks' performance. Figure 5 shows a distinct negative correlation between the two. While the correlation is clear on both smaller datasets, such as CIFAR-10 (-0.693) and SVHN (-0.603), it is more evident on larger and more complex datasets, like ImageNet-1k (-0.844) and CIFAR-100 (-0.873). We present the Pearson correlation coefficients in parentheses. This negative correlation has been observed by Jagielski et al. [27] previously. Our finding, however, differs in magnitude. Due to the wide range and variety of benchmark models, we find that the attack is highly likely to fail on high-performing models. For example, on most ImageNet-1k models with target task accuracy above 75%, the attack accuracy does not surpass 40%. In a more concrete example, our model stealing attack on RegNetY-320 [45] with 146 million parameters only achieves 25.6% attack accuracy.
This implies that a model stealing attack that performs well on simpler architectures does not guarantee success on complex and high-performing models. **More Complex Surrogate Model.** All results above are obtained using the ResNet-18 architecture as the surrogate model. Previous work [41] has shown that using a more complex surrogate model can improve model stealing performance. We now conduct the attack with a larger surrogate model, i.e., WRN-50 [59], to evaluate whether similar observations can be made on benchmark models as well. Figure 7 shows the attack performance actually deteriorates on all three datasets. Specifically, for the SVHN, CIFAR-10, and CIFAR-100 models, the attack accuracy degrades by an average of 5.8%, 29.0%, and 47.3%, respectively. In contrast to previous work, our experiments show that larger and more complex surrogate models do not improve model stealing performance on SecurityNet.
Figure 7: The relationship between the model stealing performance (attack accuracy) and the target model's task accuracy across various benchmark models when using a more complex surrogate model, i.e., WRN-50.
**Benchmark vs. Security Models.** As mentioned in Section 2, we also extensively search for public models used in security/privacy research (named security models). Here, we examine whether the attack behaves differently on security models compared to the benchmark models above. Recall that the security models trained on CIFAR-10 can be divided into two clusters in terms of target task performance (see Figure 2). Both Figure 8 and Figure 9 show that the cluster of high-performing security models behaves very similarly to the benchmark models, where the two evaluation metrics have a high agreement, and the attack performance is negatively correlated (-0.362) with the model's target task performance. Interestingly, the low-performing cluster shows drastically different behavior. First, many of these models have high attack agreement while the attack accuracy varies. Secondly, the correlation between the attack accuracy and target task accuracy is distinctively positive (0.998). We find models used for studying model stealing attacks in previous works [36] exhibit similar behavior.13 This drastic change in correlation indicates that when the target model is not trained to its maximum "potential," the attack can behave differently. For future research on the security and privacy of machine learning models, we advise researchers to use benchmark models when possible or train the target models to high performance. Footnote 13: We can infer the same positive correlation from the negative correlation between their models' overfitting level (the difference between training and test accuracy) and the model stealing performance since all of their models have 100% training accuracy.
Figure 8: The relationship between the attack agreement and the attack accuracy for model stealing on CIFAR-10 benchmark and security models.
Figure 9: The relationship between the model stealing performance (attack accuracy) and the target model's task accuracy on CIFAR-10 benchmark and security models.
**Metadata.** Next, we examine how some metadata relates to the attack performance using public models trained on ImageNet-1k. On the time dimension, Figure 10 shows the model stealing attack is generally less effective on newer models, which may be due to the higher performance of the newer models, see Figure 11. Besides, as the violin plot shows in Figure 12, the distribution of the CV domain is much wider than that of other domains, which means that computer vision conferences are the more popular venue for publishing new model architectures. However, the attack shows no significant difference between different types of publishing venues.
Figure 10: The model stealing performance (attack accuracy) with respect to the publication year.
Figure 11: The model's target task performance with respect to the publication year.
Figure 12: The model stealing performance (attack accuracy) with respect to the conference type.
### Membership Inference We next evaluate the performance of membership inference attacks on public models from SecurityNet. **The Effect of Target Model's Overfitting Level.** Similar to the previous section, we compare the attack performance on benchmark models trained on several different datasets. First of all, we evaluate the membership inference attack performance on benchmark models with respect to their overfitting level. Here, the overfitting level means the difference between training and test accuracy on models' original datasets [36]. As shown in Figure 13, we make observations similar to those in many previous works [36, 50], where membership inference achieves better performance on victim models with a higher overfitting level. We can still find such an association even when the overall attack is not very effective. For instance, the correlation is still present on CIFAR-10 (0.204) and ImageNet-1k (0.247) models, even though the AUC is generally lower than 0.6. This correlation is expected since the attack relies heavily on the different distributions of posterior probabilities between member and non-member samples.
Figure 13: The performance (AUC) of different membership inference attacks with respect to the target model's overfitting level.
**The Effect of Target Model's Training Set.** We also compare the membership inference attack performance from the perspective of the target model's training set. For models trained on simpler datasets with a small number of classes and sufficient training data, the attack performs poorly, as seen in Figure 13. More concretely, for the SVHN and CIFAR-10 models, the average attack performance is only slightly better than random guessing, with an AUC around 0.565 for the modified entropy attack, and not better than random guessing in most cases for the MLP-based attack. However, many previous works [46, 50] evaluate their attack performance using these two datasets and achieve much higher performance than random guessing, which is quite inconsistent with our observations on benchmark models. This is likely due to the higher prediction accuracy and the lower overfitting level of the benchmark models in SecurityNet compared to the self-trained ones in previous works. For more complex datasets, e.g., CIFAR-100 and CUB-200-2011, the attack performance is significantly better than on the previous two simple datasets. For instance, the modified entropy attack can achieve higher than 0.8 AUC on almost all CUB-200-2011 models. These results indicate that even these heavily fine-tuned benchmark models cannot achieve as low an overfitting level as the ones on simpler datasets. Interestingly, the membership inference attack does not achieve high performance on the benchmark ImageNet-1k models, even though the overfitting level is not as low as that of the ones trained on simpler datasets. We suspect the large and diverse training set for these benchmark models makes the task of membership inference more difficult. To our knowledge, this observation has not been made previously.
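As a minimal sketch of how such correlations can be computed (the per-model numbers below are illustrative placeholders, not SecurityNet's actual statistics):

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative per-model statistics (fractions in [0, 1]).
train_acc = np.array([0.99, 0.97, 1.00, 0.95, 0.98])
test_acc = np.array([0.93, 0.90, 0.88, 0.91, 0.85])
attack_auc = np.array([0.55, 0.57, 0.66, 0.54, 0.70])

# Overfitting level = training accuracy - test accuracy.
overfitting = train_acc - test_acc
r, p = pearsonr(overfitting, attack_auc)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```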
**Different Attack Methods' Effectiveness.** As mentioned in Section 3.2, we evaluate three attack methods, namely the prediction correctness (metric-based) attack, the modified entropy (metric-based) attack, and the MLP-based attack. Our experiments demonstrate that the modified entropy attack and the MLP-based attack show varying degrees of success and correlation with the overfitting level, depending on the training set of the victim model, as previously seen in Figure 13. The prediction correctness attack, unlike the other two, strictly follows the model's overfitting level across all datasets evaluated,14 which implies its better transferability to unknown models and datasets. This method, however, achieves only limited success even on models with high overfitting levels and generally performs worse than the other two methods. Footnote 14: SVHN models' overfitting levels are too low for the attack to be stable. We also observe that the MLP-based method yields poor performance on ImageNet-1k models, given the relatively high overfitting level of these models. We suspect that the dimensions of the full posterior inputs commonly used in these models are too large for the MLP-based attack. To accommodate the large input dimension, we select only the top-3 largest posteriors as input to make the attack model more sensitive to important information in the posterior (a minimal sketch of this input reduction is given below). Different from observations in previous work [46], Figure 14 shows the MLP-based attack improves significantly on ImageNet-1k models and reaches similar performance as the modified entropy attack. The improvement is less prominent in the CIFAR-100 and CUB-200-2011 models, where the target datasets have a smaller number of classes. This means that the attack performance of the developed methods may vary significantly when evaluated on more complex datasets. While it can be resource-intensive to conduct experiments on more complex datasets, researchers can consider using a few trained benchmark models for attack evaluation in the future.
Figure 14: The performance (AUC) of MLP-based and metric-based membership inference attacks with respect to the target model's overfitting level.
**Benchmark vs. Security Models.** Similar to our model stealing analysis, we also compare the membership inference attack performance between benchmark and security models. Figure 15 shows the performance of the modified entropy attack. We first observe that both types of models generally show a similar positive correlation (0.609) with the overfitting level, indicating that the overfitting level, indeed, is the primary indicator of membership vulnerability. Moreover, we find that the two clusters of security models, which have relatively high and low target task performance, respectively (see Figure 2), also react differently to membership inference attacks. The low-performing security models appear less vulnerable to the attack than both the high-performing ones and benchmark models despite having a similar overfitting level. As a result, the models that are not trained adequately on the target tasks can potentially appear to be less vulnerable and lead to underestimated risks in evaluation.
Figure 15: The membership inference performance (AUC) with respect to the target model's overfitting level on CIFAR-10 benchmark and security models.
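The following is our own minimal rendering of that top-3 input reduction (names are illustrative): the attack model's input becomes the three largest posterior values in sorted order, which keeps the input dimension fixed regardless of the number of classes:

```python
import torch

def top_k_posteriors(posteriors, k=3):
    """Reduce full posteriors of shape (batch, num_classes) to the k
    largest probabilities per sample, sorted in descending order."""
    values, _ = torch.sort(posteriors, dim=1, descending=True)
    return values[:, :k]

# Example: ImageNet-1k posteriors (1000-dim) become 3-dim inputs.
posteriors = torch.softmax(torch.randn(4, 1000), dim=1)
print(top_k_posteriors(posteriors).shape)  # torch.Size([4, 3])
```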
**Metadata.** For membership inference, we also examine the correlation between the attack performance (modified entropy) and two types of models' metadata, publishing time and conference type, using ImageNet-1k benchmark models. Figure 16 shows that, unlike model stealing attacks, newer models are not more (or less) secure to membership inference attacks compared to older ones. Meanwhile, in Figure 17, we also do not observe significant performance differences in membership inference on models from different venues. ### Backdoor Detection We next focus on backdoor detection on the public models. We first emphasize that while it is crucial for the backdoor detection techniques to identify the backdoored model accurately, the techniques' practicality also depends on having an acceptably low false positive rate. Thus, we examine the false positive rate of three widely-used backdoor detection methods on models from SecurityNet. Note that since these methods rely on finding easily misclassified labels/images iteratively, the computation cost can be very high. Therefore, we only conduct evaluations on CIFAR-10 and SVHN models. Furthermore, since we aim to examine the false positive rate of these techniques, we only consider benchmark models. The reason is that these benchmark models are (almost) unlikely to contain backdoors. **Model Inspection.** For the model inspection method, Neural Cleanse has very high detection rates, specifically 20.9% for CIFAR-10 models and 13.7% for SVHN models, as shown in Table 1. To determine whether these detections are false positives, we examine the trigger patterns. The method provides trigger patterns generated through optimization and shows both a trigger image and a mask image of the trigger location. The mask area is selected as an outlier through the detection process, i.e., much smaller than others to cause misclassification. Figure 18 shows two examples detected by Neural Cleanse. The examples, however, do not resemble any trigger patterns. More specifically, the mask area is still too large and generally covers the key areas in the current class of images. For example, the SVHN trigger mask clearly shows the digit 8, which means almost all of the area has to be altered to cause misclassification and is, therefore, not a true trigger. The CIFAR-10 example similarly shows the outline of a bird (which is the label of the class). We can confirm that the detected triggers are indeed all false positives. The false positive rates on these public models greatly exceed the results on their experiment models presented in the original work [56]. The generated trigger patterns do help users eliminate the false positive samples easily, yet they require manual intervention. Besides, since Neural Cleanse iteratively optimizes the trigger pattern and evaluates the change in prediction results, the run time on our GPU cluster (an NVIDIA DGX-A100 server) is at least 25 times longer than the other two methods. The current run-time evaluation actually benefits Neural Cleanse by evaluating a model with a simple architecture (ResNet-18) trained on a small dataset (CIFAR-10).
The detection algorithm's run time will scale with not only the model's computation complexity but also the number of classes in the dataset. For datasets with more classes, such as ImageNet-1k, the method will become infeasible since the run time will become at least 100 times longer than in the current setting, even if we assume the time cost for each label's iteration remains the same. This significant resource requirement can hinder the method's practicality in the real world. **Input Filtering.** For the input filtering methods, i.e., STRIP and NEO, we adopt a subset of the test set for evaluation. Since we choose the images ourselves, we can ensure there is no backdoor, and thus, the methods should correctly identify the images as clean. Our experiments show that the two methods are effective in terms of low false positive rates. None of the models is detected as having backdoors. Notably, both methods have detection values that are much lower than their respective thresholds, which further indicates the methods are effective in avoiding over-detection. The computation cost is also significantly lower compared to Neural Cleanse and realistically allows real-world deployment. \begin{table} \begin{tabular}{c|c c c} \hline \hline **Detection Method** & **CIFAR-10** & **SVHN** & **Runtime** \\ \hline Neural Cleanse & 20.9\% & 13.7\% & 802.1s \\ STRIP & 0.0\% & 0.0\% & 32.1s \\ NEO & 0.0\% & 0.0\% & 18.0s \\ \hline \hline \end{tabular} \end{table} Table 1: Backdoor detection performance (false positive rate) on CIFAR-10 and SVHN models. Runtime is from CIFAR-10's ResNet-18 model. Figure 16: The membership inference performance (AUC) with respect to the publication year. Figure 17: The membership inference performance (AUC) with respect to the conference type. Figure 18: The trigger masks (a and b) and patterns (c and d) generated by the backdoor detection method Neural Cleanse. ### Result Summary Thanks to SecurityNet, we are able to perform an extensive evaluation for model stealing, membership inference, and backdoor detection on a large set of public models, which, to the best of our knowledge, has not been done before. Our analyses confirm some results from previous works but on a much larger scale, discover some new insights, and show that some of the previous results obtained from researchers' self-trained models can vary on public models. First of all, we find that the model stealing attack can perform especially poorly on certain datasets, such as CUB-200-2011, in contrast to target models (with the same architecture) trained on other datasets. Using an out-of-distribution auxiliary dataset also does not improve the attack on our public models. Furthermore, we demonstrate that the model stealing performance negatively correlates with the model's target task performance and is too low to be effective on some modern high-performing models. Unlike previous works [41], we find using a more complex surrogate model does not improve the attack performance. These observations imply that the proposed methods, which perform well under experimental conditions, can become inadequate on public models. As for membership inference, we make a similar observation, as shown in previous works, that the attack performance positively correlates with the victim model's overfitting level. Additionally, we find methods that perform well on experiment datasets do not guarantee similar performance on more difficult datasets.
In contrast to previous work's [46] results, the MLP-based attack performs differently on models trained with data that contains a large number of classes (e.g., ImageNet-1k) when using different input methods. Additionally, for both model stealing and membership inference, we compare the behavior of security models to that of benchmark models. We notice the security models with low target task performance can react drastically differently to both attacks. More concretely, the model stealing attack's performance positively correlates with the target model's performance, and membership inference is less effective given a similar overfitting level. The two observations cannot be made on security models with similar target task performance as benchmark models, and thus, we suspect the training level mainly causes the different attack behavior. We hope to emphasize the necessity of training the target or victim models "properly," i.e., close to the architecture's maximum performance on the target task, or using public models as target models for evaluation. Finally, for backdoor detection, we evaluate the methods' false positive rates on a large number of public models. This allows us to report our observation on the high false positive rate of Neural Cleanse with more confidence, which may be difficult to conclude from just a few test models. The resource requirement or run time of the detection method should also be taken into account when developing detection methods. ## 5 Related Works **Model Stealing.** Several previous works have shown that machine learning models can be vulnerable to model stealing attacks [41, 48, 49, 55, 27, 29, 53, 60]. In general, model stealing attacks focus on either extracting the target model's parameters [53, 8, 27] or functionalities [41, 48, 29, 60, 49, 27]. Tramer et al. [53] propose the first model stealing attacks against black-box ML models with prediction APIs. Orekondy et al. [41] develop Knockoff Nets that can steal the functionality of the given target model and leverage a reinforcement learning approach to improve the query sample efficiency. Model stealing attacks have been applied to different machine learning applications such as BERT-based APIs [29], Graph Neural Networks (GNNs) [49], and Contrastive Learning [48]. **Membership Inference.** Existing works on membership inference rely heavily on self-trained models to ensure membership information. Shokri et al. [50] develop the first membership inference attacks against ML models. Salem et al. [46] relax such assumptions of [50] by using only one shadow model to establish the attack. Nasr et al. [39] further investigate the membership leakage via the white-box access to the target model. Song and Mittal [51] observe that metric-based attacks can have similar or even better performance than previous attacks that leverage ML-based attack models. Label-only attacks [32, 14] have been proposed for a more difficult scenario where the adversary can only obtain predicted labels instead of the posteriors from the target model. **Backdoor Detection.** Chen et al. [12] propose the first backdoor attack and, more specifically, the first targeted backdoor attack using data poisoning. Recently, numerous works have introduced detection methods for both targeted and untargeted backdoored models. Similar to Neural Cleanse [56] examined in this paper, many previous works detect backdoors by inspecting the models [35, 34, 26, 11, 22]. Others such as Cohen et al.
[15] detect trigger inputs at inference time like STRIP [18] and NEO [54]. **Public Model Analysis and Evaluations.** There is not much work on analyzing public models' behaviors, especially from the security and privacy angle. Gavrikov and Keuper [19] analyze the properties of the distribution of 3\(\times\)3 convolution filter kernels from hundreds of trained models. Schürholt et al. [47] present a dataset of 50,360 systematically generated neural network models for future model property research. This collection of trained models focuses more on providing diverse training trajectories through different combinations of hyperparameters, and thus, the models do not necessarily reflect the ones publicly available online. **Large-Scale Evaluation of ML Security and Privacy.** Another related topic is the measurement study on the security and privacy risks of machine learning models. Liu et al. [36] examine four inference attacks using five model architectures trained on four datasets. Pang et al. [42] develop TrojanZoo, an open-source platform for evaluating backdoor attacks/defenses. For evasion attacks, a selection of previous works [16, 33, 43] propose security analyses and benchmark platforms for generating and defending against adversarial examples. These works, however, aim at developing the toolbox for future risk assessment but do not include evaluation analyses on a large set of public models. ## 6 Conclusion In this paper, we collect and annotate an extensive database of public models, namely SecurityNet, for privacy and security research in machine learning. We examine these public models with model stealing, membership inference, and backdoor detection. Compared to the results in previous works obtained from researchers' self-trained models, we discover some new insights on ML attacks/defenses with SecurityNet. We will share SecurityNet with the community and recommend future researchers include experiments on public models to demonstrate their methods' efficacy. **Acknowledgments.** We thank all anonymous reviewers for their constructive comments. This work is partially funded by the European Health and Digital Executive Agency (HADEA) within the project "Understanding the individual host response against Hepatitis D Virus to develop a personalized approach for the management of hepatitis D" (D-Solve) (grant agreement number 101057917).
2308.10238
Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit
We study the real-valued combinatorial pure exploration of the multi-armed bandit (R-CPE-MAB) problem. In R-CPE-MAB, a player is given $d$ stochastic arms, and the reward of each arm $s\in\{1, \ldots, d\}$ follows an unknown distribution with mean $\mu_s$. In each time step, a player pulls a single arm and observes its reward. The player's goal is to identify the optimal \emph{action} $\boldsymbol{\pi}^{*} = \argmax_{\boldsymbol{\pi} \in \mathcal{A}} \boldsymbol{\mu}^{\top}\boldsymbol{\pi}$ from a finite-sized real-valued \emph{action set} $\mathcal{A}\subset \mathbb{R}^{d}$ with as few arm pulls as possible. Previous methods in the R-CPE-MAB assume that the size of the action set $\mathcal{A}$ is polynomial in $d$. We introduce an algorithm named the Generalized Thompson Sampling Explore (GenTS-Explore) algorithm, which is the first algorithm that can work even when the size of the action set is exponentially large in $d$. We also introduce a novel problem-dependent sample complexity lower bound of the R-CPE-MAB problem, and show that the GenTS-Explore algorithm achieves the optimal sample complexity up to a problem-dependent constant factor.
Shintaro Nakamura, Masashi Sugiyama
2023-08-20T11:56:02Z
http://arxiv.org/abs/2308.10238v3
# Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit ###### Abstract We study the real-valued combinatorial pure exploration of the multi-armed bandit (R-CPE-MAB) problem. In R-CPE-MAB, a player is given \(d\) stochastic arms, and the reward of each arm \(s\in\{1,\ldots,d\}\) follows an unknown distribution with mean \(\mu_{s}\). In each time step, a player pulls a single arm and observes its reward. The player's goal is to identify the optimal _action_ \(\mathbf{\pi}^{*}=\arg\max_{\mathbf{\pi}\in\mathcal{A}}\mathbf{\mu}^{\top}\mathbf{\pi}\) from a finite-sized real-valued _action set_ \(\mathcal{A}\subset\mathbb{R}^{d}\) with as few arm pulls as possible. Previous methods in the R-CPE-MAB assume that the size of the action set \(\mathcal{A}\) is polynomial in \(d\). We introduce an algorithm named the Generalized Thompson Sampling Explore (GenTS-Explore) algorithm, which is the first algorithm that can work even when the size of the action set is exponentially large in \(d\). We also introduce a novel problem-dependent sample complexity lower bound of the R-CPE-MAB problem, and show that the GenTS-Explore algorithm achieves the optimal sample complexity up to a problem-dependent constant factor. ## Introduction Pure exploration in the stochastic multi-armed bandit (PE-MAB) is one of the important frameworks for investigating online decision-making problems, where we try to identify the optimal object from a set of candidates as soon as possible [1, 1, 13]. One of the important models in PE-MAB is the _combinatorial pure exploration_ task in the multi-armed bandit (CPE-MAB) problem [1, 12, 13, 14, 15]. In CPE-MAB, we have a set of \(d\) stochastic arms, where the reward of each arm \(s\in\{1,\ldots,d\}\) follows an unknown distribution with mean \(\mu_{s}\), and a finite-sized _action set_ \(\mathcal{A}\), which is a collection of subsets of arms with certain combinatorial structures. The size of the action set can be exponentially large in \(d\). In each time step, a player pulls a single arm and observes a reward from it. The goal is to identify the best action from action set \(\mathcal{A}\) with as few arm pulls as possible. Abstractly, the goal is to identify \(\mathbf{\pi}^{*}\), which is the optimal solution for the following constrained optimization problem: \[\begin{array}{ll}\text{maximize}_{\mathbf{\pi}}&\mathbf{\mu}^{\top}\mathbf{\pi}\\ \text{subject to}&\mathbf{\pi}\in\mathcal{A},\end{array} \tag{1}\] where \(\mathbf{\mu}\) is a vector whose \(s\)-th element is the mean reward of arm \(s\) and \(\top\) denotes the transpose. One example of the CPE-MAB is the shortest path problem shown in Figure 1. Each edge \(s\in\{1,\ldots,\ell\}\) has a cost \(\mu_{s}\) and \(\mathcal{A}=\{(1,0,1,0,0,1,0),(0,1,0,1,0,1,0),(0,1,0,0,1,0,1)\}\). In real-world applications, the cost of each edge (road) can often be a random variable due to some traffic congestion, and therefore the cost stochastically changes. We assume we can choose an edge (road) each round, and conduct a traffic survey for that edge (road). If we conduct a traffic survey, we can observe a random sample of the cost of the chosen edge. Our goal is to identify the best action, which is a path from the start to the goal nodes.
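As a toy illustration of this shortest path instance (our own sketch; the edge costs are made up), identifying the best action reduces to maximizing \(\mathbf{\mu}^{\top}\mathbf{\pi}\) over the three path-indicator vectors once the costs are negated:

```python
import numpy as np

# Path-indicator vectors for the three start-to-goal paths (Figure 1).
actions = np.array([
    [1, 0, 1, 0, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
])

# Hypothetical mean edge costs; in the bandit setting these are unknown
# and must be estimated from noisy traffic surveys.
mu = np.array([2.0, 1.5, 1.0, 2.5, 1.2, 0.8, 1.1])

# Casting the shortest path as maximization of mu^T pi with negated costs.
best = actions[np.argmax(actions @ (-mu))]
print(best, "with total cost", mu @ best)
```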
Although CPE-MAB can be applied to many models which can be formulated as (1), most of the existing works in CPE-MAB [1, 12, 13, 14, 15] assume \(\mathcal{A}\subseteq\{0,1\}^{d}\). This means that the player's objective is to identify the best action which maximizes the sum of the expected rewards. Therefore, although we can apply the existing CPE-MAB methods to the shortest path problem [15], top-\(K\) arms identification [16], matching [17], and spanning trees [14], we cannot apply them to problems where \(\mathcal{A}\subset\mathbb{R}^{d}\), such as the optimal transport problem [13], the knapsack problem [13], and the production planning problem [14]. For instance, the optimal transport problem shown in Figure 2 has a real-valued action set \(\mathcal{A}\). We have five suppliers and four demanders. Each supplier \(i\) has \(s_{i}\) goods to supply. Each demander \(j\) wants \(d_{j}\) goods. Each edge is associated with a cost \(\mu_{ij}\) of transporting goods from supplier \(i\) to demander \(j\). Our goal is to minimize \(\sum_{i=1}^{5}\sum_{j=1}^{4}\pi_{ij}\mu_{ij}\), where \(\pi_{ij}(\geq 0)\) is the amount of goods transported to demander \(j\) from supplier \(i\). Again, we assume that we can choose an edge (road) each round, and conduct a traffic survey for that edge. Our goal is to identify the best action, which is a transportation plan (matrix) that shows how much goods each supplier should send to each demander. To overcome the limitation of the existing CPE-MAB methods, Nakamura and Sugiyama (2023) introduced a real-valued CPE-MAB (R-CPE-MAB), where the action set \(\mathcal{A}\subset\mathbb{R}^{d}\). However, it needs an assumption that the size of the action set \(\mathcal{A}\) is polynomial in \(d\), which is not satisfied in general since in many combinatorial problems, the action set is exponentially large in \(d\). To cope with this problem, one may leverage algorithms from the _transductive linear bandit_ literature [12, 13] for the R-CPE-MAB. In the transductive bandit problem, a player chooses a _probing vector_ \(\mathbf{v}\) from a given finite set \(\mathcal{X}\subset\mathbb{R}^{d}\) each round, and observes \(\mathbf{\mu}^{\top}\mathbf{v}+\epsilon\), where \(\epsilon\) is a noise from a certain distribution. Her goal is to identify the best _item_ \(\mathbf{z}^{*}\) from a finite-sized set \(\mathcal{Z}\subset\mathbb{R}^{d}\), which is defined as \(\mathbf{z}^{*}=\underset{\mathbf{z}\in\mathcal{Z}}{\arg\max}\,\mathbf{\mu}^{\top}\mathbf{z}\). The transductive linear bandit can be seen as a generalization of the R-CPE-MAB since the probing vectors are the standard basis vectors and the items are the actions in the R-CPE-MAB. However, in the transductive bandit, we have an assumption that the size of \(\mathcal{Z}\) is polynomial in \(d\). Thus, here again, we suffer from the exponential largeness of the action set \(\mathcal{A}\) with respect to \(d\). In this study, we introduce an algorithm named the Generalized Thompson Sampling Explore (GenTS-Explore) algorithm, which can identify the best action in the R-CPE-MAB even when the size of the action set is exponentially large in \(d\). This algorithm can be seen as a generalized version of the Thompson Sampling Explore (TS-Explore) algorithm introduced by Wang and Zhu (2022). Additionally, we show novel lower bounds of the R-CPE-MAB. One is written explicitly; the other is written implicitly and is tighter than the first one. We introduce a hardness measure \(\mathbf{H}=\sum_{s=1}^{d}\frac{1}{\Delta_{(s)}^{2}}\), where \(\Delta_{(s)}\) is named the _G-gap_, which can be seen as a generalization of the notion _gap_ introduced in the CPE-MAB literature [1, 13, 14].
We show that the sample complexity upper bound of the GenTS-Explore algorithm matches the lower bound up to a factor of a problem-dependent constant term. ## Problem Formulation In this section, we formally define the R-CPE-MAB model similarly to Chen et al. (2014). Suppose we have \(d\) arms, numbered \(1,\dots,d\). Assume that each arm \(s\in[d]\) is associated with a reward distribution \(\phi_{s}\), where \([d]=\{1,\dots,d\}\). We assume all reward distributions have an \(R\)-sub-Gaussian tail for some known constant \(R>0\). Formally, if \(X\) is a random variable drawn from \(\phi_{s}\), then, for all \(\lambda\in\mathbb{R}\), we have \(\mathbb{E}[\exp(\lambda X-\lambda\mathbb{E}[X])]\leq\exp(R^{2}\lambda^{2}/2)\). It is known that the family of \(R\)-sub-Gaussian tail distributions includes all distributions that are supported on \([0,R]\) and also many unbounded distributions such as Gaussian distributions with variance \(R^{2}\) [16]. We denote by \(\mathcal{N}(\mu,\sigma^{2})\) the Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\). Let \(\mathbf{\mu}=(\mu_{1},\dots,\mu_{d})^{\top}\) denote the vector of expected rewards, where each element \(\mu_{s}=\mathbb{E}_{X\sim\phi_{s}}[X]\) denotes the expected reward of arm \(s\) and \(\top\) denotes the transpose. We denote by \(T_{s}(t)\) the number of times arm \(s\) is pulled before round \(t\), and by \(\hat{\mathbf{\mu}}(t)=\left(\hat{\mu}_{1}(t),\dots,\hat{\mu}_{d}(t)\right)^{\top}\) the vector of sample means of each arm before round \(t\). With a given \(\mathbf{\nu}\), let us consider the following linear optimization problem: \[\begin{array}{ll}\text{maximize}_{\mathbf{\pi}}&\mathbf{\nu}^{\top}\mathbf{\pi}\\ \text{subject to}&\mathbf{\pi}\in\mathcal{C}\subset\mathbb{R}^{d},\end{array} \tag{2}\] where \(\mathcal{C}\) is a problem-dependent feasible region. For any \(\mathbf{\nu}\in\mathbb{R}^{d}\), we denote by \(\mathbf{\pi}^{\mathbf{\nu},\mathcal{C}}\) the optimal solution of (2). Then, we define the action set \(\mathcal{A}\) as the set of vectors that contains optimal solutions of (2) for any \(\mathbf{\nu}\), i.e., \[\mathcal{A}=\left\{\mathbf{\pi}^{\mathbf{\nu},\mathcal{C}}\in\mathbb{R}^{d}\mid \forall\mathbf{\nu}\in\mathbb{R}^{d}\right\}. \tag{3}\] Note that \(K=|\mathcal{A}|\), the size of the action set, could be exponentially large in \(d\). The player's objective is to identify \(\mathbf{\pi}^{*}=\underset{\mathbf{\pi}\in\mathcal{A}}{\arg\max}\,\mathbf{\mu}^{\top}\mathbf{\pi}\) by playing the following game. At the beginning of the game, the action set \(\mathcal{A}\) is revealed. Then, the player pulls an arm over a sequence of rounds; in each round \(t\), she pulls an arm \(p_{t}\in[d]\) and observes a reward sampled from the associated reward distribution \(\phi_{p_{t}}\). The player can stop the game at any round, and when she stops, she outputs an action \(\mathbf{\pi}_{\text{out}}\in\mathcal{A}\). She needs to guarantee that \(\Pr\left[\mathbf{\pi}_{\text{out}}\neq\mathbf{\pi}^{*}\right]\leq\delta\) for a given confidence parameter \(\delta\). For any \(\delta\in(0,1)\), we call an algorithm \(\mathbb{A}\) a \(\delta\)-correct algorithm if, for any expected reward vector \(\mathbf{\mu}\in\mathbb{R}^{d}\), the probability of the error of \(\mathbb{A}\) is at most \(\delta\), i.e., \(\Pr\left[\mathbf{\pi}_{\text{out}}\neq\mathbf{\pi}^{*}\right]\leq\delta\). The learner's performance is evaluated by her _sample complexity_, which is the round at which she terminates the game. We assume \(\mathbf{\pi}^{*}\) is unique.
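To fix ideas, the following is a minimal simulator of this interaction protocol (our own sketch; the uniform sampling rule is only a placeholder, and Gaussian noise makes each arm \(1\)-sub-Gaussian):

```python
import numpy as np

rng = np.random.default_rng(0)

class RCPEMAB:
    """R-CPE-MAB environment: pulling arm s returns a noisy sample of
    the unknown mean mu_s (unit Gaussian noise => 1-sub-Gaussian)."""

    def __init__(self, mu):
        self.mu = np.asarray(mu, dtype=float)

    def pull(self, s):
        return self.mu[s] + rng.normal()

env = RCPEMAB(mu=[0.3, 0.7, 0.5])
T = np.zeros(3)       # pull counts T_s(t)
sums = np.zeros(3)    # running reward sums
for t in range(300):
    s = t % 3         # naive round-robin arm selection
    sums[s] += env.pull(s)
    T[s] += 1
mu_hat = sums / T     # empirical means used by the player
print(mu_hat)
```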
### Technical Assumptions To cope with the exponential largeness of the action set, we make two mild assumptions for our R-CPE-MAB model. The first one is the existence of the _offline oracle_, which computes \(\mathbf{\pi}^{*}(\mathbf{\nu})=\underset{\mathbf{\pi}\in\mathcal{A}}{\arg\max}\,\mathbf{\nu}^{\top}\mathbf{\pi}\) in polynomial or pseudo-polynomial time once \(\mathbf{\nu}\) is given. We write \(\mathrm{Oracle}(\mathbf{\nu})=\mathbf{\pi}^{*}(\mathbf{\nu})\). This assumption is relatively mild since in linear programming, we have the network simplex algorithm [10] and interior point methods [11], whose computational complexities are both polynomial in \(d\). Moreover, if we consider the knapsack problem, though the knapsack problem is NP-complete [10] and it is unlikely that it can be solved in polynomial time, it is well known that we can solve it in pseudo-polynomial time if we use dynamic programming [10, 12]. In some cases, it may be sufficient to use this dynamic programming algorithm as the offline oracle in the R-CPE-MAB; a minimal sketch is given below. The second assumption is that the set of possible outputs of the offline oracle is finite-sized. This assumption also holds in many combinatorial optimization problems. For instance, no matter what algorithm is used to compute the solution to the knapsack problem, the action set is a finite set of integer vectors, so this assumption holds. Also, in linear programming problems such as the optimal transport problem [23] and the production planning problem [10], it is well known that the solution is on a vertex of the feasible region, and therefore, the set of candidates of solutions for optimization problem (1) is finite.
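As an illustration of such a pseudo-polynomial oracle (our own sketch, not code from the paper), the classic dynamic program for the 0-1 knapsack can play the role of \(\mathrm{Oracle}(\mathbf{\nu})\) by treating \(\mathbf{\nu}\) as the item values:

```python
import numpy as np

def knapsack_oracle(values, weights, capacity):
    """0-1 knapsack via dynamic programming: returns the indicator
    vector pi maximizing values @ pi subject to weights @ pi <= capacity.
    Pseudo-polynomial: O(d * capacity) for integer weights."""
    d = len(values)
    dp = np.zeros((d + 1, capacity + 1))
    for i in range(1, d + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # Backtrack to recover the action pi.
    pi, w = np.zeros(d, dtype=int), capacity
    for i in range(d, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            pi[i - 1] = 1
            w -= weights[i - 1]
    return pi

print(knapsack_oracle(values=[6.0, 10.0, 12.0], weights=[1, 2, 3], capacity=5))
# -> [0 1 1]
```

Note that only the weights and the capacity need to be integers for the pseudo-polynomial guarantee; the values, i.e., the coordinates of \(\mathbf{\nu}\), may be arbitrary reals.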
## Lower Bound of R-CPE-MAB In this section, we discuss sample complexity lower bounds of R-CPE-MAB. In Theorem 1, we show a sample complexity lower bound which is derived explicitly. In Theorem 2, we show another lower bound, which is only written in an implicit form but is tighter than that in Theorem 1. In our analysis, we have several key quantities that are useful to discuss the sample complexity upper bounds. First, we define \(\boldsymbol{\pi}^{(s)}\) as follows: \[\boldsymbol{\pi}^{(s)}=\underset{\boldsymbol{\pi}\in\mathcal{A}\backslash \{\boldsymbol{\pi}^{*}\}}{\arg\min}\ \frac{\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi} \right)}{|\boldsymbol{\pi}^{*}_{s}-\pi_{s}|}. \tag{4}\] Intuitively, among the actions whose \(s\)-th element is different from that of \(\boldsymbol{\pi}^{*}\), \(\boldsymbol{\pi}^{(s)}\) is the one whose optimality is the most difficult to confirm. We define a notion named the _G-gap_, formally defined as follows: \[\Delta_{(s)} = \frac{\boldsymbol{\mu}^{\top}(\boldsymbol{\pi}^{*}-\boldsymbol{ \pi}^{(s)})}{|\boldsymbol{\pi}^{*}_{s}-\pi^{(s)}_{s}|} \tag{5}\] \[= \underset{\boldsymbol{\pi}\in\mathcal{A}\backslash\{\boldsymbol {\pi}^{*}\}}{\min}\ \frac{\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi} \right)}{|\boldsymbol{\pi}^{*}_{s}-\pi_{s}|}.\] The _G-gap_ can be seen as a natural generalization of the _gap_ introduced in the CPE-MAB literature [1, 10, 11]. Then, we denote the sum of inverse squared gaps by \[\mathbf{H} = \sum_{s=1}^{d}\left(\frac{1}{\Delta_{(s)}}\right)^{2} = \sum_{s=1}^{d}\max_{\boldsymbol{\pi}\in\mathcal{A}\backslash\{ \boldsymbol{\pi}^{*}\}}\frac{|\boldsymbol{\pi}^{*}_{s}-\boldsymbol{\pi}_{s}|^ {2}}{\left(\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi}\right)^{\top} \boldsymbol{\mu}\right)^{2}},\] which we define as a hardness measure of the problem instance in R-CPE-MAB. In Theorem 1, we show that \(\mathbf{H}\) appears in a sample complexity lower bound of R-CPE-MAB. Therefore, we expect that this quantity plays an essential role in characterizing the difficulty of the problem instance. ### Explicit Form of a Sample Complexity Lower Bound Here, we show a sample complexity lower bound of the R-CPE-MAB which is written in an explicit form. **Theorem 1**.: _Fix any action set \(\mathcal{A}\subset\mathbb{R}^{d}\) and any vector \(\boldsymbol{\mu}\in\mathbb{R}^{d}\). Suppose that, for each arm \(s\in[d]\), the reward distribution \(\phi_{s}\) is given by \(\phi_{s}=\mathcal{N}(\mu_{s},1)\). Then, for any \(\delta\in\left(0,\frac{e^{-16}}{4}\right)\) and any \(\delta\)-correct algorithm \(\mathbb{A}\), we have_ \[\mathbb{E}\left[T\right]\geq\frac{1}{16}\mathbf{H}\log\left(\frac{1}{4\delta }\right), \tag{6}\] _where \(T\) denotes the total number of arm pulls by algorithm \(\mathbb{A}\)._ Theorem 1 can be seen as a natural generalization of the result in ordinary CPE-MAB shown in Chen et al. (2014). In the CPE-MAB literature, the hardness measure \(\mathbf{H}^{\prime}\) is defined as follows [1, 10, 11]: \[\mathbf{H}^{\prime}=\sum_{s=1}^{d}\left(\frac{1}{\Delta_{s}}\right)^{2}, \tag{7}\] where \[\Delta_{s}=\min_{\boldsymbol{\pi}\in\{\boldsymbol{\pi}\in\mathcal{A}\mid \pi_{s}\neq\pi^{*}_{s}\}}\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}- \boldsymbol{\pi}\right). \tag{8}\] Below, we discuss why the hardness measure in R-CPE-MAB uses \(\Delta_{(s)}\) and not \(\Delta_{s}\). Suppose we have two bandit instances \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\). In \(\mathcal{B}_{1}\), \(\mathcal{A}_{1}=\left\{\left(100,0\right)^{\top},\left(0,100\right)^{\top}\right\}\) and \(\boldsymbol{\mu}_{1}=\left(\mu_{1,1},\mu_{1,2}\right)^{\top}=\left(0.011,0.01\right)^{\top}\). In \(\mathcal{B}_{2}\), \(\mathcal{A}_{2}=\left\{\left(1,0\right)^{\top},\left(0,1\right)^{\top}\right\}\) and \(\boldsymbol{\mu}_{2}=\left(\mu_{2,1},\mu_{2,2}\right)^{\top}=\left(1.0,1.1\right)^{\top}\), so that the unnormalized reward gap \(\boldsymbol{\mu}^{\top}(\boldsymbol{\pi}^{*}-\boldsymbol{\pi})\) equals \(0.1\) in both instances. We assume that, for both instances, the arms are equipped with Gaussian distributions with unit variance. Also, for any \(i\in\{1,2\}\) and \(s\in\{1,2\}\), let us denote by \(T_{i,s}(t)\) the number of times arm \(s\) is pulled in the bandit instance \(\mathcal{B}_{i}\) in round \(t\). Let us consider the situation where \(T_{1,1}(t)=T_{2,1}(t)\) and \(T_{1,2}(t)=T_{2,2}(t)\), and we have prior knowledge that \(\mu_{1,1}\in\left[\hat{\mu}_{1,1}-\sigma_{1},\hat{\mu}_{1,1}+\sigma_{1}\right]\), \(\mu_{1,2}\in\left[\hat{\mu}_{1,2}-\sigma_{2},\hat{\mu}_{1,2}+\sigma_{2}\right]\), \(\mu_{2,1}\in\left[\hat{\mu}_{2,1}-\sigma_{1},\hat{\mu}_{2,1}+\sigma_{1}\right]\), and \(\mu_{2,2}\in\left[\hat{\mu}_{2,2}-\sigma_{2},\hat{\mu}_{2,2}+\sigma_{2}\right]\). Here, \(\sigma_{1}\) and \(\sigma_{2}\) are some confidence bounds on the rewards of arms, which may be derived by some concentration inequality.
Note that they depend only on the number of times the arm is pulled, and that the confidence bound for each arm is the same in both instances since \(T_{1,1}(t)=T_{2,1}(t)\) and \(T_{1,2}(t)=T_{2,2}(t)\). We can see that \(\mathbf{H}^{\prime}\) is the same in both \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\), which would imply that the difficulty of identifying the best action is the same in \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\). However, this is not true, since when we estimate the rewards of actions in \(\mathcal{B}_{1}\), the confidence bound is amplified by a factor of 100, and therefore we are far less confident in determining the best action in \(\mathcal{B}_{1}\) than in \(\mathcal{B}_{2}\). On the other hand, \(\mathbf{H}\) reflects this fact: \(\mathbf{H}\) in \(\mathcal{B}_{1}\) is 10000 times larger than that of \(\mathcal{B}_{2}\), which implies that identifying the best action in \(\mathcal{B}_{1}\) is much more difficult than in \(\mathcal{B}_{2}\).

### Implicit Form of a Lower Bound

Here, we show another lower bound, which is only written in an implicit form but is tighter than that of Theorem 1.

**Theorem 2**.: _For any \(\delta\in(0,0.1)\) and any \(\delta\)-correct algorithm \(\mathbb{A}\), \(\mathbb{A}\) will pull arms \(\Omega(\operatorname{Low}(\mathcal{A})\log\frac{1}{\delta})\) times, where \(\operatorname{Low}(\mathcal{A})\) is the optimal value of the following mathematical program:_

\[\begin{split}\operatorname{minimize}&\sum_{s=1}^{d}\tau_{s}\\ \operatorname{subject\ to}&\forall\boldsymbol{\pi}\in\mathcal{A},\ \sum_{s\in\boldsymbol{\pi}^{*}\circ\boldsymbol{\pi}}\frac{\left|\pi_{s}^{*}-\pi_{s}\right|^{2}}{\tau_{s}}\leq\Delta_{\boldsymbol{\pi}^{*},\boldsymbol{\pi}}^{2}\\ &\tau_{s}>0,\ \forall s\in[d],\end{split} \tag{9}\]

_where \(\boldsymbol{\pi}^{*}\circ\boldsymbol{\pi}=\{s\in[d]\mid\pi_{s}^{*}\neq\pi_{s}\}\) and \(\Delta_{\boldsymbol{\pi}^{*},\boldsymbol{\pi}}=\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi}\right)\)._

Theorem 2 can be seen as a natural generalization of the result in Chen et al. (2017). In the appendix, we show that the lower bound in Theorem 2 is no weaker than that in Theorem 1 by showing \(\operatorname{Low}(\mathcal{A})\geq\mathbf{H}\).

### Comparison with the Existing Results

Here, we compare our lower bound to the result in Fiez et al. (2019) and the result in Nakamura and Sugiyama (2023). From Theorem 1 in Fiez et al. (2019), we have a sample complexity lower bound of \(\mathcal{O}\left(\max_{\boldsymbol{\pi}\in\mathcal{A}\setminus\{\boldsymbol{\pi}^{*}\}}\frac{\left(\sum_{s=1}^{d}\left|\pi_{s}^{*}-\pi_{s}\right|\right)^{2}}{\left(\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi}\right)\right)^{2}}\log\left(\frac{1}{\delta}\right)\right)\). In general, it is not clear whether our result (6) is a tighter bound than theirs.
However, if the actions in \(\mathcal{A}\) are sparse, so that \(\left(\sum_{s=1}^{d}\left|\pi_{s}^{*}-\pi_{s}\right|\right)^{2}\approx\sum_{s=1}^{d}\left|\pi_{s}^{*}-\pi_{s}\right|^{2}\), their lower bound can be written as

\[\mathcal{O}\left(\max_{\boldsymbol{\pi}\in\mathcal{A}\setminus\{\boldsymbol{\pi}^{*}\}}\sum_{s=1}^{d}\frac{\left|\pi_{s}^{*}-\pi_{s}\right|^{2}}{\left(\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi}\right)\right)^{2}}\log\left(\frac{1}{\delta}\right)\right), \tag{10}\]

which is looser than our lower bound in (6), namely

\[\mathcal{O}\left(\sum_{s=1}^{d}\max_{\boldsymbol{\pi}\in\mathcal{A}\setminus\{\boldsymbol{\pi}^{*}\}}\frac{\left|\pi_{s}^{*}-\pi_{s}\right|^{2}}{\left(\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi}\right)\right)^{2}}\log\left(\frac{1}{\delta}\right)\right).\]

Next, we compare our lower bound (6) with the lower bound shown in Nakamura and Sugiyama (2023). First, let us define \(\operatorname{Gwidth}_{1}=\max_{\boldsymbol{\pi},\boldsymbol{\pi}^{\prime}\in\mathcal{A}}\sum_{s=1}^{d}\left|\pi_{s}-\pi_{s}^{\prime}\right|\) and \(\operatorname{Gwidth}_{2}=\max_{\boldsymbol{\pi},\boldsymbol{\pi}^{\prime}\in\mathcal{A}}\sum_{s=1}^{d}\left|\pi_{s}-\pi_{s}^{\prime}\right|^{2}\). \(\operatorname{Gwidth}_{1}\) was first introduced in Nakamura and Sugiyama (2023), who proposed it as a generalization of the notion _width_ in Chen et al. (2014) and claimed that it characterizes the difficulty of the problem instance in R-CPE-MAB. Nakamura and Sugiyama (2023) show that, in the worst case, the lower bound is of \(\mathcal{O}\left(\frac{\operatorname{Gwidth}_{1}^{2}}{32\Delta_{\min}^{2}}\log\left(\frac{1}{4\delta}\right)\right)\), where \(\Delta_{\min}=\min_{s\in[d]}\Delta_{s}\). In the appendix, we show that if \(\operatorname{Gwidth}_{1}^{2}\leq 2\operatorname{Gwidth}_{2}\) and \(\Delta_{1}\approx\Delta_{2}\approx\cdots\approx\Delta_{d}\), our lower bound (6) is tighter than that of Nakamura and Sugiyama (2023).

## GenTS-Explore Algorithm

In this section, we introduce an algorithm named the Generalized Thompson Sampling Explore (GenTS-Explore) algorithm, which can identify the best action in the R-CPE-MAB even when the size of the action set is exponentially large in \(d\). In each round \(t\), the algorithm computes the empirically best action \(\hat{\boldsymbol{\pi}}(t)=\operatorname{Oracle}(\hat{\boldsymbol{\mu}}(t))\) from the vector of empirical mean rewards \(\hat{\boldsymbol{\mu}}(t)\), and draws \(M(\delta,q,t)\triangleq\left\lceil\frac{1}{q}\log\left(12|\mathcal{A}|^{2}t^{2}/\delta\right)\right\rceil\) random samples \(\{\boldsymbol{\theta}^{k}(t)\}_{k=1}^{M(\delta,q,t)}\), where each component \(\theta_{s}^{k}(t)\) is drawn independently from a Gaussian distribution \(\mathcal{N}\left(\hat{\mu}_{s}(t),\frac{C(\delta,q,t)}{T_{s}(t)}\right)\) with \(C(\delta,q,t)\triangleq\frac{4R^{2}\log\left(12|\mathcal{A}|^{2}t^{2}/\delta\right)}{\phi^{2}(q)}\). Intuitively, \(\{\boldsymbol{\theta}^{k}(t)\}_{k=1}^{M(\delta,q,t)}\) is a set of possible values that the true reward vector \(\boldsymbol{\mu}\) can take. Then, the algorithm computes \(\tilde{\boldsymbol{\pi}}^{k}(t)=\operatorname{Oracle}(\boldsymbol{\theta}^{k}(t))\) for all \(k\), where \(\boldsymbol{\theta}^{k}(t)=\left(\theta_{1}^{k}(t),\ldots,\theta_{d}^{k}(t)\right)\). We can say that we estimate the true reward gap \(\boldsymbol{\mu}^{\top}\left(\tilde{\boldsymbol{\pi}}^{k}(t)-\hat{\boldsymbol{\pi}}(t)\right)\) by computing \(\boldsymbol{\theta}^{k}(t)^{\top}\left(\tilde{\boldsymbol{\pi}}^{k}(t)-\hat{\boldsymbol{\pi}}(t)\right)\) for each \(k\in[M(\delta,q,t)]\). If all the actions \(\tilde{\boldsymbol{\pi}}^{k}(t)\) are the same as \(\hat{\boldsymbol{\pi}}(t)\), we output \(\hat{\boldsymbol{\pi}}(t)\) as the best action; a sketch of this sampling step is given below.
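The following minimal Python sketch implements one round of this sampling step under the stated notation. It assumes the offline oracle, the empirical means \(\hat{\boldsymbol{\mu}}(t)\), the pull counts \(T_{s}(t)\), and the constants \(M\) and \(C\) are supplied by the caller; all names are illustrative, not part of the original pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)

def gents_sampling_step(oracle, mu_hat, T, M, C):
    """One round of the sampling step: returns (best, None, None) if the
    algorithm can stop, else (None, pi_hat, potentially_best_action)."""
    pi_hat = oracle(mu_hat)                      # empirically best action
    thetas, candidates = [], []
    for _ in range(M):
        # theta_s^k(t) ~ N(mu_hat_s(t), C(delta, q, t) / T_s(t))
        theta = rng.normal(mu_hat, np.sqrt(C / T))
        thetas.append(theta)
        candidates.append(oracle(theta))         # pi~^k(t) = Oracle(theta^k(t))
    if all(np.array_equal(pi, pi_hat) for pi in candidates):
        return pi_hat, None, None                # output the best action
    # otherwise pick the potentially best action, indexed by k*_t
    k_star = max(range(M), key=lambda k: thetas[k] @ (candidates[k] - pi_hat))
    return None, pi_hat, candidates[k_star]
```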
Otherwise, we focus on \(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)\), where

\[k^{*}_{t}=\operatorname*{arg\,max}_{k\in[M(\delta,q,t)]}\boldsymbol{\theta}^{k}(t)^{\top}\left(\tilde{\boldsymbol{\pi}}^{k}(t)-\hat{\boldsymbol{\pi}}(t)\right).\]

We can say that \(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)\) is potentially the best action. Then, the most essential question is: "Which arm should we pull in round \(t\), once we obtain the empirically best action \(\hat{\boldsymbol{\pi}}(t)\) and a potentially best action \(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)\)?" We discuss this below.

**Arm Selection Strategies.** Here, we discuss which arm to pull at round \(t\), once we obtain the empirically best action \(\hat{\boldsymbol{\pi}}(t)\) and a potentially best action \(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)\). For the ordinary CPE-MAB, the arm selection strategy in Wang and Zhu (2022) was to pull the following arm:

\[p_{t}^{\text{naive}}=\operatorname*{arg\,min}_{s\in\left\{s\in[d]\;\middle|\;\hat{\pi}_{s}(t)\neq\tilde{\pi}_{s}^{k^{*}_{t}}(t)\right\}}T_{s}(t). \tag{11}\]

Therefore, one candidate arm selection strategy is to naively pull the arm defined in (11). We call this the _naive arm selection strategy_. Next, we consider another arm selection strategy as follows. We want to pull the arm that is most "informative" for discriminating whether \(\hat{\boldsymbol{\pi}}(t)\) is a better action than \(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)\) or not. In other words, we want to pull the arm that is most "informative" for estimating the true gap \(\boldsymbol{\mu}^{\top}\left(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)-\hat{\boldsymbol{\pi}}(t)\right)\). If it is less than 0, \(\hat{\boldsymbol{\pi}}(t)\) is better, and if it is greater than 0, \(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)\) is better. To discuss this more quantitatively, let us assume that \(\boldsymbol{\theta}^{k^{*}_{t}}(t)\approx\hat{\boldsymbol{\mu}}(t)\). From Hoeffding's inequality (Luo, 2017), we obtain the following:

\[\begin{split}&\Pr\left[\left|\left(\boldsymbol{\mu}-\boldsymbol{\theta}^{k^{*}_{t}}(t)\right)^{\top}\left(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)-\hat{\boldsymbol{\pi}}(t)\right)\right|\geq\epsilon\right]\\ \approx&\Pr\left[\left|\left(\boldsymbol{\mu}-\hat{\boldsymbol{\mu}}(t)\right)^{\top}\left(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)-\hat{\boldsymbol{\pi}}(t)\right)\right|\geq\epsilon\right]\\ \leq&\ 2\exp\left(-\frac{\epsilon^{2}}{2\sum_{s=1}^{d}\frac{\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|^{2}}{T_{s}(t)}R^{2}}\right),\end{split} \tag{12}\]

where \(\epsilon>0\). Equation (12) shows that if we make \(\sum_{s=1}^{d}\frac{\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|^{2}}{T_{s}(t)}\) small, \(\tilde{\Delta}_{t}^{k^{*}_{t}}=\boldsymbol{\theta}^{k^{*}_{t}}(t)^{\top}\left(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)-\hat{\boldsymbol{\pi}}(t)\right)\) becomes close to the true gap \(\boldsymbol{\mu}^{\top}\left(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)-\hat{\boldsymbol{\pi}}(t)\right)\). Since we want to estimate the true gap accurately as soon as possible, we pull the arm \(p_{t}^{\text{R}}\) that makes \(\sum_{s=1}^{d}\frac{\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|^{2}}{T_{s}(t)}\) the smallest, which is defined as follows:

\[p_{t}^{\text{R}}=\operatorname*{arg\,min}_{e\in[d]}\sum_{s=1}^{d}\frac{\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|^{2}}{T_{s}(t)+\boldsymbol{1}[s=e]}, \tag{13}\]

where \(\boldsymbol{1}[\cdot]\) denotes the indicator function; both selection rules are sketched in code below.
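The following is a minimal Python sketch of the naive rule (11) and the R-CPE-MAB rule (13); a simplified closed form of (13) is given in Proposition 3 below. Variable names are illustrative.

```python
import numpy as np

def naive_arm(T, pi_hat, pi_tilde):
    # Equation (11): least-pulled arm among those where the actions differ
    differing = [s for s in range(len(T)) if pi_hat[s] != pi_tilde[s]]
    return min(differing, key=lambda s: T[s])

def rcpe_arm(T, pi_hat, pi_tilde):
    # Equation (13): the single extra pull that most shrinks
    # sum_s |pi_tilde_s - pi_hat_s|^2 / T_s
    diff_sq = (np.asarray(pi_tilde) - np.asarray(pi_hat)) ** 2
    def after_pulling(e):
        T_e = np.asarray(T, dtype=float) + (np.arange(len(T)) == e)
        return np.sum(diff_sq / T_e)
    return min(range(len(T)), key=after_pulling)
```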
Then, the following proposition holds.

**Proposition 3**.: \(p_{t}^{\text{R}}\) _in (13) can be written as follows:_

\[p_{t}^{\text{R}}=\operatorname*{arg\,max}_{s\in[d]}\frac{\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|^{2}}{T_{s}(t)(T_{s}(t)+1)}. \tag{14}\]

We show the proof in the appendix. We call pulling the arm defined in (14) the _R-CPE-MAB arm selection strategy_. Equation (14) implies that the larger \(\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|\) is, the more we need to pull arm \(s\). Similar to the discussion in the previous section, this is because if \(\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|\) is large, the uncertainty of arm \(s\) is greatly amplified when we compute \(\tilde{\Delta}_{t}^{k^{*}_{t}}=\boldsymbol{\theta}^{k^{*}_{t}}(t)^{\top}\left(\tilde{\boldsymbol{\pi}}^{k^{*}_{t}}(t)-\hat{\boldsymbol{\pi}}(t)\right)\). Therefore, we have to pull arm \(s\) many times to make \(\frac{C(\delta,q,t)}{T_{s}(t)}\), the variance of \(\theta_{s}^{k}(t)\), small, and thus to gain more confidence about the reward of arm \(s\). Also, the _R-CPE-MAB arm selection strategy_ is equivalent to the _naive arm selection strategy_ in the ordinary CPE-MAB, since when \(\mathcal{A}\subset\{0,1\}^{d}\),

\[\begin{split} p_{t}^{\text{R}} &= \operatorname*{arg\,max}_{s\in[d]}\frac{\left|\tilde{\pi}_{s}^{k^{*}_{t}}(t)-\hat{\pi}_{s}(t)\right|^{2}}{T_{s}(t)(T_{s}(t)+1)}\\ &= \operatorname*{arg\,max}_{s\in\left\{s\in[d]\;\middle|\;\tilde{\pi}_{s}^{k^{*}_{t}}(t)\neq\hat{\pi}_{s}(t)\right\}}\frac{1}{T_{s}(t)(T_{s}(t)+1)}\\ &= \operatorname*{arg\,min}_{s\in\left\{s\in[d]\;\middle|\;\tilde{\pi}_{s}^{k^{*}_{t}}(t)\neq\hat{\pi}_{s}(t)\right\}}T_{s}(t)\\ &= p_{t}^{\text{naive}}.\end{split}\]

Therefore, we can say that the _R-CPE-MAB arm selection strategy_ is a generalization of the arm selection strategy in Wang and Zhu (2022).

### Sample Complexity Upper Bounds of the GenTS-Explore Algorithm

Here, we show sample complexity upper bounds of the GenTS-Explore algorithm when we use the two arm selection strategies, the _naive arm selection strategy_ in (11) and the _R-CPE-MAB arm selection strategy_ in (14), respectively. First, in Theorem 4, we show a sample complexity upper bound for the _naive arm selection strategy_.

**Theorem 4**.: _For \(q\in[\delta,0.1]\), with probability at least \(1-\delta\), the GenTS-Explore algorithm with the naive arm selection strategy will output the best action \(\boldsymbol{\pi}^{*}\) with sample complexity upper bounded by_

\[\mathcal{O}\left(R^{2}\mathbf{H}^{\text{N}}\frac{\left(\log\left(\left|\mathcal{A}\right|\mathbf{H}^{\text{N}}\right)+\log\frac{1}{\delta}\right)^{2}}{\log\frac{1}{q}}\right), \tag{16}\]

_where \(\mathbf{H}^{\text{N}}=\sum_{s=1}^{d}\frac{U_{s}}{\Delta_{(s)}^{2}}\) and \(U_{s}=\max_{\boldsymbol{\pi}^{\prime}\in\mathcal{A},\boldsymbol{\pi}\in\{\boldsymbol{\pi}\in\mathcal{A}\;\mid\;\pi_{s}^{*}\neq\pi_{s}\}}\frac{1}{\left|\pi_{s}^{*}-\pi_{s}\right|^{2}}\sum_{e=1}^{d}\left|\pi_{e}-\pi_{e}^{\prime}\right|^{2}\)._

_Specifically, if we choose \(q=\delta\), then the sample complexity upper bound is_

\[\mathcal{O}\left(R^{2}\mathbf{H}^{\mathrm{N}}\left(\log\frac{1}{\delta}+\log^{2}\left(\left|\mathcal{A}\right|\mathbf{H}^{\mathrm{N}}\right)\right)\right). \tag{17}\]

Next, in Theorem 5, we show a sample complexity upper bound for the _R-CPE-MAB arm selection strategy_.
**Theorem 5**.: _For \(q\in[\delta,0.1]\), with probability at least \(1-\delta\), the GenTS-Explore algorithm with the R-CPE-MAB arm selection strategy will output the best action \(\boldsymbol{\pi}^{*}\) with sample complexity upper bounded by_

\[\mathcal{O}\left(R^{2}\mathbf{H}^{\mathrm{R}}\frac{\left(\log\left(\left|\mathcal{A}\right|\mathbf{H}^{\mathrm{R}}\right)+\log\frac{1}{\delta}\right)^{2}}{\log\frac{1}{q}}\right), \tag{18}\]

_where \(\mathbf{H}^{\mathrm{R}}=\sum_{s=1}^{d}\frac{V_{s}}{\Delta_{(s)}^{2}}\) and \(V_{s}=\max_{\boldsymbol{\pi}^{\prime}\in\mathcal{A},\boldsymbol{\pi}\in\{\boldsymbol{\pi}\in\mathcal{A}\mid\pi^{*}_{s}\neq\pi_{s}\}}\frac{\left|\pi_{s}-\pi^{\prime}_{s}\right|}{\left|\pi^{*}_{s}-\pi_{s}\right|^{2}}\sum_{e=1}^{d}|\pi_{e}-\pi^{\prime}_{e}|\)._

_Specifically, if we choose \(q=\delta\), then the sample complexity upper bound is_

\[\mathcal{O}\left(R^{2}\mathbf{H}^{\mathrm{R}}\left(\log\frac{1}{\delta}+\log^{2}\left(\left|\mathcal{A}\right|\mathbf{H}^{\mathrm{R}}\right)\right)\right). \tag{19}\]

**Comparison to the Lower Bounds.** Let us define \(U=\max_{s\in[d]}U_{s}\) and \(V=\max_{s\in[d]}V_{s}\). Then, the sample complexity upper bound of the naive arm selection strategy is \(\mathcal{O}\left(U\mathbf{H}\log\left(\frac{1}{\delta}\right)\right)\) and that of the R-CPE-MAB arm selection strategy is \(\mathcal{O}\left(V\mathbf{H}\log\left(\frac{1}{\delta}\right)\right)\). Therefore, regardless of which arm selection strategy is used, the sample complexity upper bound of the GenTS-Explore algorithm matches the lower bound shown in (6) up to a problem-dependent constant factor.

**Comparison between the Naive and R-CPE-MAB Arm Selection Strategies.** In general, whether the _R-CPE-MAB arm selection strategy_ has a tighter upper bound than the _naive arm selection strategy_ depends on the problem instance. Let us consider one situation in which the R-CPE-MAB arm selection strategy may be a better choice than the naive arm selection strategy. Suppose \(\mathcal{A}=\left\{\left(100,0,0\right)^{\top},\left(0,1,1\right)^{\top}\right\}\) and \(\boldsymbol{\pi}^{*}=\left(100,0,0\right)^{\top}\). Then, \(U_{1}=1.0002\), \(U_{2}=10002\), and \(U_{3}=10002\). On the other hand, \(V_{1}=1.02\), \(V_{2}=102\), and \(V_{3}=102\). We can see that \(U_{2}\) and \(U_{3}\) are much larger than \(V_{2}\) and \(V_{3}\), respectively, and therefore \(\mathbf{H}^{\mathrm{R}}\) is much smaller than \(\mathbf{H}^{\mathrm{N}}\). Consequently, the sample complexity upper bound of the naive arm selection strategy will be looser than that of the R-CPE-MAB arm selection strategy.

**Comparison with Existing Works in the Ordinary CPE-MAB.** In the ordinary CPE-MAB, where \(\mathcal{A}\subseteq\{0,1\}^{d}\), a key notion called _width_ appears in the upper bound of some existing algorithms (Chen et al., 2014; Wang and Zhu, 2022), which is defined as follows:

\[\mathrm{width}=\max_{\boldsymbol{\pi},\boldsymbol{\pi}^{\prime}\in\mathcal{A}}\sum_{s=1}^{d}\left|\pi_{s}-\pi^{\prime}_{s}\right|. \tag{20}\]

The following proposition implies that both \(U\) and \(V\) can be seen as generalizations of the notion _width_.

**Proposition 6**.: _Let \(U=\max_{s\in[d]}U_{s}\) and \(V=\max_{s\in[d]}V_{s}\). In the ordinary CPE-MAB, where \(\mathcal{A}\subseteq\{0,1\}^{d}\), we have_

\[U=V=\mathrm{width}. \tag{21}\]
Next, recall that the GenTS-Explore algorithm is equivalent to the TS-Explore algorithm in the ordinary CPE-MAB, regardless of which arm selection strategy is used. Proposition 7 shows that our upper bounds (17) and (19) are both tighter than that shown in Wang and Zhu (2022), which is \(\mathcal{O}\left(\mathrm{width}\sum_{s=1}^{d}\frac{1}{\Delta_{s}^{2}}\right)\).

**Proposition 7**.: _In the ordinary CPE-MAB, where \(\mathcal{A}\subseteq\{0,1\}^{d}\), we have_

\[\mathbf{H}^{\mathrm{N}}=\sum_{s=1}^{d}\frac{U_{s}}{\Delta_{s}^{2}}\leq\mathrm{width}\sum_{s=1}^{d}\frac{1}{\Delta_{s}^{2}}, \tag{22}\]

_and_

\[\mathbf{H}^{\mathrm{R}}=\sum_{s=1}^{d}\frac{V_{s}}{\Delta_{s}^{2}}\leq\mathrm{width}\sum_{s=1}^{d}\frac{1}{\Delta_{s}^{2}}. \tag{23}\]

## Experiment

In this section, we experimentally compare the two arm selection strategies, the _naive arm selection strategy_ and the _R-CPE-MAB arm selection strategy_.

### The Knapsack Problem

Here, we consider the knapsack problem (Dantzig and Mazur, 2007), where the action set \(\mathcal{A}\) is in general exponentially large in \(d\). In the knapsack problem, we have \(d\) items. Each item \(s\in[d]\) has a weight \(w_{s}\) and a value \(v_{s}\). Also, there is a knapsack with capacity \(W\) in which we put items. Our goal is to maximize the total value of the knapsack without letting the total weight of the items exceed the capacity of the knapsack. Formally, the optimization problem is given as follows:

\[\begin{split}\text{maximize}_{\boldsymbol{\pi}\in\mathcal{A}}&\ \sum_{s=1}^{d}v_{s}\pi_{s}\\ \text{subject to}&\ \sum_{s=1}^{d}\pi_{s}w_{s}\leq W,\end{split}\]

where \(\pi_{s}\) denotes the number of copies of item \(s\) in the knapsack. Here, the weight of each item is known, but the value is unknown and therefore has to be estimated. In each time step, the player chooses an item \(s\) and gets an observation of its value \(r_{s}\), which can be regarded as a random variable from an unknown distribution with mean \(v_{s}\). For our experiment, we generated the weight of each item uniformly from \(\{1,2,\ldots,50\}\). For each item \(s\), we generated \(v_{s}\) as \(v_{s}=w_{s}\times(1+x)\), where \(x\) is a sample from \(\mathcal{N}(0,0.1^{2})\). We set the capacity of the knapsack at \(W=50\). Each time we chose an item \(s\), we observed a value \(v_{s}+x\), where \(x\) is a noise sample from \(\mathcal{N}(0,0.1^{2})\). We set \(R=0.1\). We show the result in Figure 3. We can say that the R-CPE-MAB arm selection strategy performs better than the naive arm selection strategy, since the former needs fewer rounds until termination. In some cases, the sample complexity of the R-CPE-MAB arm selection strategy is only 1/3 to 1/2 of that of the naive arm selection strategy.

### The Production Planning Problem

Here, we consider the production planning problem [10]. In the production planning problem, there are \(m\) materials, and these materials can be mixed to make one of \(d\) different products. We have a matrix \(\mathbf{M}\in\mathbb{R}^{m\times d}\), where \(M_{is}\) represents how much of material \(i\in[m]\) is needed to make product \(s\in[d]\). Also, we are given vectors \(\mathbf{v}^{\max}\in\mathbb{R}^{m}\) and \(\boldsymbol{\mu}\in\mathbb{R}^{d}\). Then, formally, the optimization problem is given as follows:

\[\begin{split}\text{maximize}_{\boldsymbol{\pi}\in\mathcal{A}}&\ \boldsymbol{\mu}^{\top}\boldsymbol{\pi}\\ \text{subject to}&\ \mathbf{M}\boldsymbol{\pi}\leq\mathbf{v}^{\max},\end{split}\]

where the inequality is an element-wise comparison.
Intuitively, we want to obtain the optimal vector \(\boldsymbol{\pi}^{*}\) that maximizes the total profit without using more of material \(i\) than \(v_{i}^{\max}\) for each \(i\in[m]\), where \(\pi_{s}^{*}\) represents how much of product \(s\) is produced. Here, we assume that \(\mathbf{M}\) and \(\mathbf{v}^{\max}\) are known, but \(\boldsymbol{\mu}\) is unknown and therefore has to be estimated. In each time step, the player chooses a product \(s\) and gets an observation of its value \(r_{s}\), which can be regarded as a random variable from an unknown distribution with mean \(\mu_{s}\). For our experiment, we used three materials, i.e., \(m=3\). We set \(\mathbf{v}^{\max}=(30,30,30)^{\top}\). Also, we generated every element in \(\mathbf{M}\) uniformly from \(\{1,2,3,4\}\). For each product \(s\), we generated \(\mu_{s}\) as \(\mu_{s}=\sum_{i=1}^{m}M_{is}+x\), where \(x\) is a random sample from \(\mathcal{N}(0,1)\). Each time we chose a product \(s\), we observed a value \(\mu_{s}+x\), where \(x\) is a noise sample from \(\mathcal{N}\left(0,0.1^{2}\right)\). We set \(R=0.1\). We show the result in Figure 4. Again, we can see that the R-CPE-MAB arm selection strategy performs better than the naive arm selection strategy, since the former needs fewer rounds until termination.

## Conclusion

In this study, we investigated the R-CPE-MAB. We showed novel lower bounds for the R-CPE-MAB by generalizing key quantities from the ordinary CPE-MAB literature. Then, we introduced an algorithm named the GenTS-Explore algorithm, which can identify the best action in the R-CPE-MAB even when the size of the action set is exponentially large in \(d\). We derived sample complexity upper bounds for it and showed that they match the sample complexity lower bound up to a problem-dependent constant factor. Finally, we experimentally showed that the GenTS-Explore algorithm can identify the best action even if the action set is exponentially large in \(d\).
2303.05221
SEAM: An Integrated Activation-Coupled Model of Sentence Processing and Eye Movements in Reading
Models of eye-movement control during reading, developed largely within psychology, usually focus on visual, attentional, lexical, and motor processes but neglect post-lexical language processing; by contrast, models of sentence comprehension processes, developed largely within psycholinguistics, generally focus only on post-lexical language processes. We present a model that combines these two research threads, by integrating eye-movement control and sentence processing. Developing such an integrated model is extremely challenging and computationally demanding, but such an integration is an important step toward complete mathematical models of natural language comprehension in reading. We combine the SWIFT model of eye-movement control (Seelig et al., 2020, doi:10.1016/j.jmp.2019.102313) with key components of the Lewis and Vasishth sentence processing model (Lewis & Vasishth, 2005, doi:10.1207/s15516709cog0000_25). This integration becomes possible, for the first time, due in part to recent advances in successful parameter identification in dynamical models, which allows us to investigate profile log-likelihoods for individual model parameters. We present a fully implemented proof-of-concept model demonstrating how such an integrated model can be achieved; our approach includes Bayesian model inference with Markov Chain Monte Carlo (MCMC) sampling as a key computational tool. The integrated Sentence-Processing and Eye-Movement Activation-Coupled Model (SEAM) can successfully reproduce eye movement patterns that arise due to similarity-based interference in reading. To our knowledge, this is the first-ever integration of a complete process model of eye-movement control with linguistic dependency completion processes in sentence comprehension. In future work, this proof of concept model will need to be evaluated using a comprehensive set of benchmark data.
Maximilian M. Rabe, Dario Paape, Daniela Mertzen, Shravan Vasishth, Ralf Engbert
2023-03-09T12:50:34Z
http://arxiv.org/abs/2303.05221v4
# SEAM: An Integrated Activation-Coupled Model of Sentence Processing and Eye Movements in Reading

###### Abstract

Models of eye-movement control during reading, developed largely within psychology, usually focus on visual, attentional, and motor processes but neglect post-lexical language processing; by contrast, models of sentence comprehension processes, developed largely within psycholinguistics, generally focus only on post-lexical language processes. We present a model that combines these two research threads, by integrating eye-movement control and sentence processing. Developing such an integrated model is extremely challenging and computationally demanding, but such an integration is an important step toward complete mathematical models of natural language comprehension in reading. We combine the SWIFT model of eye-movement control (Engbert et al., _Psychological Review, 112_, 2005, pp. 777-813) with key components of the Lewis and Vasishth sentence processing model (Lewis and Vasishth, _Cognitive Science, 29_, 2005, pp. 375-419). This integration becomes possible, for the first time, due in part to recent advances in successful parameter identification in dynamical models, which allows us to investigate profile log-likelihoods for individual model parameters. We present a fully implemented proof-of-concept model demonstrating how such an integrated model can be achieved; our approach includes Bayesian model inference with Markov Chain Monte Carlo (MCMC) sampling as a key computational tool. The integrated model, SEAM, can successfully reproduce eye movement patterns that arise due to similarity-based interference in reading. To our knowledge, this is the first-ever integration of a complete process model of eye-movement control with linguistic dependency completion processes in sentence comprehension. In future work, this proof of concept model will need to be evaluated using a comprehensive set of benchmark data.

Keywords: reading, eye-movement control, sentence processing, dynamical models, Bayesian inference, oculomotor control

## Introduction

What is the relationship between sentence processing and eye movements during reading? As an answer to this question, Just and Carpenter (1980, pp. 330-331) famously coined the eye-mind assumption, which states that "the eye remains fixated on a word as long as the word is being processed", and that "there is no appreciable lag between what is being fixated and what is being processed". But what does it mean for a word to be "processed"? Just and Carpenter's model of reading has three stages: Encoding of the word form and lexical access, identification of relationships between the words in a sentence (such as agent-action-object), and integration with information from previous sentences. Once these three stages are finished, the eyes proceed to the next word.1 Just and Carpenter's processing model is highly serial, which matches most readers' subjective experience that sentences are processed in an incremental, left-to-right fashion (Snell and Grainger, 2019). However, while readers do tend to make fixations incrementally in the reading direction, fixation sequences are not always in serial order: Instead of systematically shifting the gaze from one word to the next - something that only happens in about 50% of fixations (Seelig, 2021) - readers also skip words, refixate the same word, or regress to previous words (for a comprehensive discussion, see Rayner, 1998).
Footnote 1: There is a fourth stage in the model, called wrap-up, which only occurs at the end of a sentence, and whose purpose is to finish any processing that could not be completed at a previous point during reading (but see Warren et al., 2009, for a critical discussion).

This more complicated picture of reading aligns with the fact that the structure of many sentences in natural language does not correspond to simple agent-action-object sequences. Consider a sentence like (1), taken from Mertzen et al. (2023):

(1) It turned out that the attorney whose secretary had forgotten that the visitor was important frequently complained about the salary at the firm.

In this sentence, there are several dependencies between non-adjacent words, most strikingly the long-distance dependency between the noun _attorney_ and the verb _complained_. It is difficult to argue that the processing of the word _attorney_ is finished once the preamble _It turned out that the attorney_... has been read: It is clear that a verb must arrive at some point of which _attorney_ is the subject. Complete integration of _attorney_ can thus only be achieved when _complained_ is read after ten intervening words have been processed. It is therefore clear that the eyes will have to move forward even if the current word has not been completely integrated into the sentence structure. A well-established assumption in sentence processing is that a noun like _attorney_ is held in working memory until the dependency is completed, and needs to be retrieved when the verb is reached (Gibson, 1998, 2000; Lewis et al., 2006). A strong interpretation of the eye-mind assumption would predict that, given that the processing of _attorney_ is finalized at _complained_, readers should refixate _attorney_ once lexical access of _complained_ is complete. However, this is not what usually happens: While readers do make more regressions in more complex sentences that involve memory retrievals (e.g., Gordon et al., 2006; Jager et al., 2015; Lee et al., 2007; Mertzen et al., 2023), regressive eye movements nevertheless occur only in a minority of trials, and the word that is regressed to is not necessarily the word that needs to be retrieved to complete the dependency (Engelmann et al., 2013; Mitchell et al., 2008; von der Malsburg & Vasishth, 2011; von der Malsburg & Vasishth, 2013). Thus, while there is undoubtedly a connection between sentence processing and eye movements (Clifton et al., 2007; Frazier & Rayner, 1982; Rayner, 1998), it is much less direct than posited by the strong version of the eye-mind assumption (Reichle et al., 2009). Psycholinguistic studies of sentence processing typically rely on aggregated reading measures such as total fixation times, and models of language processing during reading, such as the classic Just and Carpenter (1980) model, usually ignore the complexity of eye-movement control. However, highly detailed models of eye-movement control do exist. An important line of work in cognitive psychology seeks to explain reading processes at the level of individual fixations and saccades by unpacking the underlying dynamics of the latent sub-processes involved. Several influential mathematical models of eye-movement control exist; a prominent example is the E-Z Reader model (Reichle et al., 2003). These models have historically focused on the effects of word-level properties such as word length, frequency, and predictability, and do not take into account higher-level processes such as linguistic dependency completion.
However, there have been several attempts at integrating models of sentence processing difficulty with eye-movement control, including E-Z Reader (Reichle et al., 2009), the model of Engelmann et al. (2013), and Uber-Reader (Reichle, 2020; Veldre et al., 2020). These models focus on different aspects of sentence processing, and have been evaluated against corpus data, such as the Schilling corpus (Schilling et al., 1998). Two models that investigate the interaction between eye-movement control and sentence comprehension using data from planned experiments are reported in Vasishth and Engelmann (2022) and Dotlacil (2021); both these investigations use a highly simplified version of E-Z Reader, that is, the Eye Movements and Movement of Attention (EMMA) model embedded within the ACT-R architecture (Salvucci, 2001). The simplified EMMA model has important limitations; for example, as discussed in Engelmann et al. (2013), the model only allows regressive eye movements to the preceding word. All of these existing models do capture a range of selected empirical phenomena and furnish important insights into the interaction between eye-movement control and sentence parsing processes. However, to our knowledge, no model exists that uses a fully specified mainstream model of eye-movement control that is integrated with a model of dependency completion in language comprehension; furthermore, as far as we are aware, such a detailed process model has never been evaluated using data from a planned psycholinguistic experiment. A major difficulty in developing a more complex integrated model is that a considerable number of model parameters needs to be estimated using empirical data. For models of such complexity, conventional methods like grid search will lead to intractability. In order to implement such a complex model, Bayesian parameter estimation using the model's likelihood function (or an approximation) provides a rigorous approach to statistical inference (Rabe et al., 2021; Schutt et al., 2017). Two major advantages of the Bayesian approach are that parameters can be regularized or constrained a priori, which makes computation more efficient compared to the traditional grid search method, and that the uncertainty of the parameter estimates can be taken into account when evaluating model fit. Regularization makes parameter estimation more tractable, and incorporating the uncertainty of parameter estimates gives a more realistic picture of model fit (Nicenboim et al., 2023). Although Bayesian model fitting has been implemented for a basic reading model (Dotlacil, 2018), this line of work currently still neglects many low-level physiological and higher-level cognitive aspects of reading. In this context, a major recent advance in Bayesian parameter inference for process-based models has been made by Rabe et al. (2021) and Seelig et al. (2020). This line of work relies on the dynamical model of eye-movement control developed by Engbert et al. (2005), and demonstrates how the Bayesian approach can be deployed even in highly complex process models, including models that cannot easily be expressed in terms of a likelihood function. Based on the methods applied in the work by Rabe et al. (2021) and Seelig et al.
(2020), it becomes possible, for the first time in eye-movement research, to ask the question: Can the complex lower-level cognitive and physiological principles of eye movements in reading be effectively integrated with a computational model of higher-level linguistic processing, taking into account the cost of long-distance dependency completion? Below, we present the Sentence-Processing and Eye-Movement Activation-Coupled Model (SEAM), a novel integrated model of sentence processing and eye-movement control in reading. By combining the Saccade-Generation With Inhibition by Foveal Targets (SWIFT) model with the cue-based memory retrieval model (LV05) proposed by Lewis and Vasishth (2005), we can integrate spatially distributed processing in eye-movement control with rule-based dependency completion in a Bayesian model-fitting framework. We carry out model simulation using a principled Bayesian workflow (Schad et al., 2020) to demonstrate the activation-based coupling between SWIFT and the Lewis and Vasishth (2005) model. We show that our model yields reliable Bayesian parameter estimates by generating gold-standard simulated data with known parameters and then recovering these parameters using the Bayesian parameter estimation approach. We also fit SEAM to recently published empirical data from an eye-tracking experiment investigating similarity-based interference (Mertzen et al., 2023), providing model-driven explanations for the observed eye-movement patterns. Given that SEAM simulates time-ordered fixation sequences, the model makes predictions for all spatial and temporal summary statistics that are relevant in the reading research literature (e.g., fixation probabilities, landing positions/saccade amplitudes, and fixation durations/reading times). This capability of the SEAM architecture makes it an important candidate model for theory development in psycholinguistics. We will first introduce the Lewis and Vasishth (2005) model of sentence processing, then introduce the basic workings of SWIFT, and finally proceed to our integrated model SEAM.

### The Activation-Based Model of Sentence Processing (Lewis & Vasishth, 2005)

During sentence reading, the human sentence processor has to incrementally integrate individual words into a syntactic structure, based on which sentence meaning can be derived. Lewis and Vasishth (2005) proposed a model of sentence processing (hereafter, we refer to this model as LV05) that is based on the cognitive architecture ACT-R (Anderson & Lebiere, 1998; Anderson, 2005). In the LV05 model, incoming words are incrementally integrated into syntactic constituents that are stored in memory as _chunks_. Memory chunks in LV05 carry information in the form of features, which can be used to access them in memory later on. Chunks also have fluctuating activation values that are determined by recency and by cue match during retrieval events. For instance, in a sentence like (2), as the sentence is read word by word, the noun phrases _the robber_ and _the policeman_ are stored as memory chunks as soon as they are read. The verbs _chased_ and _escaped_ then each trigger retrievals of their respective arguments from memory.

(2) The robber that the policeman in the patrol car chased escaped.

Taking the retrieval at the verb _escaped_ as an example, the dependency needs to be completed by searching working memory for a suitable memory chunk to serve as a syntactic subject.
The search process is cue-based, that is, the verb specifies a set of linguistic features such as \(\pm\)noun or \(\pm\)animate to identify the correct dependent, and existing memory chunks are reactivated based on their feature specifications. The best-matching candidate is usually retrieved, but because memory activation is noisy, misretrievals occasionally occur. In addition, processing is slowed when multiple memory chunks, such as _the robber_ and _the policeman_ in (2), match the retrieval cues and compete for activation, which is called the fan effect (e.g., Anderson, 1990). In LV05, the latency of a given retrieval is governed by a set of equations taken from the ACT-R architecture (Anderson et al., 2004), which determine each chunk's activation at a given point in time. Suppose that a noun phrase, say _the robber_ in (2), has been stored in memory as memory chunk \(k\). When a retrieval is triggered while processing word \(n\) (_escaped_) later on, chunk \(k\)'s activation value at word \(n\) is calculated as

\[A_{k,n}\left(t\right)=S_{k}\left(t\right)+P_{k}\left(t\right)+B_{k}\left(t\right)\, \tag{1}\]

where \(S_{k}\) is the memory association strength, \(P_{k}\) is the mismatch penalty, and \(B_{k}\) is the chunk-specific baseline activation. The fan \(\phi_{kl}\left(t\right)\) of competing retrieval candidates for each feature \(l\) of memory chunk \(k\) decreases the chunk's association strength, which also depends on the \(S_{\max}\) (_maximum activation strength_) parameter, i.e.,

\[S_{k}\left(t\right)=\sum_{l}\left[S_{\max}-\log\phi_{kl}\left(t\right)\right]. \tag{2}\]

The fan effect variable \(\phi_{kl}\left(t\right)\) is defined as the number of memory chunks with feature \(l\) at time \(t\), including memory chunk \(k\) itself, so that \(\phi_{kl}\left(t\right)\geq 1\). The mismatch penalty decreases activation for all retrieval cues \(l\) that do not match the corresponding feature of memory chunk \(k\), i.e.,

\[P_{k}\left(t\right)=\sum_{l}\Delta_{kl}\, \tag{3}\]

where

\[\Delta_{kl}:=\begin{cases}0&\text{if cue}_{l}=\text{feature}_{kl}\\ -p&\text{otherwise}\end{cases} \tag{4}\]

and \(p\geq 0\) is a free parameter specifying the mismatch penalty incurred by each unmatched feature. Chunks become active when words are encoded or when retrievals are performed, and then start to decay. The resulting baseline activation at time \(t\) is given by

\[B_{k}\left(t\right)=\sum_{i}\exp\left(-d\cdot\left[t-t_{ik}\right]\right) \tag{5}\]

where \(d\) is a decay parameter and \(t_{ik}\) is the time of the \(i\)-th memory access (encoding or retrieval) of memory chunk \(k\). Activation values are subject to stochastic noise controlled by the _ans_ (_activation noise_) parameter, so that

\[A_{k,n}^{\prime}\left(t\right)\sim\text{Logistic}\left(A_{k,n}\left(t\right),ans\right). \tag{6}\]

The memory chunk \(k_{n}^{\star}\) with the highest memory activation \(A_{k,n}^{\prime}\) is retrieved at retrieval event \(n\), and the retrieval latency is computed as

\[t_{k,n}=F\cdot\exp\left[-A_{k,n}^{\prime}\left(t\right)\right]\, \tag{7}\]

where \(F\) is the _latency factor_, a free linear scaling parameter. Equation (7) can be used to make quantitative predictions for reading times, and the LV05 model has been used to model a variety of phenomena in the sentence-processing literature (see Engelmann et al., 2019; Vasishth & Engelmann, 2022, for a review).
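To make the retrieval equations concrete, the following is a minimal Python sketch of Equations (1)-(7). The feature coding, the parameter values, and the fan count (here, the number of chunks sharing a feature value) are illustrative assumptions, not the reference implementation.

```python
import numpy as np

# Illustrative parameter values (S_max, p, d, F, ans in the text's notation)
S_MAX, PENALTY, DECAY, LATENCY_F, ANS = 1.5, 1.5, 0.5, 0.2, 0.2
rng = np.random.default_rng(1)

def activation(chunk, cues, accesses, all_chunks, t):
    """A_{k,n}(t) = S_k(t) + P_k(t) + B_k(t), Equations (1)-(5)."""
    # Equation (2): association strength, reduced by the fan of each feature
    fan = lambda l, v: sum(1 for c in all_chunks if c.get(l) == v)
    S = sum(S_MAX - np.log(fan(l, v)) for l, v in chunk.items())
    # Equations (3)-(4): mismatch penalty for every unmatched retrieval cue
    P = sum(-PENALTY for l, v in cues.items() if chunk.get(l) != v)
    # Equation (5): baseline activation decaying since each memory access
    B = sum(np.exp(-DECAY * (t - t_ik)) for t_ik in accesses if t_ik < t)
    return S + P + B

def retrieve(chunks, access_times, cues, t):
    """Noisy retrieval: Equation (6) adds logistic noise; Equation (7)
    turns the winning activation into a retrieval latency."""
    noisy = [rng.logistic(activation(c, cues, a, chunks, t), ANS)
             for c, a in zip(chunks, access_times)]
    k = int(np.argmax(noisy))
    return k, LATENCY_F * np.exp(-noisy[k])

# e.g., two noun chunks competing at a verb that cues an animate noun
chunks = [{"cat": "noun", "animate": True}, {"cat": "noun", "animate": False}]
times = [[0.0], [0.4]]
print(retrieve(chunks, times, {"cat": "noun", "animate": True}, t=1.0))
```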
However, the LV05 model can only be straightforwardly applied to paradigms in which sentences are read strictly incrementally, such as self-paced reading: The model can create chunks, track their activations, and integrate them with each other via retrievals, but it does not account for eye fixations, and cannot capture cases in which the order of fixations mismatches the serial word order due to skipping and regressions. To fully capture "natural" sentence reading, the LV05 model thus needs to be interactively integrated with a model that accounts for spatial and temporal aspects of eye movements. The dynamical SWIFT model (Engbert et al., 2002; Engbert et al., 2005) is a good candidate for integration with the LV05 model. Its main advantages are that it (a) has recently been implemented for Bayesian parameter inference (Rabe et al., 2021; Seelig et al., 2020), (b) predicts and explains all empirically observable saccades in sentence reading, and (c) allows for (but does not enforce) parallel processing of words. Even though SWIFT itself does not follow an ACT-R-based architecture like EMMA (Engelmann et al., 2013; Salvucci, 2001; Vasishth & Engelmann, 2022), an integration with ACT-R-based models such as LV05 is possible via activation-based coupling, as we will detail below after a brief introduction of SWIFT.

### The SWIFT Model of Eye-Movement Control (Engbert et al., 2005)

SWIFT is a model of eye-movement control in reading implemented in a dynamical cognitive modeling framework (Beer, 2000; Engbert, 2021). At its core, its internal timing processes and word activations govern the temporal control and target selection for saccadic eye movements. Words with high activation values are more likely to be selected as saccade targets. SWIFT assumes that all words that fall within a _processing span_ around the current fixation location are processed in parallel (Engbert et al., 2002).2 The processing rate \(\Lambda_{j}\left(t\right)\) of any given word \(j\) at time \(t\) depends on a number of factors such as gaze eccentricity, that is, the distance between word \(j\) and the currently fixated word, such that words that are further away from the visual focus are processed more slowly. In SWIFT, each word in the sentence passes through a _lexical_ and a _post-lexical_ processing stage. During lexical processing, word recognition and identification take place. While word recognition is ongoing, the activation \(a_{j}(t)\) associated with the processed word \(j\) rises to a maximum threshold. The threshold is modulated by the word's corpus frequency, as frequent words generally require less processing than less frequent words. Once the word is identified, post-lexical processing begins and word activation decreases again.3 Post-lexical processing, however, is not explicitly modeled in SWIFT. Although SWIFT keeps track of the processing stage of words in the sentence, it has no higher-level representation of the sentence's constituents or of the entire word sequence. Adjacent words may have an influence on processing difficulty, but there is no mechanism to account for difficulty due to dependency completion processes at the sentence level.

Footnote 3: It is possible to include more detailed processes of word recognition in models of eye-movement control, e.g., the open bigram model in _OB1-Reader_ (Snell et al., 2018), if letter-level effects seem relevant to a specific problem in eye-movement control.
While the relative word activations at the time of programming a saccade determine the relative probability of each word to be selected as the upcoming target, the timing of saccades is relatively independent (Findlay & Walker, 1999) and involves a cascade of several processes. The cascade starts with a global timer, which triggers the _labile_ and subsequent _non-labile_ saccade stages, a distinction motivated by oculomotor performance in the double-step paradigm (Becker & Jurgens, 1979). During the labile stage, saccades can be canceled and a new target can be selected. During the non-labile stage, cancellation is no longer possible. The execution of the saccade itself is a noisy process subject to systematic (range) and random error (McConkie et al., 1988), where the systematic error component can be explained by a Bayesian-optimal estimation of the saccade target position (Engbert & Krugel, 2010). Target selection in SWIFT is inherently stochastic, as it depends on the dynamic, relative word activations at any given point in time. Words with high activation values are more likely to be selected as targets than words with lower activation. The probability \(\pi_{j}(t)\) of selecting word \(j\) as the next saccade target at time \(t\) is given as

\[\pi_{j}(t)=\frac{[a_{j}(t)]^{\gamma}}{\sum_{k=1}^{N_{\text{W}}}{[a_{k}(t)]^{\gamma}}} \tag{8}\]

where \(N_{\text{W}}\) is the number of words in the sentence and \(a_{j}(t)\) is the activation of word \(j\) at time \(t\). The relation between the activation \(a_{j}(t)\) of a word and its selection probability \(\pi_{j}(t)\) also entails that easy-to-process words (e.g., frequent words) pass through lexical and post-lexical processing faster than difficult-to-process (e.g., less frequent) words. The former are therefore in a state of high activation for a shorter time period, consequently less likely to be fixated, and thus often skipped. The free parameter \(\gamma\) modulates the relationship between word activations and selection probabilities. For \(\gamma\to 0\), words are selected randomly with equal probability, regardless of their actual activation values (if greater than zero). For \(\gamma=1\), there is a perfect linear relationship between activations and selection probabilities (Luce's choice rule). Higher values \(\gamma\rightarrow\infty\) enforce a winner-takes-all principle so that the word with the highest activation always "wins." Word activations and saccade timers are random walks that increase or decrease over time, with different transition rates for different timers and individual word activations. The state of the model at time \(t\) is given by the vector \(n=(n_{1},n_{2},...,n_{4+N_{w}})\), where the components \(n_{j}\) represent the states of the subprocesses. States 1 to \(N_{w}\) keep track of the (post-)lexical processing of words, while states \(N_{w}+1\) to \(N_{w}+4\) correspond to saccade-related and additional stochastic variables (Table 1). In each of the possible transitions from state \(n=(n_{1},n_{2},...)\) to \(n^{\prime}=(n^{\prime}_{1},n^{\prime}_{2},...)\), only one of the sub-processes \(n_{i}\) is changed by one unit. The discrete stochastic variables \(\{n_{j}\}\) at time \(t\) map to the activation variables \(\{a_{j}(t)\}\).
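A minimal Python sketch of the target selection rule in Equation (8) follows; the activation values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def select_target(a, gamma):
    """Sample a saccade target from the word activations, Equation (8)."""
    weights = np.asarray(a, dtype=float) ** gamma
    p = weights / weights.sum()      # pi_j(t)
    return rng.choice(len(a), p=p)   # index of the sampled target word

a = [0.2, 1.0, 0.6, 0.1]             # illustrative activations of four words
print([select_target(a, g) for g in (0.01, 1.0, 10.0)])
# gamma -> 0: nearly uniform; gamma = 1: Luce's rule; large gamma: winner-takes-all
```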
For the numerical simulation of the model, an algorithm can be derived from the corresponding stochastic evolution equation (master equation), as detailed by Seelig et al. (2020). More detailed assumptions about the post-lexical stage can be implemented by changing the transition rates \(\{w_{j}(t)\}\) that control the stochastic transitions of the activations \(\{a_{j}(t)\}\). The transition rate of the lexical stage is kept, while it is modified during the post-lexical stage, i.e.,

\[w_{j}(t)=\begin{cases}\alpha\cdot\Lambda_{j}(t)&\text{in lexical stage}\\ \max\left[\alpha\cdot\Lambda_{j}(t)\cdot\text{proc},\omega\right]&\text{in post-lexical stage}\\ 0&\text{otherwise (complete)}\end{cases}, \tag{9}\]

where \(\alpha\) is the word's baseline processing difficulty determined by frequency, \(\Lambda\) is the processing rate, _proc_ is the relative processing speed for post-lexical processing, and \(\omega\) is a minimum decay parameter.4 In the integrated SEAM model, word activations in SWIFT are coupled with memory activations in LV05 in a Bayesian modeling framework by adapting the formula in Equation (9).

Footnote 4: The transition rate for post-lexical word \(j\) cannot be lower than \(\omega\), which ensures a decaying word activation even if there is no or little processing at a given time \(t\), e.g., when the word is not within the processing span.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline\hline
Process & \multicolumn{3}{c}{Transition to...} & \multicolumn{3}{c}{Transition rate \(W_{n^{\prime}n}\)} \\
\hline
Word processing & \(n^{\prime}_{j}\) & \(=\) & \(n_{j}\pm 1\) & \(w_{j}\) & \(=\) & \(\alpha\cdot\Lambda_{j}(t)\) (for word \(j\)) \\
Saccade timer & \(n^{\prime}_{N_{w}+1}\) & \(=\) & \(n_{N_{w}+1}+1\) & \(w_{N_{w}+1}\) & \(=\) & \(N_{t}/t_{sac}\cdot(1+ha_{k}(t)/\alpha)^{-1}\) \\
Labile program & \(n^{\prime}_{N_{w}+2}\) & \(=\) & \(n_{N_{w}+2}+1\) & \(w_{N_{w}+2}\) & \(=\) & \(N_{l}/\tau_{l}\) \\
Non-labile program & \(n^{\prime}_{N_{w}+3}\) & \(=\) & \(n_{N_{w}+3}+1\) & \(w_{N_{w}+3}\) & \(=\) & \(N_{n}/\tau_{n}\) \\
Saccade execution & \(n^{\prime}_{N_{w}+4}\) & \(=\) & \(n_{N_{w}+4}+1\) & \(w_{N_{w}+4}\) & \(=\) & \(N_{x}/\tau_{x}\) \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Stochastic Transitions Between Internal States From \(n=(n_{1},n_{2},...)\) to \(n^{\prime}=(n^{\prime}_{1},n^{\prime}_{2},...)\)

The fact that SWIFT implements detailed mechanisms of word processing and saccade preparation is reflected in its number of parameters. Fitting the eye-movement model to experimental data started with hand-picking plausible parameter values and later progressed to grid search (Reichle et al., 1998) and genetic algorithms (Engbert et al., 2002), optimizing the fit between empirical and simulated summary statistics. Based on the development of a likelihood approximation (Seelig et al., 2020), a fully Bayesian framework is now available for parameter inference (Rabe et al., 2021). The likelihood framework permits objective parameter fitting independent of a set of selected summary statistics, since full fixation sequences enter the likelihood computation. Using large-scale numerical simulations, it has been shown that SWIFT can reliably reproduce fixation durations, fixation probabilities, and saccade amplitudes at the level of global and by-participant summary statistics, without using those summary statistics for the purpose of parameter fitting.
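The following is a hedged Python sketch of how one update step of such a master-equation simulation might look: word transition rates are computed per Equation (9), and the next transition across all subprocesses is sampled Gillespie-style. All numbers and names are illustrative; this is not the reference SWIFT implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def word_rate(stage, alpha, Lambda_j, proc=0.5, omega=0.05):
    """Transition rate w_j(t) for one word, per Equation (9)."""
    if stage == "lexical":
        return alpha * Lambda_j
    if stage == "postlexical":
        # decay rate cannot fall below omega (Footnote 4)
        return max(alpha * Lambda_j * proc, omega)
    return 0.0  # completely processed

def gillespie_step(rates):
    """Sample which subprocess transitions next, and after how long."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    dt = rng.exponential(1.0 / total)        # waiting time to next transition
    which = rng.choice(len(rates), p=rates / total)
    return which, dt

rates = [word_rate("lexical", 2.0, 0.8),      # one word in the lexical stage
         word_rate("postlexical", 2.0, 0.1),  # one word decaying post-lexically
         1.0 / 0.250]                         # e.g., a saccade-timer rate
print(gillespie_step(rates))
```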
### SEAM: Activation-Based Coupling of SWIFT and LV05

In baseline SWIFT, processing a word always starts out in the lexical processing stage. Once the word activation \(n_{j}\left(t\right)\) has reached its threshold \(N_{j}\) at time \(t\), it begins post-lexical processing, and activation starts to decrease. When the activation has returned to zero, the word is completely processed. Figure 1 abstractly shows the activation histories of three hypothetical words. The figure assumes that the eyes move sequentially from word (a), to (b), to (c), leading to a somewhat sequential onset of their first processing (\(t_{1}\), \(t_{2}\), and \(t_{5}\)). The first stage of processing is the lexical stage. During this stage, activations rise until they reach their respective maxima (\(N_{\text{A}}\), \(N_{\text{B}}\), and \(N_{\text{C}}\)), which depend on printed word frequency. Given that saccade targeting depends on activation, the words in question are most likely to be selected as a saccade target if the upcoming saccade is programmed at times \(t_{3}\), \(t_{4}\), and \(t_{6}\), respectively; these are also the times at which the words enter the post-lexical processing stage. During post-lexical processing, activations decrease again, making it in turn less likely for the respective word to be selected as a target. Once the activation returns to zero (\(t_{5}\), \(t_{8}\), and \(t_{9}\)), the word is assumed to have completed processing. A feature common to SWIFT and LV05 is that both models use activation values to guide processing. SWIFT uses word activations to select words as saccade targets, while LV05 uses memory activations to select memory chunks as retrieval targets. Our integrated model SEAM keeps these activations separate, but implements an interaction, so that memory activations in LV05 modulate word activations in the SWIFT model. Therefore, rather than assuming that the sentence processor has direct control of the eye-movement targeting system, we propose an indirect, stochastic influence on saccade targeting via memory activations. This is in good agreement with eye-tracking studies carried out with larger-than-usual sample sizes, which show that the effects of sentence processing cost on fixation and other measures have relatively small magnitudes (e.g., Jager et al., 2020); the largest effect sizes are generally driven by low-level factors such as frequency and word length (Boston et al., 2008). In SEAM, activations in the LV05 component reflect the construction of a sentence representation and affect word activations, thereby stochastically influencing target selection in the eye-movement component. As in SWIFT, the activation gradient of a word in SEAM is mainly determined by the transition rate, which varies between processing stages. Compared to SWIFT, the sequence of processing stages in SEAM is extended by stages that reflect the cost of memory retrieval, which can account for post-lexical processing difficulty. Possible interactions of memory retrieval and the word activations include: (a) post-lexical processing of the retrieval trigger is delayed by the retrieval process; and (b) retrieval candidates are reactivated so that they attract regressions from the currently fixated region that caused the retrieval (that is, the retrieval trigger). In Figure 2, activation histories of the same three words from the SWIFT example in Figure 1 are shown.
Like the baseline SWIFT model, words in SEAM go through a lexical and a post-lexical processing stage before they are considered completely processed. However, SEAM additionally accounts for the resolution of a linguistic dependency during post-lexical processing of word C. Once the words are lexically accessed (\(t_{3}\), \(t_{4}\), and \(t_{6}\)), they are encoded as chunks in SEAM's memory module, along with their features, as in the LV05 model. Words A and B are assumed not to trigger a dependency completion process; this is the case for most nouns. However, when word C, which could be a verb, is processed and the associated chunk is stored in memory, a subject-verb dependency must be resolved. A retrieval is thus triggered. The assumption that nouns do not trigger a dependency completion process is obviously an oversimplification, but it is reasonable for the data being modeled in this paper, as in the experimental design of Mertzen et al. (2023) the theoretically interesting dependency completion occurs at the verb. During retrieval, all words that are fully processed before the processing of word C completes are counted as retrieval candidates. Candidate words enter a retrieval stage in which activation increases until the retrieval process finishes.5 The rate of the activation increase differs by the degree to which the retrieval candidate's features match the retrieval cues, implementing a core assumption of the LV05 model.

Footnote 5: A word can also become a candidate after the retrieval process has started. Word A, for example, is already a candidate at the time the post-lexical processing of word C starts at time \(t_{6}\), given that it was already completely processed at time \(t_{5}\). Therefore, the retrieval stage of word A starts immediately with the start of the post-lexical stage of word C.

The retrieval stage ends when one candidate reaches a threshold value, which is a fraction \(\mu_{3}\) of the maximum activation of the retrieval trigger, \(N_{\text{C}}\). Because post-lexical processing in SEAM is only finished after all dependencies have been resolved, the post-lexical activation of the retrieval trigger is guaranteed not to fall below a fraction \(\mu_{2}\) of its maximum activation during retrieval. This is why the post-lexical activation of word C does not change between \(t_{7}\) and \(t_{10}\). In this example, despite entering the retrieval phase at a later time, word B reaches the retrieval threshold at time \(t_{10}\) before word A, thereby concluding the retrieval process. Consequently, the post-lexical processing of word C continues and all retrieval candidates, that is, word A and word B, enter a post-retrieval stage, which is equivalent to an additional post-lexical processing stage. This also entails that the retrieval phase of word A is aborted, which would otherwise have reached threshold at time \(t_{11}\).

Figure 2: Word Activation in SEAM. _Note._ Theoretical activation history of three words (A, B, and C). Colors of line segments correspond to the processing stage active at a given time. Activation maxima for the transition from lexical to post-lexical processing are \(N_{\text{A}}\), \(N_{\text{B}}\), and \(N_{\text{C}}\), respectively; the post-lexical activation of the retrieval trigger does not fall below \(\mu_{2}N_{\text{C}}\) during retrieval. Activations are displayed as continuous but are actually implemented as discrete counters.
The transition rates (Equation 9) of the baseline SWIFT model for word \(j\) are replaced by

\[w^{\prime}_{j}(t)=\begin{cases}\alpha\cdot\Lambda_{j}\left(t\right)&\text{in lexical stage}\\ \max\left[\alpha\cdot\Lambda_{j}\left(t\right)\cdot\text{proc},\omega\right]&\text{as retrieval trigger (}j=m\wedge n_{4+j}\left(t\right)>\mu_{2}N_{4+j}\text{)}\\ 0&\text{as retrieval trigger (}j=m\wedge n_{4+j}\left(t\right)\leq\mu_{2}N_{4+j}\text{)}\\ \max\left[\alpha\cdot\Lambda_{j}\left(t\right)\cdot\text{proc},\omega\right]&\text{in post-lexical stage}\\ \frac{\mu_{3}N_{4+m}}{F}\exp\left[A^{\prime}_{j,m}\left(t\right)\right]&\text{as retrieval candidate (}j\neq m\text{)}\\ \max\left[\alpha\cdot\Lambda_{j}\left(t\right)\cdot\text{proc},\omega\right]&\text{in post-retrieval stage}\\ 0&\text{otherwise (complete)}\end{cases} \tag{10}\]

where \(m\) is the current retrieval trigger that needs to form a dependency. Altogether, SEAM extends the baseline SWIFT model parameters (Rabe et al., 2021; Seelig et al., 2020) with seven additional model parameters. The parameters \(d\) (decay), \(S_{\text{max}}\) (maximum memory activation strength), \(F\) (retrieval latency scaling factor) and \(p\) (mismatch penalty), which modulate \(w^{\prime}\left(t\right)\) through \(A^{\prime}_{j,m}\left(t\right)\), are directly based on their LV05 implementations (Lewis & Vasishth, 2005). Moreover, the link between memory activations in LV05 and processing rates in SWIFT is complemented by the three new model parameters \(\mu_{1}\), \(\mu_{2}\), and \(\mu_{3}\), as detailed below. Some parameters of the LV05 model, in particular for goal activation and noise (\(G\) and _ans_), are ignored in the present implementation. Variation in the goal activation parameter is usually used to model individual-level capacity differences (e.g., Daily et al., 2001; Mätzig et al., 2018; Vasishth & Engelmann, 2022), which is not of interest in the present work. The goal activation is fixed at 1.0, which gives equal weight to all retrieval cues. The noise parameter _ans_ is replaced by the built-in stochasticity of SWIFT. Moreover, the parameters \(S_{\text{max}}\) and \(F\) are not independent in terms of the resulting memory activation, which is why we only estimate \(F\) as a free parameter and keep \(S_{\text{max}}\) at a fixed default value of 1.5. In the present study, we also exclude \(\mu_{1}\), the fixed time needed to execute a production rule, by setting it to 0, because we assume this time to overlap with some of the oculomotor processes already present in the model. Since \(S_{\text{max}}\) is fixed, we also decided to fix the mismatch penalty \(p\) at its default value, as it is the relation between the two parameters that is critical. Thus, the only parameters that were fit to the Mertzen et al. (2023) data were \(F\), \(d\), \(\mu_{2}\), and \(\mu_{3}\). For a complete list of model parameters and default values in SEAM, see Appendix A.

For our implementation of SEAM, we opted for a simplified version of the LV05 model (Engelmann, 2015) and the latest version of SWIFT (Rabe et al., 2021).6 SEAM connects the baseline eye-movement control architecture of SWIFT with the interactive working memory module of LV05 via activation-based coupling: reading words in SWIFT leads to the creation of memory chunks and can trigger retrievals in LV05, whereas chunk activations computed by LV05 modulate word activations in SWIFT.

Footnote 6: The principal reason for using the simplified version of the LV05 model is tractability. Using the full ACT-R architecture, which is Lisp-based, would require much more complex engineering decisions, and would make the model inaccessible to researchers who are unfamiliar with Lisp but interested in exploring its behavior with novel data.
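To make the stage logic concrete, here is a minimal sketch of Equation (10) as a plain function. This is an illustration, not the released SWIFT/SEAM code: the stage labels, argument names, and default values are our assumptions, and \(\Lambda_{j}(t)\) is reduced to a scalar argument; only the case structure follows Equation (10).

```python
import math

# Illustrative transition rate w'_j(t) from Equation (10). Lambda_j stands
# in for the word's processing-rate factor, A_prime for the LV05 memory
# activation A'_{j,m}(t), N_trigger for the trigger's maximum activation
# N_{4+m}, and n_trigger for its current activation n_{4+m}(t).
def transition_rate(stage, Lambda_j, A_prime, n_trigger, N_trigger,
                    alpha=1.0, proc=1.0, omega=0.1, mu2=0.2, mu3=0.5, F=0.2):
    # Default parameter values above are arbitrary placeholders.
    post_rate = max(alpha * Lambda_j * proc, omega)
    if stage == "lexical":
        return alpha * Lambda_j
    if stage == "retrieval_trigger":
        # Post-lexical processing of the trigger stalls once its
        # activation has dropped to the floor mu2 * N_{4+m}.
        return post_rate if n_trigger > mu2 * N_trigger else 0.0
    if stage in ("post_lexical", "post_retrieval"):
        return post_rate
    if stage == "retrieval_candidate":
        # Candidates rise toward the threshold mu3 * N_{4+m} within the
        # LV05 retrieval latency F * exp(-A'), which yields this rate.
        return (mu3 * N_trigger / F) * math.exp(A_prime)
    return 0.0  # word completely processed
```

With a rate of this form, a word's activation counter is then incremented (or decremented, in post-lexical stages) in discrete steps, consistent with the note in Figure 2 that activations are implemented as discrete counters.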
## Data Availability

All experimental and simulated data, analysis code, and computational models (SEAM and SWIFT) reported in this paper are available at the Open Science Framework ([https://doi.org/10.17605/OSF.IO/AD5NX](https://doi.org/10.17605/OSF.IO/AD5NX)) and at GitHub ([https://github.com/mmrabe/SEAM-2023-Paper](https://github.com/mmrabe/SEAM-2023-Paper)).

### Experimental Study (Mertzen et al., 2023)

To test the predictions of the integrated model, we use data from a memory interference experiment conducted with 61 English native speakers (Mertzen et al., 2023). This experiment was originally planned with 120 participants, but due to the pandemic, data collection had to be aborted. Our inability to reach the target number of participants has consequences for model evaluation, as discussed later. The Mertzen et al. (2023) experiment employed a fully crossed distractor subjecthood (2) \(\times\) animacy (2) design that closely mirrored an experiment reported in Van Dyke (2007). Examples of the four conditions are shown below in example (3).

(3)

a. It turned out that the \(\textbf{attorney}^{+subj}_{+anim}\) whose secretary had forgotten about the important \(\underline{\text{meeting}^{-subj}_{-anim}}\) frequently \(\textbf{complained}\left\{\!\!\begin{array}{l}subj\\anim\end{array}\!\!\right\}\) about the salary at the firm.

b. It turned out that the \(\textbf{attorney}^{+subj}_{+anim}\) whose secretary had forgotten about the important \(\underline{\text{visitor}^{-subj}_{+anim}}\) frequently \(\textbf{complained}\left\{\!\!\begin{array}{l}subj\\anim\end{array}\!\!\right\}\) about the salary at the firm.

c. It turned out that the \(\textbf{attorney}^{+subj}_{+anim}\) whose secretary had forgotten that the \(\underline{\text{meeting}^{+subj}_{-anim}}\) was important frequently \(\textbf{complained}\left\{\!\!\begin{array}{l}subj\\anim\end{array}\!\!\right\}\) about the salary at the firm.

d. It turned out that the \(\textbf{attorney}^{+subj}_{+anim}\) whose secretary had forgotten that the \(\underline{\text{visitor}^{+subj}_{+anim}}\) was important frequently \(\textbf{complained}\left\{\!\!\begin{array}{l}subj\\anim\end{array}\!\!\right\}\) about the salary at the firm.

In the example above, processing the verb _complained_ is expected to trigger a retrieval for an animate subject noun phrase. In all sentences, _attorney_ is the grammatically correct subject of _complained_, and should thus be retrieved. However, the distractor noun phrase (_meeting_ or _visitor_) may interfere with the retrieval of _attorney_. The distractor is _visitor_ in the \(+\)animate or _meeting_ in the \(-\)animate condition, and it is either a subject (\(+\)subject) or an object (\(-\)subject) of the embedded clause. According to cue-based retrieval theory, both subjecthood and animacy of the distractor should lead to additional difficulty for resolving the critical dependency. This is due to the fan effect (e.g., Anderson, 1990), which is also known as similarity-based interference (Jäger et al., 2017): When the feature specification of a distractor overlaps with that of the retrieval target, it diverts some of the retrieval activation from the target to itself. The activations of both the target and the distractor are reduced, leading to longer retrieval times; what ends up being retrieved in a particular simulation run (target or distractor) depends on which chunk happens to have higher activation (this can vary across simulation runs due to stochastic noise in the activation). It is therefore possible that the distractor is sometimes erroneously retrieved.
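The interference mechanism just described can be illustrated with a small sketch of the spreading-activation computation that LV05 inherits from ACT-R. The goal activation of 1.0 split evenly over two cues (so \(W=0.5\)) and \(S_{\text{max}}=1.5\) follow the values stated in this paper; the mismatch penalty value and all function names are illustrative assumptions.

```python
import math

S_MAX = 1.5   # maximum associative strength (fixed default in SEAM)
W = 0.5       # source activation per cue: goal activation 1.0 over 2 cues
P = -1.0      # mismatch penalty; illustrative value, not the fitted one

def retrieval_activation(chunk, cues, chunks, base=0.0):
    """Spreading activation of one candidate chunk given the retrieval cues."""
    A = base
    for feature, value in cues.items():
        fan = sum(1 for c in chunks if c[feature] == value)  # cue overload
        if chunk[feature] == value:
            A += W * (S_MAX - math.log(fan))  # associative boost, reduced by fan
        else:
            A += P                            # penalty for a mismatching cue
    return A

cues = {"subject": True, "animate": True}    # cues set by the verb
target = {"subject": True, "animate": True}  # "attorney"
low = {"subject": False, "animate": False}   # "meeting", condition (3a)
high = {"subject": True, "animate": True}    # "visitor", condition (3d)

for distractor in (low, high):
    chunks = [target, distractor]
    a_t = retrieval_activation(target, cues, chunks)
    a_d = retrieval_activation(distractor, cues, chunks)
    print(f"target: {a_t:.2f}  distractor: {a_d:.2f}")
# With the fully matching distractor, the target's activation drops
# (1.50 -> 0.81) and the distractor becomes an equally strong competitor.
```

In LV05, lower activation means a longer retrieval latency (\(T=Fe^{-A}\)), so the fan-induced drop in the target's activation produces the slowdown, and the near-tie in activations is what allows occasional misretrieval of the distractor.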
As indices of increased processing difficulty, we expect additive effects of animacy and subjecthood of the distractor on regression path durations and outgoing regression probabilities on the critical verb (_complained_). The primary region of interest where the effect of the subjecthood and animacy manipulation should manifest is the verb; however, because similarity-based interference effects have been shown to occur in the region just before the verb (Lago et al., 2021; Van Dyke, 2007), Mertzen et al. (2023) also investigated the effect at the adverb (_frequently_) that preceded the critical verb. For this reason, in our investigations we also report model fits for this pre-critical region. In summary, similarity-based interference accounts predict that conditions (3b,d) should be more difficult to process than conditions (3a,c) due to the animacy of _visitor_, and conditions (3c,d) should be more difficult to process than conditions (3a,b) due to the distractor being in subject position. As indices of increased processing difficulty, additive effects of distractor animacy and distractor subjecthood were expected in reading times and outgoing regression probabilities. An interaction of distractor subjecthood and animacy was not predicted but is reported in Mertzen et al. (2023) for completeness; their analysis showed no evidence for an interaction. In our simulations, we focus on the main effects of subjecthood and animacy, but we note here that, although not shown in the simulations below, SEAM also predicts no interaction between these factors.

In this summary of the Mertzen et al. (2023) results, we report only regression path duration and first-pass regressions out from the pre-critical adverb and the critical verb; for full details of all experimental results, please see the original paper. The effects of animacy and subjecthood (coded as sum contrasts) were analyzed using Bayesian mixed-effects models. Subject and item were specified as random effects in the models, with a full variance-covariance matrix for subject and item random effects. The models were implemented with _brms_ (Bürkner, 2017, 2018, 2021), an interface to _Stan_ (Carpenter et al., 2017). Priors were mildly informative Gaussian distributions for the linear model coefficients (intercept and slopes) and regularizing Lewandowski-Kurowicka-Joe (LKJ) priors (Lewandowski et al., 2009) for random effects correlation matrices; setting the LKJ prior's parameter \(\nu\) to \(2\) downweights extreme correlations like \(\pm 1\). For a detailed tutorial on linear mixed models in the Bayesian setting, see chapter 5 of Nicenboim et al. (2023), or Sorensen et al. (2016).

The results in Mertzen et al. (2023) showed reading time patterns consistent with effects of subjecthood (syntactic interference) and effects of animacy (semantic interference). Figure 3 shows that on the pre-critical adverb, the effect of subjecthood manifests as longer regression path durations (RPD) and more first-pass regressions out (FPR) for conditions that have a \(+\)subject distractor (95% credible intervals (CrIs): RPD \([17,63]\) ms, FPR \([3,11]\)%). Similarly, the effect of animacy shows longer regression-path durations and an increase in first-pass regressions out for conditions with animate distractors compared to conditions with inanimate distractors (95% CrIs: RPD \([8,57]\) ms, FPR \([2,8]\)%).
The subjecthood \(\times\) animacy interaction in regression-path duration is centered on zero; for first-pass regressions, the interaction has a negative sign (\([-7,0]\)%). On the critical verb, the effects of subjecthood and animacy show a similar pattern of longer regression path duration and an increase in first-pass regressions out (Subjecthood 95% CrIs: RPD \([3,52]\) ms, FPR \([1,8]\)%; Animacy 95% CrIs: RPD \([0,39]\) ms, FPR \([-1,5]\)%). The interaction is centered around zero for regression path duration and regressions out. The increased reading times and regressions for conditions that have subject or animate distractors indicate that syntactically and semantically similar distractors can interfere during long-distance dependency formation.

### Simulation Study

The reliability of computational cognitive models critically depends on the availability of appropriate methods for statistical inference (Engbert et al., 2022; Schütt et al., 2017). We previously applied a broader principled Bayesian workflow (Schad et al., 2020) for the baseline SWIFT model in Rabe et al. (2021), which is used as the eye-movement platform in SEAM. Without proper checks, it is not self-evident that Bayesian model fitting of SEAM can be carried out in the same way as for SWIFT. However, we expect that our implementation of SEAM will exhibit correct inference because it meets the following three critical conditions: First, for all observables that were taken into account (i.e., fixation positions and durations), a model likelihood has already been implemented in SWIFT (Seelig et al., 2020). Second, both SWIFT and LV05 are dynamic in the sense that they describe activation values as a function of time, which allows us to let them interact dynamically without a significant modification of their initial conceptualization. Third, the dynamics of eye movements and sentence processing interact in the integrated SEAM model and will thus affect the observable temporal and spatial aspects of fixation sequences due to the activation coupling of the constituent SWIFT and LV05 components. The coupling via word activations permits indirect fitting of model parameters related to memory retrieval, as long as they have some probabilistic effect on the outcome variables captured by SWIFT. Given these properties, we tested the computational faithfulness of SEAM using the Markov Chain Monte Carlo (MCMC) sampling algorithm DREAM\({}_{\text{zs}}\) (Laloy & Vrugt, 2012), based on profile log-likelihoods and model parameter recovery, similar to the approach taken in Rabe et al. (2021). The DREAM\({}_{\text{zs}}\) sampler (Laloy & Vrugt, 2012; ter Braak & Vrugt, 2008; Vrugt et al., 2009) has previously been successfully used with complex dynamical models of eye-movement control, including SWIFT for reading (Rabe et al., 2021) and SceneWalk for scene viewing (Schwetlick et al., 2022; Schwetlick et al., 2020). After confirming the computational faithfulness of the model, we fitted the model to a training subset of the experimental data and compared predictions for a withheld test portion using relevant global summary statistics and the predicted experimental effects of similarity-based interference described in the previous section.

### Method

#### Data Assimilation

In eye-movement research, the experimental (observed) data are fixation sequences consisting of time-ordered sequential observations.
In such a case, the identification of model parameters is possible within the field of _data assimilation_ (Engbert et al., 2022; Reich & Cotter, 2015). Data assimilation refers to the integration of complex mathematical models with time-series data (see Morzfeld & Reich, 2018, for an introduction). In this framework, the SWIFT model has previously been implemented for Bayesian model fitting (Seelig et al., 2020). Rabe et al. (2021) showed that, in a principled Bayesian workflow (Schad et al., 2020), SWIFT can be reliably fitted to simulated and experimental data even with many free parameters and sparse data that resulted from splitting by participant and experimental condition.

#### Sequential Likelihood

The time-ordered nature of fixational eye movements makes them a suitable target for data assimilation (Engbert et al., 2022). To exploit the sequential information of the data, some of these models use _sequential likelihoods_ for parameters \(\theta\in\Theta\) such that

\[L_{\text{M}}\left(\theta\mid X_{n}\right)=\prod_{i=1}^{n}L_{\text{M}}\left(\theta\mid X_{i}\right)\;, \tag{11}\]

where \(X_{n}=\left(x_{1},\ldots,x_{n}\right)\) is the entire sequence of \(n\) events and \(L_{\text{M}}\left(\theta\mid X_{i}\right)\) is the likelihood of the \(i\)-th event of the sequence given all previous events \(X_{i-1}=\left(x_{1},\ldots,x_{i-1}\right)\),

\[L_{\text{M}}\left(\theta\mid X_{i}\right)=P_{\text{M}}\left(x_{i}\mid X_{i-1},\theta\right)\;. \tag{12}\]

Successful examples of applying data assimilation for visual tasks are, for example, SceneWalk (Schwetlick et al., 2022; Schwetlick et al., 2020) for scene viewing and SWIFT (Rabe et al., 2021; Seelig et al., 2020) for reading. There, each event of the sequence, \(x_{i}\), is a fixation. Since the location of the first fixation is typically known due to the experimental paradigm, e.g., sequences always starting at a fixation cross, the likelihood for \(x_{1}\) is given by \(L_{\text{M}}\left(\theta\mid X_{1}\right)=1\). SceneWalk and SWIFT further decompose the likelihood into spatial and temporal components, since each fixation has a spatial location on the screen and a duration. As SEAM is based on SWIFT and we only changed the latent transition rates rather than the saccade execution itself, we can easily use the data assimilation methods implemented for SWIFT. This is especially useful because we fit the model on a by-participant basis and hence have only little data for parameter estimation. The decomposition into temporal and spatial likelihood components is also theoretically interesting, since we can expect the modification of the transition rates to affect both the temporal control and the target selection of the (simulated) saccadic eye movements.
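The computation in Equations (11) and (12) can be sketched as a simple accumulation over the fixation sequence; the `model` interface below (separate spatial and temporal components, as in SWIFT and SceneWalk) is schematic, standing in for the likelihood implementation of Seelig et al. (2020).

```python
import math

def sequential_log_likelihood(model, theta, fixations):
    """log L_M(theta | X_n) = sum_i log P_M(x_i | X_{i-1}, theta)."""
    log_lik = 0.0  # first fixation: L_M(theta | X_1) = 1, so its log term is 0
    for i in range(1, len(fixations)):
        history, x = fixations[:i], fixations[i]
        # Each fixation contributes a spatial component (where the eyes
        # land) and a temporal component (how long they stay there).
        p_spatial = model.p_location(x["position"], history, theta)
        p_temporal = model.p_duration(x["duration"], history, theta)
        log_lik += math.log(p_spatial) + math.log(p_temporal)
    return log_lik
```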
### Profile Likelihoods

As SEAM modifies the model dynamics and thus the likelihood function of SWIFT, a reevaluation of the _profile log-likelihoods_ is crucial. These are generated by first simulating data with known parameter values, and then systematically varying the parameter values and inspecting the likelihood of the data for each value. Ideally, the likelihood of the data should be highest for the true parameter values. In order to assess whether the modifications introduced in SEAM are appropriately captured in its likelihood, it should be ensured that the newly introduced free parameters affect the outcome likelihood. Thus, the behavior of the likelihood as a function of each of the new parameters represents a necessary condition for identifiability and statistical inference of the full model (Rabe et al., 2021; Seelig et al., 2020). Parameters were inspected if they were going to be fitted later on and/or were added in this model implementation compared to the reference SWIFT implementation (Rabe et al., 2021). This was the case for a total of 11 parameters (see Figure 4). Parameters \(\mu_{1}\) and \(S_{\text{max}}\) were also inspected even though they were not selected to be fitted to the recovery and experimental data. This is because the parameters themselves are identifiable, as can be seen in Figure 4, but they are not independent from other model parameters in terms of their effect on model behavior. All other shown model parameters are also fitted to simulated data for parameter recovery as well as to experimental data.

### Parameter Estimation and Recovery

As a last step in the verification of the computational faithfulness of the approach, we applied a sampling algorithm to simulated data with known true parameter values in order to ensure the validity of the computational approach. We generated 100 unique data sets with different sets of true parameters \(\theta^{\star}\) randomly sampled from the prior distribution later used for parameter estimation. Parameters would be considered successfully recovered if the correlation between true and recovered parameters was sufficiently high and the normalized root mean squared error (NRMSE) was sufficiently low.

### Summary Statistics and Experimental Effects

Even though we are using an objective likelihood-based approach for model fitting, it is important that simulated and empirical data are in good agreement at the level of relevant summary statistics, especially with regard to comparability with competitor models and theory testing (Roberts & Pashler, 2000). Because the goal for SEAM is to explain both spatial and temporal aspects of eye movements in reading, we consider a number of different spatial and temporal summary statistics frequently used in reading research. For the spatial dimension, we are looking at several fixation probabilities, that is, probabilities to fixate (or skip) specific words under different conditions. For the quantification of the temporal aspects of the model fit, we evaluate different fixation durations, that is, average reading times under different conditions. A subset of the experimental data set is withheld from parameter estimation, and this held-out set will then be compared on the basis of summary statistics against data predicted by SEAM and SWIFT using the estimated parameters. Specifically, we first split the experimental data into a training and a test subset, fitting the model to 70% of the data (training set) of each participant and condition, and subsequently predicting eye trajectories for the other 30% (test set). For each withheld trial, we generated a fixation sequence using the HPDI (highest posterior density interval) midpoint of the sampled posterior distribution of a given participant and parameter (Rabe et al., 2021). We also present the predictions of SEAM and SWIFT for the experimental memory interference effects, which can be similarly derived from the simulated and experimental data alike.
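Schematically, the fitting and prediction pipeline for each participant and condition looks as follows; `fit_model` and `simulate` are placeholders for the DREAM\({}_{\text{zs}}\) sampler and the SEAM/SWIFT simulators, and `posterior.hpdi` for whatever interface returns the 95% HPDI bounds per parameter.

```python
import random

def fit_and_predict(trials, fit_model, simulate, train_frac=0.7, seed=1):
    trials = list(trials)
    random.Random(seed).shuffle(trials)
    n_train = int(train_frac * len(trials))
    train, test = trials[:n_train], trials[n_train:]
    posterior = fit_model(train)  # e.g., DREAM_zs MCMC sampling
    # Point estimate per parameter: midpoint of the 95% HPDI.
    theta_hat = {name: 0.5 * (lo + hi)
                 for name, (lo, hi) in posterior.hpdi(0.95).items()}
    predicted = [simulate(theta_hat, trial) for trial in test]
    return test, predicted  # compared via summary statistics
```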
## Results

### Profile Likelihoods

We evaluated the likelihood for a typically-sized simulated data set where all parameters had been set to default values (see Appendix A). For each parameter, the respective true value, that is, the value used for simulating the data set, is shown with a vertical dashed red line. Then, for each parameter, for 50 equidistant parameter values in the intervals shown, the likelihood of the data given the model was evaluated. Ideally, the likelihood should be maximal around the true value. In Figure 4 we observe that the likelihood peaks, as expected, around the true value for most of the parameters. This means that (i) the parameters affect the likelihood and (ii) the likelihood may be used to recover their values. Individual likelihood evaluations are represented by dots. The plotted line smooths are just for guidance and do not represent the true likelihoods. The important observation here is that the highest evaluated likelihoods are always relatively close to the true value, even in the case of \(\mu_{2}\), where the smoothed lines falsely suggest a flat likelihood. Since not every fixation involves a retrieval, the new SEAM parameters can only have a very limited effect on the likelihood. Therefore, effects observed in the likelihood function are less pronounced than for established SWIFT parameters such as the processing span \(\delta_{0}\). The fact that higher likelihood evaluations nevertheless cluster around the true values is an indication that the parameters are identifiable, but their fitted values should be interpreted with caution.

## Figure 4

_Example Profile Log-likelihoods_

_Note._ Centered profile log-likelihoods \(\log L_{\text{M}}\left(\theta\mid X\right)\) for a simulated data set \(X\) with known/true parameters \(\theta^{\star}\). Profiles are generated by varying one parameter (dimension) of \(\theta\) at a time while holding the others constant at their respective true parameter values. True parameter values are denoted by the vertical red line. Dots in the background are individual stochastic pseudo-likelihood evaluations, each with a spatial and a temporal likelihood component, and their combination (sum). Curves are GAM smooths on those individual evaluations.

### Parameter Recovery

Analogous to the inspection of the profile log-likelihoods, we simulated data from the known model but generated 50 data sets, each with a unique combination of random parameter values within the bounds of the previously inspected intervals, effectively sampling from the prior distribution. Then, we fitted the model to each of the data sets, using uninformative uniform priors over the bounds shown in Figure 4. Each fit is represented with one point per panel in Figure 5, showing 95% credible intervals (CrIs) on the y-axis and the true parameter value on the x-axis. Ideally, CrIs would be narrow intervals spanning around the identity diagonal. We can see that the 95% CrIs almost always include the true value but are relatively wide, especially for the added parameters \(F\), \(d\), \(\mu_{2}\), and \(\mu_{3}\). Nevertheless, the agreement is generally good, as can be seen in the low normalized root mean square error (NRMSE) values7 and the high correlations between true parameter values and CrI midpoints. This suggests that, in general, true parameter values of simulated data sets can be recovered sufficiently well or at least with an acceptable level of uncertainty. As before, we note that parameter values, especially point estimates, should be interpreted with caution.

Footnote 7: The NRMSE is the mean root mean squared deviation from the true value across all samples of the posterior, normalized on the sample range.

The reason for the high uncertainty for the new parameters is very similar to that for the profile log-likelihoods: Over the course of the entire fixation sequence, there are only very few retrieval events where these parameters could possibly have an effect on model behavior. Additionally, even when there is a retrieval, it is not guaranteed that it actually affects the fixated word, as the eyes may, for instance, already have continued past the retrieval trigger.
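For concreteness, the two agreement measures reported in Figure 5 can be computed as sketched below. The NRMSE follows the definition in Footnote 7; reading "normalized on the sample range" as division by the range of the posterior samples is our interpretation of that wording.

```python
import math
import statistics

def nrmse(posterior_samples, true_value):
    """Footnote 7: RMSE of posterior samples around the true value,
    normalized by the range of the samples."""
    rmse = math.sqrt(sum((s - true_value) ** 2 for s in posterior_samples)
                     / len(posterior_samples))
    return rmse / (max(posterior_samples) - min(posterior_samples))

def recovery_correlation(true_values, cris):
    """Correlation between true values and 95% CrI midpoints, as in
    Figure 5 (statistics.correlation requires Python >= 3.10)."""
    midpoints = [0.5 * (lo + hi) for lo, hi in cris]
    return statistics.correlation(true_values, midpoints)
```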
Given these limitations, the recovery performance is surprisingly good, and the high correlations between true and recovered parameters appear very promising.

### Summary Statistics

So far, we have demonstrated that SEAM, like SWIFT in its most current version (Rabe et al., 2021), can be successfully fitted to simulated data: The true parameter values are in the vicinity of profile log-likelihood peaks and are contained within parameter recovery CrIs. This means that if we assume the true underlying cognitive architecture to be similar to SEAM, we can reliably use fitted parameters (or their credible intervals) to make inferences about it. However, as the true underlying cognitive architecture is unknown, such checks are per se impossible on experimental data. Instead, we compare simulated and experimental behavior on the basis of relevant summary statistics. For this, as explained earlier, we first split the experimental data into a training and a test subset, fitting the model to 70% of the data of each participant and condition (training set), and subsequently predicting eye trajectories for the other 30% (test set). Rabe et al. (2021) had previously noted that SWIFT, with the cross-validation method described above, is unable to make reliable predictions for regressive eye movements. However, given that SEAM now incorporates processes for cue-based memory encoding and retrieval, and given that memory retrieval processes are specifically hypothesized to trigger regressions by modulating the activation of retrieval candidates, in SEAM we should see an improvement in regression-related statistics such as incoming/outgoing regression probabilities, as well as regression path durations. These are also two important dependent measures in which effects were found in the experimental data set (see _Experimental Study_ for a short summary; see Mertzen et al., 2023, for details). In Figure 6, we show the comparison of summary statistics between experimental data and simulated data from the baseline SWIFT model (without memory retrieval) and SEAM (with memory retrieval). In all cases, SEAM predicts regression-related fixation probabilities and fixation durations more reliably than SWIFT. It is also noteworthy that not only the average across all word frequency bins but even the word-frequency effects on summary statistics are reliably predicted.

### Experimental Effects of Memory Interference

Arguably the most critical test for the SEAM architecture is to evaluate whether the model can predict differences in summary statistics between experimental conditions in the design of Mertzen et al. (2023), which manipulates effects of memory retrieval on reading. Based on a different experimental design, Rabe et al. (2021) were previously successful in demonstrating that SWIFT can be used to predict and explain differences in reading behavior when fitted to each participant and experimental condition separately.
In the study presented here, however, we are only fitting one model at a time to each participant's data across all conditions, thereby considerably reducing the degrees of freedom. If the model is able to predict differences between experimental conditions, these do not originate from different parameter values for each condition but from the model dynamics, which are affected by the different feature specifications of the memory chunks across conditions. Therefore, capturing differences between conditions is a direct test of SEAM's added memory module. To illustrate the gain in empirical fit over baseline SWIFT, we also report predictions from SWIFT for reference. In SWIFT, no differences between experimental conditions are expected, because SWIFT has no parameters that could account for the processing cost of memory retrievals.

## Figure 5

_Parameter Recovery of SEAM Parameters_

_Note._ Results of a parameter recovery for 50 simulated data sets, for which the parameters were randomly drawn from a uniform distribution with the bounds shown on the x-axes. 95% credible intervals (CrIs) are shown as error bars, centered around a point which is the mean of their lower and upper bounds. The diagonal is the identity line. Parameter recoveries with error bars spanning the diagonal predict the true value within their CrI. Moreover, each panel shows the correlation between the true value and the point estimate as well as the normalized root mean squared error (NRMSE) of the CrI vs. the true value.

In order to evaluate the empirical fit of SEAM and baseline SWIFT, we conducted the same set of analyses for the observed experimental data and for the data predicted by SEAM and by SWIFT, after fitting each of the models to the training data sets. For both sets of data, we conducted a Bayesian mixed-effects regression for regression-path durations and outgoing regression probabilities as predicted by region and experimental condition (syntactic/semantic interference). Table 2 and Figures 7 and 8 summarize the comparisons between the held-out empirical data and the predictions of SEAM and SWIFT. In order to interpret these comparisons, we compare SEAM and SWIFT against the empirical estimates from the held-out data using a region of practical equivalence (ROPE) approach (Freedman et al., 1984; Kruschke, 2014; Spiegelhalter et al., 1994) rather than formal model comparison methods such as k-fold cross-validation, Bayes factors, or the like (for tutorial introductions to these topics, see Nicenboim et al., 2023). The ROPE approach is a graphical model comparison method that involves comparing model predictions against observed estimates from the data; overlap in the posterior distributions of estimates provides an informal basis for deciding whether a model approximately matches the observed estimates. In this approach, there is no notion of _statistical significance_; rather, the focus is on whether the model predictions are approximately consistent with the data.

Figure 6: Spatial and Temporal Summary Statistics

## Figure 7

_Posterior Distributions of Estimated Experimental Effects_

_Note._ Experimental effects on outgoing first-pass regression probabilities (FPR, top row) and first-pass regression path durations (RPD, bottom row), as found in the experimental data (gray), baseline SWIFT (purple), and SEAM (orange). Violin plots are posterior distributions of mixed-effects models.
## Figure 8

_Distribution of Absolute Prediction Errors for Estimated Experimental Effects_

_Note._ Prediction errors of experimental effects on outgoing regression probabilities (top row) and regression-path durations (bottom row). Violin plots are paired differences of posterior distributions of baseline SWIFT vs. experimental data (purple), and SEAM vs. experimental data (orange).

## Figure 9

_Effects of Experimental Condition on SEAM Word Activations at Encoding of the Critical Verb_

_Note._ SEAM word activations of the target, distractor, pre-critical adverb, critical verb, and post-critical region of a sentence, grouped by experimental condition. Activations are averaged across 500 independent simulations of the same item in all four conditions. For each simulation, \(t=0\) is adjusted to the time of the start of post-lexical processing of the critical verb, that is, the start of the retrieval.

One important reason for taking this informal model comparison approach is the fact that the held-out data are relatively sparse. For this reason, the present evaluation should be seen as a proof-of-concept rather than a comprehensive evaluation. Such an evaluation would require significant amounts of benchmark data (for examples of such extensive evaluations, see Engelmann et al., 2020; Nicenboim et al., 2020; Yadav et al., 2023) and must be left for future work. Table 2 and Figures 7 and 8 show that the predictions for the experimental effects of animacy (semantic interference) and subjecthood (syntactic interference) are generally in better agreement with SEAM than with SWIFT: the violin plots from SEAM in Figure 7 overlap better with the observed data than the predictions from SWIFT do. This is true in both the pre-critical and critical regions, and in both the first-pass regression and regression path duration measures. One exception is the subjecthood effect at the critical verb (see the bottom right facet in Figure 7); SEAM predicts essentially no effect of subjecthood, just like SWIFT. We return to this in the Discussion section. Given that SWIFT does not have any mechanism that accounts for cue-based memory retrieval, it is expected that the model predicts no effects of memory interference. Notice that the violin plots for the data as well as for the SEAM and SWIFT predictions shown in Figures 7 and 8 are relatively wide; this is due to the fact that only the 30% test portion of the experimental data (the held-out data) is compared to the model predictions. As SEAM and SWIFT are nested models,8 the fact that SEAM but not SWIFT can predict differences in summary statistics between conditions is a first indicator that the differences in predictive power between the models may be due to the added memory retrieval submodule.
Table 2

_Summary of Empirical vs. Model Estimates From SEAM and SWIFT of the Subjecthood and Animacy Effects on Regression Path Durations and First-Pass Regressions_

| Region of interest | Empirical: subj | Empirical: anim | SEAM: subj | SEAM: anim | SWIFT: subj | SWIFT: anim |
| --- | --- | --- | --- | --- | --- | --- |
| _Regression-path duration (ms)_ | | | | | | |
| pre-critical | [7, 77] | [-12, 60] | [-5, 55] | [-14, 50] | [-27, 12] | [-33, 6] |
| critical verb | [-6, 64] | [-16, 57] | [-49, 58] | [-37, 70] | [-26, 16] | [-25, 15] |
| _First-pass regressions (percentage)_ | | | | | | |
| pre-critical | [-2, 11] | [-4, 9] | [-3, 6] | [-3, 7] | [-2, 1] | [-1, 3] |
| critical verb | [2, 15] | [-2, 11] | [0, 16] | [-1, 16] | [-3, 2] | [-4, 1] |

_Note._ Shown are the 95% credible intervals of the estimated effects from the data and from the two models. The empirical estimates are from the held-out data (30% of the data). _subj_ = effect of subjecthood, _anim_ = effect of animacy.

To verify this and to attempt an explanation of the differences in observed behavior, we look at the differences in the internal model dynamics under the different experimental conditions. In particular, we can examine the word activation field, which is the main driver of target selection probabilities in SEAM and SWIFT (Equation 8), including regressive saccades. In Figure 9, we show word activations in SEAM, averaged across 500 independent simulations, using the mean estimated model parameters across all model fits. Before averaging across simulations, all word activations are centered on the temporal dimension so that \(t=0\) is the time when the activation of the critical verb reaches its maximum, that is, when post-lexical processing of the critical verb starts and triggers the memory retrieval. First, it is important to note that the activations of the critical verb, when normalized in time, do not vary substantially between experimental conditions. Although some conditions seem to show a slower decrease than others, overall the curves are very similar in all conditions. When the retrieval starts at \(t=0\), retrieval candidates are reactivated, with their memory activation \(A^{\prime}_{j,m}\left(t\right)\) modulating the transition rate \(w^{\prime}_{j}(t)\) (see Equation 10) of word/memory chunk \(j\). While the activation of the target word also seems to be very similar over time between conditions, there is some variability in the time course of the activations of the distractor noun and of the adverb around the retrieval. Regarding the adverb, the main reason it is reactivated during retrieval is that it has the highest baseline activation \(B(t)\), as it was most recently encoded/accessed in memory before the retrieval started. The later processing of object noun distractors also attenuates the processing that the adverb receives, which leads to weaker reactivation of the adverb during the retrieval. We can also observe that the distractor word activations prior to the retrieval peak earlier for the two conditions where the distractor is a subject noun, that is, in the conditions where there is syntactic interference. This effect is not related to the retrieval at the critical verb (which has not started at this time), but is due to the distractor appearing earlier in the sentence when it is a syntactic subject.
Interestingly, the distractor noun only significantly peaks during the retrieval in the \(+\)animate/\(+\)subject condition, that is, when both features match the retrieval cues. The distractor thus only attracts regressions when both the animacy and subjecthood features match, i.e., when there is both syntactic and semantic interference. Despite this difference in word activations, there is no significant difference in the proportions of observable targeted regressions from the critical verb to the distractor noun between any of the experimental conditions. This is true for the experimental data as well as for the data simulated by SEAM and SWIFT, as shown in Table 3. As the estimates show, there is no indication in the experimental data that the distractor is regressed to more often in the \(+\)animate/\(+\)subject condition. The distractor's activation pattern in Figure 9 is simply a consequence of the hard-coded assumption in LV05 that it has the highest feature match in this condition. Interestingly, however, the predicted data from SEAM do not show an increase in incoming regressions either. An increase in word activation thus does not necessarily translate into a change in observed eye movements. The lack of a direct effect on distractor refixations is likely due to oculomotor error, which is more influential for long-range saccades, and due to upcoming words having even higher activations than the distractor.

Table 3

_Incoming Regression Probabilities for the Distractor Noun_

| Distractor features | Empirical estimates | SEAM predictions | SWIFT predictions |
| --- | --- | --- | --- |
| \(-\)subject/\(-\)animate | [0.2, 1.7] | [0.5, 3.1] | [0.1, 1.1] |
| \(-\)subject/\(+\)animate | [0.2, 1.9] | [0.6, 3.5] | [0.0, 0.8] |
| \(+\)subject/\(-\)animate | [0.1, 1.4] | [0.7, 4.3] | [0.1, 1.0] |
| \(+\)subject/\(+\)animate | [0.1, 1.5] | [0.6, 3.7] | [0.0, 0.8] |

_Note._ Shown are the 95% credible intervals of the proportions (in %) of trials with first-pass regressions from the critical verb to the distractor noun, as estimated by a nested linear mixed-effects regression. The empirical estimates are based on the held-out data (30% of the original experimental data) and the simulated estimates are based on predictions for those held-out data.

#### Summary

We showed that both SEAM and SWIFT can be fitted to the Mertzen et al. (2023) experimental data set. In contrast to SWIFT, however, SEAM's predictions are in good agreement with the overall and by-frequency regression probabilities and regression-path durations. SEAM also shows the more specific memory interference effects, that is, differences in regression probabilities and regression-path durations due to differences in the animacy and subjecthood of a distractor noun. Given that the compared models SEAM and SWIFT only differ in the supplemental cue-based memory retrieval processes contributed by the LV05 component, we can attribute the better performance of SEAM in these metrics to the LV05 principles, together with the four additional parameters that were fit to the training data from Mertzen et al. (2023) (\(F\), \(d\), \(\mu_{2}\), and \(\mu_{3}\)). It is also noteworthy that these parameters were estimated based on a restricted training data set for each participant, and that the model can make reasonable predictions on the held-out test data for all experimental conditions with a single model fit per participant. Furthermore, even though the models are compared to each other and to the experimental data using summary statistics and predicted experimental effects, neither SWIFT nor SEAM was directly optimized to reproduce these measures. Instead, both models were fitted directly to the raw fixation sequences of each participant. Therefore, the models can make reasonably accurate predictions for summary statistics and experimental effects although they are not specifically fitted to them.

## Discussion

We showed that adding a memory interference mechanism to the SWIFT architecture--resulting in the SEAM model--allows us to bring together eye-movement control theory and a psycholinguistic account of dependency completion. We demonstrated that the key regressive eye-movement patterns in an experimental psycholinguistic data set can be accounted for by the SEAM architecture.
Specifically, we showed that first-pass regressions and regression path duration patterns that occur due to the interference manipulation in the Mertzen et al. (2023) data can be accounted for by SEAM, but not by SWIFT; in SEAM, as in the data, both syntactic and semantic interference have an impact on the two dependent measures at the pre-critical region and the critical verb. The main results of our simulations are summarized in Table 2 and Figures 7, 8, and 9. There were three interesting patterns in the SEAM fit that deserve discussion.

First, as shown in Figure 7, at the critical verb, regression path durations from SEAM show essentially no effect of subjecthood; this is surprising because the data do show such an effect. At the same time, in SEAM, first-pass regressions at the verb show a clear subjecthood effect. This means that even though regressions were triggered at the verb, in the SEAM model, the simulated eye did not spend much time in the region(s) preceding the critical verb or on the verb itself. This pattern could be due to a complex interaction between the behavior of pure SWIFT and the memory module; it would be difficult to pinpoint exactly why this happens.9 However, notice that in Figure 9 the subjecthood manipulation causes a peak in the distractor's activation; this could in principle trigger a regression, but does not necessarily entail a long dwell time in the region preceding the verb. Thus, it is possible that the increased activation of the distractor in the \(+\)subject conditions could lead to increased regressions from the verb, but not to increased regression path durations.

Footnote 9: As the regression path duration is the sum of gaze durations on the current word and on all preceding regions until the (simulated) eye leaves to the right of the current word, an effect in regression path durations could be due to (a) an effect on gaze durations on preceding regions, (b) an effect on gaze duration on the current word, or (c) a combination of both. Likewise, a null effect could be a masked effect of gaze durations on the regression source vs. gaze durations on preceding regions.

The second interesting pattern relates to the effects observed at the pre-critical adverb region. Recall that in the original LV05 model, sentences are processed in strictly serial order. Effects of similarity-based interference at the pre-critical adverb are thus unexpected under this model: Given the assumption that the verb is the retrieval trigger, there should be no retrieval-related effects before it is read. Nevertheless, Mertzen et al.
(2023) did observe interference effects at the pre-critical adverb (others have found similar patterns in the pre-critical region; see Lago et al., 2021; Van Dyke, 2007). Mertzen and colleagues discuss several possible reasons for these effects: differential processing spillover from previous regions due to differences in sentence complexity between conditions, lingering memory interference during encoding of the noun phrases, and predictive processing of the verb. A final important possibility considered by Mertzen et al. (2023) is parafoveal preview of the verb while the adverb is being processed, so that the verb can trigger the retrieval prior to being fixated. Our SEAM simulations are partly consistent with this last account: In 25% of our simulations, the verb reaches the retrieval stage while the adverb is being fixated. However, there is also processing spillover in the form of residual word activation in SEAM, especially in the \(+\)subject conditions, where an additional retrieval is triggered in the embedded clause at _was important_; the activation of the retrieval target may not have fully decayed yet when the adverb is read, leading to more regressions. Based solely on the Mertzen et al. (2023) data and the small sample size of the held-out data, it is difficult to quantify the relative contributions of preview and spillover, and we leave this issue to future research. Nevertheless, SEAM provides a promising starting point for tackling possible pre-critical retrieval effects.

A third noteworthy pattern occurs in Figure 9; the \(+\)subject/\(+\)animate condition causes a large increase in the distractor's word activation after the critical verb is encoded. This suggests that the probability of the distractor attracting regressions should be much higher in that condition than the sum of the \(+\)subject/\(-\)animate and \(-\)subject/\(+\)animate conditions. Even though the combination of the two retrieval cues is additive at the level of the LV05 memory activation (see Equation 1), the exponential transformation of the memory activation \(A^{\prime}_{j,m}\left(t\right)\) in Equation (10) significantly amplifies it. Nevertheless, the superadditive effect on the distractor's activation when it matches both retrieval cues does not generate any detectable overadditive effects in the analyzed regression-related dependent measures (regression path duration and first-pass regressions). As discussed in the previous section, the spike in activation does not necessarily translate into observed regressions, partly because the large distance between the verb and the distractor amplifies the influence of oculomotor error. With less complex sentences, it is thus possible that SEAM would show effects on the observed regression probabilities.
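To see why the exponential transformation produces this superadditivity, suppose for illustration that each matching cue adds the same amount \(W\) to the additive memory activation (the value \(W=1\) below is purely illustrative). Since the candidate's transition rate in Equation (10) scales with \(e^{A}\), the rate increase from both cues jointly always exceeds the sum of the two individual increases:

\[\frac{w^{\prime}_{\text{both}}}{w^{\prime}_{\text{none}}}=e^{2W}=\left(e^{W}\right)^{2},\qquad\left(e^{2W}-1\right)-2\left(e^{W}-1\right)=\left(e^{W}-1\right)^{2}>0\;.\]

With \(W=1\), each cue alone raises the rate by a factor of \(e\approx 2.72\), whereas both cues together raise it by \(e^{2}\approx 7.39\); the joint increase over baseline (\(\approx 6.39\)) is nearly twice the sum of the two individual increases (\(\approx 3.44\)).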
### General Discussion

From the very beginning of eye-movement research in reading, a dominant idea has been that the eye and mind are tightly coupled (e.g., Just & Carpenter, 1980). After psycholinguists started looking at fixation patterns in reading as a function of language comprehension difficulty, an important idea that was expressed in a now-classic paper by Frazier and Rayner (1982) was the selective reanalysis hypothesis: this was the idea that increased comprehension difficulty (e.g., due to garden-pathing) leads to targeted regressions to a preceding region that caused the processing difficulty. Although the strongest version of the selective reanalysis hypothesis is difficult to uphold given subsequent investigations (e.g., Mitchell et al., 2008; von der Malsburg & Vasishth, 2011), it is nevertheless well-established that increased regressions are triggered when language processing difficulty occurs (e.g., Clifton et al., 2007).

Most of the psycholinguistic work carried out on reading until now has side-stepped the underlying complex latent processes involved in reading, and instead focused only on key events involved in linguistic dependency completion. Abstracting away from these underlying latent reading processes has had many advantages, a major one being that it allows us to focus exclusively on the psycholinguistically interesting aspects of processing at the level of the sentence representation. On the other hand, the simplification comes at a cost, because interactions between constraints on eye-movement control and language comprehension end up being ignored. Interestingly, cognitive psychology has gone in a completely different direction than psycholinguistics: there, the focus has been on spelling out detailed process models of eye-movement control that rely primarily on relatively low-level drivers of eye movements, such as frequency and word length. Models of eye-movement control such as E-Z Reader (Reichle et al., 1998) and SWIFT (Engbert et al., 2005) have shown excellent performance in explaining benchmark data in reading, without modeling higher-level cognitive processes such as linguistic dependency completion in any great detail.

One major gap in the literature is that these two threads--psycholinguistic explanations of reading difficulty versus cognitive psychology models of reading--have only rarely been considered to be joint actors in explaining key effects observed in experimental data from psycholinguistics. Our paper makes an attempt to fill this gap: using data from a classic similarity-based interference design, we demonstrate one way in which an eye-movement control model, SWIFT, can be extended to include dependency completion processes. We show that such an extended model (SEAM) can produce regressive eye movements triggered by retrieval that occurs during linguistic dependency completion. Developing such models is the only way to unpack the latent processes involved in reading and to investigate how low and high levels of cognitive processing interface dynamically. To our knowledge, SEAM is the only model to date that extends a complete model of eye-movement control with a detailed model of linguistic dependency completion, using data from a planned experiment in psycholinguistics and rigorous statistical inference.

Apart from using SWIFT as the eye-movement module, SEAM differs in important ways from previous integrative models of eye-movement control and higher-level sentence processing. For instance, Über-Reader (Reichle, 2020), whose eye-movement module is highly similar to that of E-Z Reader (Reichle et al., 1998), has a parsing module that builds syntactic structure, but each parsing step is assumed to take the same amount of time. In SEAM, by contrast, completing syntactic dependencies takes a variable amount of time that is determined by the LV05 equations (which originally come from ACT-R). Furthermore, regressive saccades are not captured by Über-Reader, but are modeled dynamically in SEAM.
Another integrative model, proposed by Dotlačil (2021), whose eye-movement module is also based on E-Z Reader, makes use of ACT-R equations, but in a different way from SEAM: In Dotlačil's model, the latency with which a given dependent word is integrated into the sentence's syntactic representation depends on the retrieval time for the dependent words and additionally on the retrieval time for the relevant parsing rule from declarative memory. SEAM does not assume retrieval of parsing rules, which are assumed to be represented as procedural knowledge, as in the LV05 model. Another salient difference between the models is that regressions in Dotlačil's (2021) model are only triggered when parsing failure occurs, while regressions in SEAM are driven by the dynamic target selection processes taken over from SWIFT. As a final comparison, the model of Engelmann et al. (2013) and Vasishth and Engelmann (2022) combines an LV05 sentence processing module with eye-movement control based on EMMA (Salvucci, 2001), but also does not provide a detailed model of saccade targeting, unlike SEAM.

There are of course several limitations to the present work. First and foremost, the current implementation of SEAM and its evaluation are only a proof-of-concept. Because of the absence of large-scale data sets with psycholinguistically interesting manipulations, it is difficult to present a comprehensive evaluation of the proposed SEAM architecture. However, such an investigation is in principle possible to carry out, given (i) the progress on Bayesian inference for process-based models and (ii) the fact that more and more researchers are releasing data and code associated with their published papers. We expect that in future work, more comprehensive evaluations of architectures like ours can be carried out, using large-scale data from a broad range of phenomena in psycholinguistics. At a minimum, such an investigation would need to include cross-linguistic data from garden-path sentences of different types (e.g., Frazier, 1979), predictability manipulations (e.g., Levy, 2008), the full spectrum of similarity-based interference effects (e.g., Jäger et al., 2017), underspecification effects (e.g., Swets et al., 2008), etc. This would be a sizable project, but one which would significantly advance our understanding of how eye-movement control and parsing interface during reading.

A second limitation is that, due to the computational complexity of investigating such a detailed model of reading, formal model comparison between the baseline SWIFT model and the SEAM model is difficult to carry out. We avoided overfitting the models to the data by separating the empirical data into a training set and a held-out set, and evaluating the model fit only on the held-out set. This is already a significant advance over conventional approaches to model evaluation; in both cognitive psychology and psycholinguistics, it is common to evaluate a model on the same data that it is trained on. In principle, it is possible to go even further than we did in this paper, and to evaluate predictive performance by using k-fold cross-validation. This would involve creating \(k\) (usually, in machine learning, \(k=10\)) subsets of the data to train on, and then using the \(k\) held-out data sets for evaluation; this would allow us to compute a quantitative measure of average fit, such as the expected log pointwise predictive density (e.g., Gelman et al., 2014).
We did not carry out such a quantitative evaluation because it would have been computationally extremely costly. For example, just the pure SWIFT model discussed in Rabe et al. (2021) required a high-performance computing environment, and the total computing time was approximately 10,000 core hours, amounting to 3.5 hours of run time on 72 independent parallel nodes with 40 cores per node. Our goal in the present work was to get as close as possible to the underlying processes involved in reading, but obviously this comes with an unavoidable computational cost.

## Conclusion

We present an integrated model of eye-movement control and linguistic dependency completion during reading. The model, called SEAM, is an integration of the SWIFT model of eye-movement control and the Lewis-Vasishth model of sentence processing. SEAM is evaluated using experimental data from a similarity-based interference experiment. We show that the SEAM model can account for empirically observed regressive eye movement-based measures; in the model, regressive eye movements are shown to be triggered by retrieval processes that result from higher-level dependency completion during sentence parsing. To our knowledge, this is the first demonstration of how eye-movement control and sentence comprehension processes can interact in explaining data from a psycholinguistically controlled experiment.

## Acknowledgements

This work was supported by a grant from the Deutsche Forschungsgemeinschaft (DFG) to Ralf Engbert and Shravan Vasishth (SFB 1287 _Variability in Language_, project no. 317633480). We also acknowledge support by the Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen (HLRN, project no. bbx00001) for providing high-performance computing resources.
2310.05319
Surface ferromagnetism in rhombohedral heptalayer graphene moire superlattice
The topological electronic structure of crystalline materials often gives rise to intriguing surface states, such as Dirac surface states in topological insulators, Fermi arc surface states in Dirac semimetals, and topological superconductivity in iron-based superconductors. Recently, rhombohedral multilayer graphene has emerged as a promising platform for exploring exotic surface states due to its hosting of topologically protected surface flat bands at low energy, with the layer-dependent energy dispersion. These flat bands can promote electron correlations, leading to a plethora of quantum phenomena, including spontaneous symmetry breaking, superconductivity, ferromagnetism, and topological Chern insulators. Nevertheless, the intricate connection between the surface flat bands in rhombohedral multilayer graphene and the highly dispersive high-energy bands hinders the exploration of correlated surface states. Here, we present a method to isolate the surface flat bands of rhombohedral heptalayer (7L) graphene by introducing moire superlattices. The pronounced screening effects observed in the moire potential-modulated rhombohedral 7L graphene indicate its essential three-dimensional (3D) nature. The isolated surface flat bands favor correlated states on the surface in the regions away from charge-neutrality points. Most notably, we observe tunable surface ferromagnetism, manifested as an anomalous Hall effect with hysteresis loops, which is achieved by polarizing surface states using finite displacement fields. Our work establishes rhombohedral multilayer graphene moire superlattice as a unique 3D system for exploring correlated surface states.
Wenqiang Zhou, Jing Ding, Jiannan Hua, Le Zhang, Kenji Watanabe, Takashi Taniguchi, Wei Zhu, Shuigang Xu
2023-10-09T00:54:58Z
http://arxiv.org/abs/2310.05319v1
# Surface ferromagnetism in rhombohedral heptalayer graphene moire superlattice ###### Abstract The topological electronic structure of crystalline materials often gives rise to intriguing surface states, such as Dirac surface states in topological insulators[1], Fermi arc surface states in Dirac semimetals[2], and topological superconductivity in iron-based superconductors[3]. Recently, rhombohedral multilayer graphene has emerged as a promising platform for exploring exotic surface states due to its hosting of topologically protected surface flat bands at low energy, with the layer-dependent energy dispersion[4; 5; 6; 7; 8]. These flat bands can promote electron correlations, leading to a plethora of quantum phenomena, including spontaneous symmetry breaking[9], superconductivity[10; 11; 12], ferromagnetism[13; 14], and topological Chern insulators[15; 16; 17]. Nevertheless, the intricate connection between the surface flat bands in rhombohedral multilayer graphene and the highly dispersive high-energy bands hinders the exploration of correlated surface states. Here, we present a method to isolate the surface flat bands of rhombohedral heptalayer (7L) graphene by introducing moire superlattices. The pronounced screening effects observed in the moire potential-modulated rhombohedral 7L graphene indicate its essential three-dimensional (3D) nature. The isolated surface flat bands favor correlated states on the surface in the regions away from charge-neutrality points. Most notably, we observe tunable surface ferromagnetism, manifested as an anomalous Hall effect with hysteresis loops, which is achieved by polarizing surface states using finite displacement fields. Our work establishes rhombohedral multilayer graphene moire superlattice as a unique 3D system for exploring correlated surface states. \({}^{1}\) Key Laboratory for Quantum Materials of Zhejiang Province, Department of Physics, School of Science, Westlake University, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China \({}^{2}\) Institute of Natural Sciences, Westlake Institute for Advanced Study, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China \({}^{1}\) Research Center for Electronic and Optical Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan \({}^{2}\) Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan \({}^{3}\)These authors contributed equally to this work. \({}^{*}\)Correspondence to: [email protected], [email protected] In rhombohedral multilayer graphene, the low-energy electrons are entirely concentrated on the two surface layers, while the bulk states exhibit an energy gap[18]. This distinctive characteristic provides an ideal platform for the exploration of diverse surface states. The surface electronic bands of rhombohedral graphene can be approximately described by \(E\sim\pm p^{N}\) in a two-band model, where \(E\) is the kinetic energy, \(p\) the momentum, and \(N\) the layer number[18; 19]. With increasing \(N\), these surface bands become extremely flat at low energy. Due to the instability to electronic interactions endowed by their large density of states (DOS), these surface flat bands hypothetically host strongly correlated states, such as spontaneous quantum Hall states[20], high-temperature superconductivity[21], ferromagnetism[22, 23, 24, 25]. 
Furthermore, in rhombohedral multilayer graphene, the low-energy surface states, characterized by alternating intralayer and interlayer hopping, are a tailor-made simulator of the one-dimensional topological insulator of the Su-Schrieffer-Heeger model[26]. Moreover, the chiral stacking in rhombohedral graphene gives rise to large momentum-space Berry curvatures, accompanied by a giant intrinsic magnetic moment inherited from the multivalley features of graphene. This feature positions it as a promising platform for exploring topologically non-trivial states, such as the anomalous Hall effect (AHE)[5]. Experimentally, with recent advances in techniques for producing hexagonal boron nitride (h-BN) encapsulated structures[27], correlation-driven insulating states, magnetic states, and superconductivity have been reported in bilayer (\(N=2\))[28], rhombohedral trilayer (\(N=3\))[7, 8, 29, 30, 31], tetralayer[32], pentalayer[33, 34], and multilayer (\(N\geq 7\)) graphene[6, 35]. The power-law energy dispersion in rhombohedral multilayer graphene suggests that the low-energy surface flat bands are connected to highly dispersive high-energy bands. Consequently, the observations of strong correlations in intrinsic rhombohedral graphene have been restricted to very low carrier density (\(n\)) regimes (\(n\to 0\))[6, 7, 8]. Isolating these surface flat bands from the high-energy dispersive bands is not only beneficial for exploring correlated states in high-\(n\) regimes, but also indispensable for realizing isolated Chern bands. One versatile approach for achieving this isolation is through the stacking of van der Waals materials with a twist angle and/or a lattice mismatch, which constructs moire superlattices at two-dimensional (2D) interfaces[36, 37]. These moire superlattices impose a long-range periodic potential, resulting in band folding and the formation of a mini-Brillouin zone. This process typically leads to bandwidth reduction, thereby enhancing the effects of electronic correlations. Consequently, many unique band structures emerge at low energy near the Fermi surface, accompanied by the appearance of exotic states, such as superconductivity[10], correlated insulating states[38], orbital magnetism[13], and the Hofstadter butterfly[39, 40]. Here, we introduce moire superlattices into rhombohedral multilayer graphene to separate the low-energy surface flat bands from the high-energy dispersive bands. These moire superlattices were constructed by crystallographically aligning rhombohedral multilayer graphene with h-BN during the van der Waals assembly. Thanks to the small lattice mismatch (\(\delta\approx 1.6\%\)) between graphene and h-BN, a moire superlattice can be formed with a long-range wavelength given by \(\lambda=\frac{(1+\delta)a_{G}}{\sqrt{2(1+\delta)(1-\cos\theta)+\delta^{2}}}\), where \(a_{G}=0.246\) nm is the in-plane lattice constant of graphite, and \(\theta\) the relative misalignment angle between the two lattices. Our band calculations confirm the presence of an isolated surface flat band at the conduction band, as shown in Fig. 1e and Extended Data Fig. 12. To probe the electronic transport of rhombohedral graphene, we have employed a dual-gate structure, as depicted schematically in Fig. 1b, which enables us to independently control \(n\) and the displacement field (\(D\)). Our devices were fabricated through the mechanical exfoliation of natural graphite.
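As a quick numerical illustration of this geometry (a minimal sketch using the values of \(a_{G}\) and \(\delta\) quoted above; the full-filling density assumes the standard convention of four electrons per moire unit cell of area \(\sqrt{3}\lambda^{2}/2\), which is not spelled out in the text):

```python
import numpy as np

A_G = 0.246     # graphite in-plane lattice constant (nm), from the text
DELTA = 0.016   # graphene/h-BN lattice mismatch, from the text

def moire_wavelength(theta_deg):
    """Moire wavelength (nm) for a misalignment angle theta (degrees)."""
    th = np.deg2rad(theta_deg)
    return (1 + DELTA) * A_G / np.sqrt(2 * (1 + DELTA) * (1 - np.cos(th)) + DELTA**2)

def full_filling_density(theta_deg):
    """Carrier density (cm^-2) at full filling: 4 electrons per moire
    unit cell of area sqrt(3)/2 * lambda^2 (standard triangular-lattice
    convention, assumed here)."""
    lam_cm = moire_wavelength(theta_deg) * 1e-7        # nm -> cm
    return 4.0 / (np.sqrt(3) / 2 * lam_cm**2)

# e.g. theta = 0.9 deg gives lambda ~ 11 nm and full filling ~ 3.7e12 cm^-2,
# the same order as the miniband filling densities discussed below
```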
We chose rhombohedral heptalayer (7L) graphene as the building block, since our previous work indicates that it preserves the three-dimensional (3D) characteristics of graphite while exhibiting strong correlations[6]. Raman spectra and mapping techniques were employed to identify the stacking order and select rhombohedral (also described as ABC) domains for device fabrication (see Fig. 1d and Extended Data Fig. 2). Fig. 1f shows the low-temperature (\(T\) = 50 mK) longitudinal (\(R_{xx}\)) and Hall (\(R_{xy}\)) resistances as a function of \(n\), with carriers concentrated at one of the surfaces under a fixed \(D\) = 1 V nm-1. Besides the peak at the charge-neutrality point (\(n\) = 0), \(R_{xx}\) exhibits two additional prominent peaks in the high-density region. The corresponding \(R_{xy}\) exhibits sign reversals, indicative of Fermi surface reconstruction[41]. This phenomenon can be attributed to either band folding caused by the moire superlattice or strong correlations, which we will discuss in detail later. In either case, with the assistance of the moire superlattice, we have succeeded in isolating the surface band from the high-energy bands, resulting in the opening of a band gap in the high-\(n\) region. To reveal the electronic transport behavior influenced by the moire potential in rhombohedral 7L graphene, we also fabricated a reference device using intrinsic rhombohedral 7L graphene without alignment with h-BN (device D1). Fig. 2a and 2b show color maps of \(R_{xx}(n,D)\) for the devices without and with moire superlattice, respectively. In the absence of moire, two distinct insulating states emerge at \(n=0,D=0\), and at \(n=0,|D|>0.4\) V nm-1, as illustrated in Fig. 2a. This behavior closely resembles what has been observed in rhombohedral nonalayer (9L) graphene[6]. The insulating state at \(|D|>0.4\) V nm-1 is attributed to the opening of an energy gap in the surface states, resulting from inversion symmetry breaking induced by a large electric field. Figure 1: **Rhombohedral 7L graphene moiré superlattice**. **a**, Schematic of rhombohedral 7L graphene. Left and right represent the side view and the cross-section view along the in-plane armchair direction, respectively. The two curves in the right schematic illustrate that the wavefunctions of the low-energy states concentrate at the sublattices located at each surface. **b**, Schematic of a dual-gate h-BN encapsulated device with moiré superlattices at the interfaces between h-BN and graphene. **c**, Optical image of a typical device with a Hall bar geometry. **d**, Raman spectra of ABA-stacked and ABC-stacked 7L graphene. **e**, Calculated band structure of rhombohedral 7L graphene with a moiré superlattice at both the top and bottom surfaces. The interlayer potential used in the calculation is 12 meV. **f**, Longitudinal (R\({}_{xx}\)) and Hall (R\({}_{xy}\)) resistances as a function of total carrier density measured at \(D=1\) V nm-1 and \(T=50\) mK. In contrast, the insulating state at \(n=0,D=0\) cannot be explained in a single-particle picture and is believed to be a correlated gap resulting from spontaneous symmetry breaking favored by the surface flat bands [20]. Note that the insulating state at \(n=0,D=0\) relies strongly on the electronic coupling between the top and bottom surfaces, which only occurs in thin (roughly \(N\leq 10\)) rhombohedral graphene [6]. In rhombohedral 7L graphene, this correlated gap is highly reproducible and has been observed in multiple devices (see Extended Data Fig. 7).
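For reference, the conversion from gate voltages to the \((n,D)\) axes used in such maps follows the usual parallel-plate relations; below is a minimal sketch under one common sign convention (the areal gate capacitances `Ct` and `Cb` are device-specific placeholders, not values from this work):

```python
E_CHARGE = 1.602e-19   # elementary charge (C)
EPS0 = 8.854e-12       # vacuum permittivity (F/m)

def n_and_D(Vt, Vb, Ct, Cb):
    """Map top/bottom gate voltages (V) and areal gate capacitances (F/m^2)
    to total carrier density n (m^-2) and displacement field D (V/m),
    using one common sign convention; conventions vary between papers."""
    n = (Ct * Vt + Cb * Vb) / E_CHARGE
    D = (Cb * Vb - Ct * Vt) / (2 * EPS0)
    return n, D
```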
Figure 2: **Low-temperature transport characteristics of rhombohedral 7L graphene without and with moiré superlattice.** **a, b**, Color maps of longitudinal resistance \(R_{xx}\) plotted in logarithmic scales as a function of carrier density \(n\) and displacement field \(D\), measured at \(T=50\) mK and \(B=0\) T for the devices without (**a,** device D1) and with (**b,** device D2) moiré superlattice. **c**, \(R_{xx}\) as a function of magnetic field \(B\) and total carrier density \(n\) at \(T=\)50 mK and \(D=0\) V nm-1. Quantum oscillations independent of \(n\) are observed, which we attribute to Brown-Zak oscillations arising from moiré potentials at the two interfaces between graphene and h-BN. There are two sets of oscillations, indicating that this sample is doubly aligned, with two decoupled moiré superlattices, one at each interface. The labels near the y axis denote \(\frac{\phi}{\phi_{0}}=1/q\), at which an integer number \(q\) of superlattice unit cells is commensurate with the magnetic flux quantum \(\phi_{0}\). **d**, Temperature dependence of \(R_{xx}\) as a function of \(n\) at \(D=1.1\) V nm-1, \(B=0\) T. Inset: Arrhenius plot (\(\ln\!R_{xx}\) versus \(T^{-1}\)) for the charge-neutrality point (\(\nu=0\)) in the high-temperature region. The dashed line represents the linear fit, from which the transport gaps \(\Delta\) can be extracted via \(\ln\!R_{xx}\propto\Delta/(2k_{B}T)\). The linear fits give \(\Delta=12.9\) meV, \(4.7\) meV, and \(0.8\) meV at \(\nu=0,1,\text{and }2\), respectively. The data in **(c)** and **(d)** were acquired in the sample with moiré superlattice (device D2). Introducing a moire superlattice into rhombohedral 7L graphene significantly modifies its transport behavior, as shown in Fig. 2b (device D2). First, the correlated gap at \(n=0,D=0\) disappears, indicating that the moire potential at the interface between h-BN and graphene effectively decouples the two surface states. Second, the critical field (\(D_{c}\)), above which a band insulator gap is opened, increases to approximately 0.8 V nm-1. Applying \(D\) via asymmetric dual gates generates a potential difference between the two surfaces, resulting in a carrier redistribution that strongly screens out the external field. The larger \(D_{c}\) in Fig. 2b indicates that the moire potential favors carrier localization at the surfaces, thus enhancing the screening effect. This enhanced screening effect is further evident from the presence of a series of horizontal and vertical lines in the region below \(D_{c}\) when plotting \(R_{xx}\) as a function of \(n_{t}\) and \(n_{b}\) (see Extended Data Fig. 4). The screening serves to electronically decouple the two surface states and suppress their interactions, which explains the absence of correlated states at \(n=0,D=0\). These features collectively indicate that moire potential-modulated rhombohedral 7L graphene essentially behaves as a 3D system. Third, we also observed additional gap states at large \(D\) away from the charge-neutrality point (\(n\neq 0\)). When \(|D|>|D_{c}|\), the finite band overlap between the conduction and valence bands is lifted due to inversion symmetry breaking. The surface states become fully polarized, such that charge carriers concentrate on only one of the two surfaces. Namely, for positive \(D>D_{c}\), only electrons (holes) in the conduction (valence) band at the bottom (top) surface contribute to the conductance (see Extended Data Fig. 4).
The screening effect then vanishes, manifested in both gates becoming effective, accompanied by the disappearance of the horizontal and vertical lines in Extended Data Fig. 4a. Unlike device D1 in Fig. 2a, device D2 exhibits additional resistance peaks at \(n_{1}=1.0\times 10^{12}\) cm\({}^{-2}\) and \(n_{2}=2.1\times 10^{12}\) cm\({}^{-2}\) for \(D>0\). Similar extra prominent peaks also appear for \(D<0\), but at slightly different \(n\). Notably, comparing these features to those of device D1 without moire shows that the peaks appearing at non-zero \(n\) stem from the formation of moire minibands. The observation of remarkable Brown-Zak oscillations[38; 40; 42], as shown in Fig. 2c, further confirms the formation of moire superlattices in device D2. In Fig. 2c, there are two distinct sets of oscillatory behavior periodic in \(1/B\), which indicates that our device has a doubly aligned configuration[43; 44]. From the oscillation periods, we can extract two twist angles \(\theta_{1}=0.88^{\circ}\) and \(\theta_{2}=0.90^{\circ}\) at the two interfaces. With this, we can assign \(n_{1}\) and \(n_{2}\) to the quarter filling (\(\nu=1\)) and half filling (\(\nu=2\)) of the moire miniband, respectively. The double alignment is consistent with the fact that two sets of extra peaks at \(n>0\) appear for both \(D>0\) and \(D<0\). The double alignment can be further confirmed by the optical image of the stack, which shows the alignment of the straight edges of the h-BN and graphene flakes (see Extended Data Fig. 3). As our graphene is sufficiently thick, the moire superlattices at the two interfaces remain decoupled, a fact supported by the several features mentioned above. It is therefore reasonable to treat the two moire potentials as independent and to disregard super-moire effects. In the following, we mainly focus on the \(n>0\), \(D>0\) region, namely on the conduction band modulated by the moire superlattice at the bottom surface. Similar behavior can be found in the other regions (see Extended Data Fig. 10). The temperature dependence of the resistance peaks at \(\nu=1\) and \(\nu=2\) exhibits typical insulating behavior, where the resistance increases as the temperature decreases. These insulating states at partial fillings are correlated insulators, arising from spontaneous symmetry breaking induced by strong electron-electron interactions and facilitated by the further flattening of the surface bands through zone folding. From the Arrhenius plots in Fig. 2d and Extended Data Fig. 8, we estimate that the single-particle gap at \(\nu=0\) and the correlated gaps at \(\nu=1\) and \(\nu=2\) are approximately 12.9 meV, 4.7 meV, and 0.8 meV, respectively. One remarkable feature of rhombohedral 7L graphene, compared with thinner samples, lies in its 3D character, which is further evident from Landau quantization at high \(B\). Fig. 3 shows the Landau diagrams at \(B=14\) T of rhombohedral 7L graphene both without and with moire superlattices. In both devices, a series of horizontal and vertical features emerges at small \(D\), which arises from the coexistence of two surface states with high carrier densities, each effectively screening the influence of the other. As \(D\) increases above a critical value, the surface states become polarized, with carriers concentrating on only one of the two surfaces. In this regime, the carriers can be effectively tuned by both gates, resulting in the appearance of diagonal Landau level (LL) features at high \(D\).
The prominent screening features observed in the Landau diagrams resemble those seen in Bernal (ABA)-stacked graphite, albeit with opposite distributions[45, 46, 47] (see detailed discussions in Methods). **Fig. 3\(|\) Landau quantization in rhombohedral 7L graphene.** **a, b,** Color maps of longitudinal resistance \(R_{xx}\) as a function of the top- and bottom-gate-induced carrier densities \(n_{t}\) and \(n_{b}\), measured at \(B=14\) T. The data were taken from the devices without (**a,** device D3 at \(T=50\) mK) and with (**b,** device D2 at \(T=1.5\) K) moire superlattice. **c, d,** Wannier diagrams depicting the LLs according to the raw data in **a** and **b**, respectively. The red lines represent the insulating states at zero field. The black lines represent LLs from polarized surface states, manifested as diagonal lines tunable by both gates. The blue lines represent screened LLs, where the two surface states are strongly screened by each other, manifested as a series of horizontal and vertical features. Further insights into the broken symmetry can be gained from the Hall resistance \(R_{xy}\). Fig. 4**a** and 4**b** provide high-resolution maps of \(R_{xx}\) and \(R_{xy}\) in the vicinity of the correlated insulating states at \(D>0\). Near \(\nu=1\) and \(\nu=2\), maxima in \(R_{xx}\) are accompanied by rapid sign changes in \(R_{xy}\), indicating a change of carrier type. These sign reversals result from Fermi surface restructuring driven by correlations and the formation of a new band edge, similar to that in twisted graphene [41]. Additionally, the evolution of \(R_{xy}\) as a function of \(n\) and \(D\) at low \(B\) in Fig. 4**b** exhibits gradual sign changes within the fillings from \(\nu=0\) to \(\nu=1\) and from \(\nu=1\) to \(\nu=2\). These sign changes correspond to divergences in the density of states and are associated with saddle points in the energy dispersion at the Fermi surface, known as van Hove singularities (vHSs) [48].
**Fig. 4**: **Tunable surface ferromagnetism in rhombohedral 7L graphene moiré superlattice.** **a,** Fine maps of \(R_{xx}\) plotted on a linear scale as a function of \(n\) and \(D\) near the conduction band modulated by the moiré superlattice at the bottom surface for \(B=0\). **b,** Corresponding anti-symmetrized Hall resistance \(R_{xy}=\frac{R_{xy}(+B)-R_{xy}(-B)}{2}\) at a fixed small magnetic field \(B=\pm 1\) T. The Hall resistance changes its sign at gap states and vHSs in different ways. **c, d**, Anti-symmetrized Hall resistance \(\rho_{xy}\), defined in the Methods, as a function of \(B\) swept back and forth at (**c**) a fixed \(D=0.96\) V nm-1, varying \(n\) from \(1.18\times 10^{12}\) cm-2 to \(2.12\times 10^{12}\) cm-2, and (**d**) a fixed \(n=1.20\times 10^{12}\) cm-2, varying \(D\) from \(0.88\) V nm-1 to \(0.98\) V nm-1. The absolute values are manually offset for clarity. The AHE with both nonlinear features and hysteresis loops manifests a ferromagnetic state, which is tunable by both \(n\) and \(D\). **e**, Color plots of the residual resistance \(\Delta\rho_{xy}^{AH}\) as a function of \(n\) and \(D\). The individual dots were extracted from the measurements of the AHE at the corresponding \(n\) and \(D\). The colors represent the values of \(\Delta\rho_{xy}^{AH}\). The red and blue curves denote two types of sign reversal of \(R_{xy}\), near gaps and vHSs, respectively, which are sketched in the right schematic. **f,** Temperature dependence of the AHE. \(\rho_{xy}\) as a function of \(B\) swept back and forth, showing ferromagnetic hysteresis at different temperatures, with fixed \(n=2.10\times 10^{12}\) cm-2 and \(D=0.91\) V nm-1. The inset shows the evolution of the residual resistance \(\Delta\rho_{xy}^{AH}\) as a function of temperature. All the data of **a-f** were taken in device D2 at \(T=50\) mK. When the Fermi energy approaches vHSs, the large DOS may lead to Fermi-surface instabilities, potentially giving rise to various exotic phases, such as superconductivity, ferromagnetism, and charge density waves. One particular example is the ferromagnetic instability, governed by the Stoner criterion [49]: \(UD_{F}>1\), where \(U\) is the Coulomb energy and \(D_{F}\) the DOS at the Fermi energy. The highly tunable vHSs, as shown in Fig. 4b, allow us to observe Stoner ferromagnetism. Fig. 4c and 4d display the \(n\)- and \(D\)-dependent anti-symmetrized Hall resistance \(\rho_{xy}\) (see Methods) when sweeping the out-of-plane \(B\) back and forth between -25 mT and 25 mT. At \(n=2.21\times 10^{12}\) cm\({}^{-2}\) and \(D=0.96\) V nm\({}^{-1}\), \(\rho_{xy}\) exhibits normal linear behavior and remains independent of the sweep direction. However, within a large region, \(\rho_{xy}\) displays a remarkable AHE accompanied by hysteresis loops. The hysteresis becomes narrower with increasing \(B\) and vanishes above a coercive field of \(B_{c}=7\) mT. At \(B=0\), \(\rho_{xy}\) shows a nonzero value whose sign depends on the sweep direction of \(B\), indicating the presence of remanent magnetization in the sample. This series of features is the hallmark of ferromagnetism, stemming from spontaneous time-reversal symmetry breaking within this system. We note that the hysteresis observed here is different from that in our previous work on intrinsic rhombohedral graphite near \(n=0\) and \(D=0\), where the hysteresis originates from electronic phase inhomogeneities [6].
In the present system, by contrast, strong interactions and a large DOS within the low-energy surface flat band are responsible for the emergence of ferromagnetism. Furthermore, the hysteresis displays no Barkhausen jumps upon sweeping \(B\), a phenomenon often seen in twisted graphene systems [13, 14], indicating the cleanness of the graphene/h-BN moire superlattice system. The Hall signal comprises both a linear component originating from the normal Hall effect and an anomalous component arising from the magnetization. After subtracting the linear component, we plot the anomalous residual resistance \(\Delta\rho_{xy}^{AH}\) as a function of \(n\) and \(D\), shown in Fig. 4e, which reflects the evolution of the remanent magnetization strength. Positive and negative \(\Delta\rho_{xy}^{AH}\) values are marked by red and blue colors, respectively, with the intensity of the colors representing the magnitude of the AHE. Fig. 4e shows that the AHE is highly tunable by \(n\) and \(D\), with the largest values appearing in the vicinity of vHSs. Fig. 4f shows the temperature-dependent hysteresis loops for a p-type-like carrier. The hysteresis of \(\rho_{xy}\) disappears above a critical temperature, which further confirms the phase transition from ferromagnetism to paramagnetism. The Curie temperature, defined by the onset of hysteresis, is 4 K at an optimized position. The ferromagnetism observed in the rhombohedral 7L graphene moire superlattice differs from that previously reported in other graphene systems [7, 13, 28, 30]. First, as noted above, our system exhibits a pronounced 3D nature. Ferromagnetism in our system occurs only when electrons are entirely localized at one of the surface layers by applying a high \(D\). Namely, the ferromagnetism observed here arises from electron interactions within an individual surface layer. We refer to this as surface ferromagnetism or layer-polarized ferromagnetism. Second, Stoner ferromagnetism, rather than a Chern band, governs the AHE observed in our system. On the one hand, the emergence of ferromagnetic instabilities in our system spans a wide range, including non-integer moire band fillings, and is enhanced near vHSs within the flat moire bands. In contrast, in twisted bilayer graphene, the observed ferromagnetism typically occurs in a narrow region near an insulator at integer filling[13]. On the other hand, the residual \(\Delta\rho_{xy}^{AH}\) near \(B=0\) in our system is relatively small (a few hundred ohms), far from the quantized value of \(h/e^{2}\). Third, the ferromagnetism in our system is exclusive to the conduction band, which is consistent with the calculated band structure in Fig. 1e showing an extremely narrow isolated conduction band. This contrasts with the ferromagnetism observed in the valence band of the rhombohedral trilayer moire superlattice[30]. To summarize, our results extend the observation of ferromagnetism in graphene systems from 2D to 3D. The emergence of ferromagnetism in the surface states is facilitated by the presence of flat surface bands, favored both by the band structure of intrinsic rhombohedral graphene and by the moire superlattice. This work establishes rhombohedral multilayer graphene as a fertile platform for exploring novel surface states.
The surface flat band in rhombohedral multilayer graphene moire systems, when interplaying with nontrivial topological electronic states, may give rise to exotic correlated and topological physics, such as surface superconductivity[21] and the quantum anomalous Hall effect in a 3D system. The tunability of the layer number in rhombohedral graphene provides great potential for observing further novel quantum states. For example, during the preparation of this manuscript, the observation of the fractional quantum anomalous Hall effect in rhombohedral pentalayer graphene was reported[50].
2303.13443
Cliques, Chromatic Number, and Independent Sets in the Semi-random Process
The semi-random graph process is a single player game in which the player is initially presented an empty graph on $n$ vertices. In each round, a vertex $u$ is presented to the player independently and uniformly at random. The player then adaptively selects a vertex $v$, and adds the edge $uv$ to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. In this paper, we investigate the following three properties: containing a complete graph of order $k$, having the chromatic number at least $k$, and not having an independent set of size at least $k$.
David Gamarnik, Mihyun Kang, Pawel Pralat
2023-03-23T17:07:19Z
http://arxiv.org/abs/2303.13443v2
# Cliques, chromatic number, and independent sets in the semi-random process ###### Abstract. The semi-random graph process is a single player game in which the player is initially presented an empty graph on \(n\) vertices. In each round, a vertex \(u\) is presented to the player independently and uniformly at random. The player then adaptively selects a vertex \(v\), and adds the edge \(uv\) to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. In this paper, we investigate the following three properties: containing a complete graph of order \(k\), having the chromatic number at least \(k\), and not having an independent set of size at least \(k\). ## 1. Introduction and Main Results ### Definitions In this paper, we consider the **semi-random graph process** suggested by Peleg Michaeli, introduced formally in [5], and studied recently in [3, 4, 8, 11, 13, 14, 15, 22] that can be viewed as a "one player game". The process starts from \(G_{0}\), the empty graph on the vertex set \([n]:=\{1,\ldots,n\}\) where \(n\in\mathbb{N}\). In each **round**\(t\in\mathbb{N}\), a vertex \(u_{t}\) is chosen uniformly at random from \([n]\). Then, the player (who is aware of graph \(G_{t}\) and vertex \(u_{t}\)) must select a vertex \(v_{t}\) and add the edge \(u_{t}v_{t}\) to \(G_{t}\) to form \(G_{t+1}\). The goal of the player is to build a (multi)graph satisfying a given property \(\mathcal{P}\) as quickly as possible. It is convenient to refer to \(u_{t}\) as a **square**, and \(v_{t}\) as a **circle** so every edge in \(G_{t}\) joins a square with a circle. We say that \(v_{t}\) is paired to \(u_{t}\) in step \(t\). Moreover, we say that vertex \(x\in[n]\) is **covered** by the square \(u_{t}\) arriving at round \(t\), provided \(u_{t}=x\). The analogous definition extends to the circle \(v_{t}\). Equivalently, we may view \(G_{t}\) as a directed graph where each arc directs from \(u_{t}\) to \(v_{t}\), and thus we may use \((u_{t},v_{t})\) to denote the edge added in step \(t\). For this paper, it is easier to consider squares and circles for counting arguments. A **strategy**\(\mathcal{S}\) is defined by specifying for each \(n\geq 1\), a sequence of functions \((f_{t})_{t=1}^{\infty}\), where for each \(t\in\mathbb{N}\), \(f_{t}(u_{1},v_{1},\ldots,u_{t-1},v_{t-1},u_{t})\) is a distribution over \([n]\) which depends on the vertex \(u_{t}\), and the history of the process up until step \(t-1\). Then, \(v_{t}\) is chosen according to this distribution. If \(f_{t}\) is an atomic distribution, then \(v_{t}\) is determined by \(u_{1},v_{1},\ldots,u_{t-1},v_{t-1},u_{t}\). We then denote \((G_{i}^{\mathcal{S}}(n))_{i=0}^{t}\) as the sequence of random (multi)graphs obtained by following the strategy \(\mathcal{S}\) for \(t\) rounds; where we shorten \(G_{t}^{\mathcal{S}}(n)\) to \(G_{t}\) or \(G_{t}(n)\) when clear. ### Notation Results presented in this paper are asymptotic by nature. We say that some property \(\mathcal{P}\) holds **asymptotically almost surely** (or **a.a.s.**) if the probability that the semi-random process has this property (after possibly applying some given strategy) tends to \(1\) as \(n\) goes to infinity. 
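To make the mechanics of the game concrete, the following minimal simulation sketch (an illustration only, not code from the papers cited) represents a strategy as a callable that maps the current multigraph and the square \(u_{t}\) to a circle \(v_{t}\):

```python
import random

def semi_random_process(n, t, strategy, seed=0):
    """Play the semi-random process for t rounds on vertex set {0, ..., n-1}.

    `strategy(adj, u)` inspects the current multigraph (adjacency lists)
    and the random square u, and returns the player's circle v.
    """
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for _ in range(t):
        u = rng.randrange(n)      # square u_t: uniform over the vertex set
        v = strategy(adj, u)      # circle v_t: chosen by the player
        adj[u].append(v)          # add the edge u_t v_t (multi-edges allowed)
        adj[v].append(u)
    return adj

# a toy strategy: pair every square with a vertex of currently minimum degree
def min_degree_strategy(adj, u):
    return min(range(len(adj)), key=lambda x: len(adj[x]))
```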
Given two functions \(f=f(n)\) and \(g=g(n)\), we will write \(f(n)=\mathcal{O}(g(n))\) if there exists an absolute constant \(c\in\mathbb{R}_{+}\) such that \(|f(n)|\leq c|g(n)|\) for all \(n\), \(f(n)=\Omega(g(n))\) if \(g(n)=\mathcal{O}(f(n))\), \(f(n)=\Theta(g(n))\) if \(f(n)=\mathcal{O}(g(n))\) and \(f(n)=\Omega(g(n))\), and we write \(f(n)=o(g(n))\) or \(f(n)\ll g(n)\) if \(\lim_{n\to\infty}f(n)/g(n)=0\). In addition, we write \(f(n)\gg g(n)\) if \(g(n)=o(f(n))\), and we write \(f(n)\sim g(n)\) if \(f(n)=(1+o(1))g(n)\), that is, \(\lim_{n\to\infty}f(n)/g(n)=1\). We will use \(\log n\) to denote the natural logarithm of \(n\). As mentioned earlier, for a given \(n\in\mathbb{N}:=\{1,2,\ldots\}\), we will use \([n]\) to denote the set consisting of the first \(n\) natural numbers, that is, \([n]:=\{1,2,\ldots,n\}\). Finally, as is typical in the field of random graphs, for expressions that clearly have to be an integer, we round up or down but do not specify which; the choice does not affect the argument. ### Main Results--Complete Graphs In this paper, we investigate three monotone properties. The first one is the property of containing \(K_{k}\), a complete graph of order \(k\). In the very first paper on the semi-random process [5], it was proved that a.a.s. one may construct a complete graph of a constant order \(k\) once there are vertices with at least \(k-1\) squares on them. On the other hand, if no vertex receives at least \(k-1\) squares, it is impossible to achieve it a.a.s. Specifically, the following result was proved. **Observation 1.1** ([5]).: _Fix an integer \(k\geq 3\) and any function \(\omega=\omega(n)\) that tends to infinity as \(n\to\infty\). Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(K_{k}\) _at time_ \(t=\omega n^{(k-2)/(k-1)}\)_._ 2. _There is no strategy that a.a.s. creates_ \(K_{k}\) _at time_ \(t=n^{(k-2)/(k-1)}/\omega\)_._ In fact, part (a) of the above observation was proved for a larger family of graphs, namely those that are \((k-1)\)-degenerate with \(k-1\geq 2\) (see Section 4 for the definition of degeneracy). Moreover, it was conjectured that part (b) can be generalized to this larger family of graphs. The conjecture was proved recently in [3]. As a result, creating graphs of a constant size is well understood--essentially, creating a fixed graph with degeneracy \(d\) is possible once the process lasts long enough so that there are vertices with at least \(d\) squares. On the other hand, constructing complete graphs of order \(k\gg\log n\) is very simple and can be done in an almost optimal way. It follows immediately from Chernoff's bound (see (3) and (4)), together with the union bound over all vertices, that if \(t\gg n\log n\), then a.a.s. all vertices receive \[\frac{t}{n}\left(1+\mathcal{O}\left(\sqrt{\frac{\log n}{t/n}}\right)\right) \sim\frac{t}{n}\] squares. One may try to create a complete graph on the vertex set \([k]\) for \(k=2\ell+1\) (\(\ell\in\mathbb{N}\)) by connecting the \(j\)th square (\(j\in[\ell]\)) landing on vertex \(i\) with vertex \((i-1+j)\pmod{2\ell+1}+1\). This simple algorithm yields a lower bound for the size of the complete graph. To get an upper bound, we simply observe that it is impossible to create \(K_{k^{\prime}}\) if the maximum number of squares on a vertex is smaller than \((k^{\prime}-1)/2\): every edge of the clique places a square on one of its \(k^{\prime}\) vertices, so some vertex must carry at least \((k^{\prime}-1)/2\) squares. After combining the two observations, we get the following. **Observation 1.2**.: _Suppose that \(t=t(n)\geq\omega n\log n\), where \(\omega=\omega(n)\) is any function that tends to infinity as \(n\to\infty\)._
Let_ \[k = k(n)\ :=\ \frac{2t}{n}\left(1-\omega^{-1/3}\right)\ \sim\ \frac{2t}{n}\] \[k^{\prime} = k^{\prime}(n)\ :=\ \frac{2t}{n}\left(1+\omega^{-1/3}\right)\ \sim\ \frac{2t}{n}\ \sim\ k.\] _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(K_{\min\{k,n\}}\) _at time_ \(t\)_._ 2. _There is no strategy that a.a.s. creates_ \(K_{k^{\prime}}\) _at time_ \(t\)_._ In fact, a much stronger property holds. Let \(H\) be an \(n\)-vertex graph of maximum degree \(\Delta=\Delta(n)\gg\log n\). In [4] it was proved that there exists a strategy to build \(H\) in \((\Delta n/2)(1+o(1))\) rounds. In light of Observations 1.1 and 1.2, it remains to investigate how large a complete graph one can build in \(t\) rounds, provided that \(t=t(n)=n^{1+o(1)}\) and \(t=\mathcal{O}(n\log n)\). If \(t=o(n\log n)\), then one may create a complete graph whose order is asymptotic to the maximum number of squares on a single vertex--see Lemma 3.1(b). More importantly, asymptotically, this is the best one can do. This is our first main result, which we state below. **Theorem 1.3**.: _Suppose that \(t=t(n)\) is such that \(t=n^{1+o(1)}\) and \(t\ll n\log n\). Let \(\beta=\beta(n)\ :=\ n\log n/t\to\infty\), as \(n\to\infty\). Define_ \[\ell\ =\ \ell(n)\ :=\ \frac{\log n}{\log\beta-2\log\log\beta}\ \sim\ \frac{\log n}{\log\beta},\] _and_ \[\epsilon\ =\ \epsilon(n)\ :=\ \begin{cases}e^{2}(\log\beta)/\beta&\text{ if }\ \ \beta\leq\log n/\log\log n,\\ 15(\log\ell)/\ell&\text{ if }\ \ \log n/\log\log n<\beta\leq\log^{2}n,\\ e/\ell&\text{ if }\ \ \ \beta>\log^{2}n.\end{cases}\] _(In particular, \(\epsilon=o(1)\), regardless of \(\beta\).) Finally, let_ \[k = k(n)\ :=\ \frac{\log n-2\log\log n-t/n}{\log\beta}\ \sim\ \frac{\log n}{\log\beta}\] \[k^{\prime} = k^{\prime}(n)\ :=\ \ell(1+4\epsilon^{1/4})\ \sim\ \frac{\log n}{\log\beta}\ \sim\ k.\] _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(K_{k}\) _at time_ \(t\)_._ 2. _There is no strategy that a.a.s. creates_ \(K_{k^{\prime}}\) _at time_ \(t\)_._ Unfortunately, when \(t=\Theta(n\log n)\), our bounds do not asymptotically match, but they are at most a multiplicative factor of \(2+o(1)\) away from each other, as we will show later (see Figure 2). Suppose that \(t=t(n)=\gamma n\log n\) for some \(\gamma\in(0,\infty)\). We will derive an asymptotic lower bound of \(k_{1}=\ell=\xi\gamma\log n\), where the constant \(\xi=\xi(\gamma)\in(1,\infty)\) is defined to be the unique solution to the following equation \[1-\xi\gamma(\log\xi-1)-\gamma=0, \tag{1}\] which is equivalent to \[\xi(\log\xi-1)=\frac{1-\gamma}{\gamma}=\frac{1}{\gamma}-1\in(-1,\infty) \tag{2}\] or to \[\xi\gamma=\frac{1-\gamma}{\log\xi-1}.\] (The left hand side of (2) is an increasing bijection from \((1,\infty)\) to \((-1,\infty)\), which proves the uniqueness of \(\xi\).) As we will see in Lemma 3.1(c) below, the quantity \(\ell\) defined here is asymptotic to the maximum number of squares on a single vertex. It is easy to see that \(\xi\) is a decreasing function of \(\gamma\). If \(\gamma\to 0\), then \(\xi\sim(1/\gamma)/\log(1/\gamma)\), which is consistent with Theorem 1.3 (applied with \(\beta=1/\gamma\to\infty\)). If \(\gamma=1\), then \(\xi=e\). More importantly, if \(\gamma=\gamma_{\ell}=(2\log 2-1)^{-1}\approx 2.59\), then \(\xi=2\). Finally, if \(\gamma\to\infty\), then \(\xi\to 1\). The constant \(\gamma_{\ell}\) will play a special role in the lower bound in the statement of our result.
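Since \(\xi\) is defined only implicitly by (1), it is convenient to evaluate it numerically; below is a minimal sketch (assuming SciPy is available), together with the two sanity checks \(\xi(1)=e\) and \(\xi(\gamma_{\ell})=2\) mentioned above:

```python
import math
from scipy.optimize import brentq

def xi(gamma):
    """Unique solution in (1, infinity) of xi * (log(xi) - 1) = 1/gamma - 1,
    i.e. equation (2); the left-hand side increases from -1 to infinity."""
    f = lambda x: x * (math.log(x) - 1.0) - (1.0 / gamma - 1.0)
    return brentq(f, 1.0 + 1e-12, 1e9)   # f < 0 near 1, f > 0 for large x

print(xi(1.0))                                 # e = 2.71828...
print(xi(1.0 / (2.0 * math.log(2.0) - 1.0)))   # gamma_l: returns 2.0
```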
We will show another asymptotic lower bound of \(k_{2}\sim 2\gamma\log n\) which is stronger than the previous one, provided that \(\gamma>\gamma_{\ell}\) (see Figure 1, right side). We will also show two upper bounds. The first one, \(k^{\prime}_{2}\sim 2\ell\sim 2\xi\gamma\log n\) is trivial but is best possible when \(\gamma\to\infty\) (recall that \(\xi\to 1\) as \(\gamma\to\infty\)). The second one, \(k^{\prime}_{1}\sim(1+2\sqrt{2}(e/\xi)^{1/4})\ell\), is stronger provided that \(\gamma<\gamma_{u}=(64e\log 64-1)^{-1}\approx 0.00139\) (see Figure 1, left side). Indeed, if \(\gamma<\gamma_{u}\), then \(\xi>64e\) and, as a consequence, \(1+2\sqrt{2}(e/\xi)^{1/4}<2\). This bound is best possible when \(\gamma\to 0\). **Theorem 1.4**.: _Suppose that \(t=t(n)=\gamma n\log n\) for some \(\gamma\in(0,\infty)\). Let \(\xi=\xi(\gamma)\in(1,\infty)\) be defined as in (1). Define_ \[\ell\ =\ \ell(n)\ :=\ \frac{1-\gamma}{\log\xi-1}\log n\ =\ \xi\gamma\log n.\] _Let_ \[k_{1} = k_{1}(n)\ :=\ \frac{(1-\gamma)\log n-2\log\log n}{\log\xi-1}\ \sim\ \ell\ =\ \xi\gamma\log n,\] \[k_{2} = k_{2}(n)\ :=\ 2\gamma\log n-4\sqrt{\gamma\log n\log\log n}\ \sim\ 2\gamma\log n,\] \[k^{\prime}_{1} = k^{\prime}_{1}(n)\ :=\ \bigg{(}1+\frac{3}{\log^{1/2}n}\bigg{)}\, \bigg{(}1+2\sqrt{2}(e/\xi)^{1/4}\bigg{)}\xi\gamma\log n\ \sim\ \Big{(}1+2\sqrt{2}(e/\xi)^{1/4}\bigg{)}\xi\gamma\log n,\] \[k^{\prime}_{2} = k^{\prime}_{2}(n)\ :=\ 2\xi\gamma\log n+1\ \sim\ 2\xi\gamma\log n.\] _Finally, let_ \[k = k(n)\ :=\ \max\{k_{1},k_{2}\}\ \sim\ \max\{\xi,2\}\gamma\log n,\] \[k^{\prime} = k^{\prime}(n)\ :=\ \min\{k^{\prime}_{1},k^{\prime}_{2}\}\ \sim\ \min\{1+2\sqrt{2}(e/\xi)^{1/4},2\}\xi\gamma\log n.\] _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(K_{k}\) _at time_ \(t\)_._ 2. _There is no strategy that a.a.s. creates_ \(K_{k^{\prime}}\) _at time_ \(t\)_._ Figure 1. The upper (\(k^{\prime}\)) and the lower (\(k\)) bound for the order of a largest complete graph: small (left figure) and large (right figure) values of \(\gamma\). Figure 2. The ratio between the upper (\(k^{\prime}\)) and the lower (\(k\)) bound for the order of a largest complete graph: small (left figure) and large (right figure) values of \(\gamma\). ### Main Results--Chromatic Number A **proper colouring** of a graph is a labeling of its vertices with colours such that no two vertices sharing the same edge have the same colour. The smallest number of colours in a proper colouring of a graph \(G=(V,E)\) is called its **chromatic number**, and it is denoted by \(\chi(G)\). Since this graph parameter is not well-defined for (multi)graphs with loops, we simply ignore them if they are present in \(G_{t}\). Potential parallel edges do not cause any problems but, of course, can be ignored too. The second monotone property we investigate in this paper is the property that \(\chi(G_{t})\geq k\) for some value of \(k=k(n)\). Trivially, the player can achieve this property by constructing \(K_{k}\) so earlier results immediately imply the corresponding lower bounds. We will prove matching upper bounds (up to a multiplicative factor of \(2+o(1)\)) yielding the following three results. Hence, in all regimes, the chromatic number is of order of the clique number. In the first regime, the ratio between the upper and the lower bound is \(2+o(1)\). **Theorem 1.5**.: _Suppose that \(t=t(n)\) is such that \(t=n^{1+o(1)}\) and \(t\ll n\log n\). Let \(\beta=\beta(n)\ :=\ n\log n/t\to\infty\), as \(n\to\infty\). 
Define \(\ell=\ell(n)\) and \(k=k(n)\) as in Theorem 1.3._ _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(G_{t}\) _such that_ \(\chi(G_{t})\geq k\)_._ 2. _There is no strategy that a.a.s. creates_ \(G_{t}\) _such that_ \(\chi(G_{t})\geq 2\ell+2\sim 2k\)_._ In the second regime, the ratio between the upper and the lower bound is at most \(2+o(1)\) (see Figure 3, right side). **Theorem 1.6**.: _Suppose that \(t=t(n)=\gamma n\log n\) for some \(\gamma\in(0,\infty)\). Let \(\xi=\xi(\gamma)\in(1,\infty)\) be defined as in (1). Define \(\ell=\ell(n)\) and \(k=k(n)\) as in Theorem 1.4._ _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(G_{t}\) _such that_ \(\chi(G_{t})\geq k\)_._ 2. _There is no strategy that a.a.s. creates_ \(G_{t}\) _such that_ \(\chi(G_{t})\geq 2\ell+2=\Theta(k)\)_._ Note that if \(\gamma\to\infty\) in the above result, then \(\xi\to 1\) and so both bounds are asymptotically tight: \(\chi(G_{t})\sim 2\gamma\log n\). **Theorem 1.7**.: _Suppose that \(t=t(n)\geq\omega n\log n\), where \(\omega=\omega(n)\) is any function that tends to infinity as \(n\to\infty\). Define \(k=k(n)\) and \(k^{\prime}=k^{\prime}(n)\) as in Observation 1.2._ _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(G_{t}\) _such that_ \(\chi(G_{t})\geq\min\{k,n\}\)_._ 2. _There is no strategy that a.a.s. creates_ \(G_{t}\) _such that_ \(\chi(G_{t})\geq k^{\prime}\sim k\)_._ Figure 3. The upper \((2\ell+2)\) and the lower \((k)\) bound for \(\chi(G_{t})\) (left figure) as well as the ratio between the two (right figure). ### Main Results--Independent Sets An **independent set** is a set of vertices in a graph, no two of which are adjacent. The **independence number** \(\alpha(G)\) of a graph \(G=(V,E)\) is the cardinality of a maximum independent set of vertices. As for the chromatic number, we simply ignore loops if they are present in \(G_{t}\). The last monotone property we investigate in this paper is the property that \(\alpha(G_{t})\leq k\) for a given value of \(k=k(n)\). We have a good understanding of the independence number of \(G_{t}\) when the average degree tends to infinity together with \(n\). **Theorem 1.8**.: _Suppose that \(t=t(n)\) is such that \(n\ll t\ll n^{2}\). Let \(\lambda=\lambda(n)=t/n\)._ _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(G_{t}\) _such that_ \[\alpha(G_{t})\ \leq\ \frac{n}{2\lambda}\left(1+\mathcal{O}(\sqrt{\log\lambda/\lambda})+\mathcal{O}(\lambda/n)\right)\ \sim\ \frac{n}{2\lambda}.\] 2. _There is no strategy that a.a.s. creates_ \(G_{t}\) _such that_ \[\alpha(G_{t})\ <\ \frac{n}{2\lambda+1}.\] Suppose now that the average degree is of the same order as the order of the graph, that is, \(t=t(n)\sim cn^{2}\). In this case, we determine the independence number precisely unless \(c=1/(2q)\) for some \(q\in\mathbb{N}\). If \(c=1/(2q)\) for some \(q\in\mathbb{N}\), then the upper and the lower bounds may be off by one. **Theorem 1.9**.: _Suppose that \(t=t(n)\sim cn^{2}\) for some constant \(c\in(0,1]\). Let \(\lambda=\lambda(n)=t/n\sim cn\)._ _Then, the following hold._ 1. _There exists a strategy that a.a.s. creates_ \(G_{t}\) _such that_ \[\alpha(G_{t})\ \leq\ \left\lfloor\,\frac{n}{2\lambda}\left(1+\mathcal{O}(\sqrt{\log\lambda/\lambda})\right)\right\rfloor.\] 2. _There is no strategy that a.a.s.
creates_ \(G_{t}\) _such that_ \[\alpha(G_{t})\ <\ \frac{n}{2\lambda+1}.\] At the other extreme, if \(t=t(n)\ll n\), then the number of vertices that are _not_ isolated is (deterministically) at most \(2t=o(n)\) and so \(\alpha(G_{t})\sim n\). Understanding \(\alpha(G_{t})\) seems to be more challenging when \(t\sim cn\) for some constant \(c\in(0,\infty)\). It is easy to see that \(\alpha(G_{t})=\Theta(n)\) but determining the constants hidden in the \(\Theta(\cdot)\) notation appears to be difficult. Indeed, in this regime we do not even know the behaviour of the independence number of the binomial random graph [2]. This random graph, a much easier model to analyze, is a special case of the semi-random process and can be easily mimicked by it. We provide a few natural upper and lower bounds later on, but more work needs to be done to gain a better understanding of this graph parameter. ### Structure of the Paper We first introduce some basic concentration tools and present known results about the semi-random process (Section 2). Complete graphs are investigated in Section 3, the chromatic number in Section 4, and independent sets in Section 5. We conclude the paper by summarizing open problems that are left to be investigated (Section 6). ## 2. Preliminaries ### Concentration Tools Let us first state a few specific instances of Chernoff's bound that we will find useful. Let \(X\in\operatorname{Bin}(n,p)\) be a random variable distributed according to a binomial distribution with parameters \(n\) and \(p\). Then, a consequence of **Chernoff's bound** (see e.g., [17, Theorem 2.1]) is that for any \(t\geq 0\) we have \[\mathbb{P}(X\geq\mathbb{E}X+t) \leq \exp\left(-\frac{t^{2}}{2(\mathbb{E}X+t/3)}\right) \tag{3}\] \[\mathbb{P}(X\leq\mathbb{E}X-t) \leq \exp\left(-\frac{t^{2}}{2\mathbb{E}X}\right). \tag{4}\] Moreover, let us mention that the bound holds in a more general setting as well, that is, for \(X=\sum_{i=1}^{n}X_{i}\) where \((X_{i})_{1\leq i\leq n}\) are independent variables and for every \(i\in[n]\) we have \(X_{i}\in\text{Bernoulli}(p_{i})\) with (possibly) different \(p_{i}\)-s (again, see e.g., [17] for more details). Finally, it is well-known that the Chernoff bound also applies to negatively correlated Bernoulli random variables [10]. ### The Differential Equation Method For one of our bounds, we will use the differential equation method (see [6] for a gentle introduction) to establish dynamic concentration of our random variables. The origin of the differential equation method stems from work done at least as early as 1970 (see Kurtz [21]), and it was developed into a very general tool by Wormald [26, 27] in the 1990s. Indeed, Wormald proved a "black box" theorem, which gives dynamic concentration so long as some relatively simple conditions hold. Warnke [24] recently gave a short proof of a somewhat stronger black box theorem. ### Literature Review Since the semi-random process is still a relatively new model, let us highlight a few results on the model. #### 2.3.1. Perfect Matchings In the very first paper [5], it was shown that the semi-random process is general enough to approximate (using suitable strategies) several well-studied random graph models, including the extensively studied \(k\)-out process (see, for example, Chapter 18 in [18]). In the \(k\)-out process, each vertex independently connects to \(k\) randomly selected vertices, which results in a random graph on \(n\) vertices and \(kn\) edges.
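For concreteness, the \(k\)-out process itself is straightforward to sample directly; below is a minimal sketch (an illustration only, with self-choices simply discarded):

```python
import random

def k_out(n, k, seed=0):
    """Sample the k-out random graph on {0, ..., n-1}: every vertex picks
    k uniformly random distinct vertices; self-choices are discarded."""
    rng = random.Random(seed)
    edges = set()
    for v in range(n):
        for u in rng.sample(range(n), k):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return edges
```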
Since the 2-out process has a perfect matching _a.a.s._[12], we immediately get that one can create a perfect matching in \((2+o(1))n\) rounds. By coupling the semi-random process with another random graph that is known to have a perfect matching _a.a.s._[19], the upper bound can be improved to \((1+2/e+o(1))n<1.73576n\). This bound was subsequently improved by investigating a fully adaptive algorithm [15]. The currently best upper bound is \(1.20524n\). On the other hand, the lower bound observed in [5] (\((\ln(2)+o(1))n>0.69314n\)) was improved as well, and now we know that one needs at least \(0.93261n\) rounds to create a perfect matching [15]. #### 2.3.2. Hamilton Cycles It is known that a.a.s. the famous 3-out process is Hamiltonian [7]. Since the semi-random process can be coupled with the \(k\)-out process [5] (for any \(k\in\mathbb{N}\)), we get that a.a.s. one may create a Hamilton cycle in \((3+o(1))n\) rounds. A new upper bound was obtained in [13] in terms of an optimal solution to an optimization problem whose value is believed, based on numerical evidence, to be \(2.61135n\). The upper bound of \((3+o(1))n\) obtained by simulating the 3-out process is _non-adaptive_. That is, the strategy does _not_ depend on the history of the semi-random process. The above-mentioned improvement proposed in [13] uses an adaptive strategy, but in a weak sense. The strategy consists of 4 phases, each lasting a linear number of rounds, and the strategy is adjusted _only_ at the end of each phase (for example, the player might identify vertices of low degree, and then focus on connecting circles to them during the next phase). In [14], a fully adaptive strategy was proposed that pays attention to the graph \(G_{t}\) and the position of \(u_{t}\) at every single step \(t\). As expected, such a strategy creates a Hamilton cycle substantially faster than the weakly adaptive or non-adaptive strategies, and it improves the upper bound from \(2.61135n\) to \(2.01678n\). One more trick was observed recently which further improves the upper bound to \(1.84887n\) [11]. After combining all of these ideas, the currently best upper bound is \(1.81701n\). Let us now move to the lower bounds. As observed in the initial paper introducing the semi-random process [5], if \(G_{t}\) has a Hamilton cycle, then \(G_{t}\) has minimum degree at least 2. Thus, a.a.s. it takes at least \((\ln 2+\ln(1+\ln 2)+o(1))n\geq 1.21973n\) rounds to achieve this property, as this is exactly how many rounds are needed to achieve minimum degree at least \(2\) [5]. In [13], the lower bound mentioned above was shown not to be tight; however, it was increased only by \(\varepsilon n=10^{-8}n\), which is numerically negligible. A better bound was obtained in [14], and now we know that a.a.s. it takes at least \(1.26575n\) rounds to create a Hamilton cycle. #### 2.3.3. Spanning Subgraphs Let us now discuss what is known about the property of containing a given spanning graph \(H\) as a subgraph. It was asked by Noga Alon whether for any bounded-degree \(H\), one can construct a copy of \(H\) _a.a.s._ in \(O(n)\) rounds. This question was answered positively in a strong sense in [4], in which it was shown that any graph with maximum degree \(\Delta\) can be constructed _a.a.s._ in \((3\Delta/2+o(\Delta))n\) rounds. They also proved that if \(\Delta=\omega(\log(n))\), then this upper bound improves to \((\Delta/2+o(\Delta))n\) rounds. Note that both of these upper bounds are asymptotic in \(\Delta\).
When \(\Delta\) is constant in \(n\), such as in both the perfect matching and Hamilton cycle settings, determining the optimal dependence on \(\Delta\) for the number of rounds needed to construct \(H\) remains open. Moving in this direction, \(k\)-factors and \(k\)-connectivity were studied recently in [20]. #### 2.3.4. A Few Other Directions Finally, let us mention that sharp thresholds were studied recently in [22]. In fact, the results from there apply to a more general class of processes including the semi-random process. Moreover, some interesting variants of the semi-random process have already been considered. In [8], a random spanning tree of \(K_{n}\) is presented to the player who needs to keep one of the edges. In [16], squares presented by the process follow a random permutation. In [23], a process is considered in which \(k\) random squares are presented and the player needs to select one of them before creating an edge. Hypergraphs are investigated in [3]. ## 3. Complete Graphs Let us start with a well-known observation, closely related to the coupon collector problem, that the number of squares on a given vertex is a random variable that is asymptotically distributed as a Poisson random variable. **Lemma 3.1**.: _Suppose that \(t=t(n)\) is such that \(t=n^{1+o(1)}\) and \(t=\mathcal{O}(n\log n)\). For any vertex \(v\in[n]\), let \(X_{v}=X_{v}(n)\) be the number of squares that land on \(v\) until time \(t\) and let_ \[\lambda\ :=\ \mathbb{E}[X_{v}]=t/n.\] _Then, the following properties hold._ 1. _For any vertex_ \(v\in[n]\) _and for any_ \(k\leq n^{1/3}\)_,_ \[\mathbb{P}(X_{v}=k)\ \sim\ \frac{\lambda^{k}}{k!}\exp(-\lambda).\] 2. _Suppose that_ \(t=t(n)=n\log n/\beta\) _for some_ \(\beta=\beta(n)\to\infty\) _as_ \(n\to\infty\)_. As in Theorem_ 1.3_, define_ \[\ell\ =\ \ell(n)\ :=\ \frac{\log n}{\log\beta-2\log\log\beta}\ \sim\ \frac{\log n}{\log\beta}\ =\ \frac{\beta}{\log\beta}\cdot\lambda.\] _Then, a.a.s._ \[\max\Big{(}X_{v}:v\in[n]\Big{)}\ \leq\ \ell\ \sim\ \frac{\beta}{\log\beta}\cdot\lambda.\] 3. _Suppose that_ \(t=t(n)=\gamma n\log n\) _for some_ \(\gamma\in(0,\infty)\)_. Let_ \(\xi=\xi(\gamma)\in(1,\infty)\) _be as in (_1_). As in Theorem_ 1.4_, define_ \[\ell\ =\ \ell(n)\ :=\ \frac{1-\gamma}{\log\xi-1}\log n\ =\ \xi\gamma\log n.\] _Then, a.a.s._ \[\max\Big{(}X_{v}:v\in[n]\Big{)}\ \leq\ \ell\ =\ \xi\lambda.\] Proof.: Note that for any \(k=\mathcal{O}(n^{1/3})\), \[\mathbb{P}(X_{v}=k) = \binom{t}{k}\left(\frac{1}{n}\right)^{k}\left(1-\frac{1}{n}\right)^{t-k}\] \[= \frac{t^{k}}{k!}\left(1+\mathcal{O}(k/t)\right)^{k}\left(\frac{1}{n}\right)^{k}\exp\left(-\frac{1}{n}+\mathcal{O}(1/n^{2})\right)^{t-k}\] \[= \frac{(t/n)^{k}}{k!}\left(1+\mathcal{O}(k^{2}/t)\right)\exp\left(-t/n+\mathcal{O}(k/n)+\mathcal{O}(t/n^{2})\right)\] \[\sim \frac{\lambda^{k}}{k!}\exp(-\lambda).\] The property (a) holds. To show property (b), first note that, since \(t=n^{1+o(1)}\), we get that \(\beta=n^{o(1)}\) (but also, by definition, \(\beta\to\infty\) as \(n\to\infty\)) and so \(\ell\to\infty\) as \(n\to\infty\).
It follows from part (a) and the Stirling's formula (\(\ell!\sim\sqrt{2\pi\ell}(\ell/e)^{\ell}\)) that for any vertex \(v\in[n]\), \[\mathbb{P}\Big{(}X_{v}=\ell\Big{)} \sim \frac{\lambda^{\ell}}{\ell!}e^{-\lambda}\leq\frac{\lambda^{\ell} }{\ell!}=o\left(\frac{(t/n)^{\ell}}{(\ell/e)^{\ell}}\right)=o\left(\left(\frac {e\log n}{\ell\beta}\right)^{\ell}\right)=o\left(\left((1+o(1))\frac{e\log \beta}{\beta}\right)^{\ell}\right)\] \[= o\left(\exp\left(-\ell\big{(}\log\beta-\log\log\beta-\mathcal{O }(1)\big{)}\right)\right)\] \[= o\left(\exp\left(-\log n\right)\right)=o(1/n).\] Now, note that for any \(\ell\leq k<n^{1/3}\) we have \[\frac{\mathbb{P}(X_{v}=k+1)}{\mathbb{P}(X_{v}=k)}\sim\frac{\lambda}{k+1}\leq \frac{\lambda}{\ell}\sim\frac{\log n/\beta}{\log n/\log\beta}=\frac{\log\beta }{\beta}=o(1),\] and so \(\mathbb{P}(\ell\leq X_{v}\leq n^{1/3})\sim\mathbb{P}(X_{v}=\ell)\). Finally, note that \[\mathbb{P}(X_{v}\geq n^{1/3})\leq\binom{t}{n^{1/3}}\left(\frac{1}{n}\right)^{ n^{1/3}}\leq\left(\frac{et}{n^{4/3}}\right)^{n^{1/3}}=\exp\Big{(}-\Theta(n^{1/3} \log n)\Big{)}=o(1/n).\] Combining all of these properties together we get that \[\mathbb{P}(X_{v}\geq\ell)\leq\mathbb{P}(\ell\leq X_{v}\leq n^{1/3})+\mathbb{P }(X_{v}\geq n^{1/3})=o(1/n), \tag{5}\] and the property (b) holds by the union bound over all vertices \(v\). The same argument can be used to show property (c). This time, for any vertex \(v\in[n]\), \[\mathbb{P}\Big{(}X_{v}=\ell\Big{)} \sim \frac{\lambda^{\ell}}{\ell!}e^{-\lambda}=o\left(\left(\frac{ \gamma\log n}{\ell/e}\right)^{\ell}\exp\Big{(}-\gamma\log n\Big{)}\right)\] \[= o\left(\exp\left(\ell\log(e/\xi)-\gamma\log n\Big{)}\right)\] \[= o\left(\exp\Big{(}-\ell(\log\xi-1)-\gamma\log n\Big{)}\right)\] \[= o\left(\exp\Big{(}-(1-\gamma)\log n-\gamma\log n\Big{)}\right)\] \[= o(1/n).\] Since \(\xi>1\), for any \(\ell\leq k<n^{1/3}\) we have \[\frac{\mathbb{P}(X_{v}=k+1)}{\mathbb{P}(X_{v}=k)}\sim\frac{\lambda}{k+1}\leq \frac{\lambda}{\ell}=\frac{1}{\xi}<1,\] and so \(\mathbb{P}(\ell\leq X_{v}\leq n^{1/3})=\mathcal{O}(\mathbb{P}(X_{v}=\ell))\). The conclusion is the same, \(\mathbb{P}(X_{v}\geq\ell)=o(1/n)\) (see (5)), and the property (c) holds by the union bound over all vertices \(v\). We independently consider lower and upper bounds for the order of complete graphs one may build during the semi-random process. ### Lower Bounds We present two algorithms that can be used to build complete graphs. Both can be used for any value of \(t\) but the first algorithm will turn out to be better than the second one, provided that \(t\leq\gamma_{\ell}n\log n\), where \(\gamma_{\ell}=(2\log 2-1)^{-1}\approx 2.59\). **Algorithm 3.2**.: _The algorithm consists of \(k-1\) phases. The first phase has only one round in which the player creates \(K_{2}\), a single edge._ _At the beginning of phase \(i\), \(i\geq 2\), a complete graph \(K_{i}\) on the vertex set \(v_{1},v_{2},\ldots,v_{i}\) is already constructed. Any other vertex \(v\) that is covered by \(s=s(v)\geq 1\) squares has the following property: for any \(j\in[s]\), the \(j\)-th square is connected to a circle on vertex \(v_{j}\). The player maintains this property by applying the following strategy. If a square lands on vertex \(v\notin\{v_{1},v_{2},\ldots,v_{i}\}\) that is already covered by \(s\) squares, then she connects \(v\) to vertex \(v_{s+1}\). (If \(v=v_{j}\) for some \(j\in[i]\), then she plays arbitrarily--that edge is ignored anyway.) 
The \(i\)-th phase ends when a square lands on a vertex with \(i-1\) squares and a copy of a complete graph \(K_{i+1}\) is created._ _The algorithm ends at the end of phase \(k-1\) when a copy of \(K_{k}\) is created._ The analysis of Algorithm 3.2 proves Theorem 1.3(a) and the first part of Theorem 1.4(a), namely, the lower bound of \(k_{1}\). Proof of Theorem 1.3(a) and the first part of Theorem 1.4(a).: Let us first prove Theorem 1.3(a). Recall that \[k=k(n)=\frac{\log n-2\log\log n-\lambda}{\log\beta}\sim\frac{\log n}{\log\beta}=\frac{\beta}{\log\beta}\cdot\lambda,\] where \(\lambda=\lambda(n)=t/n\ll\log n\) is the expected number of squares on a given vertex and \(\beta=\beta(n)=n\log n/t\to\infty\), as \(n\to\infty\). Note that \(\beta=n^{o(1)}\) (but also, by definition, \(\beta\to\infty\) as \(n\to\infty\)) and so \(k\to\infty\) as \(n\to\infty\). On the other hand, since \(\beta\to\infty\) as \(n\to\infty\), we get that \(k=o(\log n)\). Suppose that the player uses Algorithm 3.2 to play the game against the semi-random process. We will show that at time \(t\) a.a.s. there are at least \(k\) vertices that are covered by at least \(k\) squares. It is easy to see that the algorithm has to end before that round. This will imply the lower bound of \(k\). Let \(Y=Y(t)\) be the number of vertices that are covered by \(k\) squares at time \(t\). Using Lemma 3.1 and Stirling's formula (\(k!\sim\sqrt{2\pi k}(k/e)^{k}\)), we get that \[\mathbb{E}[Y] \sim n\cdot\frac{\lambda^{k}}{k!}\exp(-\lambda) \tag{6}\] \[= \Theta(k^{-1/2})\cdot n\cdot\left(\frac{e\lambda}{k}\right)^{k}\exp(-\lambda)\] \[= \Theta(k^{-1/2})\cdot\exp\left(\log n-k\log\left(\frac{k}{e\lambda}\right)-\lambda\right).\] Since \(k\leq\frac{\beta}{\log\beta}\cdot\lambda\), we get that \(\frac{k}{e\lambda}\leq\frac{\beta}{e\log\beta}\leq\beta\) and so \[\mathbb{E}[Y] = \Omega(k^{-1/2})\cdot\exp\left(\log n-k\log\beta-\lambda\right)\] \[= \Omega(k^{-1/2})\cdot\exp\left(\log n-(\log n-2\log\log n-\lambda)-\lambda\right)\] \[= \Omega\left(\frac{\log^{2}n}{k^{1/2}}\right)\gg k^{3/2}\gg k,\] as \(k=o(\log n)\). Finally, note that events "vertex \(v\) is covered by \(k\) squares" associated with different vertices are negatively correlated. As a result, it follows immediately from Chernoff's bound (4) and the comment right after it that a.a.s. \(Y\geq k\). (Alternatively, one could use the second moment method to get the same conclusion.) As argued above, this finishes the proof of Theorem 1.3(a). The same argument implies the lower bound of \(k_{1}\) in Theorem 1.4(a). Recall that this time \(\lambda=\lambda(n)=t/n=\gamma\log n\) and \[k_{1}=k_{1}(n)=\frac{(1-\gamma)\log n-2\log\log n}{\log\xi-1}\sim\frac{1-\gamma}{\log\xi-1}\log n=\xi\gamma\log n.\] Computations performed in (6) still apply. Since \(k_{1}\leq\xi\gamma\log n\), we get that \[\mathbb{E}[Y] = \Omega(k_{1}^{-1/2})\cdot\exp\left(\log n-k_{1}\log(\xi/e)-\gamma\log n\right)\] \[= \Omega(k_{1}^{-1/2})\cdot\exp\left(\log n-((1-\gamma)\log n-2\log\log n)-\gamma\log n\right)\] \[= \Omega\left(\frac{\log^{2}n}{k_{1}^{1/2}}\right)=\Omega\left(k_{1}^{3/2}\right)\gg k_{1},\] as \(k_{1}=\Theta(\log n)\). As before, we conclude that a.a.s. \(Y\geq k_{1}\) and we are done. To prove the second part of Theorem 1.4(a) (that is, the lower bound of \(k_{2}\)), we need to analyze the second algorithm which performs better when \(t\geq\gamma_{\ell}n\log n\). **Algorithm 3.3**.: _Suppose that \(k=2\ell+1\) for some \(\ell\in\mathbb{N}\).
Before the game starts, select arbitrarily \(k\) vertices \(v_{0},v_{1},\ldots,v_{2\ell}\) from the vertex set \([n]\). For each \(i\in[2\ell]\cup\{0\}\) and for each \(j\in[\ell]\), the player connects the \(j\)th square landing on vertex \(v_{i}\) with vertex \(v_{(i+j)\pmod{2\ell+1}}\). The algorithm ends when each vertex \(v_{i}\) is covered by at least \(\ell\) squares, and a copy of \(K_{k}\) is created._ Clearly, the player could partition the vertex set into \(n/k\) sets and then use a slightly more sophisticated algorithm in which she tries to simultaneously create \(n/k\) complete graphs, by applying Algorithm 3.3 independently to each part. Such an algorithm would finish once at least one set induces a complete graph. We do not use or analyze it, as it would give an asymptotically negligible improvement on the lower bound. However, it will be useful later on once we analyze the chromatic number. Proof of the second part of Theorem 1.4(a).: Recall that \(t=t(n)=\gamma n\log n\) for some \(\gamma\in(0,\infty)\) and \[k_{2}=k_{2}(n)=2\gamma\log n-4\sqrt{\gamma\log n\log\log n}\sim 2\gamma\log n.\] Let \(k\) be the smallest odd integer that is at least \(k_{2}\) and assume \(k=2\ell+1\) for some \(\ell\in\mathbb{N}\). Clearly, \(k=k_{2}+\mathcal{O}(1)\). Suppose that the player uses Algorithm 3.3 to play the game against the semi-random process. For any \(i\in[2\ell]\cup\{0\}\), let \(X_{i}\) be the number of squares on \(v_{i}\) at time \(t\). Note that \(X_{i}\in\operatorname{Bin}(t,1/n)\) with \(\mathbb{E}[X_{i}]=\gamma\log n\). It follows from Chernoff's bound (4) that \[\mathbb{P}(X_{i}<\ell) = \mathbb{P}\Big{(}X_{i}\leq\mathbb{E}[X_{i}]-2\sqrt{\gamma\log n\log\log n}+\mathcal{O}(1)\Big{)}\] \[\leq \exp\left(-\frac{(2\sqrt{\gamma\log n\log\log n}+\mathcal{O}(1))^{2}}{2\gamma\log n}\right)\] \[= \exp\Big{(}-2\log\log n+o(1)\Big{)}\] \[\sim (\log n)^{-2}.\] Hence, by the union bound over all vertices \(v_{i}\), the algorithm does not finish by time \(t\) with probability at most \((2\ell+1)\cdot\mathcal{O}((\log n)^{-2})=\mathcal{O}((\log n)^{-1})=o(1)\). Hence the desired bound holds a.a.s., and the proof is finished.

### Upper Bounds

Suppose first that \(t=t(n)=n\log n/\beta\) for some \(\beta=\beta(n)\to\infty\) as \(n\to\infty\). Lemma 3.1(b) shows that a.a.s. the number of squares on any vertex is at most \(\ell\sim\log n/\log\beta\). It implies immediately that one cannot construct a complete graph of size larger than \(2\ell+1\). But the truth is that a.a.s. it is only possible to build a complete graph of size asymptotic to \(\ell\). The main difficulty in showing this lies in the fact that the player can easily create vertices of large degree by placing a large number of circles on some vertices, so large degrees are certainly not the bottleneck for this problem. The key observation used in the proof is that in order to create a large complete graph, many squares need to land on vertices that are already covered by circles, but this happens quite rarely. Suppose that vertex \(u\) is covered by \(k\) squares, \(u_{t_{1}},u_{t_{2}},\ldots,u_{t_{k}}\), for some \(k\leq\ell\). We will first estimate the number of squares that land "on top" of the associated circles \(v_{t_{1}},v_{t_{2}},\ldots,v_{t_{k}}\). Formally, we will say that \((v_{t_{i}},u_{s})\) is a **rare pair** if \(v_{t_{i}}\) and \(u_{s}\) cover the same vertex and \(s>t_{i}\). Note, in particular, that when distinct squares fall on the same circle, those are still counted as distinct rare pairs.
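To make the definition concrete, here is a small Python simulation (ours, purely illustrative) of the semi-random process that counts rare pairs under an arbitrary strategy; in each round a square lands on a uniformly random vertex, the player answers with a circle, and the new edge joins the two:

```python
import random

def count_rare_pairs(n, t, strategy, seed=0):
    """Run t rounds of the semi-random process on vertex set [n].
    In round s, a square lands on a uniformly random vertex u, the player
    places a circle on v = strategy(s, u), and the edge uv is created.
    A rare pair is an (earlier circle, later square) covering the same
    vertex, so the square of round s contributes one rare pair per circle
    already sitting on its vertex."""
    rng = random.Random(seed)
    circles = [0] * n           # circles[w] = circles placed on w so far
    rare = 0
    for s in range(t):
        u = rng.randrange(n)    # the square of round s
        rare += circles[u]      # pairs with every circle placed before s
        circles[strategy(s, u)] += 1
    return rare

# Example: always put the circle on the vertex after the square's vertex.
print(count_rare_pairs(n=10**4, t=10**5, strategy=lambda s, u: (u + 1) % 10**4))
```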
The name is justified by the next lemma which shows that on average only \(o(\ell)\) squares land on a given circle. **Lemma 3.4**.: _Suppose that \(t=t(n)\) is such that \(t=n^{1+o(1)}\) and \(t\ll n\log n\). Let \(\beta=\beta(n)=n\log n/t\to\infty\), as \(n\to\infty\). Let \(\ell=\ell(n)\sim\log n/\log\beta\) and \(\epsilon=\epsilon(n)=o(1)\) be defined as in Theorem 1.3. Then, the following property holds a.a.s.: for any vertex \(u\),_ (a) \(u\) _is covered by_ \(k\leq\ell\) _squares,_ \(u_{t_{1}},u_{t_{2}},\ldots,u_{t_{k}}\)_,_ (b) _the associated circles,_ \(v_{t_{1}},v_{t_{2}},\ldots,v_{t_{k}}\)_, belong to less than_ \(\ell^{2}\epsilon=o(\ell^{2})\) _rare pairs._ Proof.: Property (a) follows immediately from Lemma 3.1(b). Since we aim for a statement that holds a.a.s., we may assume that property (a) holds when proving property (b). Note that if \((v_{t_{i}},u_{s})\) forms a rare pair, then the square \(u_{s}\) needs to arrive after the circle \(v_{t_{i}}\) is placed (that is, \(s>t_{i}\)). Hence, the probability that a given vertex \(u\) fails to satisfy property (b) does not depend on the strategy of the player and can be upper bounded by \[p\ :=\ \binom{t}{\ell^{2}\epsilon}\left(\frac{\ell}{n}\right)^{\ell^{2}\epsilon}\leq\left(\frac{et}{\ell^{2}\epsilon}\cdot\frac{\ell}{n}\right)^{\ell^{2}\epsilon}=\left(\frac{e\log n}{\ell\beta\epsilon}\right)^{\ell^{2}\epsilon}=\exp\left(-\ell^{2}\epsilon\log\left(\frac{\ell\beta\epsilon}{e\log n}\right)\right).\] Suppose first that \(\beta\leq\log n/\log\log n\). Then, since \(\ell\geq\log n/\log\beta\), we get that \(\epsilon=e^{2}\log\beta/\beta\geq e^{2}\log n/(\beta\ell)\) and so \[p\leq\exp\left(-\ell^{2}\epsilon\right)\leq\exp\left(-e^{2}\ell\log n/\beta\right).\] Since \(\beta\leq\log n/\log\log n\) (so, in particular, \(\log\beta\leq\log\log n\)), \[p\leq\exp\left(-e^{2}\ell\log\log n\right)\leq\exp\left(-e^{2}\ell\log\beta\right)\leq\exp\left(-e^{2}\log n\right)=o(1/n).\] Suppose now that \(\log n/\log\log n<\beta\leq\log^{2}n\). In particular, \((1+o(1))\log\log n\leq\log\beta\leq 2\log\log n\). Then, since \[\log\ell\geq\log\log n-\log\log\beta=(1+o(1))\log\log n,\] we get that \[\epsilon=\frac{15\log\ell}{\ell}\geq\frac{2e^{2}\log\log n}{\ell}\geq\frac{e^{2}\log\beta}{\ell}=\frac{e^{2}\ell\log\beta}{\ell^{2}}\geq\frac{e^{2}\log n}{\ell^{2}}.\] It follows that \[p\leq\exp\left(-e^{2}\log n\cdot\log\left(\frac{e\beta}{\ell}\right)\right)=\exp\left(-e^{2}\log n\cdot\log\left((e+o(1))\frac{\beta\log\beta}{\log n}\right)\right).\] Since \(\beta\geq\log n/\log\log n\) and \(\log\beta\geq(1+o(1))\log\log n\), we conclude that \[p\leq\exp\left(-e^{2}\log n\cdot\log\big{(}e+o(1)\big{)}\right)=\exp\left(-(e^{2}+o(1))\log n\right)=o(1/n).\] Finally, suppose that \(\beta>\log^{2}n\). This time \(\epsilon=e/\ell\) and, since \(\sqrt{\beta}>\log n\), \[p \leq \exp\left(-e\ell\log\left(\frac{\beta}{\log n}\right)\right)\leq\exp\left(-e\ell\log\left(\sqrt{\beta}\right)\right)=\exp\Big{(}-(e/2)\ell\log\beta\Big{)}\] \[= \exp\Big{(}-(e/2)\log n\Big{)}=o(1/n).\] In all scenarios, the conclusion follows by the union bound over all vertices. For a given graph \(G=(V,E)\) and any set of vertices \(S\subseteq V\), we will use \(G[S]\) to denote the graph **induced** by set \(S\), that is, \(G[S]=(S,E^{\prime})\) and edge \(uv\in E\) is in \(E^{\prime}\) if and only if \(u\in S\) and \(v\in S\). The above lemma immediately implies the following useful corollary.
**Corollary 3.5**.: _Suppose that \(t=t(n)\) is such that \(t=n^{1+o(1)}\) and \(t\ll n\log n\). Let \(\ell=\ell(n)\) and \(\epsilon=\epsilon(n)\) be defined as in Lemma 3.4. Then, a.a.s. for any set \(S\subseteq[n]\), \(G_{t}[S]\) has at most \(|S|\ell^{2}\epsilon\) rare pairs._ Our next observation shows that one can remove only a few edges from \(G_{t}[S]\) in order to destroy all rare pairs. **Lemma 3.6**.: _Suppose that \(t=t(n)\) is such that \(t=n^{1+o(1)}\) and \(t\ll n\log n\). Let \(\ell=\ell(n)\) and \(\epsilon=\epsilon(n)\) be defined as in Theorem 1.3. Then, a.a.s. for any set \(S\subseteq[n]\), one can remove at most \(2|S|\ell\sqrt{\epsilon}\) edges from \(G_{t}[S]\) to remove all rare pairs._ Proof.: Suppose that some vertex \(w\) yields \(r\) rare pairs. Let \[R_{w}=\{(v_{t_{i}},u_{s_{i}}):v_{t_{i}}=u_{s_{i}}=w\text{ and }s_{i}>t_{i}\}\] be the set of rare pairs associated with vertex \(w\), let \(V_{w}=\{v_{t_{i}}:(v_{t_{i}},u_{s_{i}})\in R_{w}\text{ for some }u_{s_{i}}\}\) be the set of the associated circles, and let \(U_{w}=\{u_{s_{i}}:(v_{t_{i}},u_{s_{i}})\in R_{w}\text{ for some }v_{t_{i}}\}\) be the set of the associated squares. We will first show that one can remove at most \(2\sqrt{r}\) edges to destroy all rare pairs from \(R_{w}\). If \(|V_{w}|\leq\sqrt{r}\), then one can remove all edges associated with circles from \(V_{w}\), which clearly destroys all rare pairs from \(R_{w}\). Similarly, if \(|U_{w}|\leq\sqrt{r}\), then one can achieve the same by removing all edges associated with squares from \(U_{w}\). We may then assume that \(|V_{w}|>\sqrt{r}\) and that \(|U_{w}|>\sqrt{r}\). Let \(\hat{s}\) be the largest integer from \([t]\) with the property that \(\hat{U}_{w}=\{u_{s_{i}}\in U_{w}:s_{i}\geq\hat{s}\}\) has cardinality \(\sqrt{r}\). In other words, \(\hat{U}_{w}\subseteq U_{w}\) consists of the \(\sqrt{r}\) "youngest" squares from \(U_{w}\). Similarly, let \(\hat{t}\) be the smallest integer from \([t]\) with the property that \(\hat{V}_{w}=\{v_{t_{i}}\in V_{w}:t_{i}\leq\hat{t}\}\) has cardinality \(\sqrt{r}\). In other words, \(\hat{V}_{w}\subseteq V_{w}\) consists of the \(\sqrt{r}\) "oldest" circles from \(V_{w}\). Let us remove all edges associated with squares from \(\hat{U}_{w}\) and all edges associated with circles from \(\hat{V}_{w}\), for a total of at most \(2\sqrt{r}\) edges. We claim that this procedure removes all rare pairs from \(R_{w}\). For a contradiction, suppose that a rare pair \((v_{t_{i}},u_{s_{i}})\) is not removed. In particular, \(t_{i}>\hat{t}\) and \(s_{i}<\hat{s}\). On the other hand, by the definition of being a rare pair, \(t_{i}<s_{i}\). We conclude that \(\hat{t}<\hat{s}\). But it means that each circle from \(\hat{V}_{w}\) forms a rare pair with every square from \(\hat{U}_{w}\) for a total of \(\sqrt{r}\cdot\sqrt{r}=r\) rare pairs. With the additional rare pair \((v_{t_{i}},u_{s_{i}})\), there are at least \(r+1\) rare pairs which gives us the desired contradiction, and the claim is proved: one can remove at most \(2\sqrt{r}\) edges to destroy all rare pairs from \(R_{w}\). Consider any set \(S\subseteq[n]\). By Corollary 3.5, since we aim for a statement that holds a.a.s., we may assume that \(G_{t}[S]\) yields at most \(|S|\ell^{2}\epsilon\) rare pairs, that is, \(\sum_{w\in S}r_{w}\leq|S|\ell^{2}\epsilon\), where \(r_{w}\) is the number of rare pairs associated with vertex \(w\).
By the above observation, one may remove at most \(\sum_{w\in S}2\sqrt{r_{w}}\) edges from \(G_{t}[S]\) to destroy all of them. Clearly, the optimization problem \[\max\sum_{w\in S}2\sqrt{r_{w}}\hskip 28.452756pt\text{subject to}\hskip 28.452756pt\sum_{w\in S}r_{w}\leq|S|\ell^{2}\epsilon\text{ and }r_{w}\geq 0\text{ for all }w\in S\] attains its maximum when all \(r_{w}\)'s are equal. We conclude that it is possible to remove at most \[\sum_{w\in S}2\sqrt{r_{w}}\leq\sum_{w\in S}2\sqrt{\ell^{2}\epsilon}=2|S|\ell\sqrt{\epsilon}\] edges from \(G_{t}[S]\) to destroy all rare pairs, and the proof of the lemma is finished. Now, we are ready to finish the proof of Theorem 1.3. Proof of Theorem 1.3(b).: Let us apply _any_ strategy to play the game. For a contradiction, suppose that there exists a \(K_{k^{\prime}}\) at time \(t\), where \(k^{\prime}=\ell(1+4\epsilon^{1/4})\) as in the statement of the theorem. Recall that \(\ell\sim\log n/\log\beta\) and \(\epsilon=\epsilon(n)=o(1)\). It is also straightforward to check that \(\ell\epsilon^{1/4}\to\infty\) as \(n\to\infty\). Let \(S\subseteq[n]\) be any set of cardinality \(k^{\prime}\) that induces a complete graph. Since we aim for a statement that holds a.a.s., we may apply Lemma 3.6. It follows that one can remove at most \(2k^{\prime}\ell\epsilon^{1/2}<4\ell^{2}\epsilon^{1/2}\) edges from \(G_{t}[S]\) in order to destroy all rare pairs. Note that after that operation, set \(S\) satisfies the following properties: (a) \(S\) has cardinality \(k^{\prime}=\ell(1+4\epsilon^{1/4})\), (b) the number of edges induced by \(S\) (and so also the number of squares) is more than \[\binom{k^{\prime}}{2}-4\ell^{2}\epsilon^{1/2}=\binom{\ell}{2}+4\ell^{2}\epsilon^{1/4}+\binom{4\ell\epsilon^{1/4}}{2}-4\ell^{2}\epsilon^{1/2}>\binom{\ell}{2}+4\ell^{2}\epsilon^{1/4},\] (note that the first equality is due to a simple fact that \(\binom{a+b}{2}=\binom{a}{2}+ab+\binom{b}{2}\) for \(a,b\in\mathbb{N}\)) (c) \(S\) induces no rare pair, (d) there are at most \(\ell\) squares on any vertex. Let us now remove all edges induced by \(S\) and put them back, one by one, following the order they appeared during the semi-random process. We will distinguish \(k^{\prime}\) phases. The first phase starts when the circle associated with the first edge lands on vertex \(v_{1}\in S\). Since there are no rare pairs (property (c)), no square will land on \(v_{1}\) but other circles might end up there. The first phase continues as long as circles continue landing on \(v_{1}\). The second phase starts when some circle lands on vertex \(v_{2}\neq v_{1}\). Note that, since all edges introduced during the first phase are edges of the graph and all the circles cover vertex \(v_{1}\), all squares cover unique vertices at the beginning of the second phase. In particular, there is at most one square on vertex \(v_{2}\) when the first circle is placed on \(v_{2}\). In general, phase \(i\) starts when some circle is placed on vertex \(v_{i}\not\in\{v_{1},v_{2},\ldots,v_{i-1}\}\). Arguing as above, we conclude that at that point there are at most \(i-1\) squares on \(v_{i}\), and no more squares can land on it in the future. This is a useful upper bound for the number of squares, provided that \(i\leq\ell\). For larger values of \(i\) (that is, \(\ell<i\leq k^{\prime}=\ell(1+4\epsilon^{1/4})\)), we may apply property (d). We conclude that the number of squares is at most \(\binom{\ell}{2}+4\epsilon^{1/4}\ell\cdot\ell\), which contradicts property (b).
This finishes the proof of the theorem. Suppose now that \(t=t(n)=\gamma n\log n\) for some \(\gamma\in(0,\infty)\). The upper bound of \(k^{\prime}_{2}\) in Theorem 1.4(b) is trivial and the argument above can be easily adjusted to show the upper bound of \(k^{\prime}_{1}\). We carefully explain the adjustment needed below. Proof of Theorem 1.4(b).: First, let us note that the upper bound of \(k^{\prime}_{2}=2\ell+1\) is indeed trivial. For a contradiction, suppose that some set \(S\) of cardinality at least \(2\ell+2\) induces a complete graph on \((2\ell+2)(2\ell+1)/2\) edges (and so it induces that many squares). Hence, by an averaging argument, there is a vertex with at least \((2\ell+1)/2>\ell\) squares which contradicts Lemma 3.1(c). It remains to prove the upper bound of \(k^{\prime}_{1}\). As in Lemma 3.4, we first need to upper bound the number of rare pairs generated by one vertex. The probability that a given vertex \(u\) generates at least \(c\ell^{2}\) rare pairs is at most \[p := \binom{t}{c\ell^{2}}\left(\frac{\ell}{n}\right)^{c\ell^{2}}\leq\left(\frac{e\gamma n\log n}{c\ell^{2}}\cdot\frac{\ell}{n}\right)^{c\ell^{2}}=\left(\frac{e\gamma\log n}{c\ell}\right)^{c\ell^{2}}=\left(\frac{e}{c\xi}\right)^{c\ell^{2}}\] \[= \exp\left(-c(\xi\gamma\log n)^{2}\log\left(\frac{c\xi}{e}\right)\right)\] \[= \exp\left(-\Theta(\log^{2}n)\log\left(1+\frac{1}{\sqrt{\log n}}\right)\right)\] \[= \exp\left(-\Theta(\log^{3/2}n)\right)=o(1/n),\] when \(c=(e/\xi)(1+\log^{-1/2}n)\). (Note that \(\log(1+x)=x+\mathcal{O}(x^{2})\).) By the union bound over all vertices \(u\) we get that a.a.s. no vertex generates at least \(c\ell^{2}\) rare pairs. We conclude that a.a.s. for any set \(S\subseteq[n]\), \(G_{t}[S]\) induces at most \(c\ell^{2}|S|\) rare pairs (the counterpart of Corollary 3.5). Lemma 3.6 still applies and we get that a.a.s. for any set \(S\), one can remove at most \(2\sqrt{c}\ell|S|\) edges from \(G_{t}[S]\) to remove all rare pairs. We finish the proof as before. For a contradiction, suppose that there exists a complete graph on \(k^{\prime}=\ell(1+bc^{1/4})\) vertices, where \(b\) is a constant that will be properly tuned soon. We may assume that \(1+bc^{1/4}\leq 2\); otherwise, the trivial upper bound of \(k^{\prime}_{2}=2\ell+1\) applies. A.a.s. for any set \(S\) of cardinality \(k^{\prime}\), after destroying all rare pairs from \(G_{t}[S]\), the number of edges left is at least \[\binom{k^{\prime}}{2}-2\sqrt{c}\ell k^{\prime} = \binom{\ell}{2}+bc^{1/4}\ell^{2}+\binom{bc^{1/4}\ell}{2}-2\sqrt{c}\ell k^{\prime}\] \[> \binom{\ell}{2}+bc^{1/4}\ell^{2}+\frac{b^{2}c^{1/2}\ell^{2}}{2}\left(1-\mathcal{O}\left(\frac{1}{\ell}\right)\right)-4\sqrt{c}\ell^{2}\] \[= \binom{\ell}{2}+bc^{1/4}\ell^{2}+\frac{c^{1/2}\ell^{2}}{2}\left(b^{2}-8-\mathcal{O}\left(\frac{1}{\log n}\right)\right)\] \[> \binom{\ell}{2}+bc^{1/4}\ell^{2},\] when, for example, \(b=\sqrt{8}(1+\log^{-1/2}n)\). As before, we get the desired contradiction which implies the upper bound of \[k^{\prime}=\Big{(}1+bc^{1/4}\Big{)}\ell = \Big{(}1+2\sqrt{2}(e/\xi)^{1/4}(1+\log^{-1/2}n)^{2}\Big{)}\ell\] \[\leq \Big{(}1+2\sqrt{2}(e/\xi)^{1/4}\Big{)}\ell\Big{(}1+3\log^{-1/2}n\Big{)}=k^{\prime}_{1}.\] This finishes the proof of the theorem.

## 4. Chromatic Number

Parts (a) in Theorems 1.5, 1.6 and 1.7 follow immediately from our results for complete graphs, namely, parts (a) in Theorems 1.3 and 1.4, and Observation 1.2. Parts (b) will follow from upper bounds for the number of squares that land on vertices, Lemma 3.1.
Let us start with some useful basic facts about degeneracy of graphs. Recall that for a given \(d\in\mathbb{N}\), a graph \(H\) is \(d\)**-degenerate** if every sub-graph \(H^{\prime}\subseteq H\) has minimum degree \(\delta(H^{\prime})\leq d\) (where the minimum degree of a graph is the minimum degree over all vertices). The **degeneracy** of \(H\) is the smallest value of \(d\) for which \(H\) is \(d\)-degenerate. The \(d\)**-core** of a graph \(H\) is the maximal induced subgraph \(H^{\prime}\subseteq H\) with minimum degree \(\delta(H^{\prime})\geq d\). (Note that the \(d\)-core is well defined, though it may be empty. Indeed, if \(S\subseteq V(H)\) and \(T\subseteq V(H)\) induce sub-graphs with minimum degree at least \(d\), then the same is true for \(S\cup T\).) If \(H\) has degeneracy \(d\), then it has a non-empty \(d\)-core. Indeed, by definition, \(H\) is _not_ \((d-1)\)-degenerate and so it has a sub-graph \(H^{\prime}\) with \(\delta(H^{\prime})\geq d\). Moreover, it follows immediately from the definition that if \(H\) has degeneracy \(d\), then there exists a permutation of the vertices of \(H\), \((v_{1},v_{2},\ldots,v_{k})\), such that for each \(\ell\in[k]\) vertex \(v_{\ell}\) has degree at most \(d\) in the sub-graph induced by the set \(\{v_{1},v_{2},\ldots,v_{\ell}\}\). Indeed, one can define such a permutation recursively. Let \(v_{k}\) be any vertex in \(H\) that is of degree at most \(d\). Then, let \(v_{k-1}\) be any vertex of degree at most \(d\) in the graph \(H^{\prime}\) induced by the set \(\{v_{1},v_{2},\ldots,v_{k-1}\}\), etc. The above properties imply a useful reformulation of degeneracy: a graph \(H\) is \(d\)-degenerate if and only if the edges of \(H\) can be oriented to form a directed acyclic graph \(D\) with maximum out-degree at most \(d\). In other words, there exists a permutation of the vertices of \(H\), \((v_{1},v_{2},\ldots,v_{k})\), such that for every directed edge \((v_{i},v_{j})\in D\) we have \(i>j\) and the out-degrees are at most \(d\). As a consequence, we get another well-known but useful property: for any \(d\)-degenerate graph \(H\) we have \(\chi(H)\leq d+1\). Indeed, one may colour vertices of \(H\) greedily using the permutation \((v_{1},v_{2},\ldots,v_{k})\): when \(v_{\ell}\) is coloured, at most \(d\) of its neighbours are coloured already. With these properties, we may easily prove Theorems 1.5 and 1.6. Proof of Theorems 1.5(b) and 1.6(b).: Fix an arbitrary strategy for the player, and consider graph \(G_{t}\) generated at time \(t\). Let \(X_{v}=X_{v}(n)\) be the number of squares that land on \(v\) until time \(t\). By Lemma 3.1, we know that a.a.s. \[\max\Big{(}X_{v}:v\in[n]\Big{)}\ \leq\ \ell.\] Since we aim for a statement that holds a.a.s., we may assume that this property is satisfied. Let \(S\subseteq[n]\) be any subset of vertices. Since each edge connects a square with a circle, \(G_{t}[S]\) induces at most \(|S|\ell\) edges and so the average degree in \(G_{t}[S]\) is at most \(2\ell\). It follows that \(\delta(G_{t}[S])\leq 2\ell\) for any \(S\subseteq[n]\) and so \(G_{t}\) is \(2\ell\)-degenerate. The above observation implies that \(\chi(G_{t})\leq 2\ell+1\) which finishes the proof of the theorem.

## 5. Independent Sets

### Upper Bound

We will first prove an upper bound that not only implies Theorem 1.8(a) and Theorem 1.9(a) but also provides a good upper bound when the average degree of \(G_{t}\) is a constant, especially when that constant is large.
Having said that, we do not tune our argument to get the best possible bound but rather aim for an easy argument that provides the upper bound that matches the lower bound when the average degree tends to infinity as \(n\to\infty\). **Lemma 5.1**.: _Suppose that \(t=t(n)=\Omega(n)\). Let \(\lambda=\lambda(n)=t/n\), let \(\ell=\ell(n)=\lambda-\sqrt{5\lambda\log\lambda}\), and let \(k=k(n)=2\lceil\ell\rceil+1\). Finally, let_ * \(u=u(n)=\lceil n/k\rceil=\left\lceil\frac{n}{2\lambda}(1+\mathcal{O}(\sqrt{ \log\lambda/\lambda}))\right\rceil\)_, if_ \(\lambda\gg n^{2/5}\)_,_ * \(u=u(n)=\lceil n/k\rceil(1+k^{2}\sqrt{\log\lambda}/\lambda^{5/2})=\frac{n}{2 \lambda}(1+\mathcal{O}(\sqrt{\log\lambda/\lambda}))\)_, if_ \(1\ll\lambda=\mathcal{O}(n^{2/5})\)_,_ * \(u=u(n)=n\left(\frac{1}{2\lceil\ell\rceil+1}+\frac{2\lceil\ell\rceil}{\lambda^ {5/2}}\right)+n^{3/4}\)_, if_ \(\lambda=\Theta(1)\)_._ _Then, there exists a strategy that a.a.s. creates \(G_{t}\) such that \(\alpha(G_{t})\leq u\)._ Proof.: Let us arbitrarily partition the set of vertices \([n]\) into \(\lceil n/k\rceil\) parts, each of size at most \(k=2\lceil\ell\rceil+1\). We will independently apply Algorithm 3.3 to each part. The algorithm succeeds on a given part and produces a complete graph of order \(k\) if all vertices in that part receive at least \(\ell\) squares at time \(t\). For a given \(i\in\{1,2,\ldots,\lceil n/k\rceil\}\), let \(X_{i}\) be the indicator random variable for the event that Algorithm 3.3 fails on part \(i\), and let \(X=\sum_{i=1}^{\lceil n/k\rceil}X_{i}\) be the number of parts that failed. For a given vertex \(v\in[n]\), let \(Y_{v}\) be the random variable counting the number of squares on \(v\) at time \(t\). Clearly, \(Y_{v}\in\operatorname{Bin}(t,1/n)\) with \(\mathbb{E}[Y_{v}]=t/n=\lambda\). It follows immediately from Chernoff's bound (4) that \[\mathbb{P}(Y_{v}<\ell)=\mathbb{P}\Big{(}Y_{v}<\lambda-\sqrt{5\lambda\log\lambda} \Big{)}\leq\exp\left(-\frac{5\lambda\log\lambda}{2\lambda}\right)=1/\lambda^{5/2}.\] Since the algorithm fails on a given part if at least one vertex in that part (out of at most \(k\) vertices) receives less than \(\ell\) squares at time \(t\), \[\mathbb{P}(X_{i}=1)\leq k/\lambda^{5/2}=\mathcal{O}(1/\lambda^{3/2}).\] Hence, the expected number of parts that fail can be estimated as follows: \[\mathbb{E}[X]\leq\lceil n/k\rceil\cdot\frac{k}{\lambda^{5/2}}=\mathcal{O}(n/ \lambda^{5/2}).\] If \(\lambda\gg n^{2/5}\), then \(\mathbb{E}[X]\to 0\) as \(n\to\infty\) and so, by Markov's inequality, we get that a.a.s. \(X=0\). Since each independent set can have at most one vertex from each part (as all of them are successful a.a.s.), we get that a.a.s. \(\alpha(G_{t})\leq\lceil n/k\rceil\) and the desired bound holds. Suppose then that \(1\ll\lambda=\mathcal{O}(n^{2/5})\). This time, by Markov's inequality we get that a.a.s. \(X\leq\mathbb{E}[X]\sqrt{\log\lambda}=\mathcal{O}(n\sqrt{\log\lambda}/\lambda^{ 5/2})\). As before, each independent set can have at most one vertex from each successful part and, trivially, at most \(kX\) vertices from parts that failed. We get that a.a.s. \[\alpha(G_{t}) \leq \left\lceil\frac{n}{k}\right\rceil+kX=\frac{n}{2\lambda}(1+ \mathcal{O}(\sqrt{\log\lambda/\lambda}))+\mathcal{O}(n\sqrt{\log\lambda}/ \lambda^{3/2})\] \[= \frac{n}{2\lambda}(1+\mathcal{O}(\sqrt{\log\lambda/\lambda})).\] Finally, suppose that \(\lambda=\Theta(1)\). Note first that \((X_{i})\) is a sequence of negatively correlated random variables. 
Combining this with earlier observations, we conclude that \(X\) can be stochastically upper bounded by \(Y=\sum_{i=1}^{\lceil n/k\rceil}Y_{i}\), where \((Y_{i})\) are negatively correlated Bernoulli random variables with parameter \(k/\lambda^{5/2}\). It follows from Chernoff's bound (3) and the comment right after it that a.a.s. \(X\leq Y\leq\frac{n}{\lambda^{5/2}}+n^{2/3}\). Arguing as before, we get that a.a.s. \[\alpha(G_{t}) \leq \left(\left\lceil\frac{n}{k}\right\rceil-X\right)+kX=\left\lceil\frac{n}{k}\right\rceil+(k-1)X\] \[\leq n\left(\frac{1}{2\lceil\ell\rceil+1}+\frac{2\lceil\ell\rceil}{\lambda^{5/2}}\right)+O(n^{2/3}).\] This finishes the proof of the lemma. Let us note that the upper bounds we just proved are asymptotically tight when \(\lambda=\lambda(n)\gg 1\). On the other hand, the above bound is not sharp when \(\lambda=\Theta(1)\). There are many ways one may improve it. For example, a more careful argument could estimate the size of a largest independent set of a given part that fails (right now, we simply use a trivial upper bound of \(k\)). Moreover, some parts that succeed receive more squares than needed for the argument (for example, perhaps each vertex in that part receives more than \(\ell\) squares). Such additional squares could be used to create slightly larger cliques. Finally, when \(\lambda\) is a small constant, a better strategy could be to create a large perfect matching using the adaptive algorithm analyzed in [15].

### Lower Bounds

It is easy to prove by induction that for any graph \(G=(V,E)\), \(\alpha(G)\geq n/(\Delta+1)\), where \(\Delta\) is the maximum degree. Caro [9] and Wei [25] independently proved the following, more refined, version of this observation. (See also [1].) **Observation 5.2** ([9, 25]).: _For any graph \(G=(V,E)\),_ \[\alpha(G)\geq\sum_{v\in V}\frac{1}{\deg(v)+1}\geq\frac{n}{d+1}\geq\frac{n}{\Delta+1},\] _where \(d=\frac{1}{n}\sum_{v\in V}\deg(v)\) is the average degree and \(\Delta=\max_{v\in V}\deg(v)\) is the maximum degree._ This observation immediately proves parts (b) of Theorems 1.8 and 1.9 since the average degree of \(G_{t}\) is \(d=2|E(G_{t})|/n=2t/n=2\lambda\). As mentioned above, this simple observation is asymptotically tight when \(\lambda=\lambda(n)\gg 1\). We propose two improvements when \(\lambda=\Theta(1)\). Observation 5.2 still applies but the lower bound of \(n/(d+1)=n/(2\lambda+1)\) that holds deterministically may be improved with slightly more effort. Indeed, the existence of an independent set of size \[L(G_{t}):=\sum_{v\in V}\frac{1}{\deg(v)+1}\] is still deterministically guaranteed. Understanding the graph parameter \(\alpha(G_{t})\) is challenging but \(L(G_{t})\) is relatively easy to deal with. In order to minimize \(L(G_{t})\), the player should keep the degree distribution as "flat" as possible; in particular, note that \(L(G_{t})\) is minimized when all vertices have degrees \(\lfloor d\rfloor\) or \(\lceil d\rceil\). However, she cannot achieve such a distribution since squares arrive uniformly at random and so a.a.s. some vertices will receive more than \(\lceil d\rceil\) squares. To get a weaker lower bound we may consider an "off-line" version of the semi-random process, that is, let the player wait till time \(t\) before placing all of her circles at once. Clearly, the original process (the "on-line" version) is at least as challenging to the player as its off-line counterpart, so the obtained lower bound also applies there and the lower bound for \(\alpha(G_{t})\) holds.
Let \(Y_{k}\) be the number of vertices that received \(k\) squares at time \(t\). It is easy to see (see Lemma 3.1) that a.a.s. for any \(k\in\mathbb{N}\cup\{0\}\), \[Y_{k}\ \sim\ n\frac{\lambda^{k}}{k!}\exp(-\lambda).\] The player will place her \(t=\lambda n\) circles greedily on vertices with minimum degree. Let \(M\in\mathbb{N}\cup\{0\}\) be the largest integer \(m\) such that \[f(m):=\sum_{k=0}^{m-1}(m-k)\frac{\lambda^{k}}{k!}\exp(-\lambda)\leq\lambda.\] For each \(k\in\{0,1,\ldots,M-1\}\), the player may put \(M-k\) circles on each vertex with \(k\) squares to make them of degree \(M\). The total number of circles used so far is \((1+o(1))nf(M)\) and the fraction of vertices of degree \(M\) at this point is asymptotic to \[g(M):=\sum_{k=0}^{M}\frac{\lambda^{k}}{k!}\exp(-\lambda).\] The remaining \((1+o(1))n(\lambda-f(M))\) circles are placed on such vertices. Once this is done there are \((1+o(1))nh(k)\) vertices of degree \(k\), where \[h(k)=\begin{cases}g(M)-(\lambda-f(M))=g(M)-\lambda+f(M)&\text{ if }\qquad k=M\\ \frac{\lambda^{M+1}}{(M+1)!}\exp(-\lambda)+\lambda-f(M)&\text{ if }\qquad k=M+1\\ \frac{\lambda^{k}}{k!}\exp(-\lambda)&\text{ if }\qquad k\geq M+2.\end{cases}\] These observations imply the following lower bound. **Lemma 5.3**.: _Suppose that \(t=t(n)\sim\lambda n\) for some \(\lambda\in(0,\infty)\). Let \(\epsilon>0\) be any (arbitrarily small) constant. Then, there is no strategy that a.a.s. creates \(G_{t}\) such that_ \[\alpha(G_{t})<(1-\epsilon)\,n\sum_{k\geq M}\frac{h(k)}{k+1}.\] Finally, we analyze how small \(L(G_{t})\) can get for the original ("on-line") semi-random process. It is easy to see that in order to minimize \(L(G_{t})\) the player needs to apply a greedy strategy. In this strategy, in each round \(s\leq t\) of the process, the player puts a circle on a vertex with minimum degree; if there is more than one such vertex to choose from, the decision which one to select is made arbitrarily. Note that in each round \(s\leq t\), the minimum degree in \(G_{s}\) is at most the average degree, that is, at most \(2s/n\leq 2t/n=2\lambda\), so the player will never put a circle on a vertex of degree more than \(2\lambda\). In the greedy strategy, we distinguish phases by labelling them with integers \(q\in\{0,1,\ldots,r\}\), where \(r=\lfloor 2\lambda\rfloor\). During the \(q\)th phase, the minimum degree in \(G_{s}\) is equal to \(q\). In order to analyze the evolution of the semi-random process, we will track the following sequence of \(r+1\) variables: for \(0\leq i\leq r\), let \(Y_{i}=Y_{i}(s)\) denote the number of vertices in \(G_{s}\) of degree \(i\). Phase \(0\) starts at the beginning of the process. Since \(G_{0}\) is empty, \(Y_{0}(0)=n\) and \(Y_{i}(0)=0\) for \(1\leq i\leq r\). There are initially many isolated vertices but they quickly disappear. Phase \(0\) ends at the smallest time \(s\) for which \(Y_{0}(s)=0\). The DEs method (see Subsection 2.2) will be used to show that a.a.s. Phase \(0\) ends at time \(s_{0}\sim x_{0}n\), where \(x_{0}\) is an explicit constant which will be obtained by investigating the associated system of DEs. Moreover, the number of vertices of degree \(i\) (\(1\leq i\leq r\)) at the end of this phase is well concentrated around some values that are also determined based on the solution to the same system of DEs: a.a.s. \(Y_{i}(s_{0})\sim y_{i}(x_{0})n\). With that knowledge, we move on to Phase \(1\) in which we prioritize vertices of degree \(1\).
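Before analyzing a general phase, the greedy strategy itself is easy to experiment with. The following minimal Python simulation (ours, purely illustrative and not part of the analysis) plays the greedy strategy and evaluates \(L(G_t)\):

```python
import random

def greedy_L(n, t, seed=0):
    """Simulate t rounds of the semi-random process where the player always
    places her circle on a vertex of currently minimum degree (avoiding the
    square's vertex), then return L(G_t) = sum_v 1/(deg(v)+1), which lower
    bounds alpha(G_t) by Observation 5.2."""
    rng = random.Random(seed)
    deg = [0] * n
    for _ in range(t):
        u = rng.randrange(n)                      # square: uniform vertex
        v = min((w for w in range(n) if w != u),  # circle: min-degree vertex
                key=deg.__getitem__)
        deg[u] += 1
        deg[v] += 1
    return sum(1.0 / (d + 1) for d in deg)

# lambda = t/n = 2: compare with the trivial bound n/(2*lambda+1) = n/5.
n = 2000
print(greedy_L(n, 2 * n), n / 5)
```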
Consider any Phase \(q\), where \(q\in\{0,1,\ldots,r\}\). This phase starts at time \(s_{q-1}\), exactly when the previous phase ends (or at time \(s_{-1}:=0\) if \(q=0\)). At that point, the minimum degree of \(G_{s_{q-1}}\) is \(q\), so \(Y_{i}(s)=0\) for any \(s\geq s_{q-1}\) and \(i<q\). Hence, we only need to track the behaviour of the remaining \(r+1-q\) variables. Let us denote \(H(s)=(Y_{q}(s),Y_{q+1}(s),\ldots,Y_{r}(s))\). Let \(\delta_{A}\) be the Kronecker delta for the event \(A\), that is, \(\delta_{A}=1\) if \(A\) holds and \(\delta_{A}=0\) otherwise. Then, for any \(i\) such that \(q\leq i\leq r\), \[\mathbb{E}\Big{(}Y_{i}(s+1)-Y_{i}(s)\ |\ H(s)\Big{)}=-\delta_{i=q}+\delta_{i=q+1}-\frac{Y_{i}(s)}{n}+\delta_{i\geq q+1}\cdot\frac{Y_{i-1}(s)}{n}. \tag{7}\] Indeed, since the circle is put on a vertex of degree \(q\), we always lose one vertex of degree \(q\) (term \(-\delta_{i=q}\)) that becomes of degree \(q+1\) (term \(\delta_{i=q+1}\)). We might lose a vertex of degree \(i\) when the square lands on a vertex of degree \(i\) (term \(Y_{i}(s)/n\)). We might also gain one of them when the square lands on a vertex of degree \(i-1\) (term \(Y_{i-1}(s)/n\)); note that this is impossible if \(i=q\) (term \(\delta_{i\geq q+1}\)). This suggests the following system of DEs: for any \(i\) such that \(q\leq i\leq r\), \[y_{i}^{\prime}(x)=-\delta_{i=q}+\delta_{i=q+1}-y_{i}(x)+\delta_{i\geq q+1}\cdot y_{i-1}(x). \tag{8}\] It is easy to check that the assumptions of the DEs method are satisfied (we omit details since we did not formally introduce the tool). The conclusion is that a.a.s. during the entire Phase \(q\), for any \(q\leq i\leq r\), \(|Y_{i}(s)-y_{i}(s/n)n|=o(n)\). In particular, Phase \(q\) ends at time \(s_{q}\sim x_{q}n\), where \(x_{q}>x_{q-1}\) is the solution of the equation \(y_{q}(x)=0\). Using the final values \(y_{i}(x_{q})\) in Phase \(q\) as initial values for Phase \(q+1\) we can repeat the argument inductively, moving from phase to phase. We stop the analysis at the end of Phase \(r\) when a graph of minimum degree equal to \(r+1\) is reached. As discussed earlier, it happens at time \(s>t\) and so we may "rewind" the process back to round \(t\) to check the degree distribution of \(G_{t}\). Based on that, we may compute \(L(G_{t})\) which gives us the desired lower bound for \(\alpha(G_{t})\). Suppose that round \(t\) occurs during phase \(q\leq r\). A.a.s. there are \((1+o(1))w(k)n\) vertices of degree \(k\geq q\) in \(G_{t}\), where \[w(k)=\begin{cases}y_{k}(t/n)&\text{ if }q\leq k\leq r\\ \frac{\lambda^{k}}{k!}\exp(-\lambda)&\text{ if }k\geq r+1.\end{cases}\] These observations imply the following lower bound. **Lemma 5.4**.: _Suppose that \(t=t(n)\sim\lambda n\) for some \(\lambda\in(0,\infty)\). Let \(\epsilon>0\) be any (arbitrarily small) constant. Then, there is no strategy that a.a.s. creates \(G_{t}\) such that_ \[\alpha(G_{t})<(1-\epsilon)\,n\sum_{k\geq q}\frac{w(k)}{k+1}.\]

## 6. Conclusions

In this paper, we investigated three monotone properties. Our bounds are off by at most a multiplicative factor of \(2+o(1)\). It would be interesting to close the gap between the upper and the lower bounds (or, at least, narrow them down).

* The property of containing a complete graph of order \(k\) is well understood unless \(t=t(n)=\Theta(n\log n)\).
* The property of creating a graph with the chromatic number at least \(k\) is well understood when \(t\gg n\log n\). More work is needed when \(t=\mathcal{O}(n\log n)\).
* The property of not having an independent set of size at least \(k\) remains to be investigated when \(t=\Theta(n)\). In other regimes, the asymptotic behaviour is determined.

## 7. Acknowledgements

This work was done while the authors were visiting the Simons Institute for the Theory of Computing.
2305.18698
AutoMM: Energy-Efficient Multi-Data-Type Matrix Multiply Design on Heterogeneous Programmable System-on-Chip
As the increasing complexity of Neural Network (NN) models leads to high demands for computation, AMD introduces a heterogeneous programmable system-on-chip (SoC), i.e., the Versal ACAP architecture, featured with programmable logic (PL), CPUs, and dedicated AI engine (AIE) ASICs, which has a theoretical throughput of up to 6.4 TFLOPs for FP32, 25.6 TOPs for INT16, and 102.4 TOPs for INT8. However, the higher level of complexity makes it non-trivial to achieve the theoretical performance even for well-studied applications like matrix-matrix multiply. In this paper, we provide AutoMM, an automatic white-box framework that can systematically generate the design for MM accelerators on Versal which achieves 3.7 TFLOPs, 7.5 TOPs, and 28.2 TOPs for the FP32, INT16, and INT8 data types, respectively. Our designs are tested on board and achieve energy efficiency gains of 7.20x (FP32), 3.26x (INT16), and 6.23x (INT8) over the AMD U250 FPGA, 2.32x (FP32) over the Nvidia Jetson TX2 GPU, and 1.06x (FP32) and 1.70x (INT8) over the Nvidia A100 GPU.
Jinming Zhuang, Zhuoping Yang, Peipei Zhou
2023-05-30T02:42:26Z
http://arxiv.org/abs/2305.18698v1
# AutoMM: Energy-Efficient Multi-Data-Type Matrix Multiply Design on Heterogeneous Programmable System-on-Chip

###### Abstract

As the increasing complexity of Neural Network (NN) models leads to high demands for computation, AMD introduces a heterogeneous programmable system-on-chip (SoC), i.e., the Versal ACAP architecture, featured with programmable logic (PL), CPUs, and dedicated AI engine (AIE) ASICs, which has a theoretical throughput of up to 6.4 TFLOPs for FP32, 25.6 TOPs for INT16, and 102.4 TOPs for INT8. However, the higher level of complexity makes it non-trivial to achieve the theoretical performance even for well-studied applications like matrix-matrix multiply. In this paper, we provide AutoMM, an automatic white-box framework that can systematically generate the design for MM accelerators on Versal which achieves 3.7 TFLOPs, 7.5 TOPs, and 28.2 TOPs for the FP32, INT16, and INT8 data types, respectively. Our designs are tested on board and achieve energy efficiency gains of 7.20x (FP32), 3.26x (INT16), and 6.23x (INT8) over the AMD U250 FPGA, 2.32x (FP32) over the Nvidia Jetson TX2 GPU, and 1.06x (FP32) and 1.70x (INT8) over the Nvidia A100 GPU.

heterogeneous system-on-chip, Versal ACAP, matrix multiply, support for multiple data types.

## I Introduction

With the end of the Dennard voltage scaling law, domain-specific accelerators, e.g., FPGAs, TPUs, and GPUs, have become a mainstream approach to improve performance while maintaining power efficiency [1]. To keep pace with the high computation demand, AMD proposes the Versal architecture, a heterogeneous programmable system-on-chip featuring a dedicated AI Engine (AIE) ASIC array, programmable logic (FPGA), and software ARM cores to provide high throughput while maintaining flexibility. As shown in Table I, we use the on-board results of the MM application to compare the energy efficiency of last- and current-generation FPGAs and GPUs. Comparing the 16nm U250 FPGA with the Nvidia Jetson TX2 GPU, Jetson TX2 achieves 3.11x energy efficiency since the bit-level reconfiguration of prior FPGAs leads to more power consumption. The 7nm VCK190 enables both bit-level hardware customization on the PL side and byte-level customization on the dedicated AIE array. Due to the AIE array, our proposed design on VCK190, i.e., AutoMM, achieves 1.06x energy efficiency compared with the Nvidia A100 GPU with the same technology node.

However, designing energy-efficient accelerators on Versal platforms can be very challenging due to the mismatch between the high throughput provided by the AIE array and the relatively low off-chip bandwidth. We collect the theoretical performance and off-chip bandwidth of two 16nm and two 7nm GPUs and FPGAs under the FP32 data type in Table II. The required computation-to-communication (CTC) ratio refers to the minimum data reuse rate that can sustain the theoretical throughput under the provided off-chip bandwidth. While VCK190 provides 6400 GFLOPs throughput, it is only equipped with one DDR4-DIMM external memory with 25.6 GB/s bandwidth, meaning at least 250 operations per byte are needed to sustain the peak performance, a requirement 13.10x, 17.01x, and 19.8x more severe than those of the 16nm U250 FPGA, the 16nm Jetson TX2 GPU, and the 7nm A100 GPU, respectively. Therefore, the huge challenges caused by the significant gap between performance and off-chip bandwidth on Versal platforms should be addressed to achieve high-performance and energy-efficient designs.

| **Fabrication** | **Board Name** | **Performance** (GFLOPs) | **Bandwidth** (GB/s) | **Required CTC** (FLOPs/Byte) | **Ratio** |
| :---: | :---: | :---: | :---: | :---: | :---: |
| 16 nm | AMD U250 [2] | 1,470 | 77 | 19.09 | 1.00x |
| 16 nm | Nvidia Jetson TX2 [4] | 750 | 51.2 | 14.65 | 0.77x |
| 7 nm | AMD VCK190 [6] | 6,400 | 25.6 | 250 | 13.10x |
| 7 nm | Nvidia A100 [7] | 19,500 | 1,555 | 12.54 | 0.66x |

TABLE II: Theoretical performance, off-chip bandwidth, and required CTC ratio comparisons among FPGAs and GPUs of two generations when the data type is FP32.

| **Fabrication** | **Board Name & Framework** | **Performance** (GFLOPs) | **Power** (W) | **Energy Efficiency** (GFLOPs/W) | **Ratio** |
| :---: | :---: | :---: | :---: | :---: | :---: |
| 16 nm | AMD U250 [2], AutoSA [3] | 885 | 99.2 | 8.92 | 1.00x |
| 16 nm | Nvidia Jetson TX2 [4], cuBLAS [5] | 560 | 20.20 | 27.72 | 3.11x |
| 7 nm | AMD VCK190 [6], AutoMM (ours) | 3,745 | 58.34 | 64.19 | 7.20x |
| 7 nm | Nvidia A100 [7], cuBLAS [5] | 15,016 | 248.20 | 60.50 | 6.78x |

TABLE I: Performance, power, and energy efficiency comparisons among FPGAs and GPUs when the data type is FP32.
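The required CTC ratio is simply peak throughput divided by off-chip bandwidth. The following minimal Python sketch (ours, not part of the paper) reproduces the required CTC column of Table II and the ratios relative to the U250 baseline:

```python
# Required CTC ratio = peak throughput / off-chip bandwidth (FLOPs per byte).
# FP32 entries from Table II; all rates are in G (1e9) units, which cancel.
boards = {
    "AMD U250":          (1470.0, 77.0),
    "Nvidia Jetson TX2": (750.0, 51.2),
    "AMD VCK190":        (6400.0, 25.6),
    "Nvidia A100":       (19500.0, 1555.0),
}
ctc = {name: perf / bw for name, (perf, bw) in boards.items()}
base = ctc["AMD U250"]
for name, r in ctc.items():
    # e.g. VCK190: 250.00 FLOPs/Byte (13.10x), the most demanding of the four
    print(f"{name}: required CTC = {r:.2f} FLOPs/Byte ({r / base:.2f}x)")
```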
With such contradictory results from energy efficiency and required CTC ratio, one key question arises: _How can we design more energy-efficient MM accelerators that make full use of the gigantic computation resources under limited communication bandwidth?_ To answer this, we identify the design challenges at different levels and show the detailed design methodologies to tackle them:

* **High Efficiency Single AIE Design**: To achieve high efficiency in single AIE computation, we propose the optimized coding style in Section IV-B that makes full use of the 7-way VLIW capability to achieve back-to-back issued MAC intrinsic execution.
* **IO Reused and Routing Optimized AIE Array Design**: We efficiently utilize the limited I/O ports between PL\(\leftrightarrow\)AIEs by combining broadcast with packet-switch connections to scale out to tens and hundreds of AIEs while maintaining computation efficiency. In addition, to alleviate the routing congestion in the AIE array, we explore a broadcast factor for the data transfer from PLIO to AIEs.
* **PL\(\leftrightarrow\)AIEs Bubble-free Pipelining Data Transfer**: To amortize the gap between the limited off-chip bandwidth and the high bandwidth requirement of the AIEs, we make full use of the on-chip storage to increase data reuse on PL. A bubble-free pipelining data transfer algorithm is proposed and implemented in the dedicated data movers on PL to feed the data between PL\(\leftrightarrow\)AIEs, producing a non-stall AIE execution pipeline.
* We compare the energy efficiency of our design with the 16nm U250 FPGA, the 16nm Nvidia Jetson TX2 GPU, and the 7nm A100 GPU under FP32, INT16, and INT8 data types for MM, NCF, and MLP applications. Our on-board experiments show that we achieve 3.7 TFLOPs, 7.5 TOPs, and 28.2 TOPs throughput for FP32, INT16, and INT8 on MM. Compared with the A100 GPU on end-to-end applications, we achieve 0.96x and 1.16x energy efficiency gains on NCF [8] and MLP [9].
* **Automatic MM Accelerator Design Framework on Versal**: While AMD provides users a black-box IP, DPU [10], for INT8 neural network (NN) applications, we are among the first to provide an open-source white-box framework, i.e., AutoMM, as shown in Fig 1, to automatically generate MM accelerator designs for different data types on Versal ACAP. We provide the AutoMM Python APIs to generate the source code for the accelerators. AutoMM is integrated into the CHARM [11] framework: https://github.com/arc-research-lab/CHARM.

Fig. 1: AutoMM Compilation Framework and Python User Interface.

## II Related Work

In this section, we discuss the related work on artificial intelligence accelerators on different architectures including FPGAs, GPUs, and dataflow architectures.

**FPGA acceleration.** Moss et al. [12] propose a customizable hardware template with a fixed systolic array architecture to process matrix multiplication workloads on FPGA. AutoSA [3] generates systolic array designs from user-specified matrix sizes by exploring different mapping strategies and implementing them on FPGA. FBLAS [13] proposes an open-source HLS implementation of the BLAS library for FPGAs. CHARM (FPGA'23 [11]) proposes an open-source design framework for FP32 matrix-multiply-based applications on Versal ACAP (advanced compute acceleration platform).

**Dataflow architectures.** Eyeriss [14] proposes a tiled architecture with a 2D array of PEs and a shared global buffer to process the GEMM operations in NN applications. TPU [15] leverages a systolic array architecture to schedule the byte-level computations and data movements in GEMM processing. In computation, Versal ACAP is capable of both bit-level computation customization on the FPGA and byte-level computation customization, as most of the aforementioned dataflow architectures and coarse-grained reconfigurable architectures [11, 16, 17] support. In memory architecture, FPGAs and the aforementioned dataflow architectures use scratchpad memory, while GPUs [18] use a cache hierarchy to ease the data movement programming. Versal ACAP also adopts scratchpad memory and therefore needs explicit control of data movement. Specifically, as for on-chip communication, the aforementioned dataflow architectures adopt certain bus-based network-on-chip (NoC) or systolic arrays for the data movements between buffers and computation processing elements. However, since there is heterogeneity between the FPGA and the AIE array on Versal ACAP, new challenges, including how to efficiently leverage the DMAs and I/Os between the FPGA and the AIE array and the switch-box based AXI stream (AXIS) within the AIE array, need to be solved, and these challenges are addressed in this paper.

## III Versal Architecture Overview

Fig. 2: Versal ACAP architecture.

In this section, we summarize the system architecture of the heterogeneous SoC research platform, the AMD Versal VCK190 evaluation kit. With the AMD XCVC1902 Adaptive Compute Acceleration Platform (ACAP) chip on the board, VCK190 is featured with a comprehensive set of various hardware as shown in Fig 2. VCK190 has a wide range of architectures built in, including an array of 400 VLIW processors, called the AI engine array (AIE array), ARM processors called the Processing System (PS), and the FPGA Programmable Logic (PL). These hardware components can communicate with each other through the NoC or on-chip AXIS. Inside the AIE array, an AIE core can communicate with another core in two ways.
Each AIE core shares its local memory with its neighbors for communication. On the other hand, the cores are connected to an AXI stream mesh through AXIS switches. The AXIS switches can be reconfigured in such a way that there can be either (1) a circuit-switched path with dedicated ports for each communication, or (2) a packet-switched network in which a target identifier is attached so that paths can be reused by multiple communications. Each AIE core has two input and two output connections from/to the switch. Each switch has six output ports to its northern neighbor, and thus six input ports from its southern neighbor. For the rest of the directions, the switch has four I/O ports with each neighbor. There are 39 AXIS interface tiles between the AIE array and the PL. The interface crosses the clock domains of the PL and the AIE and automatically converts the rates. The AIE side of the interface has eight 32-bit input and six 32-bit output channels at 1 GHz, supporting up to 256 Gbps input and 192 Gbps output. The PL side has eight 64-bit input channels and six 64-bit output channels.

Each AI engine is a 7-way very long instruction word (VLIW) vector processor whose instruction word can include two loads (from local memory to registers), two moves (updating vector registers), one store (from registers to local memory), one vector operation (2D-SIMD), and one scalar operation. It owns 2Kb of vector registers, 3Kb of accumulation registers, and 32 KB of data memory located either on the west or the east of the core, alternating between rows. An AIE can therefore access not only its own data memory but also the memories of the AIEs on its north and south and the memory on the opposite side of its own; in total, one AIE can access up to 128 KB of memory.

## IV Design Methodology

Designing a high-performance system-level accelerator leveraging heterogeneous resources can be very challenging. In this section, we first illustrate the dataflow, tiling, and mapping strategy of matrix-matrix multiply (MM). We then describe the detailed programming models and design methodologies of the single AIE, the AIE array, and AIE\(\leftrightarrow\)PL.

### _Dataflow, Tiling and Mapping Strategy of MM_

Four levels of tiling and an output-stationary dataflow are applied in our design to compute the matrix-matrix multiply (MM). The pseudo-code and the corresponding mapping strategy of our tiled MM example are shown in Listing 1 (with four levels of loops) and Fig 3, respectively.

**Single AIE Level (Line 15-19).** An MM with size TI * TK * TJ, named a "TILE", is mapped to a single AIE. To fully utilize the 7-way VLIW capability of the AIE core, we manually pack several 2D-SIMD vector intrinsics into a function "MatMul" to calculate a sub-tile with size PI * PK * PJ. Thus a TILE can be computed by launching "MatMul" (TI/PI) * (TJ/PJ) * (TK/PK) times.

**AIE Array Level (Line 11-14).** When scaling out to the AIE array, we exploit the spatial data parallelism among different AIEs as shown in the AIE array mapping in Fig 3. More specifically, we unroll A * B * C TILEs on the AIE array, with each AIE computing a TILE as mentioned above. The TILEs in the same reduction dimension (k.2 loop) are assigned to the AIEs in the same column, producing read-after-write (RAW) dependencies. The m.2 and n.2 loops are mapped to different columns in the AIE array. We refer to the MM with size (A*TI) * (B*TK) * (C*TJ) as the "BATCH" level.
**PL On-chip Data Reuse Level (Line 6-10).** In order to amortize the bandwidth gap between off-chip\(\rightarrow\)PL and PL\(\rightarrow\)AIE array, we exploit on-chip data reuse by allocating a large number of RAMs on the PL side to store multiple X * Y * Z BATCHes of data. The BATCHes of data are fed to the AIE array by the DMA module on the PL side following the bubble-free pipelining algorithm, which will be discussed in Section IV-D, and the partial results from the AIE array are finally accumulated on the PL side.

**Off-chip Level (Line 1-5).** Data that exceed the capacity of the on-chip buffers are stored in the off-chip memory. The double-buffer technique is applied to hide the overhead of loading/storing the data between off-chip and on-chip memory.

Fig. 3: Mapping strategy and data layout.

### _Single AIE Programming Model_

Our system-level design starts from the single AIE kernel. The Vitis programming tools expose C intrinsics [19], including load/store, scalar, and vector operations, for AIE programming. To achieve high computation efficiency of an AIE, it is necessary to explore the best coding style for a single AIE. The overall data processing in a single AIE is shown in Listing 2. Variables L, R, and O are three pointers referencing the local memories allocated for the MM kernel (Lines 5-7). restrict directives specify that input pointers do not alias, enabling more aggressive optimizations. chess_pipelining is applied to all three loops (Line 10) to direct the compiler to find an optimized pipeline design. To reduce the frequency of writing local memory O, we choose the k loop as the innermost loop (Line 14) and introduce two 8-length accumulator registers, acc0 and acc1 (Lines 12-13), to hold the partial accumulation results in an interleaved manner, which avoids waiting two cycles to add the partial result to the same register after each vector MAC operation. This allows the local memory O to be written only once, after the final accumulation results are carried out. To make full use of the up to 7-way VLIW and obtain back-to-back issued MAC operations, we manually pack 16 8*1 vector 2D-SIMD instructions in each Matmul function to calculate an MM with the size of PI(8)*PK(8)*PJ(2) (Lines 1-3). In addition to the two accumulator registers, we further allocate four 8-length vector registers (A0, A1, B0, B1), 1Kb in total, shown in Fig. 4, for storing the two vector operands needed by the current MAC operation and two pre-loaded vector operands for future MAC operations. We use Li and Ri to denote the 8-length vectors and Rij to denote an element of one vector. By pre-loading L0 and R0 from local memory into vector registers A0 and B0 prior to the start of the Matmul function (Line 8), the first MAC instruction can be issued at time 0. At the same time, the two load instructions for loading the local memory used in future MAC operations can be packed into the same VLIW word. Since data is stored from the accumulator registers Acc0 and Acc1 back to local memory only in the last iteration, there are two kinds of Matmul functions in the design (Lines 15 and 18). Note that we hoist the last iteration of the loop out to avoid the significant performance degradation of inserting an if statement in the k.3 loop. In summary, to conduct MM under the FP32 data type on a single AIE, we pack 16 MAC8 instructions together in the innermost loop as an atomic operation, and these 16 instructions calculate a Matmul block with size 8*8*2.

Fig. 4: Single AIE pipeline.
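To make the atomic operation concrete, the following is a plain Python model of the 8*8*2 Matmul block (ours, for exposition only; it is not AIE intrinsic code, and decomposing each MAC8 as an 8-lane column of LHS times a broadcast scalar of RHS is our reading of the 8*1 vector instructions described above):

```python
# Python model (not AIE code) of the 8*8*2 atomic MatMul: PK*PJ = 16
# eight-lane MAC operations (MAC8).  Alternating j between 0 and 1 models
# the acc0/acc1 interleaving that hides the accumulator write-back latency.
PI, PK, PJ = 8, 8, 2

def matmul_atomic(L, R, O):
    """L: PI x PK tile of LHS, R: PK x PJ tile of RHS, O: PI x PJ output."""
    acc = [[0.0] * PI, [0.0] * PI]   # models acc0 and acc1
    for k in range(PK):              # 8 iterations ...
        for j in range(PJ):          # ... x 2 = 16 MAC8 operations in total
            r = R[k][j]              # scalar operand broadcast to 8 lanes
            for i in range(PI):      # the 8 SIMD lanes of one MAC8
                acc[j][i] += L[i][k] * r
    for j in range(PJ):              # single write-back of O, as in the text
        for i in range(PI):
            O[i][j] += acc[j][i]

# Example: with all-ones inputs, every O[i][j] accumulates the k-sum, 8.0.
L = [[1.0] * PK for _ in range(PI)]
R = [[1.0] * PJ for _ in range(PK)]
O = [[0.0] * PJ for _ in range(PI)]
matmul_atomic(L, R, O)
```

A TILE of size TI * TK * TJ is then covered by (TI/PI) * (TJ/PJ) * (TK/PK) such calls, matching the MatMul launch count given earlier.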
To scale up the MM size, we can assign TI, TJ, and TK sizes that are multiples of our atomic operation, for example, 32x32x32. In this case, the loop boundaries for Line 9, Line 11, and Line 14 are 4, 16, and 3, respectively. The methodologies of building the atomic operation and scaling up the MM size in a single AIE apply to other data types as well. ### _Scaling Out to AIE Array_ **PLIO Reuse.** When scaling out to a large number of AIE cores, as described in Section III, the total number of PLIOs in the interface tiles is much smaller than the total number of operands of all the AIE cores, so identifying reuse patterns within the AIE array is important for building a feasible, communication- and computation-balanced AIE array design. As shown in Fig. 5, the 4x4 AIE array calculates an MM with 1*4*4 TILEs, in which the AIEs in the same column take the output of the previous AIE as input, producing a RAW dependency. Fig. 5(a) demonstrates how the 1*4 TILEs of the LHS matrix are transferred to 16 AIEs by reusing one port in the interface tile. A similar mechanism can be applied to the RHS and output matrices. In particular, we leverage a combination of broadcast and packet-switch connections to effectively transfer the data from PL to AIE through the I/O ports in the interface tiles. First, by using the data broadcast opportunities in the MM application (e.g., one row of TILEs in the LHS can be broadcast to different columns of the RHS), we can use one port to broadcast the single TILE(0,0) of the LHS to AIE(col 0-3, row 0), as shown in solid lines. The packet-switch opportunity appears when the computation time of a single AIE is higher than its communication time, i.e., when the CTC ratio of a single AIE is larger than 1. In this case, by attaching a unique header to different data TILEs, the TILEs can be scattered to multiple AIEs in a time-division-multiplexed way without hurting the computation of each AIE. For example, a single AIE kernel that computes a 32x32x32 MM with the FP32 data type takes at least 4096 cycles to compute and 1024 cycles to transfer the LHS and RHS TILEs. In this case, the single-AIE CTC ratio is 4. Here we refer to 1024 cycles as one time step. Therefore, we can pack 4 LHS TILEs (0, 0-3) (same for the RHS) in the same packet stream to AIE(col 0, row 0-3) on different time steps, as shown in the dashed lines.

Fig. 4: Single AIE pipeline.

In summary, TILE 0 of the LHS can be broadcast to AIE(col 0-3, row 0) in time step 0, and TILE 1 of the LHS can be broadcast to AIE(col 0-3, row 1) in time step 1 by reusing the same port. TILEs 2 and 3 of the LHS share the same pattern in time steps 2 and 3. Thus, by combining broadcast circuit-switched connections and packet-switched connections, we can use one port to distribute data to 16 AIEs in four time steps without performance degradation, which reduces the number of ports by 16x. **Routing Optimization.** By combining the broadcast and packet-switch connections, we hugely reduce the ports needed in the design; however, the routing complexity is not reduced for each switch box. Currently, we observe that the Vitis AIE compiler splits the data stream immediately in the first switch box after the interface tile, as shown in Fig. 5(a). Thus, routing congestion in the switch boxes is very likely to happen when broadcasting data to AIEs at a long distance from the interface tile. In order to reduce the routing congestion caused by long-distance broadcasts, we apply broadcast factors on both the LHS and RHS matrices.
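A quick sanity check of the quoted figures follows. The one-MAC8-per-cycle throughput and the assumption that LHS and RHS move on separate parallel streams are ours, chosen to reproduce the stated 4096- and 1024-cycle numbers.

```python
# Back-of-the-envelope check of the PLIO-reuse argument for a 32x32x32
# FP32 TILE on one AIE (throughput assumptions as stated in the lead-in).
TI = TK = TJ = 32
compute_cycles = TI * TK * TJ // 8       # 8-wide FP32 MACs -> 4096 cycles
transfer_cycles = TI * TK                # 1024 32-bit words per stream
ctc = compute_cycles // transfer_cycles  # compute-to-communication ratio = 4

packet_share = ctc                       # TILEs time-multiplexed per stream
broadcast_share = 4                      # one LHS TILE feeds four columns
print(f"CTC = {ctc}, AIEs fed per PLIO = {packet_share * broadcast_share}")
# CTC = 4, AIEs fed per PLIO = 16  -> the 16x port reduction quoted above
```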
As shown in Fig. 5(b), instead of broadcasting the LHS to all four columns of the AIE array, we set the broadcast factor to two, which means that we use 2 ports, each one sending the same data to two columns. Thus the total number of connections from west to east is reduced from 10 to 4. The benefit becomes more obvious when routing over more AIEs. ### _AIE-PL Bubble-free Pipelining Data Transfer Algorithm_ In order to amortize the bandwidth gap between off-chip memory to PL and PL to AIE, we extensively exploit on-chip data reuse by allocating over 80% of the on-chip buffers and storing multiple BATCHes. We design dedicated DMA modules with a bubble-free pipelining algorithm that determines the order in which each TILE reaches the corresponding local memory of the AIEs. We use the data movement and computation in AIE column 0, namely AIE(col 0, row 0-3) with ID0-ID3, which calculates the first row of the LHS and the first column of the RHS in BATCH 0-3 shown in Fig. 3, as an example to demonstrate our data transferring strategy. In Fig. 6, we first illustrate the pipeline bubbles that arise when using the straightforward data transferring sequence, where multiple BATCHes of data are sent to the AIE array in the lexicographical order (BATCH, ID). Lexicographical order means that a TILE with a smaller BATCH index is transferred earlier than one with a larger BATCH index, and that a TILE with a smaller TILE ID in the same BATCH is transferred earlier than one with a larger TILE ID. As demonstrated in Fig. 6, each TILE has a unique (BATCH, ID) pair, and we use white or grey to indicate loading the LHS and RHS data into the ping-pong banks of each AIE local memory. The time for storing the data in local memory is overlapped by the computation due to VLIW and is thus omitted in the figure. Once the previous AIE finishes computing, the read-after-write (RAW) dependency between AIEs in a column is considered resolved. For illustration purposes, we assume the CTC ratio of each AIE is 4, which means 1 time step for data loading and 4 time steps for computation. The order graph on the right side of Fig. 6 illustrates the sequence of (BATCH, ID) pairs during data transferring. When applying the lexicographical order, from time 0 to time 3, ID 0 to ID 3 in BATCH 0 are transferred. From time 4 to time 7, ID 0 to ID 3 in BATCH 1 are transferred. If there were no bubbles, from time 8 to time 11, ID 0 to ID 3 in BATCH 2 would be transferred. However, the first data transfer bubble appears at time 10 for AIE 2. AIE 2 takes the BATCH 0 data at time 2 and the BATCH 1 data at time 6. It does not compute on the BATCH 0 data until time 9, owing to the initial latency from the RAW dependencies on AIE 1 and AIE 0. It is impossible for AIE 2 to take the BATCH 2 data until it completes the execution of BATCH 0 and releases memory bank 0. This causes three transferring bubbles and pushes back the BATCH 2 data transfer from time 10 to time 13. A butterfly effect then follows from the lexicographical order: AIE 0 cannot get its BATCH 3 data, so after finishing computing BATCH 2, it cannot start to compute BATCH 3, which leads to computation bubbles from time 13 to 18. To address this, we implement a pipeline bubble-free scheduling technique, as shown in Fig. 6. In this approach, we only send data that is needed in the next computation period. For example, in the first computation period, corresponding to time 1-4, we send data with (BATCH, ID) pairs (1,0) and (0,1) sequentially. These two tiles are needed in time 5-8 for AIE 0 and AIE 1. Similarly, three tiles are sent in time 5-7, as they are needed in time 9-12 for AIE 0, 1, and 2. By using this zigzag data transferring manner instead of the lexicographical order between PL and AIE, we eliminate both data transfer bubbles and compute bubbles and achieve a full pipeline.
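The two transfer orders can be reconstructed in a few lines, under the example's assumptions (one column of four chained AIEs, CTC ratio 4); the actual DMA scheduler is more general. Because AIE i starts computing BATCH b only after the RAW chain resolves, a tile's deadline grows with b + i, and sending tiles along anti-diagonals of (b + i) is exactly the zigzag order.

```python
# Toy reconstruction of the lexicographic vs. zigzag transfer orders.
N_AIE, N_BATCH = 4, 4
tiles = [(b, i) for b in range(N_BATCH) for i in range(N_AIE)]

lexicographic = sorted(tiles)                          # (BATCH, ID) order
zigzag = sorted(tiles, key=lambda t: (t[0] + t[1], t[1]))

print(zigzag[:6])
# [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
# After (0,0), the first period sends (1,0) then (0,1); the next period
# sends the three tiles needed by AIE 0, 1 and 2 -- matching the text.
```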
### _AutoMM Framework_ The overall architecture of our AutoMM framework is shown in Fig. 1. Our AutoMM framework mainly consists of four components: the Python-based user interface, the optimizer for design space exploration (DSE), the host CPU runtime manager, and the automatic code generator (ACG). We provide users with the Python APIs shown in Listing 3, which take the definition of the MM-based model as input (Lines 2-6). Based on the size of the MM kernels, our DSE (Lines 8-9) finds the best hardware configuration by optimizing the AIE PLIO utilization, AIE placement, PL buffer utilization, DMA between AIE \(\leftrightarrow\) PL, and the overall tiling for both PL and AIE. Then our ACG (Lines 10-11) takes the output of the DSE and the host runtime to generate the corresponding source code for the AIE array, PL, and host CPU, respectively. Our API also enables automatic compilation via the AMD Vitis compiler (Lines 13-14) and on-board execution (Lines 15-16). To the best of our knowledge, AutoMM is the first work to provide high-level Python APIs to generate source code for Versal ACAP.

```
import charm
import numpy as np
# Define the LHS (A) and RHS (B) operands
A = np.random.rand(4096, 4096).astype(np.float32)
B = np.random.rand(4096, 4096).astype(np.float32)
# Create charm object
automm = charm()
# Launch CHARM DSE
Versal_config = automm.cdse(A, B)
# Launch CHARM CodeGen
automm.cacg(Versal_config, 'vck190')
# Run Vitis Compilation Flow
automm.build()
# Run On-board Execution
automm.run()
```
Listing 3: AutoMM Python APIs.

## V Experiment Results

In this section, we demonstrate the MM design performance, power, and energy efficiency of the AutoMM implementation on the AMD VCK190 under various data types. We compare them with prior works on other platforms, including the AutoSA [3] implementation on the AMD U250 FPGA, and cuBLAS [5] on the Nvidia A100 40GB PCIe GPU and the Jetson TX2 GPU. We also evaluate AutoMM on two deep learning inference tasks: NCF [8] for recommendation and MLP [9] for multilayer-perceptron classification or regression. These two inference models are mainly based on matrix-multiply layers of different shapes. ### _Experiment Setup_ AMD Vitis 2021.1 is used for all the experiments on the VCK190, with the PL running at 230 MHz and the AIE running at 1 GHz. The designs on the U250 FPGA are generated by AutoSA [3] and AutoBridge [20] for FP32, INT8 (300 MHz), and INT16 (250 MHz) using AMD Vitis 2019.2. We set up the GPU experiments of MM under the FP32 data type by using the cublasSgemm() API in cuBLAS from CUDA Toolkit 10.2 for the Jetson TX2 GPU and 11.3 for the A100 GPU. For the INT8 experiment on the A100 GPU, we use the cublasGemmEx() API in cuBLAS from CUDA 11.3. When comparing the performance of MM, we use the same size for the VCK190 and the NVIDIA GPUs. For the U250 designs, we pick the design sizes with the best performance due to the AutoSA [3] design size limitation. We set the matrix size to 6K*6K*6K for the VCK190, Nvidia A100, and Jetson TX2 GPUs, and 1040*1K*1K for the U250 under FP32. For INT16, the matrix sizes are 9K*9K*10K and 1K*1K*1K for the VCK190 and U250, respectively. For INT8, the matrix size is 16K*16K*16K for the VCK190 and the Nvidia A100 GPU, and 1056*1K*1K for the U250.
We use the AMD board evaluation and management tool [21], the AMD Board Utility [22], the NVIDIA System Management Interface tool, and a P3 P4460 Kill-A-Watt(TM) power meter to measure the power of the VCK190, the U250 FPGA, the A100, and the Jetson TX2 GPU, respectively. We iterate each design so that the total execution time exceeds 60 s and the power reading is relatively stable, and we report the average value. ### _End-to-end Applications_ We apply our AutoMM framework to the NCF and MLP applications and compare the energy efficiency with the A100 GPU under the FP32 data type. As shown in Table VI, AutoMM achieves 2.3 TFLOPs and 0.96x the energy efficiency of the A100 GPU on NCF, since the small MM sizes in NCF degrade the overall performance. For MLP, AutoMM achieves 3.5 TFLOPs and a 1.16x energy efficiency gain compared with the A100 GPU.

## VI Conclusion and Acknowledgement

In this work, we propose the AutoMM framework, an automatic white-box tool that can systematically generate designs for MM accelerators under different data types on Versal. We believe our design methodology can be a good reference for other users to design their own applications on Versal. We thank all the reviewers for their valuable feedback. We acknowledge the support from the University of Pittsburgh New Faculty Start-up Grant, Pitt Center for Advanced Manufacturing (UPCAM) Grant, Pitt Provost Open Educational Resources (OER) Grant, and NSF awards CNS-2213701 and CCF-2217003. We thank AMD for the FPGA and software donation, the AMD Heterogeneous Accelerated Compute Cluster at UCLA, and the Center for Research Computing (CRC) at the University of Pittsburgh.
2310.03768
The Fallacy in the Paradox of Achilles and the Tortoise
Zeno's ancient paradox depicts a race between swift Achilles and a slow tortoise with a head start. Zeno argued that Achilles could never overtake the tortoise, as at each step Achilles arrived at the tortoise's former position, the tortoise had already moved ahead. Though Zeno's premise is valid, his conclusion that Achilles can "never" pass the tortoise relies on equating infinite steps with an infinite amount of time. By modeling the sequence of events in terms of a converging geometric series, this paper shows that such an infinite number of events sum up to a finite distance traversed in finite time. The paradox stems from confusion between an infinite number of events, which can happen in a finite time interval, and an infinite amount of time. The fallacy is clarified by recognizing that the infinite number of events can be crammed into a finite time interval. At a given speed difference after a finite amount of time, Achilles will have completed the infinite series of gaps at the "catch-up time" and passed the tortoise. Hence this paradox of Achilles and the tortoise can be resolved by simply adding "before the catch-up time" to the concluding statement of "Achilles would never overtake the tortoise".
James Q. Feng
2023-10-04T17:20:02Z
http://arxiv.org/abs/2310.03768v1
## The Fallacy in the Paradox of Achilles and the Tortoise

## Abstract

Zeno's ancient paradox depicts a race between swift Achilles and a slow tortoise with a head start. Zeno argued that Achilles could never overtake the tortoise, as at each step Achilles arrived at the tortoise's former position, the tortoise had already moved ahead. Though Zeno's premise is valid, his conclusion that Achilles can "never" pass the tortoise relies on equating infinite steps with an infinite amount of time. By modeling the sequence of events in terms of a converging geometric series, this paper shows that such an infinite number of events sum up to a finite distance traversed in finite time. The paradox stems from confusion between an infinite number of events, which can happen in a finite time interval, and an infinite amount of time. The fallacy is clarified by recognizing that the infinite number of events can be crammed into a finite time interval. At a given speed difference after a finite amount of time, Achilles will have completed the infinite series of gaps at the "catch-up time" and passed the tortoise. Hence this paradox of Achilles and the tortoise can be resolved by simply adding "before the catch-up time" to the concluding statement of "Achilles would never overtake the tortoise".

To analyze the race quantitatively, let Achilles start from \(x=0\) at a constant speed \(s_{A}\) while the tortoise starts with a head start at \(x=x_{0}>0\), moving at a constant speed \(s_{T}<s_{A}\). When Achilles arrives at the tortoise's starting position \(x=x_{0}\) at time \(t_{0}=x_{0}/s_{A}\), the tortoise has moved ahead to \(x=x_{1}=x_{0}+s_{T}\,t_{0}=x_{0}\,(1+s_{T}/s_{A})\); by the time Achilles reaches \(x=x_{1}\) at \(t=t_{1}=(x_{0}/s_{A})(1+s_{T}/s_{A})\), the tortoise would have moved to a newer place at \(x=x_{2}=x_{0}+s_{T}\,t_{1}=x_{0}\,[1+(s_{T}/s_{A})\,(1+s_{T}/s_{A})]\) further ahead, and so on. Thus, Achilles should reach \(x=x_{n}\) at time \[t_{n}=(x_{0}/s_{A})\,[1+(s_{T}/s_{A})+(s_{T}/s_{A})^{2}+\ldots+(s_{T}/s_{A})^{n}]=(x_{0}/s_{A})\,[1-(s_{T}/s_{A})^{n+1}]\,/\,(1-s_{T}/s_{A}) \tag{1}\] as a geometric series, with \[x_{n}=x_{0}\,[1-(s_{T}/s_{A})^{n+1}]\,/\,(1-s_{T}/s_{A}), \tag{2}\] where the speed of Achilles and the speed of the tortoise, \[s_{A}=x_{n}/t_{n}\,\,\mbox{and}\,\,s_{T}=(x_{n+1}-x_{0})/t_{n},\] are constants. Given that the speed ratio \(s_{T}/s_{A}<1\), as Achilles is faster than the tortoise, we have \[t_{\infty}=(x_{0}/s_{A})\,/\,(1-s_{T}/s_{A})\,\,\mbox{and}\,\,x_{\infty}=x_{0}\,/\,(1-s_{T}/s_{A})\,\,\mbox{as}\,\,n\rightarrow\infty, \tag{3}\] which suggests that Achilles should catch up with the tortoise at \(x=x_{\infty}\) (the catch-up distance) when \(t=t_{\infty}\) (the catch-up time); thereafter, Achilles would be running ahead of the tortoise. For example, if \(x_{0}=1\) and \(s_{T}/s_{A}=1/2\) (or \(1/3\), \(1/5\), \(1/10\)), assuming \(s_{T}=1\), we should have the catch-up time \(t_{\infty}=1\) (or \(1/2\), \(1/4\), \(1/9\)) and the catch-up distance \(x_{\infty}=2\) (or \(3/2\), \(5/4\), \(10/9\)). Although Zeno presented his premise flawlessly - i.e., the tortoise would be ahead of Achilles at any time \(t=t_{n}<t_{\infty}\), no matter how large the value of \(n\) becomes - his conclusion that Achilles would _never_ be able to run past the tortoise mistakenly rests on the implication that \(n\rightarrow\infty\) equates to \(t_{n}\rightarrow\infty\), which would mean that the overtaking could not happen at any time. Yet, "unbounded \(n\) corresponds to unbounded \(t_{n}\)" was not included in Zeno's premise. In fact, a large \(n\) does not mean that the value of \(t_{n}\) must be large. With \(s_{T}/s_{A}<1\), \(t_{n}<t_{\infty}=(x_{0}/s_{A})/(1-s_{T}/s_{A})\) is always finite, beyond which (as the clock ticks and time moves forward to \(t>t_{\infty}\)) Achilles will be ahead of the tortoise.
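The convergence is easy to verify numerically. The short sketch below iterates the first 60 of Zeno's infinitely many steps for the example values \(x_{0}=1\), \(s_{T}=1\), \(s_{A}=2\) from the text.

```python
# Infinitely many of Zeno's steps, finite total time: x0 = 1, sT/sA = 1/2.
x0, sT, sA = 1.0, 1.0, 2.0

x = x0                       # tortoise position that Achilles aims at next
for n in range(60):          # the first 60 of the infinitely many steps
    t = x / sA               # time t_n when Achilles reaches x_n
    x = x0 + sT * t          # the tortoise has meanwhile crawled to x_{n+1}

t_inf = (x0 / sA) / (1 - sT / sA)
x_inf = x0 / (1 - sT / sA)
print(t, x, t_inf, x_inf)    # t_n -> 1 and x_n -> 2: both finite
```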
The paradox of Zeno's Achilles and the tortoise has been regarded as a supertask, consisting of an infinite sequence of subtasks to be completed in a finite amount of time. Whether a supertask can possibly be completed has been a subject of academic debate (e.g., Black 1951; Wisdom 1952; Chihara 1965; McLaughlin and Miller 1992; Alper and Bridger 1997; Salmon 1998; Peijnenburg and Atkinson 2008; Ardourel 2015). The difficulty appears to come from the intuition that an infinite number of subtasks could take forever to complete. But in the case of Zeno's Achilles and the tortoise, the completion of those infinite steps becomes possible once one recognizes that an infinite number of subtasks does not necessarily require an infinite amount of time to complete, because the catch-up time \(t_{\infty}\) is finite. In philosophy, the concepts of space and time have been related to the relative locations of objects and sequences of events. It has become common sense to measure the distance between objects with yardsticks of constant length and to create units of time using the interval between two specific and regular events, such as repeated positions of the Sun in the sky throughout the day. In some ways, Zeno tricked us by implicitly measuring time with events of ever-shrinking intervals, presenting an illusion of an infinite sequence of events occurring over an infinitely long time. Since time is traditionally represented as a dense real-number axis, a finite time interval contains an infinite number of real numbers, offering opportunities for creating illusions with an infinite sequence of events crammed into a small finite time interval. Another similar paradox of Zeno describes the runner Achilles starting at the starting line of a track and running past half of the distance to the finish line. He then runs past half of the remaining distance and continues in this way over and over without end, getting ever closer to the finish line but seemingly never able to reach it. It contains the same type of logical fallacy as that in Achilles and the tortoise; so does Whitrow's version of the paradox of a bouncing ball (Whitrow 1980). It has been suggested that many problems in philosophy arise from misunderstandings about what everyday words actually mean (Wittgenstein 1953). If correctly stated using appropriate language, the paradox of Achilles and the tortoise can be resolved by simply adding "before the catch-up time \(t_{\infty}\) defined in (3)" to the concluding statement of "Achilles would never be able to run past the tortoise". As might be noted, the resolution presented here involves only consistency in logic and mathematics, assuming time to be represented as a real-number continuum with infinite divisibility. In the physical world, modern understanding would indicate (Hilbert 1983): "The sort of divisibility needed to realize the infinitely small is nowhere to be found in reality. The infinite divisibility of a continuum is an operation which exists only in thought." Thus discrete solutions have also been investigated (e.g., Ardourel 2015; Theunissen & Oud 2021). But for resolving the paradox _per se_, it should be sufficient to focus just on the logical reasoning according to Zeno's conception, as shown herewith.
2307.00876
Electron slingshot acceleration in relativistic preturbulent shocks explored via emitted photon polarization
Transient electron dynamics near the interface of counterstreaming plasmas at the onset of a relativistic collisionless shock (RCS) is investigated using particle-in-cell simulations. We identify a slingshot-like injection process induced by the drifting electric field sustained by the flowing focus of backwards-moving electrons, which is distinct from the well-known stochastic acceleration. The flowing focus signifies the plasma kinetic transition from a preturbulent laminar motion to a chaotic turbulence. We find a characteristic correlation between the electron dynamics in the slingshot acceleration and the photon emission features. In particular, the integrated radiation from the RCS exhibits a counterintuitive non-monotonic dependence of the photon polarization degree on the photon energy, which originates from a polarization degradation of relatively high-energy photons emitted by the slingshot-injected electrons. Our results demonstrate the potential of photon polarization as an essential information source in exploring intricate transient dynamics in RCSs with relevance for earth-based plasma and astrophysical scenarios.
Zheng Gong, Xiaofei Shen, Karen Z. Hatsagortsyan, Christoph H. Keitel
2023-07-03T09:14:53Z
http://arxiv.org/abs/2307.00876v2
# Electron slingshot acceleration in relativistic preturbulent shocks explored via emitted photon polarization

###### Abstract

Electron acceleration mechanisms near the counterstreaming interface of a relativistic collisionless shock (RCS) are investigated using particle-in-cell (PIC) simulations. We identify a slingshot-like injection process induced by the drifting electric field sustained by the flowing focus of backwards-moving electrons, which is distinct from the well-known stochastic acceleration. The flowing focus signifies the plasma kinetic transition from a preturbulent laminar motion to a chaotic turbulence. We find a characteristic correlation between the electron dynamics in the slingshot acceleration and the photon emission features. In particular, the integrated radiation from the RCS exhibits a counterintuitive non-monotonic dependence of the photon polarization degree on the photon energy, which originates from a polarization degradation of relatively high-energy photons emitted by the slingshot-injected electrons. Our results demonstrate the potential of photon polarization as an essential information source in exploring intricate dynamics in RCSs with relevance for earth-based plasma and astrophysical scenarios.

Plasma shocks are characterized by the rapid steepening of a nonlinear wave, the eventual overtaking by its rear part, and the irreversible energy transfer to the surrounding particles [1; 2; 3]. They are of extensive interest and ubiquitous in various scenarios, such as plasma accelerators [4; 5; 6; 7; 8], inertial confinement fusion [9; 10; 11; 12], Earth's magnetosphere bombarded by solar winds [13; 14; 15], young stellar outflows [16], and active galactic nuclei jets [17]. Recent observations suggest that RCSs offer plausible acceleration mechanisms towards understanding the origin of TeV cosmic leptons [18; 19; 20; 21] and galactic PeVatrons [22]. Moreover, the unprecedented 100 TeV photon emission from pulsar wind nebulae is interpreted as the Compton up-scattering of ultrarelativistic electrons driven by a RCS [23; 24; 25], and RCS-prompted afterglow radiation signals a peculiar long gamma-ray burst from the merger of a compact binary system [26; 27; 28; 29]. The onset of filamentation turbulence in the RCS efficiently converts energy from ordered bulk flows to self-amplified fields [30; 31; 32; 33; 34; 35; 36; 37]. This develops through filamentation merging and magnetic loop coalescence [38; 39; 40; 41], where electrons, undergoing turbulent motion with severe swirling and trajectory crossing, no longer travel in a quasi-layered form. The turbulence is crucial for characterizing Weibel-mediated microstructures [42; 43; 44; 45; 46; 47] and instigating stochastic acceleration [48; 49; 50; 51; 52; 53; 54]. The latter, akin to the _Fermi_ process [55; 56], refers to particle energization through chaotic scatterings off inhomogeneous structures and has been well recognized as a source of energetic electrons in RCSs [57; 58; 59; 60; 61]. Relevant experiments have attested to the growth of magnetic filament turbulence [62; 63; 64; 65; 66; 67] and to first-order Fermi acceleration [68]. However, it remains largely unexplored how the plasma transits from the nonturbulent flow to the kinetic turbulence and how this transition impacts the acceleration and radiation features in the RCS.
As a versatile information carrier of multi-messenger astrophysics [69; 70; 71], photon polarization is critical for measuring the magnetic configuration nearby black holes [72] and crab nebulae [73] and for analyzing the particle acceleration in blazar jets [74]. Therefore, the question arises whether the polarization features of spontaneously emitted photons can be employed to reveal the mechanism responsible for the turbulence transition in a RCS. In this letter, we investigate the electron dynamics in the transition to turbulence nearby the interface of a counterstreaming RCS. We employ PIC simulations to examine the photon emission and observe an anomalous non-monotonic dependence (NMD) of the photon polarization degree on the photon energy. We find that the NMD indicates a specific mechanism of electron acceleration, which we term slingshot injection, caused by a drifting electric field due to the flowing focus of backwards-moving electrons. Utilizing Hamiltonian analyses, we elucidate that the backwards-flowing focus marks the plasma transition to a turbulent regime in the RCS, which in the electron's transverse phase space is exhibited as the change from the phase-locked to the phase-slipping dynamics. The NMD photon properties stem from a polarization degradation of relatively high-energy photons emitted by the slingshot-injected electrons. The correlation among the NMD of photon polarization, the slingshot injection, and the backwards-flowing focus emphasizes the importance of the transition region to the turbulence in characterizing the acceleration and radiation in the RCS. We have carried out 2D simulations of counterstreaming RCSs; see Fig. 1. The latter is initiated when a uniform plasma flow with a bulk Lorentz factor \(\gamma_{0}=50\), injected from the right side, is reflected from the left-side boundary, which adopts a reflection condition [58]. The periodic boundary condition is set in the lateral direction. Motivated by the unknown composition of astrophysical jets, we consider the flow consisting of electrons, positrons, and ions with number densities \(n_{e0}\), \(n_{p0}\), and \(n_{i0}\), respectively. Charge neutrality \(n_{e0}=n_{p0}+Z_{i}n_{i0}\) is satisfied initially, and ions with charge (mass) \(Z_{i}=1\) (\(m_{i}=1836m_{e}\)) are used. The ratio \(\eta\equiv n_{i0}/(n_{i0}+n_{p0})\in(0.01,1)\) denotes the proportion of ions among all positively charged particles. The simulation domain is \(200\lambda_{pe}\times 20\lambda_{pe}\) with resolution \(\Delta x=\Delta y=\lambda_{pe}/50\) and \(\Delta t=0.95\Delta x/c\). Each cell is filled with 48 macro-particles for each species. Here, \(\omega_{pe}=(n_{e0}e^{2}/\varepsilon_{0}m_{e})^{1/2}\) (\(\lambda_{pe}=2\pi c/\omega_{pe}\)) is the plasma frequency (skin depth), with the electron charge (mass) \(e\) (\(m_{e}\)), the vacuum permittivity \(\varepsilon_{0}\), and the speed of light \(c\). The models of photon polarization have been implemented in the EPOCH code [77; 78]. Unless otherwise indicated, we discuss results from the fiducial simulation with \(\gamma_{0}=50\) and \(\eta=0.4\). The snapshot of the electron density \(n_{e}\) in Fig. 1(a) exhibits that the filamentation exclusively exists at the front of the RCS interface. Between two adjacent filaments, an electron focusing point emerges, and following it, two oblique density strips stretch out [see Fig. 2(b)].
Behind the strips, the coherent filament structures and focusing points disappear while the turbulence shows up. A nontrivial observation is that the photons with energy \(\varepsilon_{ph}\equiv\hbar\omega_{ph}>10^{-2}\hbar\omega_{ph}^{m}\) are primarily emitted by electrons nearby the interface, where \(\omega_{ph}^{m}\sim 10^{8}\omega_{pe}\) is the photon cut-off frequency and \(\hbar\) the Planck constant. In contrast, in the case of \(\eta=0.01\), the energetic photon emission predominantly occurs in the turbulent region [see Fig. 1(b)], even though the preturbulent structures extend over a larger range. The degree of the photon's linear polarization along the direction of the electron's transverse acceleration is characterized by the Stokes parameter \(\mathcal{Q}\) [79], formulated as [78] \[\mathcal{Q}=\frac{\varepsilon_{e}(\varepsilon_{e}-\varepsilon_{ph})K_{\frac{2}{3}}(\zeta)}{[\varepsilon_{e}^{2}+(\varepsilon_{e}-\varepsilon_{ph})^{2}]K_{\frac{2}{3}}(\zeta)-\varepsilon_{e}(\varepsilon_{e}-\varepsilon_{ph})\tilde{K}_{\frac{1}{3}}(\zeta)}, \tag{1}\] where \(K_{n}(\zeta)\) is the modified Bessel function of the second kind, \(\tilde{K}_{1/3}(\zeta)=\int_{\zeta}^{\infty}K_{1/3}(z)\mathrm{d}z\), \(\zeta=2\varepsilon_{ph}/[3\chi_{e}(\varepsilon_{e}-\varepsilon_{ph})]\), and \(\varepsilon_{e}=\gamma_{e}m_{e}c^{2}\) is the electron energy; \(\chi_{e}\equiv(e\hbar/m_{e}^{3}c^{4})|F_{\mu\nu}p^{\nu}|\) is the electron quantum strong-field parameter, with the field tensor \(F_{\mu\nu}\) and the electron four-momentum \(p^{\nu}\). At \(\chi_{e}\ll 0.1\), \(\partial\mathcal{Q}/\partial\varepsilon_{ph}>0\) predicted by Eq. (1) manifests a monotonic dependence of \(\mathcal{Q}\) on \(\omega_{ph}\), because for higher-frequency radiation the formation length is shorter and the preservation of the local polarization degree is improved. This monotonic dependence is confirmed by the results for \(\eta=0.01\) [see Fig. 1(c)(d)], where electrons experience stochastic acceleration [48] and the photon emission is isotropic in the angular space. However, for \(\eta=0.4\) [see Fig. 1(c)(d)], the averaged polarization degree \(\langle\mathcal{Q}\rangle\) versus \(\omega_{ph}\) exhibits the NMD, with a polarization dip \(\Delta\mathcal{Q}_{\omega}\approx 4.5\%\) and a bandwidth ratio \(\mathcal{B}_{\omega}\equiv\omega_{ph}^{m}/\omega_{ph}^{*}\sim 10^{3}\), contradicting the aforementioned monotonic dependence. Here, \(\omega_{ph}^{*}\) is the local maximum point of the function \(\langle\mathcal{Q}\rangle\) vs \(\omega_{ph}\) [see Fig. 1(c)]. In the angular distribution, \(\langle\mathcal{Q}\rangle\) has a polarization valley \(\Delta\mathcal{Q}_{\theta}\approx 11\%\), and the photon emission tends to be more collimated, within an emission angle \(\theta_{ph}\lesssim 15^{\circ}\). To unveil the reason for the counterintuitive NMD, we focus on the electron dynamics within the dashed box marked in Fig. 1(a). For the deflection of backwards-moving electrons nearby the interface, the effective plasma density approximates \(\eta n_{e0}\), and the charge density has a sinusoidal profile \(\rho\sim|e|\eta n_{e0}\cos[k_{y}(y-y_{c})]\), with \(k_{y}\sim\omega_{pe}/2c\) the periodic wave number and \(y_{c}\) the relative central axis [78]. The self-generated transverse electric and magnetic fields are calculated as \(E_{y}(y)=(|e|\eta n_{e0}/\varepsilon_{0}k_{y})\sin[k_{y}(y-y_{c})]\) and \(B_{z}(y)=E_{y}/c\).
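The monotonic trend quoted for \(\chi_{e}\ll 0.1\) can be checked with a numerical sketch of Eq. (1), as reconstructed above with \(K_{2/3}\) and \(\tilde{K}_{1/3}\), using SciPy's Bessel routines; the chosen \(\chi_{e}\) and energy values are illustrative only.

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def stokes_Q(eps_ph, eps_e=1.0, chi_e=0.01):
    # Stokes parameter Q of Eq. (1); energies in units of eps_e
    eps_p = eps_e - eps_ph                    # electron energy after emission
    zeta = 2 * eps_ph / (3 * chi_e * eps_p)
    k23 = kv(2 / 3, zeta)                     # K_{2/3}(zeta)
    k13_int, _ = quad(lambda z: kv(1 / 3, z), zeta, np.inf)  # tilde K_{1/3}
    num = eps_e * eps_p * k23
    den = (eps_e**2 + eps_p**2) * k23 - eps_e * eps_p * k13_int
    return num / den

for eps_ph in (1e-4, 1e-3, 1e-2, 1e-1):
    print(f"eps_ph = {eps_ph:.0e}: Q = {stokes_Q(eps_ph):.3f}")
# Q rises monotonically with eps_ph, the behavior quoted for chi_e << 0.1.
```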
As justified by simulations, the energy exchange \(d\gamma_{e}/dt\) is insignificant, and thus the transverse dynamics is described by \(\ddot{y}+(\Omega^{2}/k_{y})\sin[k_{y}(y-y_{c})]=0\), with \(\Omega^{2}=2\eta n_{e0}e^{2}/\varepsilon_{0}\gamma_{0}m_{e}\). Then the corresponding Hamiltonian can be derived as [78] \[H_{\perp}(y,\dot{y})=\frac{\Omega^{2}}{k_{y}^{2}}\cos[k_{y}(y-y_{c})]+\frac{1}{2}\dot{y}^{2}. \tag{2}\] Following \(H_{\perp}(y,\dot{y})=H_{\perp}(y_{0},0)\), the electron transverse motion is analyzed as \[t=\frac{k_{y}}{\sqrt{2}\Omega}\int\frac{dy}{\sqrt{\cos[k_{y}(y_{0}-y_{c})]-\cos[k_{y}(y-y_{c})]}}. \tag{3}\]

Figure 1: The dynamics of a counterstreaming RCS: The electron density \(n_{e}\) at \(t=80\pi/\omega_{pe}\) for (a) \(\eta=0.4\) [75] and (b) \(\eta=0.01\) [76], where lines present the typical electron moving tendency with stars marking the photon emission, and the histograms display the spatial distribution of emitted photons with \(\omega_{ph}>10^{-2}\omega_{ph}^{m}\). (c) \(\langle\mathcal{Q}\rangle\) and \(\omega_{ph}dN_{ph}/d\omega_{ph}\) vs \(\omega_{ph}\). (d) \(\langle\mathcal{Q}\rangle\) and \(dN_{ph}/d\theta_{ph}\) vs \(\theta_{ph}\).

The trajectories predicted by Eq. (3) demonstrate that the backwards-moving electrons are focused onto \(y=y_{c}\) at a restoring time \(t_{r}\sim 0.6\pi/\Omega\), as confirmed by the simulation results [see Figs. 2(a)(b)]. After the backwards-flowing focus, the electron motion starts to transit from the preturbulent motion to turbulence, interpreted as a shrinking of the Hamiltonian's separatrix. The separatrix \(H_{\perp}(y,\dot{y})\equiv H_{\perp}(y_{c},0)=\Omega^{2}/k_{y}^{2}\) divides the electron dynamics into the confined phase-locked and the escaping phase-slippage regions. If the magnetic field decreases, with the equivalent restoring frequency reduced from \(\Omega\) to \(\Omega^{\prime}\), the phase-space volume encompassed by the separatrix is shrunk from \(H_{\perp}(y,\dot{y})<\Omega^{2}/k_{y}^{2}\) to \(H_{\perp}(y,\dot{y})<\Omega^{\prime 2}/k_{y}^{2}\). Thus, the electrons within the region \(\Omega^{\prime 2}/k_{y}^{2}<H_{\perp}(y,\dot{y})<\Omega^{2}/k_{y}^{2}\) are released into the phase-slippage region [see Fig. 2(c)]. The electron release breaks the coherent filament structure and deteriorates the transverse inhomogeneity, leading to the onset of the plasma turbulence. The transition from the preturbulent flowing focus to the turbulence is illustrated by the evolution of the particle separation [see Fig. 2(d)], where \(\delta r\) is the distance between an electron and its initially closest partner and \(\overline{\delta r}\) refers to the averaged value. After the focus at \(\omega_{pe}t/2\pi\sim 45\), the signature of chaotic dynamics arises, with \(\overline{\delta r}\propto\exp{(\lambda_{l}\delta t)}\) characterized by the Lyapunov exponent \(\lambda_{l}\approx 0.15\omega_{pe}/\pi\) [83]. The electrons exhibit chaotic behavior during the defocusing stage [84], where the decrease of the exerted magnetic field \(|\overline{B_{z}}|\) evidences the shrinking of the Hamiltonian's separatrix. Later, at \(\omega_{pe}t/2\pi\sim 70\), \(\overline{\delta r}\propto 0.2c\delta t/\pi\) implies a drifting tendency, as the localized electrons are prone to occupy the whole interaction domain [85]. Eventually, at \(\omega_{pe}t/2\pi>130\), \(\overline{\delta r}\propto 9(\omega_{pe}\delta t/2\pi)^{1/2}\) manifests the electrons' random-walk behavior [86; 87].
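The quoted restoring time can be checked against the pendulum form of the equation of motion above: writing \(\theta=k_{y}(y-y_{c})\) gives \(\ddot{\theta}+\Omega^{2}\sin\theta=0\), and an electron released at rest from \(\theta_{0}\) reaches \(\theta=0\) after a quarter pendulum period \(K(m)/\Omega\), with \(K(m)\) the complete elliptic integral of the first kind in the parameter convention \(m=\sin^{2}(\theta_{0}/2)\). The sketch below is our verification, not the paper's code.

```python
import numpy as np
from scipy.special import ellipk   # K(m), parameter convention m = k**2

# Focusing time of an electron released at rest from phase theta0,
# in units of 1/Omega (quarter pendulum period).
for theta0 in (0.1 * np.pi, 0.3 * np.pi, 0.5 * np.pi):
    t_focus = ellipk(np.sin(theta0 / 2) ** 2)
    print(f"theta0 = {theta0/np.pi:.1f} pi: t = {t_focus/np.pi:.2f} pi/Omega")
# The time stays in the 0.50-0.59 pi/Omega range over a broad band of
# release phases, consistent with the quoted t_r ~ 0.6 pi/Omega.
```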
The flowing focus leads to a negative longitudinal electric field \(E_{x}\) with a scale length \(\delta l\) [see Fig. 3(a)], favorable for injecting electrons into the RCS. This injection resembles a slingshot, where the filaments serve as the handle, the backwards-moving electrons behave as the elastic string, and the injected forwards-moving electrons are the projectiles [88]. The scale length is calculated as \(\delta l\sim t_{r}c\approx\pi\sqrt{\gamma_{0}/\eta}(c/\omega_{pe})\). Given \(\nabla\cdot\mathbf{E}=\rho/\varepsilon_{0}\), the field strength is estimated as \(\langle E_{x}\rangle\approx\pi\sqrt{\eta\gamma_{0}}\,m_{e}c\,\omega_{pe}/|e|\) [see Fig. 3(b)]. The flowing focus successively occurs for the replenished backwards-moving electrons, and the field \(E_{x}\) propagates with a velocity \(v_{x}\approx v_{d}=(1-1/\gamma_{0}^{2})^{1/2}c\). In the interface's co-moving frame \(\xi\equiv x-v_{d}t\), the electron's longitudinal dynamics is determined by the Hamiltonian \(\Pi_{\parallel}(\xi,p_{x})=-|e|\varphi(\xi)+c\sqrt{m_{e}^{2}c^{2}+p_{x}^{2}}-v_{d}p_{x}\) with \(\varphi(\xi)=-\int E_{x}(\xi)d\xi\) [see Fig. 3(c)] [78]. Considering the separatrix \((\xi_{0},p_{d})\) with \(E_{x}(\xi_{0})=0\) and \(p_{d}=m_{e}v_{d}/(1-v_{d}^{2}/c^{2})^{1/2}\), the relation \(\Pi_{\parallel}(0,p_{d}^{\pm})=\Pi_{\parallel}(\xi_{0},p_{d})\) governs the injection threshold \(p_{th}^{-}\) and the maximum achievable momentum \(p_{th}^{+}\), derived as [78] \[p_{th}^{+}\sim 2\gamma_{0}^{3}+\frac{3}{2}\gamma_{0}-\frac{2}{\gamma_{0}}\quad\&\quad p_{th}^{-}\sim-\frac{\gamma_{0}}{2}-\frac{1}{2\gamma_{0}}. \tag{4}\]

Figure 2: (a) Schematic of the backwards-flowing focus with the predicted electron trajectories [80]. The brown (purple) markers denote the magnetic (electric) field direction and the green arrows present the slingshot-injected electrons. (b) Zoom in on the dashed box marked in Fig. 1(a), where the red (blue) lines represent the backward (forward) moving electrons [81]. (c) Electron evolution in \((y,\dot{y})\) space with the blue dashed (red dotted) lines contouring \(H_{\perp}|_{\Omega\approx 0.1\omega_{pe}}\) (\(H_{\perp}|_{\Omega^{\prime}\approx 0.02\omega_{pe}}\)) [82]. (d) Time evolution of \(\delta r\) (\(\overline{\delta r}\)) in grey (red).

Figure 3: (a) Electric field \(E_{x}\) with the transverse profiles of \(B_{z}\) and \(E_{y}\). (b) \(\delta l\) and \(\langle E_{x}\rangle\) vs \(\eta\). (c) Hamiltonian \(\Pi_{\parallel}(\xi,p_{x})\) with the red arrows denoting the moving tendency modified by the magnetic deflection. (d) Time-evolved electron positions, where the black lines profile \(B_{z}\) and the red (blue) stars mark the photon emissions belonging to the slingshot (stochastic) mechanism. Three kinds of slingshot electrons are shown, with 'A' in cyan, 'B' in lime, and 'C' in magenta.

Specifically, there are three types of slingshot-injected electrons [see Fig. 3] [89]. The 'A' electrons co-moving with \(E_{x}\) get a pronounced energy gain, up to \(\gamma_{e}\sim 10^{3}\) for the considered parameters. The initially backwards-moving 'B' electrons are below the threshold, i.e. \(p_{x}\sim-\gamma_{0}<p_{th}^{-}\), but they are still injected because the magnetic deflection \(\mathbf{v}\times\mathbf{B}\) leads to an attractor effect in \((\xi,p_{x})\) space [90; 91], which drags the electrons towards the degraded Hamiltonian \(\Pi_{\parallel}\) [see the red arrows in Fig. 3(c)] [78].
The 'C' electrons are trapped by the \(E_{x}\) induced by the assembly of the two stretched-out density strips behind the flowing-focus position. Unlike the directed slingshot electrons, the stochastic electrons tend to be repetitively rebounded by the magnetic turbulence and undergo Fermi-like acceleration [58; 59]. Figure 3(d) manifests that the primary contribution to the photon emission nearby the preturbulent RCS interface originates from the slingshot electrons. In the search for a distinct criterion distinguishing between the slingshot and stochastic electrons, we turn to the electron's longitudinal and transverse work \(W_{\parallel,\perp}\) [see Fig. 4(a)], where \(W_{\parallel}=-\int|e|E_{x}dx\), \(W_{\perp}=-\int|e|E_{y}dy\), and \(W_{\rm t}=W_{\parallel}+W_{\perp}\); the integrals are calculated from the beginning to the photon-emitting moment. The slingshot acceleration relies on \(E_{x}\) while the stochastic process is isotropic, meaning that the photon emissions associated with \(W_{\parallel}/W_{\rm t}\to 1\) (\(W_{\parallel}/W_{\rm t}\to 0.5\)) belong to the slingshot (stochastic) mechanism [92]. Therefore, the condition \(W_{\parallel}/W_{\rm t}\lessgtr 0.75\) is a reasonable criterion to distinguish the photon emission from the stochastic or slingshot mechanism. For the photons produced by the two mechanisms, \(\langle\mathcal{Q}\rangle\) vs \(\omega_{ph}\) [in Fig. 4(b)] is monotonically increasing in both cases, as predicted by Eq. (1). However, the photon emission from the slingshot is shifted to the higher frequency range compared with the stochastic scenario, due to the enhanced energy of the slingshot electrons [see Fig. 4(c)]. Therefore, the NMD of \(\langle\mathcal{Q}\rangle\) vs \(\omega_{ph}\) comes from the combination of the high-polarization-degree stochastic photons and the low-polarization-degree slingshot photons around \(\omega_{ph}\sim 10^{6}\omega_{pe}\) [see Fig. 4(b)]; nearby this frequency region, the emission of both mechanisms contributes. The maximum slingshot energy \(\gamma_{sli}^{m}\) is approximated as \(\gamma_{sli}^{m}\sim|e|\left\langle E_{x}\right\rangle\delta t/m_{e}c\sim\pi\sqrt{\eta\gamma_{0}}\omega_{pe}\delta t\) when the electron energy is far from the saturation \(\gamma_{e}\ll p_{th}^{+}\sim 10^{6}\). The energy gain of the stochastic process is estimated using the random walk model [86]: \(\gamma_{sto}^{m}\sim 0.5(\omega_{pe,0}\delta t)^{1/2}\gamma_{0}^{3/4}\eta^{1/4}\) [78]. These estimates agree well with the simulation results [Fig. 4(c)]. Following \(\gamma_{sli}^{m}\) and \(\gamma_{sto}^{m}\), the photon cut-off frequency \(\omega_{ph}^{m}\) for the slingshot and stochastic mechanisms is predicted as \(\omega_{ph}^{m,sli}\sim\gamma_{sli}^{m\,2}B\propto\gamma_{0}^{3/2}\eta\) and \(\omega_{ph}^{m,sto}\sim\gamma_{sto}^{m\,2}B\propto\gamma_{0}^{2}\eta^{1/2}\) [see Fig. 5(a)], with the magnetic field strength \(B\propto\gamma_{0}^{1/2}\). Examining the NMD polarization features of the polarization dip \(\Delta\mathcal{Q}_{\omega}\) and the bandwidth \(\mathcal{B}_{\omega}\), we conclude that the high-frequency photon emission is dominated by the slingshot mechanism when three criteria are fulfilled: i) the photon cut-off frequency originating from the slingshot electrons is much higher than that from the stochastic mechanism, i.e.
\(\omega_{ph}^{m,sli}\gg\omega_{ph}^{m,sto}\), reformulated as \(\eta\gtrsim\eta^{*}=0.01\gamma_{0}\); ii) the number of slingshot-injected electrons \(N_{e}^{sli}\propto\langle E_{x}\rangle\) should be larger than the most energetic part of the stochastic electrons \(N_{e}^{sto}\propto n_{pe0}\), rearranged as \(\eta\gtrsim\eta^{\dagger}\propto\gamma_{0}^{-1}\); iii) the saturation of the slingshot acceleration should be higher than that of the stochastic acceleration, i.e. \(p_{th}^{+}\gtrsim\gamma_{sto}^{m}\), expressed as \(\gamma_{0}\gtrsim\gamma^{*}=2\eta^{1/9}\). The criteria of slingshot dominance, predicted by \(\eta>\max\left\{\eta^{*},\eta^{\dagger}\right\}\) and \(\gamma>\gamma^{*}\), agree well with the simulation results [see Fig. 5(b)]. The dependence of \(\Delta\mathcal{Q}_{\omega}\) and \(\mathcal{B}_{\omega}\) on \(\eta\) and \(\gamma_{0}\) in Fig. 5(b) confirms that the NMD of the polarization degree on photon energy comes exclusively from the emission dominated by the slingshot mechanism. In conclusion, inspecting the origin of the unexpected polarization features of the photon radiation in the preturbulent RCS, we have identified the electron slingshot-like acceleration mechanism, distinct from the known stochastic process [48; 49; 50; 51; 52; 53; 54]. The slingshot injection is induced by the electron backwards-flowing focus associated with the transition to turbulence at the RCS's counterstreaming interface. The identified features of the transition region to turbulence, the slingshot injection, and the photon polarization dependence have crucial implications for both laboratory and astrophysical phenomena. For instance, the turbulence transition with the backwards-flowing focus may deteriorate the ignition efficiency in confinement fusion by yielding superthermal particles nearby the interface of a beam propagating in dense plasmas [93; 94; 95]. Moreover, the slingshot procedure provides an alternative mechanism accounting for TeV cosmic electrons [19] and a feasible pre-stage injection for the subsequent infinite _Fermi_ acceleration in RCSs [96]. Finally, the nontrivial photon polarization dynamics suggests the necessity of revising the retrieval procedures for astrophysical magnetic configurations based on radiation features [72; 73; 97; 98]. The original version of the code EPOCH adapted here is funded by the UK EPSRC grants EP/G054950/1, EP/G056803/1, EP/G055165/1 and EP/M022463/1. The authors would like to thank Laurent Gremillet, Anatoly Spitkovsky, and Dmitri Uzdensky for discussions regarding plasma stream instabilities, the initialization of the RCS in PIC simulations, and the undetermined composition of astrophysical jets, respectively. Z. G. also thanks Zhi-Qiu Huang for the gained knowledge about the RCS generated following gamma-ray bursts.
2303.03650
Systematic approaches to generate reversiblizations of Markov chains
Given a target distribution $\pi$ and an arbitrary Markov infinitesimal generator $L$ on a finite state space $\mathcal{X}$, we develop three structured and inter-related approaches to generate new reversiblizations from $L$. The first approach hinges on a geometric perspective, in which we view reversiblizations as projections onto the space of $\pi$-reversible generators under suitable information divergences such as $f$-divergences. With different choices of functions $f$, we not only recover nearly all established reversiblizations but also unravel and generate new reversiblizations. Along the way, we unveil interesting geometric results such as bisection properties, Pythagorean identities, parallelogram laws and a Markov chain counterpart of the arithmetic-geometric-harmonic mean inequality governing these reversiblizations. This further serves as motivation for introducing the notion of information centroids of a sequence of Markov chains and to give conditions for their existence and uniqueness. Building upon the first approach, we view reversiblizations as generalized means. In this second approach, we construct new reversiblizations via different natural notions of generalized means such as the Cauchy mean or the dual mean. In the third approach, we combine the recently introduced locally-balanced Markov processes framework and the notion of convex $*$-conjugate in the study of $f$-divergence. The latter offers a rich source of balancing functions to generate new reversiblizations.
Michael C. H. Choi, Geoffrey Wolfer
2023-03-07T04:58:26Z
http://arxiv.org/abs/2303.03650v4
# Systematic approaches to generate reversiblizations of non-reversible Markov chains

###### Abstract

Given a target distribution \(\pi\) and an arbitrary Markov infinitesimal generator \(L\) on a finite state space \(\mathcal{X}\), we develop three structured and inter-related approaches to generate new reversiblizations from \(L\). The first approach hinges on a geometric perspective, in which we view reversiblizations as projections onto the space of \(\pi\)-reversible generators under suitable information divergences such as \(f\)-divergences. Different choices of \(f\) allow us to recover almost all known reversiblizations while at the same time unraveling and generating new reversiblizations. Along the way, we give interesting geometric results such as bisection properties, Pythagorean identities, parallelogram laws and a Markov chain counterpart of the arithmetic-geometric-harmonic mean inequality governing these reversiblizations. This also motivates us to introduce the notion of information centroids of a sequence of Markov chains and to give conditions for their existence and uniqueness. Building upon the first approach, we view reversiblizations as generalized means in the second approach, and construct new reversiblizations via different natural notions of generalized means such as the Cauchy mean or the dual mean. In the third approach, we combine the recently introduced locally-balanced Markov processes framework and the notion of convex \(*\)-conjugate in the study of \(f\)-divergence. The latter offers a rich source of balancing functions to generate new reversiblizations.

**AMS 2010 subject classifications**: 60J27, 60J28, 94A17, 62B10

**Keywords**: Metropolis-Hastings; reversiblizations; \(f\)-divergence; information geometry; generalized mean; symmetrization; information centroid; Barker proposal; balancing function; locally-balanced Markov processes

###### Contents

* 1 Introduction
* 2 Preliminaries
* 3 Generating new reversiblizations via geometric projections and minimization of \(f\)-divergence
  * 3.1 A bisection property for \(D_{f}\) and \(\overline{D}_{f}\)
  * 3.2 Squared Hellinger distance with \(f(t)=(\sqrt{t}-1)^{2}\)
  * 3.3 \(\chi^{2}\)-divergence and reverse \(\chi^{2}\)-divergence with \(f(t)=(t-1)^{2}\) and \(f^{*}(t)=t(1/t-1)^{2}\)
  * 3.4 \(\alpha\)-divergence with \(f(t)=\frac{t^{\alpha}-\alpha t-(1-\alpha)}{\alpha(\alpha-1)}\)
  * 3.5 Jensen-Shannon divergence and Vincze-Le Cam divergence
  * 3.6 Renyi-divergence
  * 3.7 A Markov chain version of arithmetic-geometric-harmonic mean inequality for hitting time and mixing time parameters
  * 3.8 Approximating \(f\)-divergence by \(\chi^{2}\)-divergence and an approximate triangle inequality
  * 3.9 \(f\) and \(f^{*}\)-projection centroids of a sequence of Markov chains
    * 3.9.1 Proof of Theorem 3.11
    * 3.9.2 Proof of Theorem 3.12
* 4 Generating new reversiblizations via generalized mean

Second, new reversiblizations offer new \(\pi\)-reversible proposals for Markov chain Monte Carlo, such as those studied in Choi and Huang (2020) or the Barker proposal in Livingstone and Zanella (2022); Vogrinc et al. (2022); Zanella (2020). Third, new reversiblizations also give rise to new symmetrizations of non-symmetric and non-negative matrices or, in general, non-self-adjoint kernel operators. By taking \(\pi\) to be the discrete uniform distribution on \(\mathcal{X}\), this yields symmetrizations of the original non-symmetric and non-negative matrices.
To the best of our knowledge, many of the new reversiblizations or symmetrizations proposed in subsequent sections of this manuscript have not yet been investigated in the linear algebra or functional analysis literature. While there are already quite a few reversiblizations in the literature, there seems to be a lack of systematic approaches to generate new reversiblizations. In this paper, we propose and develop three systematic approaches to generate new reversiblizations while recovering most of the reversiblizations in the earlier mentioned literature. We summarize our main contributions as follows:

1. **Generating reversiblizations via geometric projections.** This approach continues the line of work initiated in Billera and Diaconis (2001); Diaconis and Miclo (2009); Wolfer and Watanabe (2021), in which reversiblizations are viewed as projections under information divergences such as \(f\)-divergences. The advantage of this approach is that we can recover all known reversiblizations in a unified framework. We also discover that the Barker proposal arises naturally as a projection under the \(\chi^{2}\)-divergence. Notable highlights of this approach include bisection properties, Pythagorean identities, parallelogram laws and a Markov chain counterpart of the arithmetic-geometric-harmonic mean (AM-GM-HM) inequality for various hitting time and mixing time parameters. We also introduce, visualize and characterize the notion of \(f\) and \(f^{*}\)-projection centroids of a sequence of Markov chains.
2. **Generating reversiblizations via generalized mean.** Capitalizing on the geometric approach, we realize that one can also broadly view reversiblizations as a suitable mean or average between \(L\) and its \(\pi\)-dual \(L_{\pi}\). In this approach, we generate new reversiblizations by investigating generalized notions of means, such as the Cauchy mean or the dual mean reversiblizations. The reversiblizations generated in this approach are usually not quasi-arithmetic means, unlike those from the geometric projection approach, and are usually based on the differences between \(L\) and \(L_{\pi}\).
3. **Generating reversiblizations via balancing function and convex \(f\).** The reversiblizations generated in the first two approaches all fall into the locally-balanced Markov processes framework. To adapt this framework to generate reversiblizations, it amounts to choosing a suitable balancing function, and a rich source of such balancing functions comes from a simple average between a convex \(f\) and its convex \(*\)-conjugate \(f^{*}\) (to be introduced in Section 2).

The rest of this paper is organized as follows. We begin by introducing various notions and notation in Section 2. We proceed to discuss the geometric projection approach to generate reversiblizations in Section 3. Within this section, we first discuss the bisection property, followed by an investigation of a range of commonly used \(f\)-divergences and the Renyi-divergence. We state the Markov chain version of the AM-GM-HM inequality in Section 3.7, and the notion of \(f\) and \(f^{*}\)-projection centroids of a sequence of Markov chains is given in Section 3.9. In Section 4, we discuss the generalized mean approach to generate reversiblizations.
We first introduce two broad classes of Cauchy mean reversiblizations, such as the Stolarsky mean and the logarithmic mean reversiblizations, in Section 4.1, and then in Section 4.2 we consider dual mean reversiblizations such as the dual power mean, the dual Stolarsky mean and the dual logarithmic mean. Finally, we combine the locally-balanced Markov processes framework with the convex \(*\)-conjugate in \(f\)-divergence to generate reversiblizations in Section 5.

## 2. Preliminaries

Let \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a convex function with \(f(1)=0\) that grows with at most polynomial order, and is assumed to have a derivative at \(1\) given by \(f^{\prime}(1)=0\). Let \(\mathcal{L}\) denote the set of Markov infinitesimal generators defined on a finite state space \(\mathcal{X}\), that is, the set of \(\mathcal{X}\times\mathcal{X}\) matrices with non-negative off-diagonal entries and zero row sums, and similarly we write \(\mathcal{L}(\pi)\subseteq\mathcal{L}\) for the set of reversible generators with respect to a distribution \(\pi\). We say that \(L\) is \(\pi\)-stationary if \(\pi L=0\). Let \(L_{\pi}\) be the \(\pi\)-dual of \(L\in\mathcal{L}\) (in the sense of (Jansen and Kurt, 2014, Proposition \(1.2\)) with \(H(x,y)=\pi(y)\) for all \(x,y\in\mathcal{X}\) therein) with off-diagonal entries defined to be, for \(x\neq y\), \[L_{\pi}(x,y)=\frac{\pi(y)}{\pi(x)}L(y,x),\] while the diagonal entries of \(L_{\pi}\) are such that the row sums are zero for each row. In the special case when \(L\) admits \(\pi\) as its unique stationary distribution, \(L_{\pi}=L^{*}\), the \(\ell^{2}(\pi)\) adjoint of \(L\), or the time-reversal of \(L\). Following the definition in Diaconis and Miclo (2009), given a fixed target \(\pi\), for any two given Markov infinitesimal generators \(M,L\in\mathcal{L}\), we define the \(f\)-divergence between \(M\) and \(L\) to be \[D_{f}(M||L):=\sum_{x\in\mathcal{X}}\pi(x)\sum_{y\in\mathcal{X}\setminus\{x\}}L(x,y)f\left(\frac{M(x,y)}{L(x,y)}\right), \tag{2.1}\] where the conventions \(0f(0/0)=0\) and \(0f(a/0)=0\) for \(a>0\) apply in the definition above. We remark that by requiring non-negativity of \(f\), the definition of \(f\)-divergence between Markov generators is slightly different from the classical definition of \(f\)-divergence in information theory between probability measures; see e.g. Sason and Verdu (2016) and the references therein. For instance, \(f(t)=t\ln t\) is not in the set while \(f(t)=t\ln t-t+1\) is in the set. Let \(f^{*}\) be the convex \(*\)-conjugate (or simply conjugate) of \(f\), defined as \(f^{*}(t):=tf(1/t)\) for \(t>0\); then it can readily be seen that \[D_{f}(M||L)=D_{f^{*}}(L||M),\] and \(f^{*}\) is also convex with \(f^{*}(1)=f^{*\prime}(1)=0\). Thus, for convex \(f\) that is self-conjugate, that is, \(f^{*}=f\), the \(f\)-divergence as defined in (2.1) is symmetric in its arguments. As a result, we can symmetrize a possibly non-symmetric \(D_{f}\) into a symmetric one by considering \(D_{(f+f^{*})/2}\). For given \(L,M\in\mathcal{L}\), we define \[\overline{D}_{f}(L||M):=D_{f}\left(L\bigg{|}\bigg{|}\frac{1}{2}(L+M)\right). \tag{2.2}\] Information divergences that can be expressed by \(\overline{D}_{f}\) include the Jensen-Shannon divergence and the Vincze-Le Cam divergence; see Section 3.5.
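To make the definitions concrete, here is a small numerical sketch (our illustration, not code from the paper): it builds a random generator, forms its \(\pi\)-dual, checks the conjugation identity \(D_{f}(M||L)=D_{f^{*}}(L||M)\) for \(f(t)=t\ln t-t+1\), and verifies that the entrywise minimum of \(L\) and \(L_{\pi}\), the classical Metropolis-Hastings reversiblization formalized via power means in the next subsection, satisfies detailed balance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
pi = rng.random(n); pi /= pi.sum()          # target distribution

def rand_gen(n):
    # random Markov generator: positive off-diagonal, zero row sums
    L = rng.random((n, n))
    np.fill_diagonal(L, 0.0)
    L[np.diag_indices(n)] = -L.sum(axis=1)
    return L

def pi_dual(L):
    # L_pi(x, y) = pi(y) L(y, x) / pi(x) off the diagonal
    Ld = (pi[None, :] * L.T) / pi[:, None]
    np.fill_diagonal(Ld, 0.0)
    Ld[np.diag_indices(n)] = -Ld.sum(axis=1)
    return Ld

def D(M, L, f):
    # f-divergence between generators, Eq. (2.1)
    return sum(pi[x] * L[x, y] * f(M[x, y] / L[x, y])
               for x in range(n) for y in range(n) if x != y)

f      = lambda t: t * np.log(t) - t + 1    # convex, f(1) = f'(1) = 0
f_star = lambda t: t - np.log(t) - 1        # f*(t) = t f(1/t)

L, M = rand_gen(n), rand_gen(n)
assert np.isclose(D(M, L, f), D(L, M, f_star))   # D_f(M||L) = D_{f*}(L||M)

off = ~np.eye(n, dtype=bool)
MH = np.minimum(L, pi_dual(L))              # Metropolis-Hastings reversiblization
flux = pi[:, None] * MH                     # pi(x) MH(x, y)
assert np.allclose(flux[off], flux.T[off])  # detailed balance off the diagonal
```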
Given a general Markov generator \(L\) which does not necessarily admit \(\pi\) as its stationary distribution, we are interested in investigating the projection of \(L\) onto the set \(\mathcal{L}(\pi)\) with respect to the \(f\)-divergence \(D_{f}\) as introduced earlier in (2.1). To this end, inspired by the notions of reversible information projections introduced in Wolfer and Watanabe (2021) for the Kullback-Leibler divergence in a discrete-time setting, we define analogously the notions of \(f\)-projection and \(f^{*}\)-projection with respect to \(D_{f}\) to be \[M^{f}=M^{f}(L,\pi):=\operatorname*{arg\,min}_{M\in\mathcal{L}(\pi)}D_{f}(M||L),\quad M^{f^{*}}=M^{f^{*}}(L,\pi):=\operatorname*{arg\,min}_{M\in\mathcal{L}( \pi)}D_{f}(L||M). \tag{2.3}\] It is instructive to note that our notions of projection are with respect to a fixed target \(\pi\), while in Wolfer and Watanabe (2021) projections are onto the entire reversible set. In the context of Markov chain Monte Carlo, we are often given a target \(\pi\) for instance a posterior distribution in a Bayesian model, and in this setting it is not at all restrictive to consider and investigate projections onto \(\mathcal{L}(\pi)\). In the subsequent sections, we shall specialize ourselves into various common choices of \(f\), and investigate the corresponding \(M^{f}\) and \(M^{f^{*}}\) associated with these \(f\). It turns out that in most of these cases, these two projections can be expressed as certain power mean of \(L\) and \(L_{\pi}\). We shall define, for \(x\neq y\in\mathcal{X}\) and \(p\in\mathbb{R}\backslash\{0\}\), \[P_{p}(x,y):=\left(\frac{L(x,y)^{p}+L_{\pi}(x,y)^{p}}{2}\right)^{1/p}, \tag{2.4}\] and the diagonal entries of \(P_{p}\) are such that the row sum is zero for all rows, that we call power mean reversiblizations. We check that \(P_{p}\) is indeed \(\pi\)-reversible, since \[\pi(x)P_{p}(x,y) =\left(\frac{(\pi(x)L(x,y))^{p}+(\pi(x)L_{\pi}(x,y))^{p}}{2} \right)^{1/p}\] \[=\left(\frac{(\pi(y)L_{\pi}(y,x))^{p}+(\pi(y)L(y,x))^{p}}{2} \right)^{1/p}\] \[=\pi(y)P_{p}(y,x),\] and hence the detailed balance condition is satisfied with \(P_{p}\). We can also understand the limiting cases as \[P_{0}(x,y) =\lim_{p\to 0}P_{p}(x,y)=\sqrt{L(x,y)L_{\pi}(x,y)},\] \[P_{\infty}(x,y) =\lim_{p\to\infty}P_{p}(x,y)=\max\{L(x,y),L_{\pi}(x,y)\},\] \[P_{-\infty}(x,y) =\lim_{p\to-\infty}P_{p}(x,y)=\min\{L(x,y),L_{\pi}(x,y)\},\] which are, respectively, the geometric mean reversiblization as studied in Diaconis and Miclo (2009); Wolfer and Watanabe (2021), \(M_{2}\)-reversiblization as proposed in Choi (2020), and the classical Metropolis-Hastings reversiblization. We also call the case of \(p=1/3\) to be the Lorentz mean reversiblization as it is the Lorentz mean known in the literature Lin (1974). ## 3. Generating new reversiblizations via geometric projections and minimization of \(f\)-divergence ### A bisection property for \(D_{f}\) and \(\overline{D}_{f}\) First, we present a bisection property that states the information divergence as measured by \(D_{f}\) is the same for the pair \((L,M)\) and \((L_{\pi},M)\), where \(M\in\mathcal{L}(\pi)\) and we recall that \(L_{\pi}\) is the \(\pi\)-dual of \(L\). This general result will be useful in proving various Pythagorean identities or bisection properties in subsequent sections. **Theorem 3.1** (Bisection property of \(D_{f}\)).: _Assume that \(M\in\mathcal{L}(\pi)\) and \(L\in\mathcal{L}\). 
Then we have_ \[D_{f}(L||M) =D_{f}(L_{\pi}||M),\] \[D_{f}(M||L) =D_{f}(M||L_{\pi}).\] _In particular, if \(L\) admits \(\pi\) as its stationary distribution, then_ \[D_{f}(L||M) =D_{f}(L^{*}||M),\] \[D_{f}(M||L) =D_{f}(M||L^{*}).\] Proof.: For the first equality, we calculate that \[D_{f}(L||M)=\sum_{x\neq y}\pi(x)M(x,y)f\left(\frac{L(x,y)}{M(x,y)}\right)=\sum_{x \neq y}\pi(y)M(y,x)f\left(\frac{L_{\pi}(y,x)}{M(y,x)}\right)=D_{f}(L_{\pi}||M).\] For the second equality, making use of the first equality for \(f^{*}\)-divergence \(D_{f^{*}}\) we have \[D_{f}(M||L)=D_{f^{*}}(L||M)=D_{f^{*}}(L_{\pi}||M)=D_{f}(M||L_{\pi}).\] We proceed to prove an analogous bisection property for \(\overline{D}_{f}\): **Theorem 3.2** (Bisection property of \(\overline{D}_{f}\)).: _Assume that \(M\in\mathcal{L}(\pi)\) and \(L\in\mathcal{L}\). Then we have_ \[\overline{D}_{f}(L||M)=\overline{D}_{f}(L_{\pi}||M).\] _In particular, if \(L\) admits \(\pi\) as its stationary distribution, then_ \[\overline{D}_{f}(L||M)=\overline{D}_{f}(L^{*}||M).\] Proof.: We check that \[\overline{D}_{f}(L||M) =\sum_{x\neq y}\pi(x)\frac{1}{2}(L(x,y)+M(x,y))f\left(\frac{L(x, y)}{\frac{1}{2}(L(x,y)+M(x,y))}\right)\] \[=\sum_{x\neq y}\pi(y)\frac{1}{2}(L_{\pi}(y,x)+M(y,x))f\left(\frac {L_{\pi}(y,x)}{\frac{1}{2}(L_{\pi}(y,x)+M(y,x))}\right)=\overline{D}_{f}(L_{ \pi}||M).\] Indeed the proof shows that this remains true when \((L+M)/2\) is replaced with any convex combination. ### Squared Hellinger distance with \(f(t)=(\sqrt{t}-1)^{2}\) In this subsection, we shall investigate \(f\)-divergence with the choice of \(f(t)=(\sqrt{t}-1)^{2}\), known as the squared Hellinger distance in the literature. It can readily be seen that \(f\) is a strictly convex self-conjugate function with \(f(1)=f^{\prime}(1)=0\). **Theorem 3.3** (Squared Hellinger distance and \(P_{1/2}\)-reversiblization).: _Let \(f(t)=(\sqrt{t}-1)^{2}\). Suppose that \(L\in\mathcal{L}\)._ 1. _(_\(P_{1/2}\)_-reversiblization as_ \(f\)_-projection) The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto D_{f}(M||L)\] _admits a unique minimizer the_ \(f\)_-projection given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M^{f}(x,y)=\left(\frac{\sqrt{L(x,y)}+\sqrt{L_{\pi}(x,y)}}{2}\right)^{2}=P_{1/ 2}(x,y),\] _the power mean_ \(P_{1/2}\) _of_ \(L(x,y)\) _and_ \(L_{\pi}(x,y)\) _with_ \(p=1/2\)_. In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution,_ \[M^{f}(x,y)=\left(\frac{\sqrt{L(x,y)}+\sqrt{L^{*}(x,y)}}{2}\right)^{2}.\] 2. _(_\(P_{1/2}\)_-reversiblization as_ \(f^{*}\)_-projection) The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto D_{f}(L||M)\] _admits a unique minimizer the_ \(f^{*}\)_-projection given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M^{f^{*}}(x,y)=\left(\frac{\sqrt{L(x,y)}+\sqrt{L_{\pi}(x,y)}}{2}\right)^{2}=P_{ 1/2}(x,y).\] _In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution,_ \[M^{f^{*}}(x,y)=\left(\frac{\sqrt{L(x,y)}+\sqrt{L^{*}(x,y)}}{2}\right)^{2}.\] 3. _(Pythagorean identity) For any_ \(\overline{M}\in\mathcal{L}(\pi)\)_, we have_ (3.1) \[D_{f}(L||\overline{M})=D_{f}(L||M^{f^{*}})+D_{f}(M^{f^{*}}|| \overline{M}),\] (3.2) \[D_{f}(\overline{M}||L)=D_{f}(\overline{M}||M^{f})+D_{f}(M^{f}||L).\] 4. _(Bisection property)_ \[D_{f}(L||M^{f^{*}}) =D_{f}(L_{\pi}||M^{f^{*}}),\] \[D_{f}(M^{f}||L) =D_{f}(M^{f}||L_{\pi}).\] _In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution, then_ \[D_{f}(L||M^{f^{*}}) =D_{f}(L^{*}||M^{f^{*}}),\] \[D_{f}(M^{f}||L) =D_{f}(M^{f}||L^{*}).\] 5. 
_(Parallelogram law) For any_ \(\overline{M}\in\mathcal{L}(\pi)\)_, we have_ \[D_{f}(L||\overline{M})+D_{f}(L_{\pi}||\overline{M}) =2D_{f}(L||M^{f^{*}})+2D_{f}(M^{f^{*}}||\overline{M}),\] \[D_{f}(\overline{M}||L)+D_{f}(\overline{M}||L_{\pi}) =2D_{f}(\overline{M}||M^{f})+2D_{f}(M^{f}||L).\] _Proof._ We first prove item (1). Pick an arbitrary total ordering on \(\mathcal{X}\) with strict inequality being denoted by \(\prec\). We also write \[\alpha=\alpha(x,y)=\pi(x)M(x,y),\quad\alpha^{\prime}=\alpha^{ \prime}(y,x)=\pi(y)M(y,x),\] \[\beta=\beta(x,y)=\pi(x)L(x,y),\quad\beta^{\prime}=\beta^{\prime} (y,x)=\pi(y)L(y,x).\] We then see that \[D_{f}(M||L) =\sum_{x\prec y}\pi(x)L(x,y)f\left(\frac{M(x,y)}{L(x,y)}\right)+ \pi(y)L(y,x)f\left(\frac{M(y,x)}{L(y,x)}\right)\] \[=\sum_{x\prec y}\alpha-2\sqrt{\alpha\beta}+\beta+\alpha^{\prime} -2\sqrt{\alpha^{\prime}\beta^{\prime}}+\beta^{\prime}.\] For \(M\in\mathcal{L}(\pi)\), we have \(\alpha=\alpha^{\prime}\), and hence we proceed to minimize the summand of each term above, which leads to minimizing the following strictly convex mapping as a function of \(\alpha\) \[\alpha\mapsto 2\alpha-2\sqrt{\alpha\beta}+\beta-2\sqrt{\alpha\beta^{\prime}}+ \beta^{\prime}.\] By differentiation, this yields \[M^{f}(x,y)=\left(\frac{\sqrt{L(x,y)}+\sqrt{L_{\pi}(x,y)}}{2}\right)^{2}.\] Next, we prove item (2), which follows from the property as \(D_{f}(M||L)=D_{f}(L||M)\) since \(f\) is a self-conjugate function. Thirdly, we prove item (3), and it suffices for us to prove (3.1) since (3.2) follows from the self-conjugate property of \(f\) and \(M^{f}=M^{f^{*}}\). To this end, we calculate that \[D_{f}(L||M^{f^{*}}) +D_{f}(M^{f^{*}}||\overline{M})-D_{f}(L||\overline{M})\] \[=\sum_{x}\pi(x)\bigg{(}\sum_{y\neq x}\left(\sqrt{L(x,y)}-\sqrt{M^ {f^{*}}(x,y)}\right)^{2}+\sum_{y\neq x}\left(\sqrt{M^{f^{*}}(x,y)}-\sqrt{ \overline{M}(x,y)}\right)^{2}\] \[\quad-\sum_{y\neq x}\left(\sqrt{L(x,y)}-\sqrt{\overline{M}(x,y)} \right)^{2}\bigg{)}\] \[=\sum_{x}\pi(x)2\sum_{y\neq x}\left(\sqrt{L(x,y)}\sqrt{\overline {M}(x,y)}-\sqrt{M^{f^{*}}(x,y)}\sqrt{\overline{M}(x,y)}\right)\] \[\quad+\sum_{x}\pi(x)2\sum_{y\neq x}\left(M^{f^{*}}(x,y)-\sqrt{L( x,y)}\sqrt{M^{f^{*}}(x,y)}\right).\] The proof is completed once we show that each term on the right hand side equals to zero. For the first term on the right hand side, using the expression of \(M^{f^{*}}\) we see that \[\sum_{x}\pi(x)2\sum_{y\neq x}\bigg{(} \sqrt{L(x,y)}\sqrt{\overline{M}(x,y)}-\sqrt{M^{f^{*}}(x,y)}\sqrt{ \overline{M}(x,y)}\bigg{)}\] \[=\sum_{x}\pi(x)\sum_{y\neq x}\left(\sqrt{L(x,y)}\sqrt{\overline{M }(x,y)}-\sqrt{L_{\pi}(x,y)}\sqrt{\overline{M}(x,y)}\right)\] \[=\left(\sum_{x}\pi(x)\sum_{y\neq x}\sqrt{L(x,y)}\sqrt{\overline{M }(x,y)}\right)-\sum_{x}\sum_{y\neq x}\pi(y)\sqrt{L(y,x)}\sqrt{\overline{M}(y,x )}=0,\] where the second equality follows from the reversibility of \(\overline{M}\), and we interchange the summation order of the second term in the third equality. 
Similarly, for the second term we write that
\[\sum_{x}\pi(x)2\sum_{y\neq x}\Big{(}M^{f^{*}}(x,y)-\sqrt{L(x,y)}\sqrt{M^{f^{*}}(x,y)}\Big{)}\] \[=\sum_{x}\pi(x)\sum_{y\neq x}\left(\sqrt{L_{\pi}(x,y)}\sqrt{M^{f^{*}}(x,y)}-\sqrt{L(x,y)}\sqrt{M^{f^{*}}(x,y)}\right)\] \[=\left(\sum_{x}\sum_{y\neq x}\pi(x)\sqrt{L_{\pi}(x,y)}\sqrt{M^{f^{*}}(x,y)}\right)-\sum_{x}\sum_{y\neq x}\pi(x)\sqrt{L(x,y)}\sqrt{M^{f^{*}}(x,y)}\] \[=\left(\sum_{x}\sum_{y\neq x}\pi(y)\sqrt{L(y,x)}\sqrt{M^{f^{*}}(y,x)}\right)-\sum_{x}\sum_{y\neq x}\pi(x)\sqrt{L(x,y)}\sqrt{M^{f^{*}}(x,y)}\] \[=\left(\sum_{y}\sum_{x\neq y}\pi(y)\sqrt{L(y,x)}\sqrt{M^{f^{*}}(y,x)}\right)-\sum_{x}\sum_{y\neq x}\pi(x)\sqrt{L(x,y)}\sqrt{M^{f^{*}}(x,y)}=0,\]
where in the third equality we use the reversibility of \(M^{f^{*}}\), while we again interchange the summation order of the first term in the fourth equality. As for item (4), it follows directly from the bisection property in Theorem 3.1, where we note that \(M^{f},M^{f^{*}}\in\mathcal{L}(\pi)\). Finally, for item (5), we utilize both the Pythagorean identity and the bisection property to arrive at the desired result.

### \(\chi^{2}\)-divergence and reverse \(\chi^{2}\)-divergence with \(f(t)=(t-1)^{2}\) and \(f^{*}(t)=t(1/t-1)^{2}\)

In this subsection, we look into the case of the \(\chi^{2}\)-divergence \(D_{\chi^{2}}:=D_{f}\) with \(f(t)=(t-1)^{2}\), and its conjugate \(D_{f^{*}}\) generated by \(f^{*}(t)=t(1/t-1)^{2}\), which is known as the reverse \(\chi^{2}\)-divergence in the literature.

**Theorem 3.4** (\(\chi^{2}\)-divergence, \(P_{2}\)-reversiblization and harmonic reversiblization).: _Let \(f(t)=(t-1)^{2}\). Suppose that \(L\in\mathcal{L}\)._

1. _(Harmonic or_ \(P_{-1}\)_-reversiblization as_ \(f\)_-projection of_ \(D_{f}\) _and_ \(f^{*}\)_-projection of_ \(D_{f^{*}}\)_) The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto D_{f}(M||L)\;(\text{resp. }D_{f^{*}}(L||M))\] _admits a unique minimizer, the_ \(f\)_-projection of_ \(D_{f}\) _(resp._ \(f^{*}\)_-projection of_ \(D_{f^{*}}\)_), given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M^{f}(x,y)=\left(\frac{L(x,y)^{-1}+L_{\pi}(x,y)^{-1}}{2}\right)^{-1}=P_{-1}(x,y),\] _the power mean_ \(P_{-1}\) _of_ \(L(x,y)\) _and_ \(L_{\pi}(x,y)\) _with_ \(p=-1\)_. In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution,_ \[M^{f}(x,y)=\left(\frac{L(x,y)^{-1}+L^{*}(x,y)^{-1}}{2}\right)^{-1}.\]
2. _(_\(P_{2}\)_-reversiblization as_ \(f^{*}\)_-projection of_ \(D_{f}\) _and_ \(f\)_-projection of_ \(D_{f^{*}}\)_) The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto D_{f}(L||M)\;(\text{resp. }D_{f^{*}}(M||L))\] _admits a unique minimizer, the_ \(f^{*}\)_-projection of_ \(D_{f}\) _(resp._ \(f\)_-projection of_ \(D_{f^{*}}\)_), given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M^{f^{*}}(x,y)=\left(\frac{L(x,y)^{2}+L_{\pi}(x,y)^{2}}{2}\right)^{1/2}=P_{2}(x,y),\] _the power mean_ \(P_{2}\) _of_ \(L(x,y)\) _and_ \(L_{\pi}(x,y)\) _with_ \(p=2\)_. In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution,_ \[M^{f^{*}}(x,y)=\left(\frac{L(x,y)^{2}+L^{*}(x,y)^{2}}{2}\right)^{1/2}.\]
3. _(Pythagorean identity) For any_ \(\overline{M}\in\mathcal{L}(\pi)\)_, we have_ (3.3) \[D_{f}(L||\overline{M})=D_{f}(L||M^{f^{*}})+D_{f}(M^{f^{*}}||\overline{M}),\] (3.4) \[D_{f}(\overline{M}||L)=D_{f}(\overline{M}||M^{f})+D_{f}(M^{f}||L).\]
4.
_(Bisection property) We have_ \[D_{f}(L||M^{f^{*}}) =D_{f}(L_{\pi}||M^{f^{*}}),\] \[D_{f}(M^{f}||L) =D_{f}(M^{f}||L_{\pi}).\] _In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution, then_ \[D_{f}(L||M^{f^{*}}) =D_{f}(L^{*}||M^{f^{*}}),\] \[D_{f}(M^{f}||L) =D_{f}(M^{f}||L^{*}).\] 5. _(Parallelogram law) For any_ \(\overline{M}\in\mathcal{L}(\pi)\)_, we have_ \[D_{f}(L||\overline{M})+D_{f}(L_{\pi}||\overline{M}) =2D_{f}(L||M^{f^{*}})+2D_{f}(M^{f^{*}}||\overline{M}),\] \[D_{f}(\overline{M}||L)+D_{f}(\overline{M}||L_{\pi}) =2D_{f}(\overline{M}||M^{f})+2D_{f}(M^{f}||L).\] _Remark 3.1_.: We remark that the harmonic or \(P_{-1}\)-reversiblization is in fact the Barker proposal in the Markov chain Monte Carlo literature Livingstone and Zanella (2022); Vogrinc et al. (2022); Zanella (2020). See also the discussion in Section 5 below. Proof.: We first prove item (1). Pick an arbitrary total ordering on \(\mathcal{X}\) with strict inequality being denoted by \(\prec\). We also write \[\alpha =\alpha(x,y)=\pi(x)M(x,y),\quad\alpha^{\prime}=\alpha^{\prime}(y, x)=\pi(y)M(y,x),\] \[\beta =\beta(x,y)=\pi(x)L(x,y),\quad\beta^{\prime}=\beta^{\prime}(y,x)= \pi(y)L(y,x).\] We then see that \[D_{f}(M||L) =\sum_{x\prec y}\pi(x)L(x,y)f\left(\frac{M(x,y)}{L(x,y)}\right)+ \pi(y)L(y,x)f\left(\frac{M(y,x)}{L(y,x)}\right)\] \[=\sum_{x\prec y}\frac{\alpha^{2}}{\beta}-2\alpha+\beta+\frac{ \alpha^{\prime 2}}{\beta^{\prime}}-2\alpha^{\prime}+\beta^{\prime}.\] For \(M\in\mathcal{L}(\pi)\), we have \(\alpha=\alpha^{\prime}\), and hence we proceed to minimize the summand of each term above, which leads to minimizing the following mapping as a function of \(\alpha\) \[\alpha\mapsto\frac{\alpha^{2}}{\beta}+\frac{\alpha^{2}}{\beta^{\prime}}-4 \alpha+\beta+\beta^{\prime}.\] By differentiation, this yields \[M^{f}(x,y)=\left(\frac{L(x,y)^{-1}+L_{\pi}(x,y)^{-1}}{2}\right)^{-1}.\] Next, we prove (2). To be consistent with the notations in the proof, we take \(L\in\mathcal{L}(\pi)\) and \(M\in\mathcal{L}\). Owing to the reversibility of \(L\), we thus have \(\beta=\beta^{\prime}\), which yields minimizing the following mapping as a function of \(\beta\) \[\beta\mapsto\frac{\alpha^{2}}{\beta}+\frac{\alpha^{\prime 2}}{\beta}+2\beta-2 \alpha-2\alpha^{\prime}.\] Again via differentiation we arrive at \[M^{f^{*}}(x,y)=\left(\frac{M(x,y)^{2}+M_{\pi}(x,y)^{2}}{2}\right)^{1/2}.\] Thirdly, we prove item (3). 
To prove (3.3), we first calculate that \[D_{f}(L||\overline{M}) =\sum_{x\neq y}\pi(x)\overline{M}(x,y)\left(\frac{L(x,y)}{\overline {M}(x,y)}-1\right)^{2}\] \[=\sum_{x\neq y}\pi(x)\overline{M}(x,y)\left(\left(\frac{M^{f^{*}} (x,y)}{\overline{M}(x,y)}-1\right)+\frac{L(x,y)-M^{f^{*}}(x,y)}{\overline{M}(x,y)}\right)^{2} \tag{3.5}\] \[=D_{f}(M^{f^{*}}||\overline{M})+\sum_{x\neq y}\pi(x)\frac{L(x,y)^ {2}-M^{f^{*}}(x,y)^{2}-2\overline{M}(x,y)L(x,y)+2\overline{M}(x,y)M^{f^{*}}(x, y)}{\overline{M}(x,y)}.\] Using the expression of \(M^{f^{*}}\) we note that \[\sum_{x\neq y}\pi(x)\frac{L(x,y)^{2}-M^{f^{*}}(x,y)^{2}}{\overline {M}(x,y)} =\sum_{x\neq y}\pi(x)\frac{L(x,y)^{2}-L_{\pi}(x,y)^{2}}{2\overline {M}(x,y)} \tag{3.6}\] \[=\sum_{x\neq y}\pi(x)\frac{L(x,y)^{2}}{2\overline{M}(x,y)}-\sum_{ x\neq y}\pi(y)\frac{L(y,x)^{2}}{2\overline{M}(y,x)}=0.\] We now substitute (3.6) into (3.5) to yield \[D_{f}(L||\overline{M})=D_{f}(M^{f^{*}}||\overline{M})+\sum_{x\neq y}\pi(x)(-2L (x,y)+2M^{f^{*}}(x,y)),\] and it suffices to prove the second term of the right hand side above equals to \(D_{f}(L||M^{f^{*}})\), which is indeed the case since \[D_{f}(L||M^{f^{*}}) =\sum_{x\neq y}\pi(x)M^{f^{*}}(x,y)\left(\frac{L(x,y)}{M^{f^{*}}( x,y)}-1\right)^{2}\] \[=\sum_{x\neq y}\pi(x)\left(\frac{L^{2}(x,y)}{M^{f^{*}}(x,y)}-2L(x,y)+M^{f^{*}}(x,y)\right)\] \[=\sum_{x\neq y}\pi(x)\left(\frac{L^{2}(x,y)+L_{\pi}^{2}(x,y)}{2M ^{f^{*}}(x,y)}-2L(x,y)+M^{f^{*}}(x,y)\right)\] \[=\sum_{x\neq y}\pi(x)(-2L(x,y)+2M^{f^{*}}(x,y)),\] which in the third equality we use the same argument as in (3.6), and in the fourth equality we use the definition of \(M^{f^{*}}\). We proceed to prove (3.4), and we compute that \[D_{f}(\overline{M}||L) =\sum_{x\neq y}\pi(x)L(x,y)\left(\frac{\overline{M}(x,y)}{L(x,y)} -1\right)^{2}\] \[=\sum_{x\neq y}\pi(x)L(x,y)\left(\left(\frac{M^{f}(x,y)}{L(x,y)} -1\right)+\frac{\overline{M}(x,y)-M^{f}(x,y)}{\overline{M}(x,y)}\right)^{2} \tag{3.7}\] \[=D_{f}(M^{f}||L)+\sum_{x\neq y}\pi(x)\frac{\overline{M}(x,y)^{2} -M^{f}(x,y)^{2}-2\overline{M}(x,y)L(x,y)+2L(x,y)M^{f}(x,y)}{L(x,y)}.\] Now, for any \(\overline{M}\in\mathcal{L}(\pi)\), we see that \[\sum_{x\neq y}\pi(x)\frac{\overline{M}(x,y)^{2}}{L(x,y)}=\sum_{x\neq y}\pi(x) \overline{M}(x,y)^{2}\left(\frac{1}{L(x,y)}+\frac{1}{L_{\pi}(x,y)}\right)/2= \sum_{x\neq y}\pi(x)\frac{\overline{M}(x,y)^{2}}{M^{f}(x,y)},\] and applying this to the first two term of the second expression in (3.7), we arrive at \[\sum_{x\neq y}\pi(x) \frac{\overline{M}(x,y)^{2}-M^{f}(x,y)^{2}-2\overline{M}(x,y)L(x, y)+2L(x,y)M^{f}(x,y)}{L(x,y)}\] \[=\sum_{x\neq y}\pi(x)\left(\frac{\overline{M}(x,y)^{2}-M^{f}(x,y) ^{2}}{M^{f}(x,y)}-2\overline{M}(x,y)+2M^{f}(x,y)\right)\] \[=\sum_{x\neq y}\pi(x)\left(\frac{\overline{M}(x,y)^{2}}{M^{f}(x,y )}-2\overline{M}(x,y)+M^{f}(x,y)\right)\] \[=D_{f}(\overline{M}||M^{f}),\] which finishes the proof. For item (4), it follows directly from the bisection property in Theorem 3.1 where we note that \(M^{f},M^{f^{*}}\in\mathcal{L}(\pi)\). Finally, for item (5), we utilize both the Pythagorean identity and bisection property to reach the desired result. ### \(\alpha\)-divergence with \(f(t)=\frac{t^{\alpha}-\alpha t-(1-\alpha)}{\alpha(\alpha-1)}\) In this subsection, we investigate the \(f\) and \(f^{*}\)-projections of non-reversible Markov chains under the \(\alpha\)-divergence with \(f(t)=\frac{t^{\alpha}-\alpha t-(1-\alpha)}{\alpha(\alpha-1)}\), and its conjugate \(D_{f^{*}}\) generated by \(f^{*}(t)=tf(1/t)\). 
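Before turning to the \(\alpha\)-divergence family, the projections of Theorem 3.4 can be sanity-checked numerically. The following minimal sketch reuses `L`, `pi`, `pi_dual` and `f_div` from the earlier snippet; `power_mean` and `random_reversible` are hypothetical helpers of ours, and the random search is only a heuristic check of minimality (it assumes strictly positive off-diagonal rates).

```python
import numpy as np

f_chi2 = lambda t: (t - 1.0) ** 2

def power_mean(L, pi, p):
    """Entrywise power mean reversiblization P_p of (2.4), for p != 0."""
    Ld = pi_dual(L, pi)
    P = ((np.abs(L) ** p + np.abs(Ld) ** p) / 2.0) ** (1.0 / p)
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, -P.sum(axis=1))
    return P

def random_reversible(pi, rng):
    """A random pi-reversible generator built from a symmetric matrix S."""
    S = rng.uniform(0.1, 1.0, size=(len(pi), len(pi)))
    S = (S + S.T) / 2.0
    M = S / pi[:, None]            # pi(x) M(x,y) = S(x,y) is symmetric
    np.fill_diagonal(M, 0.0)
    np.fill_diagonal(M, -M.sum(axis=1))
    return M

# Theorem 3.4(1): the harmonic mean P_{-1} should minimize M -> D_f(M||L)
# over pi-reversible M, for f(t) = (t-1)^2.
rng = np.random.default_rng(0)
best = f_div(power_mean(L, pi, -1.0), L, pi, f_chi2)
trials = [f_div(random_reversible(pi, rng), L, pi, f_chi2) for _ in range(1000)]
assert best <= min(trials) + 1e-12
```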
**Theorem 3.5** (\(\alpha\)-divergence, \(P_{\alpha}\)-reversiblization and \(P_{1-\alpha}\)-reversiblization).: _Let \(f(t)=\frac{t^{\alpha}-\alpha t-(1-\alpha)}{\alpha(\alpha-1)}\) for \(\alpha\in\mathbb{R}\) and \(\alpha\notin\{0,1\}\). Suppose that \(L\in\mathcal{L}\)._ 1. _(_\(P_{1-\alpha}\)-reversiblization as_ \(f\)_-projection of_ \(D_{f}\) _and_ \(f^{*}\)_-projection of_ \(D_{f^{*}}\)_) The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto D_{f}(M||L)\;(\text{resp. }D_{f^{*}}(L||M))\] _admits a unique minimizer the_ \(f\)_-projection of_ \(D_{f}\) _(resp._ \(f^{*}\)_-projection of_ \(D_{f^{*}}\)_) given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M^{f}(x,y)=\left(\frac{L(x,y)^{1-\alpha}+L_{\pi}(x,y)^{1-\alpha}}{2}\right)^{ 1/(1-\alpha)}=P_{1-\alpha}(x,y),\] _the power mean_ \(P_{1-\alpha}\) _of_ \(L(x,y)\) _and_ \(L_{\pi}(x,y)\) _with_ \(p=1-\alpha\)_. In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution,_ \[M^{f}(x,y)=\left(\frac{L(x,y)^{1-\alpha}+L^{*}(x,y)^{1-\alpha}}{2}\right)^{1/ (1-\alpha)}.\] 2. _(_\(P_{\alpha}\)-reversiblization as_ \(f^{*}\)_-projection of_ \(D_{f}\) _and_ \(f\)_-projection of_ \(D_{f^{*}}\)_) The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto D_{f}(L||M)\;(\text{resp. }D_{f^{*}}(M||L))\] _admits a unique minimizer the_ \(f^{*}\)_-projection of_ \(D_{f}\) _(resp._ \(f\)_-projection of_ \(D_{f^{*}}\)_) given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M^{f^{*}}(x,y)=\left(\frac{L(x,y)^{\alpha}+L_{\pi}(x,y)^{\alpha}}{2}\right)^{ 1/\alpha}=P_{\alpha}(x,y),\] the power mean \(P_{\alpha}\) of \(L(x,y)\) and \(L_{\pi}(x,y)\) with \(p=\alpha\). In particular, when \(L\) admits \(\pi\) as its stationary distribution,_ \[M^{f^{*}}(x,y)=\left(\frac{L(x,y)^{\alpha}+L^{*}(x,y)^{\alpha}}{2}\right)^{1/ \alpha}.\] 3. _(Pythagorean identity) For any_ \(\overline{M}\in\mathcal{L}(\pi)\)_, we have_ (3.8) \[D_{f}(L||\overline{M}) =D_{f}(L||M^{f^{*}})+D_{f}(M^{f^{*}}||\overline{M}),\] (3.9) \[D_{f}(\overline{M}||L) =D_{f}(\overline{M}||M^{f})+D_{f}(M^{f}||L).\] 4. _(Bisection property) We have_ \[D_{f}(L||M^{f^{*}}) =D_{f}(L_{\pi}||M^{f^{*}}),\] \[D_{f}(M^{f}||L) =D_{f}(M^{f}||L_{\pi}).\] _In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution, then_ \[D_{f}(L||M^{f^{*}}) =D_{f}(L^{*}||M^{f^{*}}),\] \[D_{f}(M^{f}||L) =D_{f}(M^{f}||L^{*}).\] 5. _(Parallelogram law) For any_ \(\overline{M}\in\mathcal{L}(\pi)\)_, we have_ \[D_{f}(L||\overline{M})+D_{f}(L_{\pi}||\overline{M}) =2D_{f}(L||M^{f^{*}})+2D_{f}(M^{f^{*}}||\overline{M}),\] \[D_{f}(\overline{M}||L)+D_{f}(\overline{M}||L_{\pi}) =2D_{f}(\overline{M}||M^{f})+2D_{f}(M^{f}||L).\] _Proof._ We first prove item (1). Pick an arbitrary total ordering on \(\mathcal{X}\) with strict inequality being denoted by \(\prec\). 
We also write
\[a=a(x,y)=\pi(x)M(x,y),\quad a^{\prime}=a^{\prime}(y,x)=\pi(y)M(y,x),\] \[\beta=\beta(x,y)=\pi(x)L(x,y),\quad\beta^{\prime}=\beta^{\prime}(y,x)=\pi(y)L(y,x).\]
We then see that
\[D_{f}(M||L) =\sum_{x\prec y}\pi(x)L(x,y)f\left(\frac{M(x,y)}{L(x,y)}\right)+\pi(y)L(y,x)f\left(\frac{M(y,x)}{L(y,x)}\right)\] \[=\sum_{x\prec y}\frac{a^{\alpha}}{\beta^{\alpha-1}\alpha(\alpha-1)}-\frac{a}{\alpha-1}+\frac{\beta}{\alpha}+\frac{a^{\prime\alpha}}{\beta^{\prime(\alpha-1)}\alpha(\alpha-1)}-\frac{a^{\prime}}{\alpha-1}+\frac{\beta^{\prime}}{\alpha}.\]
For \(M\in\mathcal{L}(\pi)\), we have \(a=a^{\prime}\), and hence we proceed to minimize the summand of each term above, which leads to minimizing the following mapping as a function of \(a\)
\[a\mapsto\frac{a^{\alpha}}{\beta^{\alpha-1}\alpha(\alpha-1)}-\frac{a}{\alpha-1}+\frac{a^{\alpha}}{\beta^{\prime(\alpha-1)}\alpha(\alpha-1)}-\frac{a}{\alpha-1}.\]
Differentiating with respect to \(a\) gives
\[M^{f}(x,y)=\left(\frac{L(x,y)^{1-\alpha}+L_{\pi}(x,y)^{1-\alpha}}{2}\right)^{1/(1-\alpha)}.\]
Next, we prove (2). To be consistent with the notations in the proof, we take \(L\in\mathcal{L}(\pi)\) and \(M\in\mathcal{L}\). Owing to the reversibility of \(L\), we thus have \(\beta=\beta^{\prime}\), which yields minimizing the following mapping as a function of \(\beta\)
\[\beta\mapsto\frac{a^{\alpha}}{\beta^{\alpha-1}\alpha(\alpha-1)}+\frac{\beta}{\alpha}+\frac{a^{\prime\alpha}}{\beta^{\alpha-1}\alpha(\alpha-1)}+\frac{\beta}{\alpha}.\]
Differentiating with respect to \(\beta\) yields
\[M^{f^{*}}(x,y)=\left(\frac{L(x,y)^{\alpha}+L_{\pi}(x,y)^{\alpha}}{2}\right)^{1/\alpha}.\]
Thirdly, we prove item (3). To prove (3.8), we first calculate that
\[D_{f}(L||\overline{M}) =\sum_{x\neq y}\pi(x)\overline{M}(x,y)\left(\frac{\left(\frac{L(x,y)}{\overline{M}(x,y)}\right)^{\alpha}-\alpha\frac{L(x,y)}{\overline{M}(x,y)}-(1-\alpha)}{\alpha(\alpha-1)}\right)\] \[=\sum_{x\neq y}\pi(x)\overline{M}(x,y)\left(\frac{\left(\frac{M^{f^{*}}(x,y)}{\overline{M}(x,y)}\right)^{\alpha}-\alpha\frac{M^{f^{*}}(x,y)}{\overline{M}(x,y)}-(1-\alpha)}{\alpha(\alpha-1)}\right)\] \[\quad+\sum_{x\neq y}\pi(x)\overline{M}(x,y)\left(\frac{\frac{L(x,y)^{\alpha}-(M^{f^{*}}(x,y))^{\alpha}}{\overline{M}(x,y)^{\alpha}}-\alpha\frac{L(x,y)-M^{f^{*}}(x,y)}{\overline{M}(x,y)}}{\alpha(\alpha-1)}\right) \tag{3.10}\] \[=D_{f}(M^{f^{*}}||\overline{M})+\sum_{x\neq y}\pi(x)\overline{M}(x,y)\left(\frac{\frac{L(x,y)^{\alpha}-(M^{f^{*}}(x,y))^{\alpha}}{\overline{M}(x,y)^{\alpha}}-\alpha\frac{L(x,y)-M^{f^{*}}(x,y)}{\overline{M}(x,y)}}{\alpha(\alpha-1)}\right).\]
Using the expression of \(M^{f^{*}}\) we note that
\[\sum_{x\neq y}\pi(x)\frac{L(x,y)^{\alpha}-M^{f^{*}}(x,y)^{\alpha}}{\overline{M}(x,y)^{\alpha-1}} =\sum_{x\neq y}\pi(x)\frac{L(x,y)^{\alpha}-L_{\pi}(x,y)^{\alpha}}{2\overline{M}(x,y)^{\alpha-1}} \tag{3.11}\] \[=\sum_{x\neq y}\pi(x)\frac{L(x,y)^{\alpha}}{2\overline{M}(x,y)^{\alpha-1}}-\sum_{x\neq y}\pi(y)\frac{L(y,x)^{\alpha}}{2\overline{M}(y,x)^{\alpha-1}}=0.\]
Substituting (3.11) into (3.10) gives rise to
\[D_{f}(L||\overline{M})=D_{f}(M^{f^{*}}||\overline{M})+\sum_{x\neq y}\pi(x)\frac{(-L(x,y)+M^{f^{*}}(x,y))}{\alpha-1},\]
and it suffices to prove that the second term of the right hand side above equals \(D_{f}(L||M^{f^{*}})\), which is true since
\[D_{f}(L||M^{f^{*}}) =\sum_{x\neq y}\pi(x)M^{f^{*}}(x,y)\left(\frac{\left(\frac{L(x,y)}{M^{f^{*}}(x,y)}\right)^{\alpha}-\alpha\frac{L(x,y)}{M^{f^{*}}(x,y)}-(1-\alpha)}{\alpha(\alpha-1)}\right)\]
\[=\sum_{x\neq y}\pi(x)M^{f^{*}}(x,y)\left(\frac{\left(\frac{L(x,y)} {M^{f^{*}}(x,y)}\right)^{\alpha}-1}{\alpha(\alpha-1)}\right)+\sum_{x\neq y} \pi(x)\frac{(-L(x,y)+M^{f^{*}}(x,y))}{\alpha-1}\] \[=\sum_{x\neq y}\pi(x)M^{f}(x,y)\left(\frac{\left(\frac{\overline{M}(x,y )}{M^{f}(x,y)}\right)^{\alpha}-\alpha\frac{\overline{M}(x,y)}{M^{f}(x,y)}-(1- \alpha)}{\alpha(\alpha-1)}\right)\] \[=D_{f}(\overline{M}||M^{f}),\] which finishes the proof. For item (4), it follows directly from the bisection property in Theorem 3.1 where we note that \(M^{f},M^{f^{*}}\in\mathcal{L}(\pi)\). Finally, for item (5), we utilize both the Pythagorean identity and bisection property to reach the desired result. ### Jensen-Shannon divergence and Vincze-Le Cam divergence In this subsection and the next, the goal is to unravel relationships or inequalities between various \(f\)-divergences or statistical divergences. In particular, we shall illustrate this approach by looking into the Jensen-Shannon divergence and Vincze-Le Cam divergence. To this end, by recalling that \(\overline{D}_{f}\) as first introduced in (2.2), let us define **Definition 3.1** (Jensen-Shannon divergence Lin (1991); Sason and Verdu (2016)).: Given \(L,M\in\mathcal{L}\) and taking \(f(t)=t\ln t-t+1\) and \(h(t)=t\ln t-(1+t)\ln((1+t)/2)\), the Jensen-Shannon divergence is defined to be \[JS(L||M):=\overline{D}_{f}(L||M)+\overline{D}_{f}(M||L)=D_{h}(L||M),\] where \(D_{KL}:=D_{f}\) is the classical Kullback-Leibler divergence between \(M\) and \(L\). Note that \(JS(L||M)=JS(M||L)\). **Definition 3.2** (Vincze-Le Cam divergence Le Cam (1986); Sason and Verdu (2016); Vincze (1981)).: Given \(L,M\in\mathcal{L}\) and taking \(f(t)=(t-1)^{2}\) and \(h(t)=\frac{(t-1)^{2}}{1+t}\), the Vincze-Le Cam divergence is defined to be \[\Delta(L||M):=2\overline{D}_{f}(L||M)=2\overline{D}_{f}(M||L)=D_{h}(L||M),\] where \(D_{f}=D_{\chi^{2}}\) is the \(\chi^{2}\)-divergence between \(M\) and \(L\). Note that \(\Delta(L||M)=\Delta(M||L)\). In both \(JS\) and \(\Delta\), while they can both be considered as a \(h\)-divergence for an appropriate strictly convex \(h\), unfortunately their \(M^{h}=M^{h^{*}}\) projections cannot be expressed in closed form using approaches similar as in the previous sections or as in Diaconis and Miclo (2009). Using the convexity of \(D_{f}(L||\cdot)\), we can obtain inequalities between these divergences: **Theorem 3.6** (Bounding Jensen-Shannon by Kullback-Leibler).: _Given \(L,M\in\mathcal{L}\), \(\overline{M}\in\mathcal{L}(\pi)\), and taking \(f(t)=t\ln t-t+1\) and \(h(t)=t\ln t-(1+t)\ln((1+t)/2)\), we have_ \[JS(L||M)\leqslant\frac{1}{2}(D_{f}(L||M)+D_{f}(M||L)). \tag{3.13}\] _In particular, denote by \(M^{h^{*}}:=\arg\min_{M\in\mathcal{L}(\pi)}JS(L||M)=\arg\min_{M\in\mathcal{L}( \pi)}D_{h}(L||M)=M^{h}\) the unique \(h^{*}\)-projection or \(h\)-projection of \(JS=D_{h}\), then_ \[JS(L||M^{h^{*}})\leqslant\frac{1}{2}(D_{f}(L||\overline{M})+D_{f}(\overline{ M}||L)). 
\tag{3.14}\] _We also have the following bisection property for \(JS\):_ \[JS(L||\overline{M})=JS(L_{\pi}||\overline{M}).\] Proof.: To prove (3.13), we note that by the convexity of \(D_{f}\) and the property that \(D_{f}(L||L)=D_{f}(M||M)=0\), \[JS(L||M)\leqslant\frac{1}{2}(D_{f}(L||M)+D_{f}(M||L)).\] As for (3.14), it follows from definition that \[JS(L||M_{m}^{h})\leqslant JS(L||\overline{M})\leqslant\frac{1}{2}(D_{f}(L|| \overline{M})+D_{f}(\overline{M}||L)).\] Finally, for the bisection property, we either apply the bisection property twice for \(\overline{D}_{f}\) (Theorem 3.2) or by the bisection property once for \(D_{h}\). The analogous theorem of \(\Delta\) is now stated, and its proof is omitted since it is very similar as that of Theorem 3.6: **Theorem 3.7** (Bounding Vincze-Le Cam by \(\chi^{2}\)).: _Given \(L,M\in\mathcal{L}\), \(\overline{M}\in\mathcal{L}(\pi)\), and taking \(f(t)=(t-1)^{2}\) and \(h(t)=\frac{(t-1)^{2}}{1+t}\), we have_ \[\Delta(L||M)\leqslant D_{\chi^{2}}(L||M). \tag{3.15}\] _In particular, denote by \(M^{h^{*}}:=\operatorname*{arg\,min}_{M\in\mathcal{L}(\pi)}\Delta(L||M)= \operatorname*{arg\,min}_{M\in\mathcal{L}(\pi)}D_{h}(L||M)=M^{h}\) the unique \(h^{*}\)-projection or \(h\)-projection of \(\Delta=D_{h}\), then_ \[\Delta(L||M^{h^{*}})\leqslant D_{\chi^{2}}(L||\overline{M}). \tag{3.16}\] _We also have the following bisection property for \(\Delta\):_ \[\Delta(L||\overline{M})=\Delta(L_{\pi}||\overline{M}).\] ### Renyi-divergence The objective of this subsection is to investigate the projections of non-reversible Markov chains in other notions of statistical divergence apart from \(f\)-divergence. Building upon relationships between various \(f\)-divergences or other statistical divergences, one can possibly construct and develop new inequalities governing the information divergences between these objects. In this subsection, we shall in particular study the Renyi-divergence which can be defined as a log-transformed version of the \(\alpha\)-divergence as introduced in Section 3.4. Precisely, for \(\alpha>0\) and \(\alpha\neq 1\), we define the Renyi-divergence between \(M,L\in\mathcal{L}\) to be \[R_{\alpha}(M||L):=\frac{1}{\alpha-1}\ln\left(1+\alpha(\alpha-1)D_{f}(M||L) \right). \tag{3.17}\] where we take \(f(t)=\frac{t^{\alpha}-\alpha t-(1-\alpha)}{\alpha(\alpha-1)}\). Interestingly, we shall see that \(R_{\alpha}\) inherits both the minimization property and bisection property from that of \(D_{f}\) due to the one-to-one transformation between \(R_{\alpha}\) and \(D_{f}\), while owing to the concavity (\(\alpha>1\)) or convexity (\(\alpha\in(0,1)\)) of the transformation, the Pythagorean identity or parallelogram law in \(D_{f}\) becomes the Pythagorean inequality or parallelogram inequality respectively. **Theorem 3.8** (Renyi-divergence, \(P_{\alpha}\)-reversiblization and \(P_{1-\alpha}\)-reversiblization).: _Let \(f(t)=\frac{t^{\alpha}-\alpha t-(1-\alpha)}{\alpha(\alpha-1)}\) for \(\alpha>0\) and \(\alpha\neq 1\). Suppose that \(L\in\mathcal{L}\)._ 1. _(_\(P_{1-\alpha}\)-reversiblization_) _The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto R_{\alpha}(M||L)\] _admits a unique minimizer the power mean_ \(P_{1-\alpha}\) _of_ \(L(x,y)\) _and_ \(L_{\pi}(x,y)\) _with_ \(p=1-\alpha\)_. 
given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[P_{1-\alpha}(x,y)=\left(\frac{L(x,y)^{1-\alpha}+L_{\pi}(x,y)^{1-\alpha}}{2} \right)^{1/(1-\alpha)},\] _In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution,_ \[P_{1-\alpha}(x,y)=\left(\frac{L(x,y)^{1-\alpha}+L^{*}(x,y)^{1-\alpha}}{2} \right)^{1/(1-\alpha)}.\] 2. _(_\(P_{\alpha}\)-reversiblization_) _The mapping_ \[\mathcal{L}(\pi)\ni M\mapsto R_{\alpha}(L||M)\] admits a unique minimizer the power mean \(P_{\alpha}\) of \(L(x,y)\) and \(L_{\pi}(x,y)\) with \(p=\alpha\). given by, for \(x\neq y\in\mathcal{X}\),_ \[P_{\alpha}(x,y)=\left(\frac{L(x,y)^{\alpha}+L_{\pi}(x,y)^{\alpha}}{2}\right)^{ 1/\alpha},\] _In particular, when \(L\) admits \(\pi\) as its stationary distribution,_ \[P_{\alpha}(x,y)=\left(\frac{L(x,y)^{\alpha}+L^{*}(x,y)^{\alpha}}{2}\right)^{ 1/\alpha}.\] 3. _(Pythagorean inequality) For any \(\overline{M}\in\mathcal{L}(\pi)\), for \(\alpha>1\) we have_ (3.18) \[R_{\alpha}(L||\overline{M}) \leqslant R_{\alpha}(L||M^{f^{*}})+R_{\alpha}(M^{f^{*}}|| \overline{M}),\] (3.19) \[R_{\alpha}(\overline{M}||L) \leqslant R_{\alpha}(\overline{M}||M^{f})+R_{\alpha}(M^{f}||L),\] _while for \(\alpha\in(0,1)\),_ \[R_{\alpha}(L||\overline{M}) \geqslant R_{\alpha}(L||M^{f^{*}})+R_{\alpha}(M^{f^{*}}|| \overline{M}),\] \[R_{\alpha}(\overline{M}||L) \geqslant R_{\alpha}(\overline{M}||M^{f})+R_{\alpha}(M^{f}||L).\] 4. _(Bisection property) We have_ \[R_{\alpha}(L||M^{f^{*}}) =R_{\alpha}(L_{\pi}||M^{f^{*}}),\] \[R_{\alpha}(M^{f}||L) =R_{\alpha}(M^{f}||L_{\pi}).\] _In particular, when_ \(L\) _admits_ \(\pi\) _as its stationary distribution, then_ \[R_{\alpha}(L||M^{f^{*}}) =R_{\alpha}(L^{*}||M^{f^{*}}),\] \[R_{\alpha}(M^{f}||L) =R_{\alpha}(M^{f}||L^{*}).\] 5. _(Parallelogram inequality) For any_ \(\overline{M}\in\mathcal{L}(\pi)\)_, for_ \(\alpha>1\) _we have_ \[R_{\alpha}(L||\overline{M})+R_{\alpha}(L_{\pi}||\overline{M}) \leqslant 2R_{\alpha}(L||M^{f^{*}})+2R_{\alpha}(M^{f^{*}}|| \overline{M}),\] \[R_{\alpha}(\overline{M}||L)+R_{\alpha}(\overline{M}||L_{\pi}) \leqslant 2R_{\alpha}(\overline{M}||M^{f})+2R_{\alpha}(M^{f}||L),\] _while for_ \(\alpha\in(0,1)\)_,_ \[R_{\alpha}(L||\overline{M})+R_{\alpha}(L_{\pi}||\overline{M}) \geqslant 2R_{\alpha}(L||M^{f^{*}})+2R_{\alpha}(M^{f^{*}}|| \overline{M}),\] \[R_{\alpha}(\overline{M}||L)+R_{\alpha}(\overline{M}||L_{\pi}) \geqslant 2R_{\alpha}(\overline{M}||M^{f})+2R_{\alpha}(M^{f}||L).\] _Proof._ First, we consider the mapping, for \(x\geqslant 0\), \[g(x):=\frac{1}{\alpha-1}\ln(1+\alpha(\alpha-1)x),\] \[\frac{d}{dx}g(x)=\frac{\alpha}{1+\alpha(\alpha-1)x},\] \[\frac{d^{2}}{dx^{2}}g(x)=-\frac{\alpha^{2}(\alpha-1)}{(1+\alpha(\alpha-1)x)^{2 }}.\] Thus, we see that \(g\) is a strictly increasing concave (resp. convex) function when \(\alpha>1\) (resp. \(\alpha\in(0,1)\)). Making use of Theorem 3.5, we calculate that \[R_{\alpha}(P_{1-\alpha}||L)=g(D_{f}(P_{1-\alpha}||L))\leqslant g(D_{f}(M||L)) =R_{\alpha}(M||L),\] \[R_{\alpha}(L||P_{\alpha})=g(D_{f}(L||P_{\alpha}))\leqslant g(D_{f}(L||M))=R_{\alpha} (L||M),\] which establish the first two items. We proceed to prove item (3). For \(\alpha>1\) (resp. \(\alpha\in(0,1)\)), as \(g\) is strictly concave (resp. convex) with \(g(0)=0\), \(g\) is thus subadditive (resp. 
superadditive), which together with the Pythagorean identity for \(\alpha\)-divergence in Theorem 3.5 yields \[R_{\alpha}(L||\overline{M}) =g(D_{f}(L||\overline{M}))=g(D_{f}(L||M^{f^{*}})+D_{f}(M^{f^{*}}|| \overline{M}))\] \[\leqslant g(D_{f}(L||M^{f^{*}}))+g(D_{f}(M^{f^{*}}||\overline{M}) )=R_{\alpha}(L||M^{f^{*}})+R_{\alpha}(M^{f^{*}}||\overline{M}).\] \[R_{\alpha}(\overline{M}||L) =g(D_{f}(\overline{M}||L))=g(D_{f}(M^{f}||L)+D_{f}(\overline{M}|| M^{f}))\] \[\leqslant g(D_{f}(M^{f}||L))+g(D_{f}(\overline{M}||M^{f}))=R_{ \alpha}(M^{f}||L)+R_{\alpha}(\overline{M}||M^{f}).\] The case of \(\alpha\in(0,1)\) can be computed similarly but with the direction of inequality flipped owing to the superadditivity of \(g\) in this case. For the bisection property, it can easily be seen as \(R_{\alpha}\) is a transformation by \(g\) of \(D_{f}\) and the \(\alpha\)-divergence enjoys the bisection property as stated in Theorem 3.5. Finally, for item (5), we apply the previous two items, that is, both the Pythagorean inequality and the bisection property to arrive at the stated conclusion. A Markov chain version of arithmetic-geometric-harmonic mean inequality for hitting time and mixing time parameters In previous subsections, we have seen that various power means \(P_{p}\) (recall that \(P_{p}\) is introduced in (2.4)) appear naturally as \(f\) and \(f^{*}\)-projections of appropriate \(f\)-divergences. For example, \(P_{1/2}\) appears as both the \(f^{*}\)-projection and \(f\)-projection under the squared Hellinger distance, while in the literature Billera and Diaconis (2001); Choi (2020); Choi and Huang (2020); Diaconis and Miclo (2009) the additive reversiblization \(P_{1}\) and the two Metropolis-Hastings reversiblizations \(P_{-\infty}\) and \(P_{\infty}\) appear as projections under the total variation distance, which is a special case of the \(f\)-divergence by taking \(f\) to be the mapping \(x\mapsto|x-1|\). The aim of this subsection is to offer comparison theorems between these reversiblizations for their hitting and mixing time parameters. To allow for effective comparison between these reversiblizations, we recall the notion of Peskun ordering of continuous-time Markov chains. This partial ordering was first introduced by Peskun (1973) in the context of discrete-time Markov chains on finite state space. Various generalizations have then been obtained, for example to general state space in Tierney (1998), by Leisen and Mira (2008) to continuous-time Markov chains and recently by Andrieu and Livingstone (2021) to the non-reversible setting. **Definition 3.3** (Peskun ordering).: Suppose that we have two continuous-time Markov chains with generators \(L_{1},L_{2}\in\mathcal{L}(\pi)\) respectively. \(L_{1}\) is said to dominate \(L_{2}\) off-diagonally, written as \(L_{1}\succeq L_{2}\), if for all \(x\neq y\in\mathcal{X}\), we have \[L_{1}(x,y)\geqslant L_{2}(x,y).\] For any functions \(f,g:\mathcal{X}\to\mathbb{R}\), we write the weighted inner product with respect to \(\pi\) by \(\langle\cdot,\cdot\rangle_{\pi}\), that is, \[\langle f,g\rangle_{\pi}=\sum_{x\in\mathcal{X}}f(x)g(x)\pi(x).\] The quadratic form of \(L\in\mathcal{L}(\pi)\) can then be expressed as \[\langle-Lf,f\rangle_{\pi}=\frac{1}{2}\sum_{x,y\in\mathcal{X}}\pi(x)L(x,y)(f(x )-f(y))^{2}. 
\tag{3.20}\] For \(L\in\mathcal{L}(\pi)\), we are particularly interested in the following list of parameters that assess or quantify the speed of convergence in terms of hitting and mixing time: * (Hitting times) We write \[\tau_{A}=\tau_{A}(L):=\inf\{t\geqslant 0;X_{t}\in A\}.\] to be the first hitting time to the set \(A\subseteq\mathcal{X}\) of the chain \(X=(X_{t})_{t\geqslant 0}\) with generator \(L\), and the usual convention of \(\inf\emptyset=\infty\) applies. We also adapt the notation that \(\tau_{y}:=\tau_{\{y\}}\) for \(y\in\mathcal{X}\). One hitting time parameter of interest is the average hitting time \(t_{av}\), defined to be \[t_{av}=t_{av}(L,\pi):=\sum_{x,y}\mathbb{E}_{x}(\tau_{y})\pi(x)\pi(y).\] The eigentime identity gives that \(t_{av}\) equals to the sum of the reciprocals of the non-zero eigenvalues of \(-L\), see for instance Cui and Mao (2010); Mao (2004). This is also known as the random target lemma in Levin and Peres (2017). * (Spectral gap) We write the spectral gap of \(L\) to be \[\lambda_{2}=\lambda_{2}(L,\pi):=\inf\big{\{}\langle-Lf,f\rangle_{\pi}:\ f\in \mathbb{R}^{\mathcal{X}},\pi(f)=0,\pi(f^{2})=1\big{\}}.\] The relaxation time \(t_{rel}\) is the reciprocal of \(\lambda_{2}\), that is, \[t_{rel}=t_{rel}(L,\pi):=\frac{1}{\lambda_{2}}.\] We see that in the finite state space setting, \(\lambda_{2}\) is the second smallest eigenvalue of \(-L\). * (Asymptotic variance) For a mean zero function \(h\), i.e., \(\pi(h)=0\), the central limit theorem for Markov processes (Komorowski et al., 2012, Theorem \(2.7\)) gives \(t^{-1/2}\int_{0}^{t}h(X_{s})ds\) converges in probability to a Gaussian distribution with mean zero and variance \[\sigma^{2}(h,L,\pi):=-2\langle h,g\rangle_{\pi},\] where \(g\) solves the Poisson equation \(Lg=h\). With the above notions in mind, we are now ready to state the main result of this subsection: **Theorem 3.9** (Peskun ordering of power mean reversiblizations and its consequences).: _For \(p,q\in\mathbb{R}\cup\{\pm\infty\}\) with \(p<q\), for any \(f\in\mathbb{R}^{\mathcal{X}}\) we have_ \[P_{q} \succeq P_{p},\] \[\langle-P_{q}f,f\rangle_{\pi} \geqslant\langle-P_{p}f,f\rangle_{\pi}.\] _Consequently, this leads to_ 1. _(Hitting times) For_ \(\lambda>0\) _and_ \(A\subseteq\mathcal{X}\)_, we have_ \[\mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P_{p})})\leqslant\mathbb{E}_{\pi}(e^{- \lambda\tau_{A}(P_{q})}).\] _In particular, for any_ \(A\subseteq\mathcal{X}\)_,_ \[\mathbb{E}_{\pi}(\tau_{A}(P_{p}))\geqslant\mathbb{E}_{\pi}(\tau_{A}(P_{q})).\] _Furthermore,_ \[t_{av}(P_{p},\pi)\geqslant t_{av}(P_{q},\pi).\] 2. _(Spectral gap) We have_ \[\lambda_{2}(P_{p},\pi)\leqslant\lambda_{2}(P_{q},\pi).\] _That is,_ \[t_{rel}(P_{p},\pi)\geqslant t_{rel}(P_{q},\pi).\] _ 3. _(Asymptotic variance) For_ \(h\in\ell_{0}^{2}(\pi)=\{h;\ \pi(h)=0\}\)_,_ \[\sigma^{2}(h,P_{p},\pi)\geqslant\sigma^{2}(h,P_{q},\pi).\] _Proof._ For \(q>p\), by the classical power mean inequality Lin (1974), we thus have for \(x\neq y\in\mathcal{X}\), \[P_{q}(x,y)\geqslant P_{p}(x,y),\] which consequently yields, according to (3.20), \[P_{q} \succeq P_{p},\] \[\langle-P_{q}f,f\rangle_{\pi} \geqslant\langle-P_{p}f,f\rangle_{\pi}.\] The remaining are consequences of the Peskun ordering between \(P_{q}\) and \(P_{p}\). 
Precisely, using the variational principle for the Laplace transform of hitting time as presented in (Huang and Mao, 2018, Theorem \(3.1\)), we arrive at \[\mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P_{p})})\leqslant\mathbb{E}_{\pi}(e^{- \lambda\tau_{A}(P_{q})}).\] Subtracting by 1 on both sides and dividing by \(\lambda\) followed by taking \(\lambda\to 0\) gives \[\mathbb{E}_{\pi}(\tau_{A}(P_{p}))\geqslant\mathbb{E}_{\pi}(\tau_{A}(P_{q})).\] Using the variational principle for eigenvalues of \(\pi\)-reversible generators, each eigenvalue of \(-P_{q}\) is greater than or equal to that of \(-P_{p}\). By means of the eigentime identity, we see that \[t_{av}(P_{p},\pi)\geqslant t_{av}(P_{q},\pi).\] In particular, for the second smallest eigenvalue, we have \[\lambda_{2}(P_{p},\pi)\leqslant\lambda_{2}(P_{q},\pi).\] Finally, for the asymptotic variances, the ordering readily follows from (Leisen and Mira, 2008, Theorem \(6\)). \(\square\) By comparing the power mean reversiblizations \(P_{p}\) with \(p\in\{-\infty,-1,0,1,2,\infty\}\) in the above theorem, we obtain the following Markov chain version of the classical quadratic-arithmetic-geometric-harmonic inequality: **Corollary 3.1** (Markov chain version of the classical quadratic-arithmetic-geometric-harmonic inequality).: _For \(p\in\mathbb{R}\cup\{\pm\infty\}\) and \(L\in\mathcal{L}\), we consider the power mean reversiblizations \(P_{p}\) with \(p\in\{-\infty,-1,0,1,2,\infty\}\) to arrive at_ 1. _(Hitting times) For_ \(\lambda>0\) _and_ \(A\subseteq\mathcal{X}\)_, we have_ \[\mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P_{-\infty})})\leqslant\mathbb{E}_{\pi} (e^{-\lambda\tau_{A}(P_{-1})})\leqslant\mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P _{0})})\leqslant\mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P_{2})})\leqslant \mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P_{\infty})}).\] _In particular, for any_ \(A\subseteq\mathcal{X}\)_,_ \[\mathbb{E}_{\pi}(\tau_{A}(P_{-\infty}))\geqslant\mathbb{E}_{\pi}(\tau_{A}(P_{ -1}))\geqslant\mathbb{E}_{\pi}(\tau_{A}(P_{0}))\geqslant\mathbb{E}_{\pi}( \tau_{A}(P_{1}))\geqslant\mathbb{E}_{\pi}(\tau_{A}(P_{2}))\geqslant\mathbb{E }_{\pi}(\tau_{A}(P_{\infty})).\] _Furthermore,_ \[t_{av}(P_{-\infty},\pi)\geqslant t_{av}(P_{-1},\pi)\geqslant t_{av}(P_{0},\pi) \geqslant t_{av}(P_{1},\pi)\geqslant t_{av}(P_{2},\pi)\geqslant t_{av}(P_{ \infty},\pi).\] 2. _(Spectral gap) We have_ \[\lambda_{2}(P_{-\infty},\pi)\leqslant\lambda_{2}(P_{-1},\pi)\leqslant\lambda_ {2}(P_{0},\pi)\leqslant\lambda_{2}(P_{1},\pi)\leqslant\lambda_{2}(P_{2},\pi) \leqslant\lambda_{2}(P_{\infty},\pi).\] _That is,_ \[t_{rel}(P_{-\infty},\pi)\geqslant t_{rel}(P_{-1},\pi)\geqslant t_{rel}(P_{0}, \pi)\geqslant t_{rel}(P_{1},\pi)\geqslant t_{rel}(P_{2},\pi)\geqslant t_{rel}(P _{\infty},\pi).\] _._ 3. _(Asymptotic variance) For_ \(h\in\ell_{0}^{2}(\pi)=\{h;\;\pi(h)=0\}\)_,_ \[\sigma^{2}(h,P_{-\infty},\pi)\geqslant\sigma^{2}(h,P_{-1},\pi) \geqslant\sigma^{2}(h,P_{0},\pi)\geqslant\sigma^{2}(h,P_{1},\pi)\geqslant \sigma^{2}(h,P_{2},\pi)\geqslant\sigma^{2}(h,P_{\infty},\pi).\] _All the above equalities hold if and only if \(L\) is \(\pi\)-reversible with \(L=L^{*}\) so that all the power mean reversiblizations \(P_{p}\) collapse to \(L\)._ In view of the above Corollary, we thus see that the power mean reversiblizations \(P_{p}\) with \(p\in\mathbb{R}\) interpolates between the two Metropolis-Hastings reversiblizations \(P_{-\infty}\) and \(P_{\infty}\). 
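As a numerical illustration of Corollary 3.1(2), one can compute the spectral gaps of a few power mean reversiblizations of the same chain and observe the claimed monotonicity in \(p\). A sketch reusing `L`, `pi` and `power_mean` from the snippets above; for a \(\pi\)-reversible \(M\), the matrix \(D(-M)D^{-1}\) with \(D=\operatorname{diag}(\sqrt{\pi})\) is symmetric, so its eigenvalues are real and the gap is its second smallest eigenvalue.

```python
import numpy as np

def spectral_gap(M, pi):
    """lambda_2 of a pi-reversible generator M via symmetrization."""
    d = np.sqrt(pi)
    S = (d[:, None] * (-M)) / d[None, :]   # similarity transform, symmetric
    eig = np.sort(np.linalg.eigvalsh(S))
    return eig[1]                          # eig[0] is (numerically) zero

ps = (-4.0, -1.0, 0.5, 1.0, 2.0, 4.0)      # nonzero p only, see (2.4)
gaps = [spectral_gap(power_mean(L, pi, p), pi) for p in ps]
assert all(a <= b + 1e-10 for a, b in zip(gaps, gaps[1:]))  # non-decreasing
```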
We also remark that in addition to the above hitting time and mixing time parameters, we should also take into account of the transition rates for comparison between different reversiblizations, since the transition rates of the same row (i.e. the sum of off-diagonal entries of the row) are in general different between \(P_{p}\) and \(P_{q}\) for \(p\neq q\) unless \(L\in\mathcal{L}(\pi)\) is \(\pi\)-reversible. Interested readers should also consult the discussion in (Diaconis and Miclo, 2009, discussion above Remark \(2.2\)). ### Approximating \(f\)-divergence by \(\chi^{2}\)-divergence and an approximate triangle inequality In this subsection, inspired by the technique of approximating \(f\)-divergence with Taylor's expansion Nielsen and Nock (2014), we investigate approximating \(f\)-divergence using Taylor's expansion by \(\chi^{2}\)-divergence for sufficiently smooth \(f\). In practice, one may wish to compute projections such as \(D_{f}(L||M^{f^{*}})\) and \(D_{f}(M^{f}||L)\), yet in general the \(f^{*}\)-projection \(M^{f^{*}}\) and \(f\)-projection \(M^{f}\) may not admit a closed-form. Our main result below demonstrates that \(D_{f}(L||M)\) can be approximated by \(D_{\chi^{2}}(L||M)\) (that is, the \(\chi^{2}\)-divergence with generator \(t\mapsto(t-1)^{2}\)) modulo a prefactor error coefficient \(\frac{f^{\prime\prime}(1)}{2}\) and an additive error term \(\frac{1}{3!}||f^{(3)}||_{\infty}(\overline{m}-\underline{m})^{3}\) in the Theorem below: **Theorem 3.10**.: _For strictly convex and three-times continuously differentiable \(f\), for any \(L,M\in\mathcal{L}\), we have_ \[\left|D_{f}(L||M)-\frac{f^{\prime\prime}(1)}{2}D_{\chi^{2}}(L||M)\right| \leqslant\frac{1}{3!}||f^{(3)}||_{\infty}(\overline{m}-\underline{m})^{3}, \tag{3.21}\] _where_ \[\overline{m}(L,M): =\max_{L(x,y),M(x,y)>0}\frac{L(x,y)}{M(x,y)},\quad\underline{m}( L,M):=\min_{L(x,y),M(x,y)>0}\frac{L(x,y)}{M(x,y)},\] \[||f^{(3)}||_{\infty}(L,M): =\sup_{x\in[\underline{m},\overline{m}]}|f^{(3)}(x)|.\] _In particular, for any \(\overline{M}\in\mathcal{L}(\pi)\) we have_ \[\left|D_{f}(L||\overline{M})-\left(D_{f}(L||M^{f^{*}})+D_{f}(M^{ f^{*}}||\overline{M})\right)\right|\] \[\leqslant\frac{1}{3!}||f^{(3)}||_{\infty}(L,\overline{M})( \overline{m}(L,\overline{M})-\underline{m}(L,\overline{M}))^{3}+\frac{1}{3!}|| f^{(3)}||_{\infty}(L,M^{f^{*}})(\overline{m}(L,M^{f^{*}})-\underline{m}(L,M^{f^{*}}))^{3}\] \[\quad+\frac{1}{3!}||f^{(3)}||_{\infty}(M^{f^{*}},\overline{M})( \overline{m}(M^{f^{*}},\overline{M})-\underline{m}(M^{f^{*}},\overline{M}))^ {3},\] _where \(M^{f^{*}}=P_{2}\) is the \(P_{2}\)-reversibilization as stated in Theorem 3.4. Similarly, we have_ \[\left|D_{f}(\overline{M}||L)-\left(D_{f}(M^{f}||L)+D_{f}(\overline {M}||M^{f})\right)\right|\] \[\leqslant\frac{1}{3!}||f^{(3)}||_{\infty}(\overline{M},L)( \overline{m}(\overline{M},L)-\underline{m}(\overline{M},L))^{3}+\frac{1}{3!}|| f^{(3)}||_{\infty}(M^{f},L)(\overline{m}(M^{f},L)-\underline{m}(M^{f},L))^{3}\] \[+\frac{1}{3!}||f^{(3)}||_{\infty}(\overline{M},M^{f})(\overline{m}( \overline{M},M^{f})-\underline{m}(\overline{M},M^{f}))^{3},\] _where \(M^{f}=P_{-1}\) is the \(P_{-1}\)-reversiblization as stated in Theorem 3.4._ We can interpret the expression \(\overline{m}(L,M)-\underline{m}(L,M)\) as quantifying the difference between the two generators \(L\) and \(M\). In the case when \(L=M\), equality is achieved in (3.21) as the right hand side yields \(\overline{m}(L,M)-\underline{m}(L,M)=0\) while the left hand side gives \(D_{f}(L||M)=D_{\chi^{2}}(L||M)=0\). 
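The second-order expansion underlying Theorem 3.10 can also be observed numerically: as \(L\) approaches a reversible \(M\), the ratio \(D_{f}(L||M)\big/\big((f^{\prime\prime}(1)/2)\,D_{\chi^{2}}(L||M)\big)\) tends to \(1\). A sketch reusing `f_div`, `random_reversible` and `pi` from the snippets above; the perturbation direction `Delta` is an arbitrary zero-row-sum matrix chosen for illustration.

```python
import numpy as np

f_hel = lambda t: (np.sqrt(t) - 1.0) ** 2   # f''(1) = 1/2 for this f
f_chi2 = lambda t: (t - 1.0) ** 2
fpp1 = 0.5

M = random_reversible(pi, np.random.default_rng(1))
Delta = np.array([[-0.2, 0.1, 0.1],
                  [ 0.1, -0.3, 0.2],
                  [ 0.2, 0.1, -0.3]])       # zero row sums, positive off-diag

for eps in (0.5, 0.1, 0.02):
    Leps = M + eps * Delta                  # still a generator for these eps
    ratio = f_div(Leps, M, pi, f_hel) / ((fpp1 / 2.0) * f_div(Leps, M, pi, f_chi2))
    print(eps, ratio)                       # ratio tends to 1 as eps -> 0
```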
_Proof._ For strictly convex and three-times continuously differentiable \(f\), by the integral form of Taylor's expansion and since \(f(1)=f^{\prime}(1)=0\), we see that, for \(x\in(\underline{m},\overline{m})\),
\[f(x) =f(1)+f^{\prime}(1)(x-1)+\frac{f^{\prime\prime}(1)}{2}(x-1)^{2}+\frac{1}{2!}\int_{1}^{x}(x-t)^{2}f^{(3)}(t)\,dt\] \[=\frac{f^{\prime\prime}(1)}{2}(x-1)^{2}+\frac{1}{2!}\int_{1}^{x}(x-t)^{2}f^{(3)}(t)\,dt.\]
As a result, we arrive at
\[\left|D_{f}(L||M)-\frac{f^{\prime\prime}(1)}{2}D_{\chi^{2}}(L||M)\right|\leqslant\frac{1}{3!}||f^{(3)}||_{\infty}(\overline{m}-\underline{m})^{3}.\]
By applying (3.21) three times for each inequality, we obtain the two approximate triangle inequalities. \(\square\)

### \(f\) and \(f^{*}\)-projection centroids of a sequence of Markov chains

Given a sequence of Markov generators \((L_{i})_{i=1}^{n}\), where \(L_{i}\in\mathcal{L}\) for each \(i=1,\ldots,n\), what is the closest \(\pi\)-reversible generator(s) \(M\in\mathcal{L}(\pi)\) on average, where the distance is measured in terms of the \(f\)-divergence \(D_{f}\)? Precisely, we define the notions of \(f^{*}\)-projection centroid and \(f\)-projection centroid to be respectively
\[M_{n}^{f^{*}} =M_{n}^{f^{*}}(L_{1},\ldots,L_{n},\pi):=\operatorname*{arg\,min}_{M\in\mathcal{L}(\pi)}\sum_{i=1}^{n}D_{f}(L_{i}||M),\quad M_{n}^{f} =M_{n}^{f}(L_{1},\ldots,L_{n},\pi):=\operatorname*{arg\,min}_{M\in\mathcal{L}(\pi)}\sum_{i=1}^{n}D_{f}(M||L_{i}).\]
Note that in the special case of \(n=1\), the above notions reduce to \(M_{1}^{f}=M^{f}\) and \(M_{1}^{f^{*}}=M^{f^{*}}\) respectively, as introduced in (2.3). This notion is analogous to that of empirical risk minimization or loss minimization that arises in statistics and machine learning: given \(n\) pairs \((x_{i},y_{i})_{i=1}^{n}\), what is the least squares regression line that minimizes the total squared residuals (i.e. the \(\ell^{2}\) loss)? In the context of Markov chains, given \(n\) Markov generators \((L_{i})_{i=1}^{n}\), we are looking for a reversible \(M\in\mathcal{L}(\pi)\) that minimizes the total deviation or discrepancy, measured by \(\sum_{i=1}^{n}D_{f}(L_{i}||M)\) or \(\sum_{i=1}^{n}D_{f}(M||L_{i})\), with respect to \(D_{f}\). Similar notions of information centroids have also been proposed in the literature for probability measures; see for example Nielsen (2020); Nielsen and Boltz (2011); Nielsen and Nock (2009) and the references therein.

Inspired by the graphs in Billera and Diaconis (2001); Choi and Huang (2020); Wolfer and Watanabe (2021) and to visualize the concept of centroid, we illustrate two \(f\)-projection centroids in a rectangle and in an eight-sided polygon in Figure 1. Similar graphs can be drawn for \(f^{*}\)-projection centroids but with the direction of the arrows flipped.

Figure 1. Two \(f\)-projection centroids. The \(f\)-divergence under consideration \(D_{f}\) can be any of the squared Hellinger distance, \(\chi^{2}\)-divergence, \(\alpha\)-divergence and Kullback-Leibler divergence as presented in Theorem 3.12, where both the bisection property and the Pythagorean identity have been shown. The red dashed line across the middle represents the set \(\mathcal{L}(\pi)\).

Our first main result in this section proves existence and uniqueness of \(f\) and \(f^{*}\)-projection centroids under strictly convex \(f\), and its proof is delayed to Section 3.9.1.

**Theorem 3.11** (Existence and uniqueness of \(f\) and \(f^{*}\)-projection centroids under strictly convex \(f\)).: _Given a sequence of Markov generators \((L_{i})_{i=1}^{n}\), where \(L_{i}\in\mathcal{L}\) for each \(i=1,\ldots,n\), and an \(f\)-divergence \(D_{f}\) generated by a strictly convex \(f\), an \(f\)-projection of \(D_{f}\) (resp. \(f^{*}\)-projection of \(D_{f^{*}}\)) centroid \(M_{n}^{f}\) that
minimizes the mapping_
\[\mathcal{L}(\pi)\ni M\mapsto\sum_{i=1}^{n}D_{f}(M||L_{i})\quad\left(\text{resp. }=\sum_{i=1}^{n}D_{f^{*}}(L_{i}||M)\right)\]
_exists and is unique. An \(f^{*}\)-projection of \(D_{f}\) (resp. \(f\)-projection of \(D_{f^{*}}\)) centroid \(M_{n}^{f^{*}}\) that minimizes the mapping_
\[\mathcal{L}(\pi)\ni M\mapsto\sum_{i=1}^{n}D_{f}(L_{i}||M)\quad\left(\text{resp. }=\sum_{i=1}^{n}D_{f^{*}}(M||L_{i})\right)\]
_exists and is unique._

In the second main result of this section, we explicitly calculate the \(f\) and \(f^{*}\)-projection centroids \(M_{n}^{f}\) and \(M_{n}^{f^{*}}\) under various common \(f\)-divergences as discussed in previous sections. Its proof is postponed to Section 3.9.2.

**Theorem 3.12** (Examples of \(f\) and \(f^{*}\)-projection centroids).: _Given a sequence of Markov generators \((L_{i})_{i=1}^{n}\), where \(L_{i}\in\mathcal{L}\) for each \(i=1,\ldots,n\)._

1. _(_\(f\) _and_ \(f^{*}\)_-projection centroids under the squared Hellinger distance) Let_ \(f(t)=(\sqrt{t}-1)^{2}\)_. The unique_ \(f\)_-projection centroid_ \(M_{n}^{f}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ (3.22) \[M_{n}^{f}(x,y)=\left(\frac{1}{n}\sum_{i=1}^{n}\sqrt{M^{f}(L_{i},\pi)(x,y)}\right)^{2},\] _while the unique_ \(f^{*}\)_-projection centroid_ \(M_{n}^{f^{*}}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M_{n}^{f^{*}}(x,y)=\left(\frac{1}{n}\sum_{i=1}^{n}\sqrt{M^{f^{*}}(L_{i},\pi)(x,y)}\right)^{2},\] _where we recall that_ \(M^{f^{*}}=M^{f}\) _are the_ \(P_{1/2}\)_-reversiblizations as given in Theorem_ 3.3_._
2. _(_\(f\) _and_ \(f^{*}\)_-projection centroids under_ \(\chi^{2}\)_-divergence) Let_ \(f(t)=(t-1)^{2}\)_. The unique_ \(f\)_-projection centroid_ \(M_{n}^{f}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M_{n}^{f}(x,y)=\left(\frac{1}{n}\sum_{i=1}^{n}(M^{f}(L_{i},\pi)(x,y))^{-1}\right)^{-1},\] _while the unique_ \(f^{*}\)_-projection centroid_ \(M_{n}^{f^{*}}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M_{n}^{f^{*}}(x,y)=\left(\frac{1}{n}\sum_{i=1}^{n}(M^{f^{*}}(L_{i},\pi)(x,y))^{2}\right)^{1/2},\] _where we recall that_ \(M^{f},M^{f^{*}}\) _are respectively the_ \(P_{-1},P_{2}\)_-reversiblizations as given in Theorem_ 3.4_._
3. _(_\(f\) _and_ \(f^{*}\)_-projection centroids under_ \(\alpha\)_-divergence) Let_ \(f(t)=\frac{t^{\alpha}-\alpha t-(1-\alpha)}{\alpha(\alpha-1)}\) _for_ \(\alpha\in\mathbb{R}\) _and_ \(\alpha\notin\{0,1\}\)_. The unique_ \(f\)_-projection centroid_ \(M_{n}^{f}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M_{n}^{f}(x,y)=\left(\frac{1}{n}\sum_{i=1}^{n}\left(M^{f}(L_{i},\pi)(x,y)\right)^{1-\alpha}\right)^{1/(1-\alpha)},\] _while the unique_ \(f^{*}\)_-projection centroid_ \(M_{n}^{f^{*}}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M_{n}^{f^{*}}(x,y)=\left(\frac{1}{n}\sum_{i=1}^{n}\left(M^{f^{*}}(L_{i},\pi)(x,y)\right)^{\alpha}\right)^{1/\alpha},\] _where we recall that_ \(M^{f},M^{f^{*}}\) _are respectively the_ \(P_{1-\alpha},P_{\alpha}\)_-reversiblizations as given in Theorem_ 3.5_._
4. _(_\(f\) _and_ \(f^{*}\)_-projection centroids under the Kullback-Leibler divergence) Let_ \(f(t)=t\ln t-t+1\)_.
The unique_ \(f\)_-projection centroid_ \(M_{n}^{f}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M_{n}^{f}(x,y)=\left(\prod_{i=1}^{n}M^{f}(L_{i},\pi)(x,y)\right)^{1/n},\] _while the unique_ \(f^{*}\)_-projection centroid_ \(M_{n}^{f^{*}}\) _is given by, for_ \(x\neq y\in\mathcal{X}\)_,_ \[M_{n}^{f^{*}}(x,y)=\frac{1}{n}\sum_{i=1}^{n}M^{f^{*}}(L_{i},\pi)(x,y),\] _where we recall that_ \(M^{f},M^{f^{*}}\) _are respectively the_ \(P_{0},P_{1}\)_-reversiblizations as given in Diaconis and Miclo (2009); Wolfer and Watanabe (2021), which are the geometric mean and the additive reversiblizations._

#### 3.9.1. Proof of Theorem 3.11

The proof is essentially a generalization of (Diaconis and Miclo, 2009, Proposition \(1.5\)). Pick an arbitrary total ordering on \(\mathcal{X}\) with strict inequality being denoted by \(\prec\). For \(i=1,\ldots,n\), we also write
\[\alpha=\alpha(x,y)=\pi(x)M(x,y),\quad\alpha^{\prime}=\alpha^{\prime}(y,x)=\pi(y)M(y,x),\] \[\beta_{i}=\beta_{i}(x,y)=\pi(x)L_{i}(x,y),\quad\beta_{i}^{\prime}=\beta_{i}^{\prime}(y,x)=\pi(y)L_{i}(y,x).\]
Using \(M\in\mathcal{L}(\pi)\), which gives \(\alpha=\alpha^{\prime}\), we then see that
\[\sum_{i=1}^{n}D_{f}(M||L_{i}) =\sum_{i=1}^{n}\sum_{x\prec y}\pi(x)L_{i}(x,y)f\left(\frac{M(x,y)}{L_{i}(x,y)}\right)+\pi(y)L_{i}(y,x)f\left(\frac{M(y,x)}{L_{i}(y,x)}\right)\] \[=\sum_{i=1}^{n}\sum_{x\prec y}\beta_{i}f\left(\frac{\alpha}{\beta_{i}}\right)+\beta_{i}^{\prime}f\left(\frac{\alpha}{\beta_{i}^{\prime}}\right)\] \[=\sum_{x\prec y}\sum_{\{i;\ \beta_{i}>0\text{ or }\beta_{i}^{\prime}>0\}}\beta_{i}f\left(\frac{\alpha}{\beta_{i}}\right)+\beta_{i}^{\prime}f\left(\frac{\alpha}{\beta_{i}^{\prime}}\right)\] \[=:\sum_{x\prec y}\Phi_{\beta_{1},\ldots,\beta_{n},\beta_{1}^{\prime},\ldots,\beta_{n}^{\prime}}(\alpha).\]
To minimize with respect to \(M\), we are led to minimize the summand above \(\phi:=\Phi_{\beta_{1},\ldots,\beta_{n},\beta_{1}^{\prime},\ldots,\beta_{n}^{\prime}}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\), where \((\beta_{1},\ldots,\beta_{n},\beta_{1}^{\prime},\ldots,\beta_{n}^{\prime})\in\mathbb{R}_{+}^{2n}\) are assumed to be fixed. As \(\phi\) is convex, we denote by \(\phi_{+}^{\prime}\) its right derivative. It thus suffices to show the existence of \(\alpha_{*}>0\) such that for all \(\alpha\in\mathbb{R}_{+}\),
\[\phi_{+}^{\prime}(\alpha)=\begin{cases}<0,&\text{ if }\quad\alpha<\alpha_{*},\\ >0,&\text{ if }\quad\alpha>\alpha_{*}.\end{cases} \tag{3.23}\]
Now, we compute that for all \(\alpha\in\mathbb{R}_{+}\),
\[\phi_{+}^{\prime}(\alpha)=\sum_{\{i;\ \beta_{i}>0\text{ and }\beta_{i}^{\prime}>0\}}f^{\prime}\left(\frac{\alpha}{\beta_{i}}\right)+f^{\prime}\left(\frac{\alpha}{\beta_{i}^{\prime}}\right)+\sum_{\{i;\ \beta_{i}>0\text{ and }\beta_{i}^{\prime}=0\}}f^{\prime}\left(\frac{\alpha}{\beta_{i}}\right)+\sum_{\{i;\ \beta_{i}=0\text{ and }\beta_{i}^{\prime}>0\}}f^{\prime}\left(\frac{\alpha}{\beta_{i}^{\prime}}\right).\]
As \(f^{\prime}(1)=0\) and \(\phi\) is strictly convex, we have \(\phi_{+}^{\prime}(\alpha)<0\) for sufficiently small \(\alpha>0\) and \(\phi_{+}^{\prime}(\alpha)>0\) for sufficiently large \(\alpha>0\); since \(\phi_{+}^{\prime}\) is increasing, we conclude that there exists a unique \(\alpha_{*}>0\) such that (3.23) is satisfied. Replacing the analysis above by \(f^{*}\), and noting that \(f^{*}\) is also a strictly convex function with \(f^{*}(1)=f^{*\prime}(1)=0\), the existence and uniqueness of \(M_{n}^{f^{*}}\) is shown.
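Before the proof of Theorem 3.12, its Kullback-Leibler case can be made concrete numerically: the \(f\)-projection centroid is the entrywise geometric mean of the \(P_{0}\)-reversiblizations, and the \(f^{*}\)-projection centroid is the arithmetic mean of the \(P_{1}\)-reversiblizations. A sketch reusing `pi`, `pi_dual` and `power_mean` from the snippets above, with three arbitrary illustrative generators.

```python
import numpy as np

rng = np.random.default_rng(2)
Ls = []
for _ in range(3):                         # three illustrative generators
    A = rng.uniform(0.1, 1.0, size=(3, 3))
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))
    Ls.append(A)

def p0(Li):
    """Geometric mean reversiblization, entrywise sqrt(L * L_pi)."""
    return np.sqrt(np.abs(Li) * np.abs(pi_dual(Li, pi)))

centroid_f = np.prod([p0(Li) for Li in Ls], axis=0) ** (1.0 / len(Ls))
centroid_fstar = sum(power_mean(Li, pi, 1.0) for Li in Ls) / len(Ls)
for C in (centroid_f, centroid_fstar):     # repair diagonals of both
    np.fill_diagonal(C, 0.0)
    np.fill_diagonal(C, -C.sum(axis=1))
    # detailed balance: pi(x) C(x,y) = pi(y) C(y,x)
    assert np.allclose(pi[:, None] * C, (pi[:, None] * C).T)
```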
#### 3.9.2. Proof of Theorem 3.12

We shall only prove (3.22), as the rest follows from exactly the same computation procedure with different choices of \(f\). Pick an arbitrary total ordering on \(\mathcal{X}\) with strict inequality being denoted by \(\prec\). For \(i=1,\ldots,n\), we also write
\[\alpha=\alpha(x,y)=\pi(x)M(x,y),\quad\alpha^{\prime}=\alpha^{\prime}(y,x)=\pi(y)M(y,x),\] \[\beta_{i}=\beta_{i}(x,y)=\pi(x)L_{i}(x,y),\quad\beta_{i}^{\prime}=\beta_{i}^{\prime}(y,x)=\pi(y)L_{i}(y,x).\]
The \(\pi\)-reversibility of \(M\) yields \(\alpha=\alpha^{\prime}\), which leads to
\[\sum_{i=1}^{n}D_{f}(M||L_{i}) =\sum_{i=1}^{n}\sum_{x\prec y}\pi(x)L_{i}(x,y)f\left(\frac{M(x,y)}{L_{i}(x,y)}\right)+\pi(y)L_{i}(y,x)f\left(\frac{M(y,x)}{L_{i}(y,x)}\right)\] \[=\sum_{i=1}^{n}\sum_{x\prec y}\alpha-2\sqrt{\alpha\beta_{i}}+\beta_{i}+\alpha^{\prime}-2\sqrt{\alpha^{\prime}\beta_{i}^{\prime}}+\beta_{i}^{\prime}\] \[=\sum_{x\prec y}\sum_{i=1}^{n}2\alpha-2\sqrt{\alpha\beta_{i}}-2\sqrt{\alpha\beta_{i}^{\prime}}+\beta_{i}+\beta_{i}^{\prime}.\]
We proceed to minimize the summand of each term above, which leads to minimizing the following strictly convex mapping as a function of \(\alpha\)
\[\alpha\mapsto\sum_{i=1}^{n}2\alpha-2\sqrt{\alpha\beta_{i}}-2\sqrt{\alpha\beta_{i}^{\prime}}.\]
By differentiation, this yields
\[M_{n}^{f}(x,y)=\left(\frac{1}{n}\sum_{i=1}^{n}\sqrt{M^{f}(L_{i},\pi)(x,y)}\right)^{2}.\]

## 4. Generating new reversiblizations via generalized mean

For \(a,b\in\mathbb{R}\) and \(\phi:\mathbb{R}\to\mathbb{R}\) a continuous and strictly increasing function, the Kolmogorov-Nagumo-de Finetti mean or the quasi-arithmetic mean Berger and Casella (1992); de Carvalho (2016); Nielsen and Nock (2017) is defined to be
\[K_{\phi}(a,b)=\phi^{-1}\left(\frac{\phi(a)+\phi(b)}{2}\right).\]
We recall from Section 3 that various power mean reversiblizations \(P_{\alpha}\) arise naturally as \(f\) and \(f^{*}\)-projections under suitable choices of \(f\)-divergences, and these are in fact special instances of the Kolmogorov-Nagumo-de Finetti mean between \(L\) and \(L_{\pi}\). For \(\alpha\in\mathbb{R}\), by considering \(\phi(x)=x^{\alpha}\) for \(x>0\), we see that for \(x\neq y\in\mathcal{X}\),
\[P_{\alpha}(x,y)=K_{\phi}(L(x,y),L_{\pi}(x,y)).\]
Similarly, the geometric mean reversiblization \(P_{0}\) can be retrieved by taking \(\phi(x)=\ln x\) for \(x>0\). Thus, reversiblizing a given \(L\) with a given target distribution \(\pi\) can be broadly understood as taking a suitable mean or average between \(L\) and \(L_{\pi}\). This important point of view is exploited in this section to generate possibly new reversiblizations via other notions of generalized mean. In particular, we shall investigate the Lagrange, Cauchy and dual means.

### Generating new reversiblizations via Lagrange and Cauchy mean

In this subsection, we investigate reversiblizations generated by the Lagrange and Cauchy means.

**Definition 4.1** (Lagrange and Cauchy mean (1998); Matkowski (2006)).: Let \(\phi_{1},\phi_{2}\) be two differentiable and strictly increasing functions such that the inverse of the ratio of their derivatives \(\phi_{1}^{\prime}/\phi_{2}^{\prime}\) exists.
For \(a,b\in\mathbb{R}\), the Cauchy mean is defined to be \[\mathcal{C}_{\phi_{1},\phi_{2}}(a,b):=\begin{cases}a,\quad a=b,\\ \left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1}\left(\frac{\phi_ {1}(b)-\phi_{1}(a)}{\phi_{2}(b)-\phi_{2}(a)}\right),\quad a\neq b.\end{cases} \tag{4.1}\] In particular, if we take \(\phi_{2}(x)=x\), the Lagrange mean is defined to be \[\mathcal{L}_{\phi_{1}}(a,b):=\begin{cases}a,\quad a=b,\\ \phi_{1}^{\prime-1}\left(\frac{\phi_{1}(b)-\phi_{1}(a)}{b-a}\right),\quad a \neq b.\end{cases} \tag{4.2}\] Capitalizing on the idea of Cauchy mean, we introduce a broad class of Cauchy mean reversiblizations where we take \(\phi_{1},\phi_{2}\) to be homogeneous functions: **Theorem 4.1**.: _Suppose \(\phi_{1},\phi_{2}\) satisfy the assumptions as in Definition 4.1, and in addition \(\phi_{1},\phi_{2}\) are homogeneous functions of degree \(p,q\) respectively, where \(p,q\in\mathbb{R}\backslash\{0\}\) and \(p\neq q\). Given \(L\in\mathcal{L}\), the Cauchy mean reversiblization \(C_{\phi_{1},\phi_{2}}\in\mathcal{L}(\pi)\) is defined to be, for \(x\neq y\in\mathcal{X}\),_ \[C_{\phi_{1},\phi_{2}}(x,y):=\begin{cases}0,\quad\text{if }L(x,y)=0\text{ or }L_{\pi}(x,y)=0,\\ L(x,y),\quad\text{if }L(x,y)=L_{\pi}(x,y),\\ \mathcal{C}_{\phi_{1},\phi_{2}}(L(x,y),L_{\pi}(x,y)),\quad\text{otherwise}, \end{cases} \tag{4.3}\] _and diagonal entries are such that the row sums are zero for each row._ _Remark 4.1_.: In the case of \(L(x,y)=0\) or \(L_{\pi}(x,y)=0\), setting \(C_{\phi_{1},\phi_{2}}=0\) is arbitrary. In fact, we can also set the value of \(C_{\phi_{1},\phi_{2}}(x,y)=(L(x,y)+L_{\pi}(x,y))/2\) in this case. Interestingly, unlike the power mean reversiblizations \(P_{\alpha}\), the Cauchy mean reversiblizations are based on possibly transformed differences such as \(\phi_{2}(L(x,y))-\phi_{2}(L_{\pi}(x,y))\). We shall discuss concrete examples of new reversiblizations of the form of \(C_{\phi_{1},\phi_{2}}\) that we call Stolarsky-type mean reversiblizations in Section 4.1.1. Proof.: For \(t>0\), let \[g(t):=\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1}\left(\frac{ \phi_{1}(t)-\phi_{1}(1)}{\phi_{2}(t)-\phi_{2}(1)}\right),\] where we take this choice of \(g\) as the balancing function in the context of locally-balanced Markov chains Livingstone and Zanella (2022); Vogrinc et al. (2022); Zanella (2020). Now, we note that \(\phi_{1}^{\prime},\phi_{2}^{\prime}\) are homogeneous with degree \(p-1,q-1\) respectively, and so \(\phi_{1}^{\prime}/\phi_{2}^{\prime}\) is homogeneous with degree \(p-q\). This yields the inverse \(\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1}\) is homogeneous with degree \(1/(p-q)\). Now, we see that \[tg(1/t) =\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1} \left(t^{p-q}\frac{\phi_{1}(1/t)-\phi_{1}(1)}{\phi_{2}(1/t)-\phi_{2}(1)}\right)\] \[=\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1} \left(\frac{\phi_{1}(1)-\phi_{1}(t)}{\phi_{2}(1)-\phi_{2}(t)}\right)\] \[=\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1} \left(\frac{\phi_{1}(t)-\phi_{1}(1)}{\phi_{2}(t)-\phi_{2}(1)}\right)=g(t).\] According to Livingstone and Zanella (2022); Vogrinc et al. 
(2022); Zanella (2020), it suffices to show that \[\mathcal{C}_{\phi_{1},\phi_{2}}(L(x,y),L_{\pi}(x,y))=g\left(\frac{L_{\pi}(x,y) }{L(x,y)}\right)L(x,y),\] which is indeed the case since, using again the homogeneous property of \(\phi_{1},\phi_{2}\) and for \(a\neq b\), \[\mathcal{C}_{\phi_{1},\phi_{2}}(a,b) =\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1} \left(\frac{\phi_{1}(b)-\phi_{1}(a)}{\phi_{2}(b)-\phi_{2}(a)}\right)\] \[=\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1} \left(a^{p-q}\frac{\phi_{1}(b/a)-\phi_{1}(1)}{\phi_{2}(b/a)-\phi_{2}(1)}\right) =g(b/a)a.\] Another class of Cauchy mean reversiblizations, that we call logarithmic mean reversiblizations, are generated by taking \(\phi_{1}\) to be a homogeneous function while \(\phi_{2}(x)=\ln x\). Some examples of new reversiblizations that fall into this class are discussed in Section 4.1.2. **Theorem 4.2**.: _Suppose \(\phi_{1}\) satisfies the assumptions as in Definition 4.1, and in addition \(\phi_{1}\) is a homogeneous function of degree \(p\in\mathbb{R}\backslash\{0\}\). We also take \(\phi_{2}(x)=\ln x\). Given \(L\in\mathcal{L}\), the logarithmic mean reversiblization \(C_{\phi_{1},\ln}\in\mathcal{L}(\pi)\) is defined to be, for \(x\neq y\in\mathcal{X}\),_ \[C_{\phi_{1},\ln}(x,y):=\begin{cases}0,\quad\text{if }L(x,y)=0\text{ or }L_{\pi}(x,y)=0,\\ L(x,y),\quad\text{if }L(x,y)=L_{\pi}(x,y),\\ \mathcal{C}_{\phi_{1},\phi_{2}}(L(x,y),L_{\pi}(x,y)),\quad\text{otherwise}, \end{cases} \tag{4.4}\] _and diagonal entries are such that the row sums are zero for each row._ Proof.: For \(t>0\), let \[g(t):=\left(\frac{\phi_{1}^{\prime}}{\phi_{2}^{\prime}}\right)^{-1}\left(\frac{ \phi_{1}(t)-\phi_{1}(1)}{\phi_{2}(t)-\phi_{2}(1)}\right)=(t\phi_{1}^{\prime})^{- 1}\left(\frac{\phi_{1}(t)-\phi_{1}(1)}{\ln t}\right),\] where we take this choice of \(g\) as the balancing function in the context of locally-balanced Markov chains Livingstone and Zanella (2022); Vogrinc et al. (2022); Zanella (2020). Since \(\phi_{1}^{\prime}(t)\) is homogeneous with degree \(p-1\), \(t\phi_{1}^{\prime}(t)\) is homogeneous with degree \(p\) by Euler's homogeneous function theorem, and hence its inverse is homogeneous with degree \(1/p\). Now, we see that \[tg(1/t) =\left(t\phi_{1}^{\prime}\right)^{-1}\left(t^{p}\frac{\phi_{1}(1/ t)-\phi_{1}(1)}{-\ln t}\right)\] \[=\left(t\phi_{1}^{\prime}\right)^{-1}\left(\frac{\phi_{1}(1)- \phi_{1}(t)}{-\ln t}\right)\] \[=\left(t\phi_{1}^{\prime}\right)^{-1}\left(\frac{\phi_{1}(t)- \phi_{1}(1)}{\ln t}\right)=g(t).\] According to Livingstone and Zanella (2022); Vogrinc et al. (2022); Zanella (2020), it suffices to show that \[\mathcal{C}_{\phi_{1},\phi_{2}}(L(x,y),L_{\pi}(x,y))=g\left(\frac{L_{\pi}(x,y) }{L(x,y)}\right)L(x,y),\] which is true since, using again the homogeneous property of \(\phi_{1},\phi_{2}\) and for \(a\neq b\), \[\mathcal{C}_{\phi_{1},\phi_{2}}(a,b) =\left(t\phi_{1}^{\prime}\right)^{-1}\left(\frac{\phi_{1}(b)-\phi _{1}(a)}{\ln(b/a)}\right)\] \[=\left(t\phi_{1}^{\prime}\right)^{-1}\left(a^{p}\frac{\phi_{1}(b/ a)-\phi_{1}(1)}{\ln(b/a)}\right)=g(b/a)a.\] #### 4.1.1. Stolarsky mean reversiblizations In this subsection, we investigate possibly new reversiblizations or recover known reversiblizations that belong to the Cauchy mean reversiblizations \(C_{\phi_{1},\phi_{2}}\) as introduced in (4.3). 
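Before specializing \(\phi_{1},\phi_{2}\) to the concrete choices below, a short numerical sketch may help. This is our own illustration, not part of the paper: we assume NumPy, the homogeneous choices \(\phi_{1}(x)=x^{p}\) and \(\phi_{2}(x)=x^{q}\) (for which the inverse of \(\phi_{1}^{\prime}/\phi_{2}^{\prime}\) is explicit), and a toy random generator; the helper names are ours. The final assertion checks the \(\pi\)-reversibility guaranteed by Theorem 4.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_mean(a, b, p, q):
    """Cauchy mean (4.1) for phi1(x) = x**p, phi2(x) = x**q, where
    (phi1'/phi2')(x) = (p/q) x**(p-q) has inverse y -> (q*y/p)**(1/(p-q))."""
    if a == b:
        return a
    return (q * (a**p - b**p) / (p * (a**q - b**q))) ** (1.0 / (p - q))

def cauchy_reversiblization(L, pi, p=3.0, q=2.0):
    """Entrywise construction (4.3); diagonal set so that rows sum to zero."""
    d = len(pi)
    L_pi = (pi[None, :] * L.T) / pi[:, None]   # pi-dual generator L_pi
    C = np.zeros_like(L)
    for x in range(d):
        for y in range(d):
            if x != y and L[x, y] > 0 and L_pi[x, y] > 0:
                C[x, y] = cauchy_mean(L[x, y], L_pi[x, y], p, q)
    np.fill_diagonal(C, -C.sum(axis=1))
    return C

# Toy example: random 4-state generator and random target distribution
L = rng.random((4, 4))
np.fill_diagonal(L, 0.0)
np.fill_diagonal(L, -L.sum(axis=1))
pi = rng.random(4)
pi /= pi.sum()
C = cauchy_reversiblization(L, pi)
# pi(x) C(x, y) = pi(y) C(y, x), as asserted by Theorem 4.1
assert np.allclose(pi[:, None] * C, (pi[:, None] * C).T)
```

With this general helper in hand, we now turn to the concrete choices of \(\phi_{1},\phi_{2}\).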
First, if we take \(\phi_{1}(x)=x^{p}\) and \(\phi_{2}(x)=x\) with \(p\in\mathbb{R}_{+}\backslash\{0,1\}\), then (4.3) now reads, for \(x\neq y\in\mathcal{X}\), \[C_{\phi_{1},\phi_{2}}(x,y)=\left(\frac{L^{p}(x,y)-L_{\pi}^{p}(x,y)}{p(L(x,y)-L_ {\pi}(x,y))}\right)^{1/(p-1)},\] which is known as the Stolarsky mean Nielsen and Nock (2017); Stolarsky (1975) of \(L(x,y),L_{\pi}(x,y)\). This is also an instance of the Lagrange mean as in Definition 4.1. In particular, if we take \(p=2\), the above expression reduces to the simple average between \(L(x,y)\) and \(L_{\pi}(x,y)\). On the other hand, if \(p\in\mathbb{N}\) with \(p\geqslant 3\), the above expression can be simplified to \[C_{\phi_{1},\phi_{2}}(x,y)=\left(\frac{1}{p}\sum_{i=0}^{p-1}L(x,y)^{p-1-i}L_{ \pi}(x,y)^{i}\right)^{1/(p-1)}.\] In general, we consider \(\phi_{1}(x)=x^{p}\) and \(\phi_{2}(x)=x^{q}\) with \(p,q\in\mathbb{R}_{+}\backslash\{0,1\}\) and \(p\neq q\), then (4.3) gives \[C_{\phi_{1},\phi_{2}}(x,y)=\left(\frac{q(L^{p}(x,y)-L_{\pi}^{p}(x,y))}{p(L^{q} (x,y)-L_{\pi}^{q}(x,y))}\right)^{1/(p-q)}.\] #### 4.1.2. Logarithmic mean reversiblizations In this subsection, we generate new reversiblizations that fall into the class of logarithmic mean reversiblizations as introduced in (4.4). Taking \(\phi_{1}(x)=x^{p}\), with \(p\in\mathbb{R}_{+}\backslash\{0,1\}\), then (4.4) now reads, for \(x\neq y\in\mathcal{X}\), \[C_{\phi_{1},\mathrm{ln}}(x,y)=\left(\frac{L^{p}(x,y)-L_{\pi}^{p}(x,y)}{p(\ln L (x,y)-\ln L_{\pi}(x,y))}\right)^{1/p}.\] In particular when \(p=1\), the above expression reduces to the classical logarithmic mean Lin (1974) of \(L(x,y),L_{\pi}(x,y)\): \[C_{\phi_{1},\mathrm{ln}}(x,y)=\frac{L(x,y)-L_{\pi}(x,y)}{\ln L(x,y)-\ln L_{\pi} (x,y)}.\] Note that this is also an instance of the Lagrange mean \(\mathcal{L}_{\mathrm{ln}}(L(x,y),L_{\pi}(x,y))\), and does not belong to the class of quasi-arithmetic mean. In the case of \(p=1\), using the arithmetic-logarithmic-geometric mean inequality Lin (1974), we obtain that \[P_{0}(x,y)\leqslant C_{\phi_{1},\mathrm{ln}}(x,y)\leqslant P_{1/3}(x,y) \leqslant P_{1}(x,y),\] where we recall that \(P_{0},P_{1/3},P_{1}\) are respectively the geometric mean, Lorentz mean and additive reversiblizations. This yields the following Peskun ordering between these reversiblizations, and its proof is omitted as it is similar to Theorem 3.9. **Theorem 4.3** (Markov chain version of arithmetic-logarithmic-geometric mean inequality).: _Given \(L\in\mathcal{L}\). Let \(\phi_{1}(x)=x\) and define the logarithmic mean reversiblization \(C_{\phi_{1},\mathrm{ln}}\), and recall the power mean reversiblizations as denoted by \(P_{p}\). We have_ \[P_{1}\succeq P_{1/3}\succeq C_{\phi_{1},\mathrm{ln}}\succeq P_{0}.\] _Consequently, this leads to_ 1. _(Hitting times) For_ \(\lambda>0\) _and_ \(A\subseteq\mathcal{X}\)_, we have_ \[\mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P_{0})})\leqslant\mathbb{E}_{\pi}(e^{- \lambda\tau_{A}(C_{\phi_{1},\mathrm{ln}})})\leqslant\mathbb{E}_{\pi}(e^{- \lambda\tau_{A}(P_{1/3})})\leqslant\mathbb{E}_{\pi}(e^{-\lambda\tau_{A}(P_{1}) }).\] _In particular, for any_ \(A\subseteq\mathcal{X}\)_,_ \[\mathbb{E}_{\pi}(\tau_{A}(P_{0}))\geqslant\mathbb{E}_{\pi}(\tau_{A}(C_{\phi_{ 1},\mathrm{ln}}))\geqslant\mathbb{E}_{\pi}(\tau_{A}(P_{1/3}))\geqslant\mathbb{ E}_{\pi}(\tau_{A}(P_{1})).\] _Furthermore,_ \[t_{av}(P_{0},\pi)\geqslant t_{av}(C_{\phi_{1},\mathrm{ln}},\pi)\geqslant t_{ av}(P_{1/3},\pi)\geqslant t_{av}(P_{1},\pi).\] 2. 
_(Spectral gap) We have_ \[\lambda_{2}(P_{0},\pi)\leqslant\lambda_{2}(C_{\phi_{1},\mathrm{ln}},\pi)\leqslant\lambda_{2}(P_{1/3},\pi)\leqslant\lambda_{2}(P_{1},\pi).\] _That is,_ \[t_{rel}(P_{0},\pi)\geqslant t_{rel}(C_{\phi_{1},\mathrm{ln}},\pi)\geqslant t_{rel}(P_{1/3},\pi)\geqslant t_{rel}(P_{1},\pi).\] 3. _(Asymptotic variance) For_ \(h\in\ell_{0}^{2}(\pi)=\{h;\;\pi(h)=0\}\)_,_ \[\sigma^{2}(h,P_{0},\pi)\geqslant\sigma^{2}(h,C_{\phi_{1},\mathrm{ln}},\pi)\geqslant\sigma^{2}(h,P_{1/3},\pi)\geqslant\sigma^{2}(h,P_{1},\pi).\] _The above equalities hold if and only if \(L\) is \(\pi\)-reversible, that is, \(L\in\mathcal{L}(\pi)\)._ ### Generating new reversiblizations via dual mean and generalized Barker proposal Another notion of mean that can be utilized to generate possibly new reversiblizations is the dual mean \(\mathcal{M}^{*}\) of a given mean function \(\mathcal{M}(\cdot,\cdot)\). Precisely, for any \(a,b\in\mathbb{R}\), \[\mathcal{M}^{*}(a,b):=\frac{ab}{\mathcal{M}(a,b)}.\] The mean function \(\mathcal{M}\) is said to be symmetric if \(\mathcal{M}(a,b)=\mathcal{M}(b,a)\), and homogeneous if \(\mathcal{M}(\lambda a,\lambda b)=\lambda\mathcal{M}(a,b)\) for any \(\lambda\geqslant 0\). The following theorem proposes an approach that systematically generates reversiblizations via the dual mean: **Theorem 4.4**.: _Given \(L\in\mathcal{L}\) and a non-negative, symmetric and homogeneous mean function \(\mathcal{M}\), the dual mean reversiblization \(D_{\mathcal{M}}\in\mathcal{L}(\pi)\) is defined to be, for \(x\neq y\in\mathcal{X}\),_ \[D_{\mathcal{M}}(x,y):=\begin{cases}0,\quad\text{if }L(x,y)=0\text{ or }L_{\pi}(x,y)=0,\\ L(x,y),\quad\text{if }L(x,y)=L_{\pi}(x,y),\\ \mathcal{M}^{*}(L(x,y),L_{\pi}(x,y))=\frac{L(x,y)L_{\pi}(x,y)}{\mathcal{M}(L(x,y),L_{\pi}(x,y))},\quad\text{otherwise},\end{cases} \tag{4.5}\] _and diagonal entries are such that the row sums are zero for each row._ Note that in the special case when \(\mathcal{M}\) is the simple average, we retrieve the Barker proposal or the harmonic reversiblization. Thus, \(D_{\mathcal{M}}\) can be broadly interpreted as a generalization of the Barker proposal. Proof of Theorem 4.4.: First, for \(t>0\), we define \[g(t):=\frac{1}{\mathcal{M}(1/t,1)}.\] Using the symmetric and homogeneous property of \(\mathcal{M}\), we see that \[tg(1/t)=\frac{t}{\mathcal{M}(t,1)}=\frac{1}{\mathcal{M}(1,1/t)}=g(t).\] According to Livingstone and Zanella (2022); Vogrinc et al. (2022); Zanella (2020), it suffices to show that, for \(x\neq y\) and \(L(x,y)\neq L_{\pi}(x,y)\), we have \[D_{\mathcal{M}}(x,y)=g(L_{\pi}(x,y)/L(x,y))L(x,y),\] which holds since, by homogeneity, \(g(b/a)a=a/\mathcal{M}(a/b,1)=ab/\mathcal{M}(a,b)=\mathcal{M}^{*}(a,b)\) with \(a=L(x,y)\) and \(b=L_{\pi}(x,y)\), and hence \(D_{\mathcal{M}}\in\mathcal{L}(\pi)\). #### 4.2.1. Dual power mean reversiblizations In this subsection, we take, for \(p\in\mathbb{R}\), \[\mathcal{M}(a,b)=\left(\frac{a^{p}+b^{p}}{2}\right)^{1/p},\] the power mean of \(a,b\) with index \(p\), which is symmetric, homogeneous and non-negative for \(a,b\geqslant 0\). (4.5) now reads \[D_{\mathcal{M}}(x,y)=\frac{L(x,y)L_{\pi}(x,y)}{\left(\frac{L(x,y)^{p}+L_{\pi}(x,y)^{p}}{2}\right)^{1/p}},\] in which we retrieve the Barker proposal when we take \(p=1\). Analogous to Theorem 3.9, we can develop a dual Peskun ordering between these dual power mean reversiblizations using the classical power mean inequality.
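As a numerical illustration of Theorem 4.4 with the power mean choice of \(\mathcal{M}\) just described — again a sketch we add under stated assumptions (NumPy, a strictly positive toy generator), with function names that are ours — the following computes \(D_{\mathcal{M}}\) and verifies that it is \(\pi\)-reversible; taking \(p=1\) recovers the Barker proposal \(2LL_{\pi}/(L+L_{\pi})\).

```python
import numpy as np

rng = np.random.default_rng(1)

def power_mean(a, b, p):
    return ((a**p + b**p) / 2.0) ** (1.0 / p)

def dual_power_mean_reversiblization(L, pi, p=1.0):
    """D_M of (4.5) with M the power mean of index p; p = 1 gives the
    Barker proposal / harmonic reversiblization 2 L L_pi / (L + L_pi)."""
    d = len(pi)
    L_pi = (pi[None, :] * L.T) / pi[:, None]   # pi-dual generator
    D = np.zeros_like(L)
    for x in range(d):
        for y in range(d):
            if x != y and L[x, y] > 0 and L_pi[x, y] > 0:
                a, b = L[x, y], L_pi[x, y]
                D[x, y] = a if a == b else a * b / power_mean(a, b, p)
    np.fill_diagonal(D, -D.sum(axis=1))
    return D

L = rng.random((4, 4))
np.fill_diagonal(L, 0.0)
np.fill_diagonal(L, -L.sum(axis=1))
pi = rng.random(4)
pi /= pi.sum()
D = dual_power_mean_reversiblization(L, pi, p=1.0)
# pi(x) D(x, y) = pi(y) D(y, x): the dual mean reversiblization is pi-reversible
assert np.allclose(pi[:, None] * D, (pi[:, None] * D).T)
```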
#### 4.2.2. Dual Stolarsky mean reversiblizations Recall that in Section 4.1.1, we introduced the Stolarsky mean, which gives, for \(p,q\in\mathbb{R}_{+}\backslash\{0,1\}\) and \(p\neq q\), \[\mathcal{M}(a,b)=\left(\frac{q(a^{p}-b^{p})}{p(a^{q}-b^{q})}\right)^{1/(p-q)},\] which is symmetric, homogeneous and non-negative for \(a,b\geqslant 0\). The dual Stolarsky mean reversiblization (4.5) now reads \[D_{\mathcal{M}}(x,y)=\frac{L(x,y)L_{\pi}(x,y)}{\left(\frac{q(L^{p}(x,y)-L_{\pi}^{p}(x,y))}{p(L^{q}(x,y)-L_{\pi}^{q}(x,y))}\right)^{1/(p-q)}}.\] #### 4.2.3. Dual logarithmic mean reversiblizations Recall that in Section 4.1.2, we introduced the logarithmic mean, which gives, for \(p\in\mathbb{R}_{+}\backslash\{0,1\}\), \[\mathcal{M}(a,b)=\left(\frac{a^{p}-b^{p}}{p(\ln a-\ln b)}\right)^{1/p},\] which is symmetric, homogeneous and non-negative for \(a,b\geqslant 0\). The dual logarithmic mean reversiblization (4.5) now reads \[D_{\mathcal{M}}(x,y)=\frac{L(x,y)L_{\pi}(x,y)}{\left(\frac{L^{p}(x,y)-L_{\pi}^{p}(x,y)}{p(\ln L(x,y)-\ln L_{\pi}(x,y))}\right)^{1/p}}.\] Analogous to Theorem 4.3, we can develop a dual Peskun ordering between the dual logarithmic mean reversiblization and the dual arithmetic mean reversiblization using the arithmetic-logarithmic-geometric mean inequality. ## 5. Generating new reversiblizations via balancing function, convex \(f\) and its conjugate \(f^{*}\) Given \(L\in\mathcal{L}\), we define \(F_{g}\) to be \[F_{g}(x,y):=\begin{cases}0,\quad\text{if }L(x,y)=0\text{ or }L_{\pi}(x,y)=0,\\ L(x,y),\quad\text{if }L(x,y)=L_{\pi}(x,y),\\ g\left(\frac{L_{\pi}(x,y)}{L(x,y)}\right)L(x,y),\quad\text{otherwise},\end{cases} \tag{5.1}\] where \(g\) is a non-negative function that satisfies \(g(t)=tg(1/t)\) for \(t>0\). This \(g\) is known as a balancing function, recently introduced in the Markov chain Monte Carlo literature Livingstone and Zanella (2022); Vogrinc et al. (2022); Zanella (2020). According to these references, \(F_{g}\in\mathcal{L}(\pi)\), and a rich source of such \(g\) is to consider \((f+f^{*})/2\), where we recall \(f\) is a convex function with \(f^{*}\) being its conjugate as introduced in Section 2. In the following sections, we shall give a non-exhaustive list of new reversiblizations generated by a convex \(f\) under this approach. We refer readers to Sason and Verdu (2016) and the references therein for other possible and common choices of \(f\) that have been investigated in the information theory literature but are not listed in subsequent sections. ### Total variation reversiblization In the first example, we take \(f(t)=|t-1|\), where the \(f\)-divergence generated is the total variation distance. We also see that \(f=f^{*}=(f+f^{*})/2\), and (5.1) becomes, for \(L(x,y)\neq L_{\pi}(x,y)\) with both non-zero, \[F_{f}(x,y)=|L(x,y)-L_{\pi}(x,y)|,\] that we call the total variation reversiblization. ### Squared Hellinger reversiblization In the second example, we take \(f(t)=(\sqrt{t}-1)^{2}\), where the \(f\)-divergence generated is the squared Hellinger distance as introduced in Section 3.2. We also see that \(f=f^{*}=(f+f^{*})/2\), and (5.1) becomes, for \(L(x,y)\neq L_{\pi}(x,y)\) with both non-zero, \[F_{f}(x,y)=(\sqrt{L(x,y)}-\sqrt{L_{\pi}(x,y)})^{2},\] that we call the squared Hellinger reversiblization. ### Jensen-Shannon reversiblization In the third example, we take \(f(t)=t\ln t-(1+t)\ln((1+t)/2)\), where the \(f\)-divergence generated is the Jensen-Shannon divergence as introduced in Definition 3.1.
We also see that \(f=f^{*}=(f+f^{*})/2\), and (5.1) becomes, for \(L(x,y)\neq L_{\pi}(x,y)\) with both non-zero, \[F_{f}(x,y)=L_{\pi}(x,y)\ln\frac{L_{\pi}(x,y)}{L(x,y)}-(L(x,y)+L_{\pi}(x,y))\ln\left(\frac{L(x,y)+L_{\pi}(x,y)}{2L(x,y)}\right),\] that we call the Jensen-Shannon reversiblization. ### Vincze-Le Cam reversiblization In the fourth example, we take \(f(t)=\frac{(t-1)^{2}}{1+t}\), where the \(f\)-divergence generated is the Vincze-Le Cam divergence as introduced in Definition 3.2. We also see that \(f=f^{*}=(f+f^{*})/2\), and (5.1) becomes, for \(L(x,y)\neq L_{\pi}(x,y)\) with both non-zero, \[F_{f}(x,y)=\frac{(L(x,y)-L_{\pi}(x,y))^{2}}{L(x,y)+L_{\pi}(x,y)},\] that we call the Vincze-Le Cam reversiblization. ### Jeffrey reversiblization In the final example, we take \(f(t)=(t-1)\ln t\), where the \(f\)-divergence generated is known as Jeffrey's divergence. We also see that \(f=f^{*}=(f+f^{*})/2\), and (5.1) becomes, for \(L(x,y)\neq L_{\pi}(x,y)\) with both non-zero, \[F_{f}(x,y)=(L(x,y)-L_{\pi}(x,y))(\ln L(x,y)-\ln L_{\pi}(x,y)),\] that we call the Jeffrey reversiblization. ## Acknowledgements Michael Choi would like to thank the kind hospitality of Geoffrey Wolfer and RIKEN AIP for hosting him for a visit, during which this work was initiated. He would also like to thank Youjia Wang for his assistance in producing Figure 1. He acknowledges the financial support from the startup grant of the National University of Singapore and the Yale-NUS College, and a Ministry of Education Tier 1 Grant under the Data for Science and Science for Data collaborative scheme. Geoffrey Wolfer is supported by the Special Postdoctoral Researcher Program (SPDR) of RIKEN.
2302.14005
Quantum key distribution in a packet-switched network
Packet switching revolutionized the Internet by allowing the efficient use of network resources for data transmission. In a previous work, we introduced packet switching in quantum networks as a path to the Quantum Internet and presented a proof-of-concept for its application to quantum key distribution (QKD). In this paper, we outline a three-step approach for key rate optimization in a packet-switched network. Our simulated results show that practical key rates may be achieved in a sixteen-user network with no optical storage capacity. Under certain network conditions, we may improve the key rate by using an ultra-low-loss fiber delay line to store packets during network delays. We also find that implementing cut-off storage times in a strategy analogous to real-time selection in free-space QKD can significantly enhance performance. Our work demonstrates that packet switching is imminently suitable as a platform for QKD, an important step towards developing large-scale and integrated quantum networks.
Reem Mandil, Stephen DiAdamo, Bing Qi, Alireza Shabani
2023-02-27T17:48:17Z
http://arxiv.org/abs/2302.14005v1
# Quantum key distribution in a packet-switched network ###### Abstract Packet switching revolutionized the Internet by allowing the efficient use of network resources for data transmission. In a previous work, we introduced packet switching in quantum networks as a path to the Quantum Internet and presented a proof-of-concept for its application to quantum key distribution (QKD). In this paper, we outline a three-step approach for key rate optimization in a packet-switched network. Our simulated results show that practical key rates may be achieved in a sixteen-user network with no optical storage capacity. Under certain network conditions, we may improve the key rate by using an ultra-low-loss fiber delay line to store packets during network delays. We also find that implementing cut-off storage times in a strategy analogous to real-time selection in free-space QKD can significantly enhance performance. Our work demonstrates that packet switching is imminently suitable as a platform for QKD, an important step towards developing large-scale and integrated quantum networks. ## I Introduction Packet-switched communication networks were introduced as an efficient and scalable alternative to circuit switching in the early sixties [1; 2]. Today, packet switching is the dominant mode of operation in the Internet. Recently we have introduced packet switching as a paradigm for quantum networks using hybrid (classical-quantum) data frames [3]. Inside a frame, a quantum payload is prepended with a classical header containing information for routing and more. Frames travel from sender to receiver through a series of routers which process the header to determine the channel forward based on the current conditions of the network (Fig. 1). This is in contrast to a circuit-switched network where a dedicated channel is established between sender and receiver and reserved until communication is complete (Fig. 1(b)). There are important considerations to be made when deciding whether packet switching or circuit switching is best suited for a network application. In a circuit-switched network, communication across multiple user pairs must be done in a coordinated fashion in order to enable bandwidth sharing (e.g., via time or wavelength-division multiplexing). In a packet-switched network, the communication need not be coordinated in advance. However, frames will experience delays at the intermediate nodes between users due to finite header processing times and, under some traffic conditions, queuing times. Furthermore, packet switching is generally advantageous over circuit switching when the traffic generated by network users is _bursty_, characterized by intervals of activity and intervals of inactivity. One important application in a quantum network is quantum key distribution (QKD), a procedure that allows two remote users (e.g., Alice and Bob) to establish shared encryption keys with information-theoretic security [4; 5]. An important feature of QKD is that it is robust against loss in transmission, meaning that a secure key can still be established even when most of the transmitted signals are lost. This suggests that data loss due to delays in a packet-switched network may be tolerated even without any storage of QKD signals at the routers. Moreover, the optical loss introduced by an imperfect storage medium may also be tolerated. Another important feature of QKD is that key generation is not time-critical, meaning that secure keys need not be generated immediately before their consumption. 
This implies that bursty frame generation may be sufficient since users can establish and store keys for later use. These features motivate our hypothesis that packet switching is imminently suitable as a platform for QKD. One may of course imagine a scenario where network users prefer access to a dedicated quantum channel for their key distribution (e.g., urgent requests or large size requirement for encryption keys). Furthermore, most existing demonstrations of multi-user QKD are conducted over dedicated networks [6; 7; 8; 9; 10; 11; 12] where QKD is the sole task. Figure 1: (a) Packet-switched network. The channel between sender (S) and receiver (R) is not predetermined and can be dynamically reconfigured. (b) Circuit-switched network. A dedicated channel between sender and receiver is set up before data is transferred between them. In this case, it may be beneficial to have a central controller to coordinate QKD among different user pairs, in a fashion similar to circuit switching. However, if we wish to integrate QKD with existing classical networks in order to extend its applications, packet switching is a natural choice. Therefore, the goal of this paper is to demonstrate the feasibility of performing QKD in a packet-switched network. To meet this goal, we take a three-step approach. First, we choose a network routing protocol which describes how a router handles a frame during network delays. In this paper, we will investigate three different routing protocols based on varying optical storage capacity. Second, we simulate the transport of frames in a network operating under a given routing protocol and traffic model. The simulation provides us with statistics for the dynamic channel between each Alice-Bob pair. Lastly, we use the simulated network statistics to predict the maximum secure key rate for each user pair in the network by performing a finite-key analysis. In our previous work [3], we presented a proof-of-concept for QKD in a packet-switched quantum network, and considered a basic model for a two-user communication scenario where the routers had no optical storage capacity. Packet switching in quantum networks is a relatively unexplored topic, but has been proposed as a solution for overcoming scalability issues in previous works [13; 14]. Moreover, Ref. [15] has investigated using leading classical signals to make routing decisions in a QKD network, although packet switching is not considered in their approach. In this work, we analyze a sixteen-user network with and without optical storage capacity at the routers. We also consider a finite-size security analysis for a practical decoy-state QKD protocol. Our results show that QKD is feasible in a packet-switched network with today's commercial technology and that optical storage can be used to improve its performance under certain conditions. This paper is organized as follows. In Sec. II, we describe the routing component of a packet-switched network, including network delays and the routing protocols considered in this work. We also present a router hardware design based on current technology. In Sec. III, we describe the QKD protocol and key rate analysis under consideration. In Sec. IV, we describe our software tool for simulating the dynamics of a packet-switched network. Finally, in Sec. V, we present and discuss the simulated QKD results. ## II Network routing In this section, we describe how the routers in a packet-switched network may handle frames that are intended for a QKD application.
We review the frame structure and outline the network delays and routing strategies considered. ### Network Delays The total time a frame needs to move through a router is the sum of three sources of delay. First, there is the processing delay, \(d_{proc}\), which is the time to process the classical header and determine the next action for the frame as well as regenerate the header when needed. Depending on the network complexity, this delay can range from 10 \(\mu s\) to 1,000 \(\mu s\)[16]. In this work, we assume a \(d_{proc}\) of 100 \(\mu s\). Second, there is the queuing delay, \(d_{queue}\), which is the time the frame must wait before it can be forwarded from a router (after the header has been processed). This quantity depends on the traffic conditions of the network and can range from zero to infinity. Lastly, there is the transmission delay, \(d_{trans}\), which is the time required to transmit the entire frame onto an outgoing link. This is equal to the temporal frame length, \(T_{f}\), which may shrink at each router it traverses depending on the routing protocol employed. ### Routing Protocols The network routing protocol determines what happens to a frame during the network delays \(d_{proc}^{i}\) and \(d_{queue}^{i}\), where the superscript \(i\) is used to index each router in the frame's path from sender to receiver. Fig. 2 depicts a hybrid frame with a quantum payload consisting of weak laser pulses with repetition rate \(R_{t}\) (Hz). The frame may be configured to include a time delay between the end of the header and the beginning of the payload, referred to as the guard time, \(T_{g}\). In general, our network routing protocols fall into one of two categories based on the capacity to store frames at the routers. Figure 2: (a) The classical header and trailer (\(\lambda_{C}\)) and the quantum payload (\(\lambda_{Q}\)) are generated from a laser source and multiplexed into a hybrid data frame using time-division and wavelength-division multiplexing (not shown to scale). (b) The hybrid frame includes guard time—a time delay between the end of the header and the beginning of the payload. For protocols based on no storage, \(d_{trans}^{i}\) (\(=T_{f}^{i}\)) will shrink by a duration equal to \(d_{proc}^{i}+d_{queue}^{i}\) at each router the frame traverses. If \(T_{g}^{i}=0\), this corresponds to the discarding of \(R_{t}(d_{proc}^{i}+d_{queue}^{i})\) pulses in the leading portion of the payload (note that we consider the lengths of the classical header and trailer to be negligible compared to the quantum payload). If \(T_{g}^{i}>0\), then it will serve as a buffer to reduce the number of pulses that are lost (i.e., if \(T_{g}^{i}>d_{proc}^{i}+d_{queue}^{i}\), then no pulses are discarded as the frame shrinks but \(T_{g}^{i}\) will decrease accordingly). Note that in each routing protocol we consider, the guard time is not reset at each router. This alternative approach may be useful for a quantum network application in which the payload carries information that should not be lost. For protocols based on storage, the frame will enter a fiber delay line for a storage time \(T_{s}^{i}\leq d_{proc}^{i}+d_{queue}^{i}\). During \(T_{s}^{i}\), no pulses are discarded from the payload, but they will be subject to the attenuation of the fiber delay line. If \(T_{g}^{i}>0\), then it will again serve as a buffer to reduce \(T_{s}^{i}\) (i.e., if \(T_{g}^{i}>d_{proc}^{i}+d_{queue}^{i}\), then \(T_{s}^{i}=0\) but \(T_{g}^{i}\) will decrease accordingly). 
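The per-router bookkeeping described above reduces to a few lines of arithmetic. The following sketch is our own illustration (the function and variable names are ours; all times are in seconds): it applies the shrinking and buffering rules to one frame at one router, for both the no-storage and the storage cases.

```python
R_T = 1e9  # payload repetition rate R_t (Hz), as assumed later in Sec. IV

def hop(T_f, T_g, delay, storage=False):
    """Update one frame at one router for a delay d_proc + d_queue.

    Returns (new frame length, new guard time, storage time T_s, pulses lost).
    Mirrors the shrinking/buffering rules of Sec. II.B."""
    absorbed = min(T_g, delay)           # the guard time buffers part of the delay
    if storage:
        T_s = delay - absorbed           # = max(0, delay - T_g)
        return T_f - absorbed, T_g - absorbed, T_s, 0
    # No storage: the leading portion of the payload is discarded instead
    lost = int(R_T * (delay - absorbed))
    return max(0.0, T_f - delay), T_g - absorbed, 0.0, lost

# Example: a 2 ms frame with 150 us of guard time meeting a 250 us delay
print(hop(2e-3, 150e-6, 250e-6, storage=False))  # 1e5 pulses discarded
print(hop(2e-3, 150e-6, 250e-6, storage=True))   # stored for 100 us instead
```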
Note that the header may be configured to include a field that tracks the cumulative time spent in storage as a frame traverses the network. In this work, we investigate the following three routing protocols. 1. _No storage during delays_. At each router, a frame will have its payload discarded for a time \(d_{proc}^{i}+d_{queue}^{i}\) and \(d_{trans}^{i}\) will shrink by the same amount. If \(d_{trans}^{i}\) reaches zero, then the frame is discarded from the network. 2. _Storage during delays (unlimited)_. At each router, a frame will enter a fiber delay line for a storage time \(T_{s}^{i}=\max(0,d_{proc}^{i}+d_{queue}^{i}-T_{g}^{i})\) and \(d_{trans}^{i}\) will shrink by \(\min(T_{g}^{i},d_{proc}^{i}+d_{queue}^{i})\). 3. _Storage during delays (limited)_. At each router, a frame will enter a fiber delay line for a storage time \(T_{s}^{i}=\max(0,d_{proc}^{i}+d_{queue}^{i}-T_{g}^{i})\) and \(d_{trans}^{i}\) will shrink by \(\min(T_{g}^{i},d_{proc}^{i}+d_{queue}^{i})\). If the total time a frame has spent in storage reaches a predetermined storage time limit, the frame is immediately discarded from the network. In the no storage routing protocol, network delays introduce a controlled photon loss as a portion of the payload is discarded. In the storage routing protocols, network delays introduce random photon loss in the payload due to the attenuation of the fiber delay line. The regime in which one strategy may dominate over the other therefore depends on factors such as the frame length, the network delays, and the attenuation of the storage line. A more detailed motivation for the two types of routing protocols is provided in Appendix A. To motivate the limited storage routing protocol, we make the observation that the dynamic channel conditions in a packet-switched network are analogous to those in free-space QKD under turbulent conditions. In such scenarios, it has been shown that the key rate can be improved by rejecting key bits when the channel's transmittance is below a threshold [17; 18; 19]. In our case, since the routing history is recorded in the classical header, we can discard frames en-route, which has the additional benefit of reducing network congestion. Another option, more analogous to the technique used in free-space QKD, is to allow all frames to reach the receiver end via the unlimited storage routing protocol, but enforce a storage time limit (STL) in post-processing. That is, frames for which \(\sum_{i}T_{s}^{i}>STL\) will be excluded from key generation. In this work, we compare both options for implementing a cut-off channel transmittance. ### Router Hardware A conceptual router design is shown in Fig. 3. This router behaves as a quantum version of a reconfigurable optical add drop multiplexer (ROADM). Frames may arrive at the router from three different directions (North, East, West) after which a wavelength-division multiplexer is used to separate the quantum payload from the classical header and trailer. The header is fed into a control unit to decide how to further process the frame. Once the header has been processed, the frame will be forwarded towards the next node in the network (i.e., to another router via the East or West degree, or to a receiver via an Output channel). The control unit will regenerate the header with updated fields for the quantum payload duration, guard time, and time spent in storage prior to transmitting the frame to the next node. 
We assume the control unit is capable of processing up to \(k\) headers simultaneously and that the router has access to \(q\) variable optical fiber delay lines via its Add/Drop channels. To achieve an arbitrary delay, each fiber delay line can be combined with an active optical switch (not illustrated in figure). The router can also discard frames or partially discard the quantum payload via its Drop channels. The use of these channels depends on the network routing protocol being implemented. We also assume the router to have a minimum insertion loss of 4 dB, which accounts for the circulators, multiplexers, and optical switch fabric (excludes the fiber delay lines). Therefore the total loss (dB) at each router is given by \[loss_{r}^{i}=T_{s}^{i}v_{g}\alpha_{s}+4\ dB, \tag{1}\] where \(v_{g}\) is the speed of light in fiber and \(\alpha_{s}\) is the attenuation coefficient (dB/km) of the fiber storage line. Furthermore, we assume the router may compensate the polarization drift of all incoming channels by using a feedback signal generated from the measured drift of the classical pulses in the header. Lastly, we note that this router design is directly suitable for the network configuration in Fig. 4 although additional input fibers and ROADM degrees may be added to the router depending on the desired connectivity of the network. We also consider hardware that is directly suitable for the hybrid frame in Fig. 2 although the hardware can be modified according to the multiplexing scheme employed for the frame. ## III QKD Security Analysis Practical implementations of QKD adopt the decoy-state method [20, 21, 22, 23] to allow for use of a weak pulsed laser source instead of an ideal single-photon source. In this work, we consider a decoy-state asymmetric coding BB84 protocol [24] and we adopt the finite-size security analysis in Ref. [25] to calculate the secure key rate. In this section, we provide a brief summary of the QKD protocol and then describe our strategy for key rate optimization in a packet-switched network. ### Protocol Description _1. Preparation._ Alice chooses a bit value \(b_{A}\) uniformly at random. Then, she selects a basis \(\in\{X,Z\}\) with probabilities \(q_{x}\) and \(1-q_{x}\), respectively, and an intensity \(k_{i}\in\mathcal{K}:=\{\mu_{1},\mu_{2},\mu_{3}\}\) with probabilities \(p_{\mu_{1}}\), \(p_{\mu_{2}}\), and \(p_{\mu_{3}}=1-p_{\mu_{1}}-p_{\mu_{2}}\), respectively. If Alice chooses the \(X\) basis, she prepares a weak laser pulse of the chosen intensity in the horizontal polarization state \(|H\rangle\) for the bit value \(b_{A}=0\) or vertical state \(|V\rangle\) for the bit value \(b_{A}=1\). If the \(Z\) basis is chosen, she prepares the diagonal (45-degrees) polarization state \(|D\rangle\) for the bit value \(b_{A}=0\) or antidiagonal (135-degrees) state \(|A\rangle\) for the bit value \(b_{A}=1\). Lastly, she sends her prepared state to Bob. _2. Measurement._ Bob selects a basis \(\in\{X,Z\}\) with probabilities \(q_{x}\) and \(1-q_{x}\), respectively. Then, he performs a measurement in the chosen basis and records the outcome in a bit value \(b_{B}\). More precisely, he assigns \(b_{B}=0\) for a click in single-photon detector \(D_{0}\) and \(b_{B}=1\) for a click in detector \(D_{1}\). If both detectors click, he assigns a random value to \(b_{B}\). If neither detector clicks, he does not assign any value. _3. Basis reconciliation._ Alice and Bob announce their basis and intensity choices over an authenticated public channel. 
Based on the information announced, Alice and Bob identify their raw keys \(b_{A}\) and \(b_{B}\) from the instances where they both chose basis \(X\) and Bob observed a detection event. Note that all intensity levels are used for the key generation [25]. They use the instances where they both chose basis \(Z\) and Bob observed a detection event for phase error estimation. _4. Post-processing._ Alice and Bob perform classical error correction and privacy amplification on their raw key pair to extract a secure key. Figure 3: Hardware design of a router in a packet-switched network. A frame arrives at the router from the North, East, or West degree. Channels in the North degree are directly connected to senders. The links in the East and West degrees consist of a fiber directly connected to another router; a circulator is used to allow for bidirectional transmission. A frame passes through a wavelength-division multiplexer to separate the classical and quantum information. The classical information is processed in the control unit, which signals to the optical switch fabric where to route the frame (i.e., to another router via the East or West degree, or to a receiver via an Output channel) and regenerates the header prior to transmitting the frame. Add/Drop channels are used to access variable optical fiber delay lines. Drop channels are used for discarding pulses or entire frames. PD: photodiode; PBS: polarizing beam splitter; EPC: electronic polarization controller. ### Key Rate Optimization A convenient feature of QKD security proofs is that the quantum channel between users is assumed to be fully controlled by an adversary and thus we do not need to develop a new security proof for QKD in a packet-switched network. One may ask whether we need to trust the routers which control the discarding of pulses and frames. If a security proof allows for the adversary to fully control Bob's post-selection process, as is the case for the proof adopted in this work, then we need not trust the routers. Nonetheless, packet switching poses a unique challenge to QKD due to the dynamic nature of the quantum channel between users. In order to maximize the secure key rate in the decoy-state protocol described above, we must optimize over the free parameters \(\{q_{x},p_{\mu_{1}},p_{\mu_{2}},\mu_{1},\mu_{2}\}\)[25] which requires knowledge of the average channel transmittance, \(\langle\eta_{tot}\rangle\), where the average is taken over all frames contributing to the key. Furthermore, in order to conduct a finite-size analysis, we must determine the total number of QKD states, \(N\), passed to Bob. Depending on the network routing protocol employed, this may not be equivalent to the number of states transmitted by Alice, \(N_{0}\), due to discarding at the routers. Therefore, in order to predict the maximum secure key rates from QKD in a packet-switched network, we need a tool for assessing \(\langle\eta_{tot}\rangle\) and \(N\) for each user pair. One may consider an analytic approach to gathering these statistics, however this quickly becomes infeasible for increased complexity of the network. The theory of Jackson networks [26] allows us to calculate the average queuing delay at each router quite simply, but only if the network obeys a specific traffic model. Instead, we build a network simulation tool to numerically determine the channel statistics. Details of the key rate analysis, including noise and detection parameters, are given in Appendix B. 
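As a toy illustration of the preparation, measurement and sifting steps above, the following Monte Carlo sketch may be useful. It is ours, not the authors' analysis: the parameter values are arbitrary, and the detection model (a Poissonian source, so a pulse of intensity \(\mu\) produces a click with probability \(1-e^{-\eta\mu}\), ignoring dark counts and misalignment) is an assumption we add.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (ours, not from the paper)
q_x = 0.9                            # probability of choosing the X basis
mus = np.array([0.5, 0.1, 1e-3])     # intensities mu_1, mu_2, mu_3
p_mu = np.array([0.7, 0.2, 0.1])     # intensity probabilities
eta, n = 1e-2, 1_000_000             # channel transmittance and pulses sent

bit_a = rng.integers(0, 2, n)                   # Alice's raw bits b_A
basis_a = rng.random(n) < q_x                   # True = X basis
basis_b = rng.random(n) < q_x
mu = rng.choice(mus, size=n, p=p_mu)            # decoy-state intensities

# Assumed detection model: click probability 1 - exp(-eta * mu)
click = rng.random(n) < 1.0 - np.exp(-eta * mu)

keep = click & basis_a & basis_b                # both chose X and Bob detected
raw_key = bit_a[keep]                           # all intensities kept, per [25]
print(f"{raw_key.size} sifted key bits from {n} pulses")
```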
## IV Network Simulation In this section, we first provide a high-level description for the sequence of events that occur as a frame travels from sender to receiver in a packet-switched network and then describe our software tool for simulating these events in order to extract the dynamic channel statistics. We model the arrival of frames into the network as follows. Each sender is allowed to transmit frames one at a time, following an exponentially distributed inter-arrival time with average \(1/\gamma\). Note that all senders can be active simultaneously. We assume a repetition rate \(R_{t}=1\) GHz for the signals in the quantum payload. The destination for each frame is assigned randomly from the list of all receivers in the network. A frame travels from a sender towards its default router (i.e., the router to which the sender is directly connected). The default router and all subsequent routers a frame encounters will forward the frame according to the path determined by the routing algorithm for the network. The routing algorithm calculates the least-cost path from sender to receiver, where the cost of a path is the sum of the link costs along the path. In this work, we consider a load-insensitive routing algorithm, meaning the cost of each link in the network does not reflect its level of congestion and is determined solely by its physical length. Therefore, the least-cost path is simply the shortest path. Note that in the case of multiple least-cost paths, the router will select one at random. In general, the shortest path may not have the highest expected transmittance, depending on the number of routers it contains. In this case, the cost of the path may be modified to include router loss, although this scenario is not applicable in this work. A frame can be forwarded from a router only if there are fewer than \(c\) frames simultaneously being forwarded from the router and there are no frames preceding it in the queue (we refer to \(c\) as the number of servers for the queue); otherwise, the frame must join the queue. A frame may join the queue only if there are fewer than \(q\) frames already in the queue (we refer to \(q\) as the capacity of the queue); otherwise, the frame will be discarded. Frames will be forwarded from the queue according to a first-come first-served discipline. In order to simulate these events in a network, we developed a software tool based on a simulation method known as discrete-event simulation (DES) [27]. We build on the DES Python package _SimPy_[28] for the timing and resource management aspects of the network. For the network configuration, including path calculations and topology initialization, we use the Python package _NetworkX_[29]. The first step in using our simulation is to configure a topology of nodes (i.e., users and routers) and links (i.e., connections between nodes). Each node is able to generate frames as well as process any incoming frames. If the node is a sender, frames at the node do not undergo header processing and the frame need only wait to be sent into the network according to the frame arrival model. If the node is a router or a receiver, frames at the node will undergo a processing delay. In our simulation, routers can process \(k\gg 1\) headers simultaneously. In general, if \(k\) is small, the frames may experience a queuing delay prior to header processing. In our simulation, the queue in each router has \(c=1\) server and unlimited storage capacity (\(q\rightarrow\infty\)). 
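A minimal sketch of this kind of discrete-event model is shown below. It is our own illustration, not the authors' simulation tool: we assume only the public _SimPy_ API, a single router with a \(c=1\) server queue, and illustrative values for the inter-arrival and transmission delays.

```python
import random
import simpy

D_PROC = 100e-6    # header processing delay d_proc (s), as in Sec. II.A
D_TRANS = 200e-6   # transmission delay d_trans = T_f (illustrative)
MEAN_IAT = 500e-6  # mean frame inter-arrival time 1/gamma (illustrative)

def sender(env, router, d_queues):
    """Spawn frames with exponentially distributed inter-arrival times."""
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_IAT))
        env.process(route_frame(env, router, d_queues))

def route_frame(env, router, d_queues):
    """One frame at one router: header processing, then a c = 1 server queue."""
    yield env.timeout(D_PROC)             # headers processed concurrently (k >> 1)
    t0 = env.now
    with router.request() as req:         # first-come first-served queue
        yield req
        d_queues.append(env.now - t0)     # queuing delay d_queue
        yield env.timeout(D_TRANS)        # server busy while the frame is forwarded

random.seed(0)
env = simpy.Environment()
router = simpy.Resource(env, capacity=1)  # c = 1 server, unbounded queue
d_queues = []
env.process(sender(env, router, d_queues))
env.run(until=1.0)                        # simulate one second of traffic
print(f"{len(d_queues)} frames, mean d_queue = {sum(d_queues)/len(d_queues):.2e} s")
```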
The actions on the frame during the processing and queuing delays will depend on the network routing protocol, as outlined in Sec. II.2. Each frame in the network holds attributes (corresponding to header fields) for the storage time limit, how long it has spent in storage, the temporal frame length, the guard time, the path it has travelled, and its status (in transit, arrived, or discarded). We can simulate the network dynamics for a specified duration and collect data on the number of routed QKD signals, \(N\), as well as the path they have travelled, i.e., the number of routers traversed and the average total time spent in storage, \(\langle\sum_{i}T_{s}^{i}\rangle\). Note that signals from different frames will have a different total storage time, and so we take an average over all frames. We may then determine the average channel transmittance for each user-pairing, \[\langle\eta_{tot}\rangle=10^{(-\alpha L-\langle\sum_{i}loss_{r}^{i}\rangle)/ 10}, \tag{2}\] where \(\alpha\) is the attenuation coefficient (dB/km) of the network links, \(L\) is the distance between sender and receiver, and \(\langle\sum_{i}loss_{r}^{i}\rangle\) is the average loss over all routers in the channel, found by Eq. 1. The simulated \(N\) and \(\langle\eta_{tot}\rangle\) may then be used by senders in the network to optimize their decoy-state parameters. Note that the network statistics correspond to a particular network configuration; namely, the topology, number of users, frame inter-arrival time, and routing protocol. Thus, these parameters must be known and fixed prior to a QKD session in order for user pairs to have accurate knowledge of their transmittance statistics. This is feasible in practice. For example, the network can employ traffic shaping [30] to ensure that frames from each sender arrive one at a time with inter-arrival times following the intended distribution. The remaining parameters typically do not change very frequently and their status can be updated as needed to all network users. ## V Results and discussion In order to demonstrate the feasibility of performing QKD in a packet-switched network, we analyze the network shown in Fig. 4. We choose this topology as it combines properties of star, ring, and dumbbell networks. We emphasize, however, that our approach may be used to test an arbitrary network configuration. In our simulated network, sixteen users are connected through four routers by standard single-mode fiber. In practice, each user can operate as a sender or a receiver, but we assume that users do not operate in both modes simultaneously. Thus, half of the users are designated as senders ("Alices") and half as receivers ("Bobs"). In this section, we present the secure key rates _per sent pulse_ for Alice-Bob pairs separated by one, two, and three routers. We test each of the three routing protocols outlined in Sec. II.2. ### No Storage During Delays In Fig. 5, we show the key rate performance in a network with no storage during delays. We fix the number of frames sent between each user pair and examine the effects of the average frame inter-arrival time \(1/\gamma\), the initial frame length \(T_{f}^{0}\), and the initial guard time \(T_{g}^{0}\). In this routing protocol, these parameters affect the data size, \(N\), for key generation. The top and bottom rows contain the results for zero and non-zero guard times, respectively. The columns from left to right show the results for a user pair separated by one, two, and three routers. We interpret these results as follows. 
Firstly, the secure key rate is expected to decrease with higher channel loss. Therefore, we observe the highest key rates for \(A_{31}\) and \(B_{32}\) and the lowest for \(A_{22}\) and \(B_{31}\). We note that due to the symmetry of the network configuration, there are negligible differences between the results of different user pairs with the same separation. For small values of \(1/\gamma\), higher network traffic results in larger \(d_{queue}\) leading to more pulses being discarded and thus smaller \(N\). As a result, we observe a decrease in the key rate as \(1/\gamma\) decreases. In Figs. 5(a)-5(c), we observe the effect of \(T_{f}^{0}\). As this parameter increases, more pulses are generated. However, longer frames have a larger \(d_{trans}\) which increases the time for which the server is occupied at each router and therefore increases \(d_{queue}\). Thus we expect the upward trend in the key rate to eventually stop, as is observed in Fig. 5(c). In Figs. 5(d)-5(f), we observe the effect of \(T_{g}^{0}\) for a fixed \(T_{f}^{0}\). A larger guard time means fewer pulses are discarded during delays but smaller payloads are generated. Due to this effect, we see a rise then fall in the key rate as \(T_{g}^{0}\) increases. Furthermore, for a given \(T_{f}^{0}\), a non-zero guard time is shown to slightly enhance the key rate. Ultimately, these results suggest that QKD can succeed in a packet-switched network even without any optical storage capacity at the routers. ### Storage During Delays (Unlimited) In Fig. 6, we show the key rate performance in a network with storage during delays, where frames have no storage time limit. We fix the number of frames sent between each user pair and examine the effect of the attenuation coefficient, \(\alpha_{s}\), for the fiber delay lines used as storage at the routers which will determine \(\langle\eta_{tot}\rangle\) for the QKD channel. The top and bottom rows consider scenarios of long and short frame lengths, respectively, where the ratio of frame length to \(1/\gamma\) is fixed in each scenario such that the average network traffic is the same. The left and right columns consider zero and non-zero guard times, respectively. For each user pair, we compare the results of this routing protocol to the no storage routing protocol under the same network parameters. We interpret these results as follows. Firstly, the secure key rate decreases exponentially with \(\alpha_{s}\), as expected. A non-zero guard time is again shown to enhance the key rate since it reduces the storage time of each payload, which increases \(\langle\eta_{tot}\rangle\). Guard time also reduces \(d_{queue}\) since it shrinks \(d_{trans}\) at each router. The enhancement is more pronounced in the long frames scenario since the guard time is \(\gg d_{proc}\) in this case. Figure 4: Sixteen-user network for simulation. Each of the four routers is connected to two Alices and two Bobs. The links are assumed to be standard single-mode optical fiber (0.2 dB/km) spanning 20 km between routers and 5 km between each user and their default router. We observe that the short frames scenario is generally more robust to increasing \(\alpha_{s}\), which can be attributed to smaller storage times due to a smaller \(d_{trans}\). The distributions of the storage time in the long and short frames scenarios are shown in Fig. 8 for the case of zero guard time. In Figs. 6(a) and 6(b), we observe that the no storage routing protocol is generally superior when \(\alpha_{s}>0.01\) dB/km.
We note that while attenuation coefficients as low as 0.14 dB/km have been achieved over telecom wavelengths using state-of-the-art technology [31], it is unrealistic to consider an attenuation much smaller than this. For a more efficient storage medium, we require long-lived quantum memories. In Figs. 6(c) and 6(d), we do not extract any secure keys with the no storage routing protocol except in the case of one router separating users. This can be explained since the frame length is on the order of \(d_{proc}\), so there are zero to few non-discarded pulses from each payload. Our results suggest that, for short frames, storage during network delays is a better strategy than discarding pulses. The opposite holds true for frame lengths \(\gg d_{proc}\) when we consider realistic fibers as our storage medium. This finding is important since frame lengths in a packet-switched network may have practical constraints. As mentioned previously, we may enforce a STL in post-processing, analogous to applying a cut-off \(\langle\eta_{tot}\rangle\), in order to improve the key rate. Fig. 7 shows the results for the same parameters as in Fig. 6, but with frames excluded from key generation if their storage time reached the STL. We consider an ultra-low-loss fiber with \(\alpha_{s}=0.16\) dB/km as our storage medium and examine the effect of the STL duration. It is clear that implementing a STL enhances the key rate in each scenario considered, and most significantly for frame lengths \(\gg d_{proc}\). In Fig. 7(a), the optimal STL for users separated by one, two, and three routers is 200 \(\mu s\), 300 \(\mu s\), and 400 \(\mu s\), respectively. From Fig. 8(a), we see that these STLs preserve 82%, 70%, and 58% of frames across the user pairs. In Fig. 7(b), the optimal STL is roughly 150 \(\mu s\) for all user pairs and the key rates approach those of the no storage routing protocol. Figure 5: Secure key rates in a network with no storage during delays. A total of 18,750 frames are generated by Alice in each user pair. The finite data size is \(N\approx 10^{12}\). In plots (a)-(c), we fix the initial guard time, \(T_{g}^{0}=0\) and vary the initial frame length, \(T_{f}^{0}\) and average frame inter-arrival time, \(1/\gamma\). In plots (d)-(f), we fix \(T_{f}^{0}=2,000\)\(\mu s\) and vary \(T_{g}^{0}\) and \(1/\gamma\). Columns (left to right) are for user pairs \(A_{31}\) and \(B_{32}\), \(A_{42}\) and \(B_{22}\), and \(A_{22}\) and \(B_{31}\) of Fig. 4. Color map changes from white to purple as the key rate increases. ### Storage During Delays (Limited) In Fig. 9, we show the key rate performance in a network with storage during delays, where frames have a storage time limit. Once again, we fix the number of frames sent between each user pair and consider \(\alpha_{\text{s}}=0.16\) dB/km. We examine the effect of the STL duration under various network parameters and in each case we compare the results with the unlimited storage routing protocol where a STL is implemented in post-processing. Note that for the network parameters in the previous subsection, the two methods for implementing a cut-off transmittance produce very similar results. Here we show scenarios in which discarding frames en-route provides a significant advantage due to its mitigation of network congestion. 
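The post-processing variant of the storage time limit amounts to a simple filter over the recorded per-frame storage times. The sketch below is our own illustration (toy storage-time data, an assumed fiber group velocity \(v_{g}\approx 2\times 10^{5}\) km/s, and a 50 km three-router path, i.e., 5 + 20 + 20 + 5 km as in Fig. 4); it applies Eqs. (1) and (2) to the surviving frames for several cut-offs.

```python
import numpy as np

rng = np.random.default_rng(2)

ALPHA, ALPHA_S = 0.2, 0.16  # link / storage-fiber attenuation (dB/km)
V_G = 2.0e5                 # speed of light in fiber (km/s), an assumed value

def mean_transmittance(storage_times, L_km, stl):
    """Post-processing storage-time limit: keep frames whose total storage
    time is at most stl, then evaluate Eqs. (1) and (2) on the survivors."""
    kept = storage_times[storage_times.sum(axis=1) <= stl]
    loss_r = kept * V_G * ALPHA_S + 4.0        # Eq. (1): per-router loss (dB)
    mean_loss = loss_r.sum(axis=1).mean()      # <sum_i loss_r^i> over frames
    return kept.shape[0], 10 ** (-(ALPHA * L_km + mean_loss) / 10.0)  # Eq. (2)

# Toy data: 10,000 frames across three routers with exponentially
# distributed per-router storage times (illustrative only)
ts = rng.exponential(100e-6, size=(10_000, 3))
for stl in (150e-6, 300e-6, float("inf")):
    n_kept, eta = mean_transmittance(ts, 50.0, stl)
    print(f"STL = {stl:.1e} s: kept {n_kept} frames, <eta_tot> = {eta:.3e}")
```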
## VI Outlook and Conclusions In this work, we have developed a framework for key rate optimization in a packet-switched network and assessed QKD performance in relation to several network parameters such as frame length, guard time, frame inter-arrival time, and storage efficiency. Notably, we found that practical secure key rates can be achieved without any optical storage capacity in the network and that guard time can generally be used to mitigate the effects of network delays. We also found that the transmittance threshold strategy used in free-space QKD can be applied in a packet-switched network to significantly enhance the key rate by limiting the permissible storage time of frames. We believe our results pave the way for future exploration of quantum applications in a packet-switched network. Future areas of investigation may include examining more complex network topologies and perhaps a topology deployed in the field. Given that our simulation tool can accommodate arbitrary network configurations, hardware specifications, and traffic models, it can be used to establish a performance benchmark for real-world systems. The simulation tool, which we aim to make publicly available in the near future, can also be extended to examine the performance of other quantum communication tasks besides QKD such as entanglement distribution. An interesting question to address is how QKD in a packet-switched network compares to a circuit-switched network. While we have a general idea of when packet switching outperforms circuit switching based on classical networks, determining specific conditions for this advantage in a quantum network may be useful. Lastly, future work may consider the security of QKD protocols other than BB84, such as protocols where all signals sent by Alice are required to be measured by Bob. Such protocols may require us to re-evaluate the security at the routers in a packet-switched network.
2308.01335
On the origin of planetary-mass objects in NGC1333
The dominant formation mechanism of brown dwarfs and planetary mass objects in star-forming regions is presently uncertain. Do they form like stars, via the collapse and fragmentation of cores in Giant Molecular clouds, or do they form like planets in the discs around stars and are ejected via dynamical interactions? In this paper, we quantify the spatial distribution of substellar objects in NGC1333, in particular focusing on planetary-mass objects that have been the target of recent deep imaging observations. We find that these objects have a spatial distribution that is indistinguishable from the stars, and more massive brown dwarfs. We also analyse N-body simulations and find that a population of ejected planets would have a significantly different spatial and kinematic distribution to stars, and brown dwarfs that also formed through gravitational collapse and fragmentation. We therefore conclude that the low-mass substellar objects in NGC1333 formed more like stars than planets, although we predict that a population of hitherto undetected ejected planetary mass objects may be lurking in this, and other star-forming regions.
Richard J. Parker, Catarina Alves de Oliveira
2023-08-02T18:00:00Z
http://arxiv.org/abs/2308.01335v1
# On the origin of planetary-mass objects in NGC 1333 ###### Abstract The dominant formation mechanism of brown dwarfs and planetary mass objects in star-forming regions is presently uncertain. Do they form like stars, via the collapse and fragmentation of cores in Giant Molecular Clouds, or do they form like planets in the discs around stars and are then ejected via dynamical interactions? In this paper, we quantify the spatial distribution of substellar objects in NGC 1333, in particular focusing on planetary-mass objects that have been the target of recent deep imaging observations. We find that these objects have a spatial distribution that is indistinguishable from the stars, and more massive brown dwarfs. We also analyse _N_-body simulations and find that a population of ejected planets would have a significantly different spatial and kinematic distribution to stars, and brown dwarfs that also formed through gravitational collapse and fragmentation. We therefore conclude that the low-mass substellar objects in NGC 1333 formed more like stars than planets, although we predict that a population of hitherto undetected ejected planetary mass objects may be lurking in this, and other star-forming regions. keywords: stars: formation - (stars:) brown dwarfs - planets and satellites: gaseous planets - stars: kinematics and dynamics - open clusters and associations: individual: NGC 1333 - methods: numerical ## 1 Introduction Star and planet formation occur contemporaneously, yet they are often treated as distinct or separate processes. This simplification becomes unviable when assessing the substellar population in star-forming regions. Observations show that there is one brown dwarf (\(m<0.08\,\mathrm{M}_{\odot}\)) for every \(\sim\)2-7 H-burning stars in star-forming regions (e.g. Barrado y Navascues et al., 2002; Andersen et al., 2008; Geers et al., 2011; Muzic et al., 2015; Pearson et al., 2020; Kubiak et al., 2021), and most authors consider the brown dwarf regime an extension of the same process that formed stars, with a continuous mass function into the substellar regime (e.g. Chabrier et al., 2014), though see Thies & Kroupa (2008). The origin of free-floating planetary mass objects is even less clear, especially as their masses are notoriously difficult to determine (e.g. Baraffe et al., 2002; Feiden & Chaboyer, 2012; Canty et al., 2013; Lueber et al., 2022), and the mass range of these objects overlaps with brown dwarfs (Esplin & Luhman, 2017; Gagne et al., 2017; Lodieu et al., 2021). Therefore, a population of free-floating planets in a star-forming region may have simply formed "like stars", i.e. from the gravitational collapse and fragmentation of Giant Molecular Clouds (e.g. Low & Lynden-Bell, 1976; Padoan & Nordlund, 2004; Gahm et al., 2007; Haworth et al., 2015); or could instead be the result of planet-planet scattering (e.g. Chatterjee et al., 2008; Boley et al., 2012; Veras & Raymond, 2012; Smullen et al., 2016), or direct encounters between two stars, leading to the ejection of planets around one or both stars (Bonnell et al., 2001; Parker & Quanz, 2012; Daffern-Powell et al., 2022). Recently, Scholz et al. (2022) used the numbers of free-floating planets from simulations to predict how many such objects could be observed with the James Webb Space Telescope. Following this, Scholz et al. (2022) performed deep imaging on substellar objects in NGC 1333 to determine whether these objects hosted discs. Scholz et al.
(2023) established that only one out of the six least massive PMOs in NGC 1333 hosts a disc, leading them to speculate that these objects may have formed more like planets than like stars. Given the prospect of detailed high-resolution spectra with JWST (e.g. the NIRSpec instrument, Jakobsen et al., 2022), which will enable accurate mass and velocity determinations of substellar objects, it is timely to determine what - if any - signatures in the spatial and kinematic distribution of substellar objects could be used to pinpoint their physical origins. Previous work has shown that if brown dwarfs are a continuous extension of star formation, we would not expect a significantly different spatial distribution for the brown dwarf population (Parker & Andersen, 2014). However, a similar analysis of the distribution of ejected planets has not been performed. In this paper, we exploit the likely complete census of stars and brown dwarfs in the NGC 1333 star-forming region (Luhman et al., 2016) to calculate the spatial distribution of brown dwarfs and planetary-mass objects, and how this compares to the stars. We then calculate the same metrics in _N_-body simulations whose initial conditions were derived from a previous analysis of the spatial distribution of stars in NGC 1333 (Parker & Alves de Oliveira, 2017). However, these simulations differ from those in Parker & Alves de Oliveira (2017) in that - in addition to a continuous IMF into the brown dwarf mass regime - they contain substellar objects on orbits around stars, which may be ejected from their host stellar system due to dynamical encounters. The paper is organised as follows. In Section 2 we describe the observational data we used to calculate the spatial distributions of stars, brown dwarfs and planetary mass objects in NGC 1333. In Section 3 we describe our methods to measure the spatial distributions, and also to set up the _N_-body simulations. In Section 4 we present our results, in Section 5 we provide a discussion, and we conclude in Section 6. ## 2 Observational data We use the same dataset as in Parker & Alves de Oliveira (2017), which in turn is based on the Luhman et al. (2016) census of NGC 1333. Luhman et al. (2016) spectroscopically confirmed the membership of tens of brown dwarfs and stars, resulting in a final sample of 203 stars. Scholz et al. (2023) performed deep imaging on a subset of 14 of the brown dwarfs from Luhman et al. (2016), as well as an additional substellar object discovered by Esplin & Luhman (2017). We add this object to our sample, and use the spectral types to estimate the masses: for stars earlier than M0 we use the temperature scale from Schmidt-Kaler (1982), for sources between M0 and M9.5 the scale from Luhman et al. (2003), and for L dwarfs the scale from Lodieu et al. (2008). The masses were derived in the same way as in Parker & Alves de Oliveira (2017), assuming a distance of 235 pc to NGC 1333. Using this method, the masses of the L0 objects were set to 0.012 M\({}_{\odot}\), the L1 objects were set to 0.010 M\({}_{\odot}\) and the L3 object was assigned a mass of 0.005 M\({}_{\odot}\) in our subsequent analysis. Note that our results are not dependent on the exact mass values, but rather the relative masses, and we assume that the L-type objects are the lowest mass members of NGC 1333. We show the positions of the objects in the Luhman et al. (2016) sample in Fig. 1. The 15 planetary mass objects discussed in Scholz et al.
(2023) are shown by the blue symbols, with the new object from Esplin & Luhman (2017) shown by the blue cross. The census of NGC 1333 is thought to be complete within the ACIS-I field, as discussed in Luhman et al. (2016), and this is shown within the dashed line. ## 3 Methods In this section we first describe our methods to quantify the spatial and kinematic distributions of stars, brown dwarfs and planetary mass objects, before describing the _N_-body simulations with which we compare the observations. ### Quantifying the spatial distributions of objects There are multiple methods in the literature for quantifying the spatial distribution of stars and substellar objects in star-forming regions (see Parker & Goodwin, 2015; Blaylock-Squibbs et al., 2022, for a discussion of the different methods). We utilise two different techniques to quantify the spatial distributions of stars, brown dwarfs and planetary mass objects in the observed census of NGC 1333 and in our _N_-body simulations: \(\Lambda_{\rm MSR}\) (Allison et al., 2009) and \(\Sigma-m\) (Maschberger & Clarke, 2011). An enormous amount of confusion abounds in the literature when assessing the advantages and disadvantages of different techniques for quantifying spatial distributions, including the amount of mass segregation, in a star-forming region. Misunderstandings and apparent contradictions often occur because mass segregation is defined in different ways. For example, \(\Lambda_{\rm MSR}\), which we describe below, measures whether a subset of objects is closer together than a randomly chosen subset. \(\Sigma-m\) measures the relative local surface densities of the objects in a chosen subset. Typically, a smooth, centrally concentrated and mass-segregated star cluster will show mass segregation in \(\Lambda_{\rm MSR}\) and high surface densities of the most massive objects in \(\Sigma-m\). Figure 1: Map of objects in NGC 1333. The area within the dashed lines is the ACIS-I field from Luhman et al. (2016), which is observationally complete. The 15 planetary-mass objects discussed in Scholz et al. (2023) are shown in blue, with the object not in the original Luhman et al. (2016) census shown by the blue cross (Esplin & Luhman, 2017). However, in an association with multiple groups, or nodes, of stars, the massive stars may be as spread out as a randomly chosen subset, yet they may be in areas of higher than average surface density (and such behaviour often occurs in associations due to dynamical evolution, Parker et al., 2014). \(\Lambda_{\rm MSR}\) and \(\Sigma-m\) measure different properties, but both have the advantage that the subset of interest can be any group of objects, as defined by the user. Often, the subset of the most massive stars is of interest, but in this paper we are interested in planetary mass objects and brown dwarfs. The strength of \(\Lambda_{\rm MSR}\) and \(\Sigma-m\) is that they are relative measures, and are not hamstrung by e.g. the absence of massive stars, as erroneously asserted by Guszejnov et al. (2022). #### 3.1.1 The \(\Lambda_{\rm MSR}\) mass segregation ratio The mass segregation ratio, \(\Lambda_{\rm MSR}\), is calculated by constructing a minimum spanning tree (MST): the graph of shortest total path length connecting a set of points, or nodes, with no closed loops (Prim, 1957). We construct an MST of the chosen subset, which contains \(N_{\rm MST}\) objects, and calculate its length, \(l_{\rm subset}\).
We then calculate the average MST length in the star-forming region by taking a set of 100 randomly chosen subsets, each containing the same number of objects as the chosen subset, and calculating the average MST length from these, \(\langle l_{\rm average}\rangle\). We conservatively estimate the lower (upper) uncertainty as the length that lies 1/6 (5/6) of the way through an ordered list of the random subset lengths, corresponding to a 66 per cent deviation from the random length, \(\langle l_{\rm average}\rangle\). This is summarised in the following equation: \[\Lambda_{\rm MSR}=\frac{\langle l_{\rm average}\rangle}{l_{\rm subset}}\,{}^{+\sigma_{5/6}/l_{\rm subset}}_{-\sigma_{1/6}/l_{\rm subset}}, \tag{1}\] where \(\sigma_{1/6}\) and \(\sigma_{5/6}\) are the lengths that lie 1/6 and 5/6 of the way through the ordered list of random MST lengths. If a subset of objects is mass-segregated (i.e. closer together than the average subset), then \(\Lambda_{\rm MSR}\gg 1\). If the objects in the chosen subset are more spread out than the average objects in the region (as might be expected for planetary-mass objects or brown dwarfs) then \(\Lambda_{\rm MSR}\ll 1\). If no mass segregation is present, \(\Lambda_{\rm MSR}=1\). There are two ways of determining \(\Lambda_{\rm MSR}\) to assess the significance of any deviation from unity. The original method in Allison et al. (2009) starts with the \(N_{\rm MST}\) most massive objects, and then calculates \(\Lambda_{\rm MSR}\) for successively larger \(N_{\rm MST}\) values. Allison et al. (2009) used this method to show that the four most massive stars in the Orion Nebula Cluster (i.e. the Trapezium system) are mass-segregated, but the amount of mass segregation decreases with larger \(N_{\rm MST}\), such that when the \(N_{\rm MST}=20\) most massive objects are considered, there is no mass segregation (by definition \(\Lambda_{\rm MSR}=1\) when \(N_{\rm MST}\) includes all the stars in the region). This version of \(\Lambda_{\rm MSR}\) has been used to quantify mass segregation (or inverse mass segregation) of massive stars, brown dwarfs and pre-stellar clumps/cores (Moeckel and Bonnell, 2009a, b; Olczak et al., 2011; Parker et al., 2011; Girichidis et al., 2012; Plunkett et al., 2018; Hetem and Gregorio-Hetem, 2019; Konyves et al., 2020; Morii et al., 2023). In this paper, we will apply this method to the five _least_ massive objects in NGC 1333, and then add successively higher-mass objects to \(N_{\rm MST}\). An alternative method, first proposed by Parker et al. (2011) and since used by other groups (e.g. Alfaro and Gonzalez, 2016; Gonzalez and Alfaro, 2017; Alfaro and Roman-Zuniga, 2018), keeps \(N_{\rm MST}\) fixed and instead slides through the dataset. For example, one can start with the 10 least massive objects, calculate \(\Lambda_{\rm MSR}\), and then move to the 11 - 20 least massive objects, and so on. This method is noisier than the original method, and care must be taken to avoid over-interpreting significant deviations from \(\Lambda_{\rm MSR}=1\), but it has the advantage that a specific subset of objects in the middle of the mass range can be examined in detail. This will be important later when we analyse _N_-body simulations with a population of planetary-mass objects that lie in the middle of a wider mass distribution of substellar objects.
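As a concrete illustration of the \(\Lambda_{\rm MSR}\) procedure just described, here is a minimal sketch in Python (ours, not the authors' code), assuming 2D positions in an array `xy` and SciPy available for the MST:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(xy):
    """Total edge length of the Euclidean MST of the points xy, shape (n, 2)."""
    return minimum_spanning_tree(squareform(pdist(xy))).sum()

def lambda_msr(xy, idx_subset, n_random=100, rng=None):
    """Eqn. 1: Lambda_MSR = <l_average> / l_subset, with the 1/6 and 5/6
    points of the ordered random MST lengths giving the lower/upper bounds."""
    rng = np.random.default_rng() if rng is None else rng
    l_subset = mst_length(xy[idx_subset])
    l_random = np.sort([
        mst_length(xy[rng.choice(len(xy), len(idx_subset), replace=False)])
        for _ in range(n_random)])
    lam = l_random.mean() / l_subset                  # the ratio itself
    lam_lo = l_random[n_random // 6] / l_subset       # 1/6 through the list
    lam_hi = l_random[5 * n_random // 6] / l_subset   # 5/6 through the list
    return lam, lam_lo, lam_hi
```

With `idx_subset` set to, say, the five least massive objects, a value consistent with unity within the 1/6-5/6 range indicates neither mass segregation nor inverse mass segregation.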
#### 3.1.2 \(\Sigma-m\) relative surface densities The \(\Sigma-m\) technique (Maschberger and Clarke, 2011) quantifies the relative surface density of objects in a star-forming region, and can then be used to determine whether a particular mass range has higher or lower surface densities compared to the region as a whole. For example, mass segregation of the most massive stars might be apparent in higher-than-average surface densities for these objects. Conversely, if substellar objects are preferentially ejected over low-mass stars, then we might expect substellar objects to have lower-than-average surface densities. The surface density of an object of mass \(m\) is calculated using \[\Sigma=\frac{N-1}{\pi r_{N}^{2}}, \tag{2}\] where \(r_{N}\) is the distance to the \(N^{\rm th}\) nearest neighbour of the object of mass \(m\) (Casertano and Hut, 1985). We adopt \(N=10\), but the dependence of \(\Sigma\) on the choice of \(N\) only becomes important if the structure of the region changes abruptly between different values of \(N\). In practice, as long as \(N\) is higher than e.g. 2 or 3, the determination of \(\Sigma\) is not biased by multiplicity (Kraus and Hillenbrand, 2008) and traces the typical local surface density of the stellar and substellar systems. Maschberger and Clarke (2011) showed that in simulated star-forming regions where the most massive stars form in the most dense regions, the massive stars have significantly higher surface densities (as determined by a two-sided KS test) than the region as a whole. The method was further utilised by Kupper et al. (2011) and Parker et al. (2014), who showed that high surface densities in the massive stars can be used as a dynamical clock (in tandem with other metrics including \(\Lambda_{\rm MSR}\)) to determine the initial conditions of a star-forming region. ### Velocity distributions We do not have information on the velocities of the stars in our observational sample, but we can make predictions for the expected velocity distributions from our _N_-body simulations. We construct two distributions. First, we take the radial velocities, defined in the simulations as the component of the velocity vector along the \(z\)-axis. Secondly, we produce a distribution of proper motion velocities \(\mu\), where we take the on-sky positions (i.e. in the \(xy\)-plane in the simulations) between snapshots (at \(t_{n}\) and \(t_{n+1}\)) and then divide the difference in position by the time interval, thus: \[\mu=\frac{r_{xy,n+1}-r_{xy,n}}{t_{n+1}-t_{n}}. \tag{3}\] ### \(N\)-body simulations In order to compare the observed spatial distributions of substellar objects in NGC 1333 to models of brown dwarf and planet formation, we use \(N\)-body simulations to simulate the dynamical evolution of this star-forming region. We create populations of \(N=150\) stellar and substellar objects by drawing masses from a Maschberger (2013) Initial Mass Function (IMF) with a probability distribution of the form \[p(m)\propto\left(\frac{m}{\mu}\right)^{-\alpha}\left(1+\left(\frac{m}{\mu}\right)^{1-\alpha}\right)^{-\beta}. \tag{4}\] In Eqn. 4, \(\mu=0.2\,{\rm M}_{\odot}\) is the scale parameter, or 'peak', of the IMF (Bastian et al., 2010; Maschberger, 2013), \(\alpha=2.3\) is the Salpeter (1955) power-law exponent for higher mass stars, and \(\beta=1.4\) describes the slope of the IMF for low-mass objects. We randomly sample this distribution in the mass range 0.001 - 50 \({\rm M}_{\odot}\), such that we sample objects down to the planetary mass regime. We then create a separate population of planetary-mass objects, which we place on orbits around stellar mass (\(0.08<m/{\rm M}_{\odot}\leq 3\)) objects.
These 'planets' have a mass of \(10\,{\rm M}_{\rm Jup}\) (\(9.4\times 10^{-3}\,{\rm M}_{\odot}\), which overlaps with the mass range of the objects that form "like stars"), and are assigned zero eccentricity and inclination. In one set of simulations the planets are all assigned a semimajor axis of 5 au (to be on a Jupiter-like orbit), and in another set of simulations the planets are all assigned a semimajor axis of 30 au (to be on a Neptune-like orbit). In a third set of simulations, the planets are again all placed at 30 au, but have masses of \(1\,{\rm M}_{\rm Jup}\) (\(9.4\times 10^{-4}\,{\rm M}_{\odot}\), which is slightly lower than the mass range of the objects that form as stars). We thus have a collection of systems, which are either single stars, single brown dwarfs or star-planet systems. We randomly distribute these systems within a fractal distribution (Goodwin & Whitworth, 2004), which is designed to mimic the filamentary and substructured stellar distributions in both observed (Cartwright & Whitworth, 2004; Sanchez & Alfaro, 2009; Hacar et al., 2013; Buckner et al., 2019) and simulated (Schmeja & Klessen, 2006; Bate, 2009) star-forming regions. We refer the interested reader to Daffern-Powell & Parker (2020) for a comprehensive description of the set-up of the fractal distributions, but briefly summarise them here. The fractals are constructed by placing a parent particle at the centre of a cube, which spawns sub-particles, each of which matures (and itself spawns further particles) with a probability that goes as \(2^{D-3}\), where \(D\) is the desired fractal dimension. For a smooth distribution (\(D=3.0\)) every sub-particle matures, whereas for a substructured distribution (\(D=1.6\)) many do not, and repeating the process over multiple generations of particles produces the desired substructure. The particles are assigned a velocity drawn from a Gaussian distribution of mean zero. The child particles inherit their parents' velocities, plus a small random offset that decreases with each subsequent generation of particles. We scale the velocities of the systems to a subvirial ratio (\(\alpha=0.3\), where \(\alpha=T/|\Omega|\), \(T\) and \(|\Omega|\) are the total kinetic and potential energies, respectively, and \(\alpha=0.5\) is virial equilibrium). In our simulations, we mainly adopt a highly substructured distribution (\(D=1.6\)) with a radius \(r_{F}=0.5\,{\rm pc}\), which results in high stellar densities (\(\sim 10^{4}\,{\rm M}_{\odot}\,{\rm pc}^{-3}\)). These initial conditions are informed by earlier work to constrain the initial conditions of NGC 1333 (Parker & Alves de Oliveira, 2017), albeit towards the high end of the constrained range. However, as we want to test whether the PMOs in NGC 1333 might be the result of ejection from systems, we adopt these high densities as they more readily lead to the creation of a separate population of free-floating planetary mass objects, whose properties (spatial and velocity distribution) we can compare with those of the brown dwarfs in the simulations. However, we also ran sets of simulations with larger (1 pc) radii and higher fractal dimensions (\(D=2.0\), which are less substructured), resulting in densities of \(\sim 500\,{\rm M}_{\odot}\,{\rm pc}^{-3}\), and found no differences to our main results, save for fewer stars being ejected overall.
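For reference, the IMF of Eqn. 4 can be sampled in closed form. Below is a minimal sketch (our illustration, using the parameter values quoted above); the quantile function follows from integrating Eqn. 4 analytically via the auxiliary function \(G(m)=(1+(m/\mu)^{1-\alpha})^{1-\beta}\).

```python
import numpy as np

def sample_maschberger(n, mu=0.2, alpha=2.3, beta=1.4,
                       m_low=0.001, m_up=50.0, rng=None):
    """Draw n masses (in Msun) from the Maschberger (2013) IMF of Eqn. 4,
    truncated to [m_low, m_up], by inverse-transform sampling."""
    rng = np.random.default_rng() if rng is None else rng
    G = lambda m: (1.0 + (m / mu) ** (1.0 - alpha)) ** (1.0 - beta)
    u = rng.uniform(size=n)                     # uniform deviates in [0, 1)
    g = G(m_low) + u * (G(m_up) - G(m_low))     # interpolate the CDF range
    return mu * (g ** (1.0 / (1.0 - beta)) - 1.0) ** (1.0 / (1.0 - alpha))

masses = sample_maschberger(150)  # one realisation of a 150-object region
```

Drawing 150 objects per realisation reproduces the stellar and substellar population described above; the separate planetary companions are then added by hand at fixed masses and semimajor axes.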
The simulations are evolved for 10 Myr using the kira package within the Starlab environment (Portegies Zwart et al., 1999, 2001) and we analyse the data at 1 and 3 Myr, which spans the likely mean age range for this star-forming region. We do not include stellar evolution in the simulations. We summarise the different simulations in Table 1. In our analysis of the spatial and velocity distributions in the \(N\)-body simulations, we exclude any objects that are beyond a radius of 5 pc, in order to mimic an observer's field of view. ## 4 Results We first show the results of the \(\Lambda_{\rm MSR}\) and \(\Sigma-m\) analyses of the observational data for NGC 1333, before describing the similar analyses performed on \(N\)-body simulations with substellar objects that form in the same way as stars, and also substellar objects that were initially orbiting a star. \begin{table} \begin{tabular}{c c c c c c} \hline \(D\) & \(r_{F}\) & \(\tilde{\rho}\) & \(m_{p}\) & \(a_{p}\) & Fig. \\ \hline 1.6 & 0.5 pc & \(10^{4}\,{\rm M}_{\odot}\,{\rm pc}^{-3}\) & \(10\,{\rm M}_{\rm Jup}\) & 30 au & Figs. 5, 6 and 7 \\ 1.6 & 0.5 pc & \(10^{4}\,{\rm M}_{\odot}\,{\rm pc}^{-3}\) & \(1\,{\rm M}_{\rm Jup}\) & 30 au & Fig. 8 \\ \hline 1.6 & 0.5 pc & \(10^{4}\,{\rm M}_{\odot}\,{\rm pc}^{-3}\) & \(10\,{\rm M}_{\rm Jup}\) & 5 au & — \\ \hline 2.0 & 0.5 pc & \(500\,{\rm M}_{\odot}\,{\rm pc}^{-3}\) & \(10\,{\rm M}_{\rm Jup}\) & 30 au & — \\ 2.0 & 0.5 pc & \(500\,{\rm M}_{\odot}\,{\rm pc}^{-3}\) & \(1\,{\rm M}_{\rm Jup}\) & 30 au & — \\ \hline \end{tabular} \end{table} Table 1: A summary of the different initial conditions of our simulated star-forming regions. The columns show the fractal dimension, \(D\), the initial radius of the star-forming region, \(r_{F}\), the initial median local stellar density resulting from \(D\) and \(r_{F}\), \(\tilde{\rho}\), the mass of the planets, \(m_{p}\), and the initial semimajor axis of the planets, \(a_{p}\). The final column indicates whether the simulation is shown in a Figure in Section 4.2. ### NGC 1333 We show the calculation of \(\Lambda_{\rm MSR}\) for NGC 1333 in Fig. 2. In panel (a) we show the calculation of \(\Lambda_{\rm MSR}\) for successively larger values of \(N_{\rm MST}\), where progressively higher mass stars are added to the determination of \(\Lambda_{\rm MSR}\). The values for \(\Lambda_{\rm MSR}\) are all consistent with unity, i.e. there is no preferential spatial distribution for the substellar objects, and in particular we highlight that there is no difference in the spatial distribution of the least massive PMOs discussed in Scholz et al. (2023). We then employ the 'slide' version of \(\Lambda_{\rm MSR}\) in panel (b). As discussed in Section 3, this method is noisier, but allows us to identify different spatial distributions of objects in a narrow mass range (the horizontal 'error' bars in panel (b) show the mass range that the value of \(\Lambda_{\rm MSR}\) is associated with). Again, we see no significant deviation from unity in any mass range. We next plot the local surface density around each object, \(\Sigma\), as a function of the object's mass \(m\) in Fig. 3. The various horizontal lines represent the median surface densities of subsets of objects; the black dashed line is the median surface density for all objects, the solid red line is for the most massive stars, the orange line is for brown dwarfs (\(0.01<m/\mathrm{M}_{\odot}<0.08\)) and the solid blue line is for the planetary mass objects discussed in Scholz et al. (2023).
Two-sided KS tests between the different subsets and the region as a whole produce high p-values, meaning we cannot reject the hypothesis that all types of object share the same underlying density distribution. The absence of any difference in the spatial distribution of substellar objects is very similar to previous results obtained for NGC 1333 with a very similar dataset (Parker & Alves de Oliveira, 2017), but provides an interesting null result with which to compare our _N_-body simulations. ### _N_-body simulations We run 10 versions of each simulation, where we alter the random number seed that sets the masses, positions and velocities. However, in the following we show the results for a representative simulation, and where necessary discuss any differences between the different runs. In Fig. 4 we show the positions of stars (grey points), brown dwarfs (orange triangles) and ejected planets (blue squares) after 3 Myr of dynamical evolution (similar to the age of NGC 1333). We also show the locations of the most massive stars by the red diamond symbols. From inspection, the planets and brown dwarfs appear more dispersed than the stars, but we quantify this in the following analysis. #### 4.2.1 Mass segregation We calculate the mass segregation ratio, \(\Lambda_{\rm MSR}\), using both the original method from Allison et al. (2009) and the slide method from Parker et al. (2011). In the original determination of \(\Lambda_{\rm MSR}\), we start with subsets of the lowest-mass objects (which are brown dwarfs) and add progressively more massive objects to the sample. This is shown in Fig. 5(a). The least massive brown dwarfs are consistent with \(\Lambda_{\rm MSR}=1\), indicating no mass segregation, before \(\Lambda_{\rm MSR}\ll 1\) for slightly more massive objects, which includes the planetary-mass objects. We then calculate \(\Lambda_{\rm MSR}\) for discrete mass bins containing ten objects, starting from the least massive subset (Fig. 5(b)). The two least massive subsets contain brown dwarfs drawn from the mass function and with identical velocity and spatial distributions to the stars. These have \(\Lambda_{\rm MSR}\) ratios consistent with unity. The next subset (third bin from the left) contains planetary mass objects that were orbiting stars but have been ejected through dynamical encounters. They are significantly more widely distributed than the other objects in the star-forming region, with \(\Lambda_{\rm MSR}=0.30^{+0.44}_{-0.21}\). This is an extremely significant deviation from \(\Lambda_{\rm MSR}=1\). At higher masses, but still within the brown dwarf regime, \(\Lambda_{\rm MSR}\sim 1\). The only other subset that deviates significantly from \(\Lambda_{\rm MSR}=1\) is that of the most massive stars (the rightmost bin in Fig. 5(b)). #### 4.2.2 Relative surface densities In Fig. 6 we plot the local surface density, \(\Sigma\), of each object in the simulation snapshot, as a function of the mass of the object. We then compare bins of different types of objects. The median density of all objects in the star-forming region is shown by the dashed black line; this is \(93\,{\rm stars\,pc^{-2}}\) at \(3\,{\rm Myr}\). The median density of the brown dwarfs (all objects with mass \(m<0.08\,\mathrm{M}_{\odot}\)) is \(43\,{\rm stars\,pc^{-2}}\), shown by the solid orange line. As in most of our simulations, the most massive stars have attained higher than average surface densities (\(400\,{\rm stars\,pc^{-2}}\), the solid red line).
Finally, we show the median surface density of the ejected planets by the blue cross (\(2\,{\rm stars\,pc^{-2}}\)). We assess the significance of the differences in densities between the subsets by using a KS test, where we reject the null hypothesis that two subsets share the same underlying parent distribution if the \(p\)-value is less than 0.1. The brown dwarfs do not have significantly lower densities than the star-forming region as a whole (a KS test returns a difference \(D=0.18\) with a \(p\)-value \(p=0.11\), and so we cannot reject the hypothesis that the brown dwarfs share the same underlying density distribution as the stars). Note that the brown dwarf subset contains the ejected planetary-mass objects. If we take the ejected planets as their own subset, we find that these do have significantly lower densities than all of the objects in the star-forming region, with a KS test returning a difference \(D=0.46\) with an associated \(p\)-value \(p=2\times 10^{-3}\). Whilst measuring different things, both \(\Lambda_{\rm MSR}\) and \(\Sigma-m\) show that substellar objects that formed around stars but were then ejected are likely to have a significantly different spatial distribution to substellar objects that formed like the stars in the region. #### 4.2.3 Velocity distributions We now compare the velocity distributions of the brown dwarfs and ejected planets to the stars. In Fig. 7(a) we show the proper motions of the ejected planets (the blue line), the brown dwarfs (the orange line), single stars (i.e. stars without a planetary companion, the grey line) and all stars (the black line). The free-floating planets are clearly moving at faster velocities (likely as a result of their ejection), whereas the brown dwarfs are moving at similar (albeit slightly faster) velocities compared to the stars. Conversely, the radial velocity distributions (Fig. 7(b)) are quite similar. The velocity dispersion for the ejected planets is 0.49 km s\({}^{-1}\) (the blue line), whereas for single stars (grey line) it is 0.30 km s\({}^{-1}\). For all stars, the dispersion is larger (0.54 km s\({}^{-1}\), the black line), but this is likely to be inflated by the contribution from the planetary companions still orbiting the majority of stars (Gieles et al., 2010; Cottaar et al., 2012). #### 4.2.4 Lower planetary masses We repeat the above analysis, but this time for identical simulations save for the planet masses, which are now 1 M\({}_{\rm Jup}\), i.e. 9.4\(\times\)10\({}^{-4}\) M\({}_{\odot}\). Figure 4: A plot of the positions of objects at 3 Myr in a representative simulation. Stars are shown by the grey points (the most massive are shown by the red diamonds), and brown dwarfs are shown by the orange triangles. The ejected planets are shown by the blue squares. Figure 3: \(\Sigma-m\) plot for NGC 1333. The surface density of each object, \(\Sigma\), is plotted against its mass \(m\). The median surface density for the star-forming region is shown by the dashed black line, and the median surface density for the most massive stars is shown by the solid red line. The median surface density for brown dwarfs (\(0.01<m/\mathrm{M}_{\odot}<0.08\)) is shown by the solid orange line, and the median surface density for the planetary mass objects is shown by the solid blue line. No mass regime/object type has significantly different densities from the region as a whole.
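As an aside, the \(\Sigma-m\) comparison of Section 4.2.2 is straightforward to reproduce; the following is a minimal sketch (ours, assuming SciPy and an array `xy` of projected positions) of Eqn. 2 with a k-d tree, followed by the two-sided KS test used above.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp

def local_surface_density(xy, N=10):
    """Eqn. 2: Sigma = (N - 1) / (pi * r_N**2), with r_N the distance to
    the Nth nearest neighbour (Casertano & Hut 1985)."""
    # k = N + 1 because the closest point returned is the object itself
    dist, _ = cKDTree(xy).query(xy, k=N + 1)
    return (N - 1) / (np.pi * dist[:, -1] ** 2)

def compare_subset(xy, in_subset):
    """Two-sided KS test of a subset's Sigma values against the whole
    region; a common parent distribution is rejected if p < 0.1."""
    sigma = local_surface_density(xy)
    D, p = ks_2samp(sigma[in_subset], sigma)
    return np.median(sigma[in_subset]), np.median(sigma), D, p
```

Applied with `in_subset` flagging the ejected planets, this yields the kind of median-density, \(D\), \(p\) comparison quoted for the simulations above.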
Figure 2: Calculation of the spatial distribution of brown dwarfs and planetary mass objects in NGC 1333 with \(\Lambda_{\rm MSR}\). Panel (a) shows the results when calculating \(\Lambda_{\rm MSR}\) with the \(N_{\rm MST}=5\) least massive objects, and then subsequently adding the next 10 least massive objects. The mass of the highest mass object within the \(N_{\rm MST}\) subset is indicated on the top horizontal axis. Panel (b) shows the calculation of \(\Lambda_{\rm MSR}\) for subsets of 10 objects, and moving through the data. In both panels, the error bars in the vertical direction indicate the 1/6 and 5/6 percentile values in the distribution of the randomly chosen MSTs. In panel (b) the 'error' bars in the horizontal direction show the mass range for each calculation of \(\Lambda_{\rm MSR}\). \(\Lambda_{\rm MSR}=1\), which indicates no mass segregation, is shown by the dashed grey line in both panels. This means that the planetary mass objects are slightly lower mass than the brown dwarfs that formed like stars in the simulation. In Fig. 8 we see that the signature of inverse mass segregation of the ejected planetary-mass objects is more obvious than when the planets have masses similar to (or higher than some of) the brown dwarfs. These simulations also display similar behaviour to the previous models in both the \(\Lambda_{\rm MSR}\) slide and \(\Sigma-m\) plots. There are 16 free-floating planetary mass objects, which are more spatially dispersed than the brown dwarfs, and Fig. 8 shows that even when the bins are not independent of one another, the difference in the \(\Lambda_{\rm MSR}\) measurement disappears once a further 15-20 brown dwarfs are included in the calculation. Lowering the planetary companion masses has the effect of lowering the binding energy of the system, which theoretically makes the system more susceptible to destruction. However, Parker & Reggiani (2013) show for stellar binaries that the companion mass ratio has very little influence on whether the system will be destroyed, because the typical interaction that breaks apart a system has an energy more than ten times the binding energy of the system. We therefore do not expect to see a significant difference in the kinematic distributions of the ejected \(1\,{\rm M_{Jup}}\) planets compared to \(10\,{\rm M_{Jup}}\), and this is the case in our simulations. #### 4.2.5 Closer planetary orbits In one set of simulations, we placed the planets at \(5\,{\rm au}\) around their host stars, rather than \(30\,{\rm au}\). This difference reduces the number of planets that are liberated from their host stars (because the systems are dynamically 'harder' according to the Heggie-Hills law, Heggie 1975; Hills 1975) by a factor of two (similar to results previously reported by Parker & Quanz 2012), but we find that the spatial and kinematic distributions of these ejected planets are the same as in those simulations where the planets are originally \(30\,{\rm au}\) from their host stars. Figure 5: Calculation of the spatial distribution of brown dwarfs and planetary mass objects in our simulations with \(\Lambda_{\rm MSR}\). Panel (a) shows the results when calculating \(\Lambda_{\rm MSR}\) with the \(N_{\rm MST}=5\) least massive objects, and then subsequently adding the next 10 least massive objects. Panel (b) shows the calculation of \(\Lambda_{\rm MSR}\) for subsets of 10 objects, and moving through the data. In both panels, the error bars in the vertical direction indicate the 1/6 and 5/6 percentile values in the distribution of the randomly chosen MSTs.
In panel (b) the 'error' bars in the horizontal direction show the mass range for each calculation of \(\Lambda_{\rm MSR}\). \(\Lambda_{\rm MSR}=1\), which indicates no mass segregation, is shown by the dashed grey line in both panels. The bin containing the ejected planetary-mass objects has a mass segregation ratio \(\Lambda_{\rm MSR}=0.35\), and is centred on a mass value of \(9.4\times 10^{-3}\,{\rm M}_{\odot}\). Figure 6: \(\Sigma-m\) for one of our simulations. The surface density of each object, \(\Sigma\), is plotted against its mass \(m\). The median surface density for the star-forming region is shown by the dashed black line, and the median surface density for the most massive stars is shown by the solid red line. The median surface density for brown dwarfs (\(0.01<m/{\rm M}_{\odot}<0.08\)) is shown by the solid orange line, and the median surface density for the ejected planetary mass objects (all of which are \(10\,{\rm M_{Jup}}\), i.e. \(9.4\times 10^{-3}\,{\rm M}_{\odot}\)) is shown by the blue cross. The planetary mass objects, which were ejected from stellar systems, have significantly lower local surface densities, whereas the brown dwarfs do not have significantly lower surface densities than the region as a whole. #### 4.2.6 Lower stellar densities, less substructure In the simulations where the stellar density is lower, we produce fewer free-floating planets, but again, the planets that are liberated from their host stars have a different spatial and kinematic distribution to the brown dwarfs. ## 5 Discussion We find no evidence that brown dwarfs or planetary mass objects have a different spatial distribution to the stars in NGC 1333. However, in _N_-body simulations we show that a population of planetary-mass objects created via ejection from a bound orbit around a star would have a significantly different spatial distribution to the stars, and also to any brown dwarfs that formed like stars, representing a continuous extension to the low-mass end of the IMF. Our results should be prefaced by several caveats. First, we may not be accurately representing the initial conditions of NGC 1333. In a previous study (Parker & Alves de Oliveira, 2017), we showed that NGC 1333 was likely initially quite dense, but in many of the simulations we use here, clear mass segregation of the most massive stars occurs, which is not observed in NGC 1333. Therefore, our simulations may be too dense, although reducing the initial stellar density would merely reduce the number of free-floating planetary-mass objects, and is thus unlikely to affect our conclusions. Second, the simulations - by definition - assume instantaneous star and planet formation. In reality, even the shortest estimates suggest star formation takes up to 1 Myr (Elmegreen, 2000), and gas giants are likely to take just as long to form (Alves et al., 2020; Segura-Cox et al., 2020). However, what we sacrifice in realism in the simulations is compensated for by the statistical significance we gain from running multiple versions of the same simulations. Third, our planetary 'systems' consist of just one planet, placed at either 5 au (Jupiter's location in the Solar System) or 30 au (Neptune's location). Our fractal simulations are unable to accommodate multi-planet systems, although this is likely to change in the near future. Therefore, our simulations cannot account for planet-planet interactions once the outer orbiting planet has been destabilised by an interaction with a star (Malmberg et al., 2007).
Further planet-planet interactions are more likely to produce even more free-floating planets with different spatial distributions to the brown dwarfs. Additionally, our choice of initial semimajor axes and other orbital parameters (zero eccentricity/inclination) could affect the spatial and velocity distribution of the free-floating planets. Figure 8: Calculation of the spatial distribution of brown dwarfs and planetary mass objects in our simulations with \(\Lambda_{\rm MSR}\), where the planetary mass objects are less massive than the brown dwarfs (the planetary mass objects are all 1 M\({}_{\rm Jup}\)). We present the results when calculating \(\Lambda_{\rm MSR}\) with the \(N_{\rm MST}=5\) least massive objects, and then subsequently adding the next 10 least massive objects. As the least massive objects are ejected, they are significantly more dispersed than the higher-mass brown dwarfs. Figure 7: Velocity distributions of stars, brown dwarfs and ejected planets in our simulations. The solid black lines are for all stars, and the grey lines are for stars that no longer host planets. The orange lines are the velocity distributions for brown dwarfs, and the blue lines are for the ejected planets. Panel (a) shows the proper motion velocities, and panel (b) shows the radial velocities (the \(v_{z}\) velocity component). In practice, our chosen initial orbital parameters likely straddle the median values for the semimajor axis distributions (Forgan et al., 2015; Zheng et al., 2015), and as such the population of free-floating planets represents the average outcome of dynamical encounters in star-forming regions (Daffern-Powell et al., 2022). Finally, our simulations do not contain primordial stellar/brown dwarf binaries (the only binary systems being the star-planet systems). Binary systems would slightly increase the number of destructive encounters due to the slightly higher collisional cross section. If binary companions were brown dwarfs, then systems broken up by encounters could produce a population of brown dwarfs with a similar spatial distribution to the free-floating planets. The binary fraction for brown dwarf-brown dwarf systems is quite low (\(\sim 15\) per cent, Burgasser et al., 2007), but brown dwarfs could be companions to M-dwarfs, which have a higher binary fraction (up to 30 per cent, Ward-Duong et al., 2015). We will investigate the effects of binarity on mass segregation more generally in a future paper. We also reiterate that mass segregation (and inverse mass segregation) can be a very transient phenomenon, in that it can appear, then disappear, then reappear. This often happens when stars are ejected, and in several of our simulations we see inverse mass segregation of the planetary mass objects at e.g. 3 Myr, but not later, after the PMOs have been ejected (and hence discounted from the analysis, as an observer would only be able to trace them back to their origin with e.g. Gaia, Schoettler et al., 2020). Whilst the PMOs appear more spatially dispersed than stars and brown dwarfs in the majority of our simulation snapshots, we cannot fully rule out the possibility that the PMOs in NGC 1333 were previously more spread out, or will become more spread out at later times.
Despite this, our analysis of the low-mass objects in NGC 1333 indicates that these objects follow the spatial distribution of the stars, whereas our simulations generally show that planetary mass objects that were ejected from stellar systems would be more spread out, and also moving with faster proper motion velocities, than stars and substellar objects that formed like stars, i.e. from the collapse and fragmentation of the host GMC. This is even evident in the simulations in which the planets that are ejected from orbits around stars have a mass that overlaps with the brown dwarf mass regime. The planetary-mass objects that are ejected always have a very different spatial and kinematic distribution to those that formed more like stars, irrespective of their initial mass or semimajor axis. Aside from these spatial and kinematic signatures, there is not a clear diagnostic that can be used to distinguish between brown dwarf objects that formed like stars and objects that formed like planets, but a more accurate determination of their masses could help identify their formation mechanism. For solar metallicity GMCs, 1 M\({}_{\rm Jup}\) planets have masses well below the opacity limit for fragmentation (Rees, 1976; Whitworth and Stamatellos, 2006; Bate, 2014) and so would probably be ejected planets (though see e.g. Boss, 2001). On the other hand, forming \(>\)10 M\({}_{\rm Jup}\) planets by core accretion in a circumstellar disc is likely to be challenging (Bergez-Casalou et al., 2023; Helled, 2023), and these objects (which encompass the PMOs found in NGC 1333 to date) probably form more like stars, although some could form via disc fragmentation (e.g. Mayer et al., 2002; Stamatellos and Whitworth, 2009), depending on the physical conditions in the disc (Meru and Bate, 2012; Kratter and Lodato, 2016). The PMOs observed by Scholz et al. (2023) in NGC 1333 are therefore likely to be the tail of the initial mass function, and formed in the same way as stars, rather than being the result of ejections from planetary systems. Of course, there may be low- and planetary-mass objects in NGC 1333 that have not yet been discovered, and we would expect these to be preferentially found on the outskirts (where the observational completeness is lower). We also note that whilst dynamical encounters produce around 10 free-floating planets in our simulations, these planets were initially at relatively large distances from their host stars (30 au). For planets on smaller orbits (e.g. 5 au) the number of free-floating planets produced through dynamical encounters is reduced by a factor of two. Although none of the PMOs in NGC 1333 appear to have formed as planets and then been ejected from their host star, Zapatero Osorio et al. (2014) find evidence that planetary-mass objects in the Pleiades open cluster appear to be moving with faster proper motions than the stars. As star clusters never reach complete energy equipartition (Spitzer, 1969; Trenti and van der Marel, 2013; Parker et al., 2016; Spera et al., 2016), these objects are likely to be ejected planets that are now free-floating in the cluster, rather than objects that formed like stars and have subsequently attained higher velocities due to repeated interactions. We therefore encourage further observational studies of the substellar population in star-forming regions to both characterise the substellar population and to determine whether planetary-mass objects could be the result of dynamical encounters.
## 6 Conclusions We have quantified the spatial distribution of the substellar population of NGC 1333 to determine whether planetary-mass objects have a different spatial distribution to stars. We then analyse _N_-body simulations containing both brown dwarfs that form in the same way as stars, and high-mass planets originally orbiting stars, to compare their respective spatial distributions. Our conclusions are the following. (i) The brown dwarfs and planetary mass objects in NGC 1333 follow the same spatial distribution as the stars according to the \(\Lambda_{\rm MSR}\) mass segregation ratio, and the relative surface density metric \(\Sigma-m\). (ii) In _N_-body simulations, planets are liberated from their host stars and form a spatially distinct population from the brown dwarfs, which were set up to form in the same way as stars. (iii) The difference between these populations can still be discerned even if the mass ranges overlap, i.e. if the planets have a higher mass than some of the brown dwarfs. (iv) The substellar objects observed in NGC 1333 are therefore unlikely to be free-floating planets created as a result of dynamical interactions, having previously orbited stars. Rather, the PMOs in NGC 1333 likely formed in a similar way to the stellar, and higher-mass substellar, populations. (v) If there is a population of ejected free-floating planets in NGC 1333 (or in other star-forming regions), we would expect to observe these objects on the outskirts of the region, where current observations are likely incomplete. As such, observations with e.g. JWST NIRSpec will be crucial to untangling the substellar populations in star-forming regions. ## Data Availability Statement The data used to produce the plots in this paper will be shared on reasonable request to the corresponding author. ## Acknowledgments We thank the anonymous referee for a helpful report. RJP acknowledges support from the Royal Society in the form of a Dorothy Hodgkin Fellowship.
2301.02337
The permutability of $σ_i$-sylowizers of some $σ_i$-subgroups in finite groups
Let $\sigma=\{\sigma_{i}|i\in I\}$ be a partition of the set of all primes $\mathbb{P}$, $G$ a finite group and $\sigma(G)=\{\sigma_{i}|\sigma_{i}\cap \pi(|G|)\neq\emptyset\}$. A subgroup $S$ of a group $G$ is called a $\sigma_i$-sylowizer of a $\sigma_i$-subgroup $R$ in $G$ if $S$ is maximal in $G$ with respect to having $R$ as its Hall $\sigma_i$-subgroup. The main aim of this paper is to investigate the influence of $\sigma_i$-sylowizers on the structure of finite groups. We obtained some new characterizations of supersoluble groups by the permutability of the $\sigma_i$-sylowizers of some $\sigma_i$-subgroups.
Zhenya Liu, Wenbin Guo
2023-01-05T23:59:58Z
http://arxiv.org/abs/2301.02337v1
# The permutability of \(\sigma_{i}\)-sylowizers of some \(\sigma_{i}\)-subgroups in finite groups ###### Abstract Let \(\sigma=\{\sigma_{i}|i\in I\}\) be a partition of the set of all primes \(\mathbb{P}\), \(G\) a finite group and \(\sigma(G)=\{\sigma_{i}|\sigma_{i}\cap\pi(|G|)\neq\emptyset\}\). A subgroup \(S\) of a group \(G\) is called a \(\sigma_{i}\)-sylowizer of a \(\sigma_{i}\)-subgroup \(R\) in \(G\) if \(S\) is maximal in \(G\) with respect to having \(R\) as its Hall \(\sigma_{i}\)-subgroup. The main aim of this paper is to investigate the influence of \(\sigma_{i}\)-sylowizers on the structure of finite groups. We obtain some new characterizations of supersoluble groups via the permutability of the \(\sigma_{i}\)-sylowizers of some \(\sigma_{i}\)-subgroups. + Footnote †: Mathematics Subject Classification (2021): 20D10, 20D15, 20D20, 20D35 ## 1 Introduction Let \(\pi\) denote a set of primes. The concept of \(\pi\)-sylowizers was introduced by W. Gaschutz [1]: if \(R\) is a \(\pi\)-subgroup of the group \(G\), then a \(\pi\)-sylowizer of \(R\) in \(G\) is a subgroup \(S\) of \(G\) maximal with respect to containing \(R\) as a Hall \(\pi\)-subgroup. Throughout, \(\mathbb{P}\) denotes the set of all primes and \(n\) a natural number. Let \(\sigma=\{\sigma_{i}|i\in I\}\) be some partition of \(\mathbb{P}\), that is, \(\mathbb{P}=\bigcup_{i\in I}\sigma_{i}\) and \(\sigma_{i}\cap\sigma_{j}=\emptyset\) for all \(i\neq j\). We write \(\sigma(G)=\{\sigma_{i}|\sigma_{i}\cap\pi(G)\neq\emptyset\}\). Following [5], two subgroups \(H\) and \(T\) of a group \(G\) are conditionally permutable (or, in brevity, \(c\)-permutable) in \(G\) if there exists an element \(x\in G\) such that \(HT^{x}=T^{x}H\). ## 2 Preliminaries **Lemma 2.1.** Let \(H\) be a \(\sigma_{i}\)-subgroup of \(G\) for some \(\sigma_{i}\in\sigma(G)\). Assume that \(K\) is a subgroup satisfying \(H\leq K\leq G\) and \(T\) is a \(\sigma_{i}\)-sylowizer of \(H\) in \(K\). Then there is a \(\sigma_{i}\)-sylowizer \(S\) of \(H\) in \(G\) such that \(T=S\cap K\). **Proof** Since \(H\) is a Hall \(\sigma_{i}\)-subgroup of \(T\), there is a \(\sigma_{i}\)-sylowizer \(S\) of \(H\) in \(G\) such that \(S\geq T\). Then \(H\) is a Hall \(\sigma_{i}\)-subgroup of \(S\cap K\). Since \(T\leq S\cap K\) and \(T\) is a \(\sigma_{i}\)-sylowizer of \(H\) in \(K\), we get \(T=S\cap K\) by the maximality of \(T\). \(\Box\) **Lemma 2.2.** Let \(R\) be a \(\sigma_{i}\)-subgroup of \(G\) for some \(\sigma_{i}\in\sigma(G)\). Assume that \(N\) is a normal subgroup of \(G\) and \(R\) is a Hall \(\sigma_{i}\)-subgroup of \(RN\). Then \(S\) is a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\) if and only if \(S/N\) is a \(\sigma_{i}\)-sylowizer of \(RN/N\) in \(G/N\). **Proof** Let \(S\) be a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\). Since \(R\) is a Hall \(\sigma_{i}\)-subgroup of \(RN\), \(R\) is a Hall \(\sigma_{i}\)-subgroup of \(SN\). Thus \(N\leq S\) by the maximality of \(S\), and so \(RN/N\) is a Hall \(\sigma_{i}\)-subgroup of \(S/N\). If \(S/N\) is not a \(\sigma_{i}\)-sylowizer of \(RN/N\) in \(G/N\), then there is a \(\sigma_{i}\)-sylowizer \(S_{0}/N\) of \(RN/N\) in \(G/N\) such that \(S_{0}/N>S/N\). Now, \(S_{0}>S\) and \(R\) is a Hall \(\sigma_{i}\)-subgroup of \(S_{0}\), which contradicts the fact that \(S\) is a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\). Thus \(S/N\) is a \(\sigma_{i}\)-sylowizer of \(RN/N\) in \(G/N\).
Conversely, if \(S/N\) is a \(\sigma_{i}\)-sylowizer of \(RN/N\) in \(G/N\), then \(R\) is a Hall \(\sigma_{i}\)-subgroup of \(S\). If \(S\) is not a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\), then there is a \(\sigma_{i}\)-sylowizer \(S_{0}\) of \(R\) in \(G\) such that \(S_{0}>S\). Therefore \(RN/N\) is a Hall \(\sigma_{i}\)-subgroup of \(S_{0}/N\), which contradicts the fact that \(S/N\) is a \(\sigma_{i}\)-sylowizer of \(RN/N\) in \(G/N\). Thus \(S\) is a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\). \(\Box\) **Lemma 2.3.** Let \(R\) be a \(\sigma_{i}\)-subgroup of a \(\sigma\)-full group \(G\) for some \(\sigma_{i}\in\sigma(G)\) and \(S\) a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\). If \(S\) is \(\sigma\)-permutable in \(G\), then \(O^{\sigma_{i}}(G)\leq S\). In particular, \(S=RO^{\sigma_{i}}(G)\) is the unique \(\sigma_{i}\)-sylowizer of \(R\) in \(G\). **Proof** Let \(Q\) be a Hall \(\sigma_{j}\)-subgroup of \(G\) with \(\sigma_{j}\in\sigma(G)\) and \(\sigma_{i}\cap\sigma_{j}=\emptyset\). Since \(S\) is \(\sigma\)-permutable, we have \(SQ\leq G\). Since \(R\) is a Hall \(\sigma_{i}\)-subgroup of \(SQ\), we have \(QS=S\) by the maximality of \(S\). Hence \(Q\leq S\). This shows that \(O^{\sigma_{i}}(G)\leq S\). \(\Box\) **Lemma 2.4.** Let \(R\) be a \(\sigma_{i}\)-subgroup of a \(\sigma\)-full group of Sylow type \(G\) for some \(\sigma_{i}\in\sigma(G)\) and \(S\) a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\). Then \(S\) is \(c\)-permutable with every Hall \(\sigma_{j}\)-subgroup of \(G\) for all \(\sigma_{j}\in\sigma(G)\) if and only if \(|G:S|\) is a \(\sigma_{i}\)-number. **Proof** The sufficiency is evident; we only need to prove the necessity. Let \(Q\) be a Hall \(\sigma_{j}\)-subgroup of \(G\) with \(\sigma_{j}\in\sigma(G)\) and \(\sigma_{i}\cap\sigma_{j}=\emptyset\). Since \(S\) is \(c\)-permutable with \(Q\), we have \(SQ^{x}=Q^{x}S\) for some element \(x\in G\). Since \(R\) is a Hall \(\sigma_{i}\)-subgroup of \(SQ^{x}\), we have \(Q^{x}S=S\) by the maximality of \(S\). Hence \(Q^{x}\leq S\). This implies that \(|G:S|\) is a \(\sigma_{i}\)-number. \(\Box\) **Theorem 2.5.** _Let \(G\) be a \(\sigma\)-full group of Sylow type and \(\mathcal{H}=\{H_{1},\cdots,H_{t}\}\) be a complete Hall \(\sigma\)-set of \(G\) such that \(H_{i}\) is a nilpotent \(\sigma_{i}\)-subgroup for all \(i=1,\cdots,t\). If, for any \(\sigma_{i}\in\sigma(G)\), every maximal subgroup of any non-cyclic \(H_{i}\) has a \(\sigma_{i}\)-sylowizer that is \(c\)-permutable with every member of \(\mathcal{H}\), then \(G\) is supersoluble._ **Proof** Assume that this is false and let \(G\) be a counterexample of minimal order. Then: (1) Let \(N\) be a minimal normal subgroup of \(G\). Then \(G/N\) is supersoluble. We consider the quotient group \(G/N\). It is clear that \(G/N\) is a \(\sigma\)-full group of Sylow type and \(\mathcal{H}N/N\) is a complete Hall \(\sigma\)-set of \(G/N\) such that \(H_{i}N/N\) is nilpotent. Let \(H/N\) be a maximal subgroup of \(H_{i}N/N\) and \(H_{\sigma_{i}}\) be a Hall \(\sigma_{i}\)-subgroup of \(H\) contained in \(H_{i}\). Then \(H=H_{\sigma_{i}}N\). Since \(H_{\sigma_{i}}\cap N=N_{\sigma_{i}}=H_{i}\cap N\), where \(N_{\sigma_{i}}\) denotes a Hall \(\sigma_{i}\)-subgroup of \(N\), we have that \[|H_{i}:H_{\sigma_{i}}|=\frac{|H_{i}||N|}{|H_{i}\cap N|}\cdot\frac{|H_{\sigma_{i}}\cap N|}{|H_{\sigma_{i}}||N|}=|H_{i}N:H|=q\] for some \(q\in\sigma_{i}\). This shows that \(H_{\sigma_{i}}\) is a maximal subgroup of \(H_{i}\).
If \(H_{i}N/N\) is non-cyclic, then so is \(H_{i}\). Thus, if \(S/N\) is a \(\sigma_{i}\)-sylowizer of \(H/N\) in \(G/N\), then \(S\) is a \(\sigma_{i}\)-sylowizer of \(H_{\sigma_{i}}\) in \(G\) by Lemma 2.2. Moreover, if \(S\) is \(c\)-permutable with every member of \(\mathcal{H}\), then \(S/N\) is \(c\)-permutable with every member of \(\mathcal{H}N/N\) by Lemma 2.4. This shows that \(G/N\) satisfies the hypotheses. Thus \(G/N\) is supersoluble by the choice of \(G\). (2) \(N\) is the unique proper minimal normal subgroup of \(G\) and \(\Phi(G)=1\). Let \(p\) be the smallest prime divisor of \(|G|\) and \(p\in\sigma_{i}\). If \(H_{i}\) is cyclic, then \(G\) is \(p\)-nilpotent. This shows that \(G\) has a proper minimal normal subgroup. Thus we may assume that \(H_{i}\) is non-cyclic. Let \(M\) be a maximal subgroup of \(H_{i}\) of index \(p\) and \(S\) a \(\sigma_{i}\)-sylowizer of \(M\) in \(G\) that is \(c\)-permutable with every member of \(\mathcal{H}\). Then \(|G:S|=p\) by Lemma 2.4 and so \(S\unlhd G\). Therefore we may choose a proper minimal normal subgroup of \(G\) contained in \(S\), say \(N\). By Claim (1), \(G/N\) is supersoluble. Moreover, \(N\) is the unique minimal normal subgroup of \(G\). Since the class of all supersoluble groups is a saturated formation, we may further assume that \(\Phi(G)=1\). (3) \(N\) is soluble. Assume that \(N\) is not soluble. Then \(p=2\) and \(2\mid|N|\). Let \(P\) be a Sylow 2-subgroup of \(H_{i}\). Then \(N_{2}=P\cap N\) is a Sylow 2-subgroup of \(N\). If \(N_{2}\leq\Phi(H_{i})\), then \(N_{2}\leq\Phi(P)\), and so \(N\) is 2-nilpotent by Tate's theorem, a contradiction. Hence \(N_{2}\nleq\Phi(H_{i})\). Thus there is a maximal subgroup \(K\) of \(H_{i}\) such that \(H_{i}=KN_{2}\). Let \(S_{0}\) be a \(\sigma_{i}\)-sylowizer of \(K\) in \(G\) that is \(c\)-permutable with every member of \(\mathcal{H}\). Then \(|G:S_{0}|=2\) by Lemma 2.4. Thus \(G=S_{0}H_{i}=S_{0}N_{2}=S_{0}N\). Now, \(|N:N\cap S_{0}|=|G:S_{0}|=2\), which implies that \(N\cap S_{0}\unlhd N\). Since \(N\cap S_{0}\unlhd S_{0}\), we have \(N\cap S_{0}\unlhd G\). Since \(N\) is a minimal normal subgroup of \(G\), we have \(N\cap S_{0}=1\). Thus \(|N|=|G:S_{0}|=2\), a contradiction. (4) Final contradiction. By Claim (3), we may assume that \(N\) is a \(q\)-group for some prime \(q\in\sigma_{j}\). Since \(\Phi(G)=1\), there is a maximal subgroup \(T\) of \(G\) such that \(G=TN\). Let \(T_{\sigma_{j}}\) be a Hall \(\sigma_{j}\)-subgroup of \(T\) contained in \(H_{j}\). Then \(H_{j}=T_{\sigma_{j}}N\) is a Hall \(\sigma_{j}\)-subgroup of \(G\). If \(H_{j}\) is cyclic, then \(G\) is supersoluble by the supersolubility of \(G/N\). Thus we may assume that \(H_{j}\) is non-cyclic. Let \(Q\) be a maximal subgroup of \(H_{j}\) with \(Q\geq T_{\sigma_{j}}\), and \(Y\) a \(\sigma_{j}\)-sylowizer of \(Q\) in \(G\) that is \(c\)-permutable with every member of \(\mathcal{H}\). Then \(|G:Y|=q\) by Lemma 2.4 and \(N\nleq Y\); otherwise \(H_{j}=QN\leq Y\), which contradicts the fact that \(Q\) is a Hall \(\sigma_{j}\)-subgroup of \(Y\). Thus \(G=YN\) and so \(|N|=|G:Y|=q\). This implies that \(G\) is supersoluble, a contradiction. This contradiction completes the proof. \(\Box\) **Theorem 2.6.** _Let \(\mathfrak{F}\) be a soluble saturated formation containing all supersoluble groups and let \(E\) be a normal subgroup of \(G\) with \(G/E\in\mathfrak{F}\).
Suppose that \(G\) is a \(\sigma\)-full group of Sylow type and \(\mathcal{H}=\{H_{1},\cdots,H_{t}\}\) is a complete Hall \(\sigma\)-set of \(G\) such that \(H_{i}\) is a nilpotent \(\sigma_{i}\)-subgroup for all \(i=1,\cdots,t\). If for any \(\sigma_{i}\in\sigma(E)\), every maximal subgroup of any non-cyclic \(H_{i}\cap E\) has a \(\sigma_{i}\)-sylowizer that is \(c\)-permutable with every member of \(\mathcal{H}\), then \(G\in\mathfrak{F}\)._ **Proof** The conclusion holds when \(E=G\) by Theorem 2.5, thus we may assume that \(E<G\). Let \(N\) be a minimal normal subgroup of \(G\) contained in \(E\). (1) \(E\) is supersoluble. Let \(Q\) be a maximal subgroup of a non-cyclic Hall \(\sigma_{i}\)-subgroup \(H_{i}\cap E\) of \(E\) and \(S\) a \(\sigma_{i}\)-sylowizer of \(Q\) in \(G\) that is \(c\)-permutable with every member of \(\mathcal{H}\). By Lemma 2.4, \(|G:S|\) is a \(\sigma_{i}\)-number. Let \(Y=S\cap E\). Since \(|E:Y|=|E:S\cap E|=|SE:S|\) divides \(|G:S|\), \(|E:Y|\) is a \(\sigma_{i}\)-number. Hence \(Y\) is a \(\sigma_{i}\)-sylowizer of \(Q\) in \(E\) and \(Y\) is \(c\)-permutable with every member of \(\mathcal{H}\cap E\) by Lemma 2.4. Thus \(E\) is supersoluble by Theorem 2.5. (2) \(N\) is the unique minimal normal subgroup of \(G\) contained in \(E\) and \(N\cap\Phi(G)=1\). Consider the quotient group \(G/N\); evidently \((G/N)/(E/N)\in\mathfrak{F}\). Since \(E\) is supersoluble by Claim (1), we have that \(N\) is a \(p\)-group for some prime \(p\). Without loss of generality, we may write \(E_{i}=H_{i}\cap E\) for all \(i\in\{1,\cdots,t\}\) and assume that \(p\in\sigma_{i}\) for some \(i\). Let \(J/N\) be a maximal subgroup of \(E_{i}/N\); then \(J\) is a maximal subgroup of \(E_{i}\). If \(S/N\) is a \(\sigma_{i}\)-sylowizer of \(J/N\) in \(G/N\), then \(S\) is a \(\sigma_{i}\)-sylowizer of \(J\) in \(G\) by Lemma 2.2. Moreover, if \(S\) is \(c\)-permutable with every member of \(\mathcal{H}\), then \(S/N\) is \(c\)-permutable with every member of \(\mathcal{H}N/N\) by Lemma 2.4. Now let \(J/N\) be a maximal subgroup of \(E_{j}N/N\) and \(J_{\sigma_{j}}\) a Hall \(\sigma_{j}\)-subgroup of \(J\) contained in \(E_{j}\), where \(i\neq j\). Then \(J_{\sigma_{j}}\) is a maximal subgroup of \(E_{j}\). If \(S/N\) is a \(\sigma_{j}\)-sylowizer of \(J_{\sigma_{j}}N/N\) in \(G/N\), then \(S\) is a \(\sigma_{j}\)-sylowizer of \(J_{\sigma_{j}}\) in \(G\) by Lemma 2.2. Moreover, if \(S\) is \(c\)-permutable with every member of \(\mathcal{H}\), then \(S/N\) is \(c\)-permutable with every member of \(\mathcal{H}N/N\) by Lemma 2.4. This shows that \((G/N,E/N)\) satisfies the hypotheses. Thus \(G/N\in\mathfrak{F}\) by induction. Moreover, \(N\) is the unique minimal normal subgroup of \(G\) contained in \(E\) and \(N\cap\Phi(G)=1\). (3) \(N\) is an elementary abelian \(p\)-subgroup, where \(p\) is the largest prime divisor of \(|E|\). Since \(E\) is supersoluble by Claim (1), the Sylow \(p\)-subgroup \(E_{p}\) of \(E\) is normal in \(G\). Since \(N\) is the unique minimal normal subgroup of \(G\) contained in \(E\) and \(N\leq E_{p}\), \(N\) is an elementary abelian \(p\)-subgroup. (4) \(G\in\mathfrak{F}\). Without loss of generality, we may assume that \(p\in\sigma_{i}\). If \(E_{i}\) is cyclic, then \(|N|=p\) and so \(G\in\mathfrak{F}\). Assume that \(E_{i}\) is non-cyclic. Since \(N\nleq\Phi(G)\), there is a maximal subgroup \(M\) of \(G\) such that \(G=MN\) and \(M\cap N=1\). Thus \(E_{i}=N(M\cap E_{i})\) and \(H_{i}=NM\cap H_{i}=N(M\cap H_{i})=NM_{i}\).
Since \(M_{i}<H_{i}\), we may choose a maximal subgroup \(P\) of \(H_{i}\) such that \(M_{i}\leq P\). Since \(M\cap E_{i}\leq P\), \(P\cap E_{i}=P\cap N(M\cap E_{i})=(P\cap N)(M\cap E_{i})\). Since \(M\cap N=1\), we have \[|E_{i}:E_{i}\cap P|=|N(M\cap E_{i}):(P\cap N)(M\cap E_{i})|=|N:P\cap N|=p.\] Hence \(R=E_{i}\cap P\) is a maximal subgroup of \(E_{i}\). Let \(S\) be a \(\sigma_{i}\)-sylowizer of \(R\) in \(G\) that is \(c\)-permutable with every member of \(\mathcal{H}\). Then \(|G:S|\) is a \(\sigma_{i}\)-number by Lemma 2.4. Since \(G\) is soluble, we may write \(S=RS_{\sigma_{i}^{\prime}}\) and \(M=M_{i}M_{\sigma_{i}^{\prime}}\). Note also that since \(|G:S|\) and \(|G:M|\) are \(\sigma_{i}\)-numbers, \(S_{\sigma_{i}^{\prime}}\) and \(M_{\sigma_{i}^{\prime}}\) are Hall \(\sigma_{i}^{\prime}\)-subgroups of \(G\). Thus there is an element \(g\) of \(G\) such that \(S_{\sigma_{i}^{\prime}}^{g}=M_{\sigma_{i}^{\prime}}\). Since \(G=H_{i}S^{g}\), we may write \(g=xy\), where \(x\in H_{i}\) and \(y\in S^{g}\). Since \(R=E_{i}\cap P\unlhd H_{i}\), we have \(R^{y}=R^{xy}\leq S^{g}\) and so \(R\leq S^{g}\). Thus \(S^{g}=RM_{\sigma_{i}^{\prime}}\). Since \(RM_{i}=(P\cap E_{i})M_{i}=P\cap E_{i}M_{i}=P\cap NM_{i}=P\leq G\), we have \(RM\leq G\). Since \(M\) is a maximal subgroup, either \(RM=M\) or \(RM=G\). If \(RM=G\), then \(RM_{i}=P\) is a Hall \(\sigma_{i}\)-subgroup of \(G\), which is impossible. Thus \(RM=M\) and so \(R\leq M\cap E_{i}\). Since \(G=MN=ME_{i}\), we have \(E_{i}\nleq M\). Since \(R\lessdot E_{i}\), we have \(R=M\cap E_{i}\). Thus \(|N|=|G:M|=|E_{i}:E_{i}\cap M|=|E_{i}:R|=p\). By [7, Theorem 2], \(G\in\mathfrak{F}\), as required. \(\Box\)
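**Example** (a toy illustration of the definition and of Lemma 2.4; ours, not from the text)**.** Let \(G=S_{3}\) with \(\sigma_{1}=\{2\}\) and \(\sigma_{2}=\{3\}\), and let \(R=1\), regarded as a \(\sigma_{2}\)-subgroup. Then \(R\) is a Hall \(\sigma_{2}\)-subgroup of a subgroup \(S\leq G\) precisely when \(S\) is a \(2\)-group, so the \(\sigma_{2}\)-sylowizers of \(R\) are the maximal \(2\)-subgroups, \[S\in\{\langle(12)\rangle,\ \langle(13)\rangle,\ \langle(23)\rangle\},\qquad|G:S|=3\ \text{a }\sigma_{2}\text{-number},\] and each such \(S\) permutes with the Hall \(\sigma_{2}\)-subgroup \(\langle(123)\rangle\) (the product is all of \(G\)), as Lemma 2.4 predicts. By contrast, for \(R=\langle(123)\rangle\) the unique \(\sigma_{2}\)-sylowizer is \(G\) itself.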
2306.12183
Crouzeix's conjecture for classes of matrices
For a matrix $A$ which satisfies Crouzeix's conjecture, we construct several classes of matrices from $A$ for which the conjecture will also hold. We discover a new link between cyclicity and Crouzeix's conjecture, which shows that Crouzeix's Conjecture holds in full generality if and only if it holds for the differentiation operator on a class of analytic functions. We pose several open questions, which if proved, will prove Crouzeix's conjecture. We also begin an investigation into Crouzeix's conjecture for symmetric matrices and in the case of $3 \times 3$ matrices, we show Crouzeix's conjecture holds for symmetric matrices if and only if it holds for analytic truncated Toeplitz operators.
Ryan O'Loughlin, Jani Virtanen
2023-06-21T11:27:09Z
http://arxiv.org/abs/2306.12183v2
# Crouzeix's conjecture for new classes of matrices ###### Abstract. For a matrix \(A\) which satisfies Crouzeix's conjecture, we construct several new classes of matrices from \(A\) for which the conjecture will also hold. We discover a new link between cyclicity and Crouzeix's conjecture, which affirms the conjecture in the positive for a class of matrices. We pose several open questions, which if proved, will prove Crouzeix's conjecture. We also begin an investigation into Crouzeix's conjecture for symmetric matrices and in the case of \(3\times 3\) matrices, we show Crouzeix's conjecture holds for symmetric matrices if and only if it holds for analytic truncated Toeplitz operators. Keywords: Numerical ranges, Crouzeix's Conjecture, Matrix inequalities, Hardy spaces, Norms of linear operators. MSC: 15A60, 15A39, 30H10, 47A30. _E-mail addresses: [email protected], [email protected]_ ## 1. Introduction Finding an upper bound for the norm of an operator is one of the most fundamental endeavours in functional analysis and Crouzeix's conjecture provides a computational geometric approach to this. Despite its simplicity and strong numerical evidence, the conjecture has not been proved. The purpose of this article is to provide several new classes of matrices which satisfy Crouzeix's conjecture and introduce novel methods to study the conjecture. In particular, for a matrix \(A\) which satisfies Crouzeix's conjecture, we construct several new classes of matrices from \(A\) for which the conjecture will also hold. We discover a new link between cyclicity and Crouzeix's conjecture. This affirms the conjecture in the positive for several new classes of matrices and gives elegant shortened algebraic proofs of several previously known results obtained by detailed complex analysis techniques. Our investigation leads to several open questions, which if proved, will prove Crouzeix's conjecture. Define the numerical range of an \(n\times n\) matrix \(A\) with complex entries by \[W(A):=\{\langle Ax,x\rangle:\langle x,x\rangle=1\},\] where \(\langle\cdot,\cdot\rangle\) refers to the Euclidean inner product on \(\mathbb{C}^{n}\). Crouzeix's conjecture can be stated as follows. **Conjecture 1.1**.: _For all square complex matrices \(A\) and all complex polynomials \(p\),_ \[\|p(A)\|\leq 2\sup_{z\in W(A)}|p(z)|,\] _where \(\|\cdot\|\) denotes the standard operator norm for matrices._ Crouzeix [9, 10] showed that for each polynomial \(p\), \(\|p(A)\|\leq 11.08\sup_{z\in W(A)}|p(z)|\) and the bound was later improved to \(1+\sqrt{2}\) by Crouzeix and Palencia [12]. Using the arguments developed by Crouzeix and Palencia, in [4] Caldwell, Greenbaum and Li improved the bound to \(2\) in some special cases. Since the operator norm and numerical range of a square matrix \(A\) are invariant under unitary equivalence, Conjecture 1.1 holds for \(A\) if and only if it holds for all matrices in the unitary equivalence class of \(A\). This is a long-exploited fact that we will use throughout. Consequently, it can be shown that all normal matrices satisfy the conjecture. Further, the conjecture has been shown to hold for several other classes of matrices including \(2\times 2\) matrices [9] (by Crouzeix), certain tridiagonal matrices [18] (by Glader, Kurula and Lindstrom), certain contractions with eigenvalues that are sufficiently well-separated [3] (by Bickel, Gorkin, Greenbaum, Ransford, Schwenninger and Wegert) and numerous other specialised cases.
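The inequality in Conjecture 1.1 is easy to probe numerically. The sketch below (ours, not from the paper; it assumes NumPy, and all names are illustrative) samples boundary points of \(W(A)\) — for each angle \(t\), the top eigenvector \(v\) of the Hermitian part of \(e^{-it}A\) gives the supporting point \(\langle Av,v\rangle\) — and compares \(\|p(A)\|\) with \(\sup_{z\in W(A)}|p(z)|\):

```python
import numpy as np

def numerical_range_boundary(A, n_angles=720):
    """Sample boundary points of W(A): for each angle t, the top eigenvector
    v of the Hermitian part of exp(-1j*t)*A yields the supporting point
    <Av, v> of W(A) in the direction exp(1j*t)."""
    pts = []
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        R = np.exp(-1j * t) * A
        H = (R + R.conj().T) / 2
        _, V = np.linalg.eigh(H)        # eigenvalues in ascending order
        v = V[:, -1]                    # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)
    return np.array(pts)

def crouzeix_ratio(A, coeffs):
    """Return ||p(A)|| / sup_{z in W(A)} |p(z)| for p given by coefficients
    in numpy.polyval order (highest degree first); the conjecture asserts
    that this ratio never exceeds 2."""
    n = A.shape[0]
    pA = np.zeros_like(A, dtype=complex)
    for c in coeffs:                    # Horner's rule with matrix products
        pA = pA @ A + c * np.eye(n)
    # p is analytic, so its maximum over the compact convex set W(A) is
    # attained on the boundary, which we approximate by sampling
    sup_p = np.abs(np.polyval(coeffs, numerical_range_boundary(A))).max()
    return np.linalg.norm(pA, 2) / sup_p

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
print(crouzeix_ratio(A, [1.0, 0.0, -2.0, 1.0]))  # p(z) = z^3 - 2z + 1
```

Random trials of this kind are the sort of numerical evidence referred to next.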
Numerical investigations have also strongly supported the conjecture (see, for example, the work of Greenbaum and Overton [20] and the references therein) and there appear to be no obvious questions left to consider involving numerical analysis. Many results on the conjecture come from works in the more general setting of \(K\)-spectral sets, and other related questions involving intersections of \(K\)-spectral sets have also generated research interest (see for example the work of Badea, Beckermann, and Crouzeix [1]). Throughout, all matrices are assumed to be square. We denote by \(\text{conv}\{\mathcal{X}\}\) the convex hull of the set \(\mathcal{X}\), by \(W(A)^{o}\) the interior of \(W(A)\), by \(I_{d}\) the identity matrix, and by \(\mathbb{D}\) the open unit disk in \(\mathbb{C}\). This paper is organised as follows. Section 2 first gives a background on the techniques which have led to partial proofs of Conjecture 1.1. It then provides the preliminary results that we need for our study, and presents recent relevant new results on the conjecture. In Section 3, starting from a matrix \(A\) which is assumed to satisfy Conjecture 1.1, we construct several new classes of matrices for which the conjecture holds. We then develop a novel method for proving the conjecture. With this new approach, we show that if the conjecture holds for all \(N\times N\) matrices then it holds for \(n\times n\) matrices where \(n<N\), and we prove several results relating cyclicity and Conjecture 1.1. Section 4 discusses the conjecture for symmetric matrices (i.e. matrices which are self transpose). In particular, we show that the conjecture holds for \(3\times 3\) symmetric matrices if and only if it holds for \(3\times 3\) truncated Toeplitz operators, and finally we also show that truncated Toeplitz operators serve as model operators for numerical ranges of \(3\times 3\) matrices. ## 2. Previous work on Crouzeix's conjecture A commonly used approach to show certain matrices satisfy Conjecture 1.1 is to exploit von Neumann's inequality. For a contractive matrix \(A\), von Neumann's inequality states that for an analytic function \(g:\mathbb{D}\to\mathbb{C}\) which is continuous up to the boundary \(\partial\mathbb{D}\), \[\|g(A)\|\leq\sup_{z\in\overline{\mathbb{D}}}|g(z)|.\] Let \(\varphi:W(A)^{o}\to\mathbb{D}\) be a bijective conformal mapping extended to a homeomorphism of \(W(A)\) onto \(\overline{\mathbb{D}}\) and let \(X\) be an invertible matrix of the same dimension as \(A\) such that \(X\varphi(A)X^{-1}\) is a contraction and \[\kappa(X)=\|X\|\cdot\left\|X^{-1}\right\|\leq 2.\] Then, by von Neumann's inequality, for any polynomial \(p\), \[\|p(A)\| =\left\|X^{-1}\left(p\circ\varphi^{-1}\left(X\varphi(A)X^{-1} \right)\right)X\right\|\] \[\leq 2\max_{z\in\overline{\mathbb{D}}}|p\circ\varphi^{-1}(z)|=2 \max_{z\in W(A)}|p(z)|.\] Thus, \(A\) satisfies Conjecture 1.1. The difficulty of the above approach is finding \(\varphi\) and \(X\) with the requirements specified above. Nonetheless, this approach was used in [18, 29] to prove Conjecture 1.1 for certain \(3\times 3\) matrices. A similar yet alternative approach to proving Conjecture 1.1 for a diagonalisable matrix \(A\) is to write \(A=X\Lambda X^{-1}\) where \(\Lambda\) is a diagonal matrix with the eigenvalues of \(A\) on the diagonal and \(X\) is an invertible matrix such that \(\|X\|\|X^{-1}\|\leqslant 2\).
Then for each polynomial \(p\), we have \(\|p(A)\|=\|Xp(\Lambda)X^{-1}\|\) and \[\|Xp(\Lambda)X^{-1}\|\leqslant\|X\|\|X^{-1}\|\|p(\Lambda)\|\leqslant 2\|p( \Lambda)\|\leqslant 2\sup_{z\in\sigma(A)}|p(z)|\leqslant 2\sup_{z\in W(A)}|p(z)|.\] Thus Conjecture 1.1 holds for \(A\). This approach was used in [3]. We wish to highlight that this approach can be generalised to the case where \(A\) is similar to block diagonal matrices. Assume \(A=XBX^{-1}\) where \(B=\operatorname{diag}(B_{1},B_{2},\ldots B_{n})\) and each \(B_{i}\) is a square matrix such that for all polynomials \(p\), \[\|p(B_{i})\|\leqslant b\sup_{z\in W(A)}|p(z)|\] for some \(b\leqslant 2\) and where \(\|X^{-1}\|\|X\|b\leqslant 2\). Then \(\|p(A)\|=\|Xp(B)X^{-1}\|\) and \[\|Xp(B)X^{-1}\|\leqslant\|X\|\|X^{-1}\|\|p(B)\|=\|X\|\|X^{-1}\|\max_{i}\|p(B_{ i})\|\leqslant 2\sup_{z\in W(A)}|p(z)|.\] Thus, under these assumptions Conjecture 1.1 will hold for \(A\). For a matrix \(A\), denote by \(\mathbf{A}(W(A))\) the algebra of functions which are analytic on the interior \(W(A)^{o}\) of \(W(A)\) and continuous up to the boundary of \(W(A)\). There are at least five slightly different equivalent formulations of Conjecture 1.1 appearing in the literature. For the sake of completeness we show that they are all equivalent with the following proposition. **Proposition 2.1**.: _Let \(\Omega\) be an open convex set with smooth boundary such that \(W(A)\subseteq\Omega\). The following are equivalent:_ (a) _For all complex polynomials_ \(p\)_,_ \[\|p(A)\|\leq 2\sup_{z\in W(A)}|p(z)|.\] (b) _For all functions_ \(f\)_, analytic on an open neighbourhood of_ \(W(A)\)_,_ \[\|f(A)\|\leq 2\sup_{z\in W(A)}|f(z)|.\] (c) _For all functions_ \(f\)_, analytic on an open neighbourhood of_ \(W(A)\) _such that_ \(\sup_{z\in W(A)}|f(z)|=1\)_,_ \[\|f(A)\|\leq 2.\] (d) _For all functions_ \(f\)_, analytic on_ \(W(A)^{o}\) _such that_ \(\sup_{z\in W(A)}|f(z)|=1\)_,_ \[\|f(A)\|\leq 2.\] (e) _For all functions_ \(f\in\mathbf{A}(W(A))\)_,_ \[\|f(A)\|\leq 2\sup_{z\in W(A)}|f(z)|.\] Proof.: To show \((a)\) is equivalent to \((b)\) note that any \(f\) which is analytic on an open neighbourhood of \(W(A)\) will also lie in \(\mathbf{A}(W(A))\). The result now follows from Mergelyan's theorem, which shows polynomials are dense in \(\mathbf{A}(W(A))\). The implications \((b)\implies(c)\) and \((c)\implies(d)\) are immediate. The implication \((d)\implies(a)\) follows from the following standard rescaling argument. For a polynomial \(p\), let \[p^{\prime}(z)=\frac{p(z)}{\sup_{w\in W(A)}|p(w)|}.\] Then \(p^{\prime}\) satisfies the hypothesis of \((d)\), so \(\|p^{\prime}(A)\|\leqslant 2\), i.e., \(\|p(A)\|\leq 2\sup_{z\in W(A)}|p(z)|\). The equivalence of \((a)\) and \((e)\) follows from an identical argument to the argument showing \((a)\) and \((b)\) are equivalent. The following propositions have also been observed previously, but as we have never seen a proof written down of the statements and our later working heavily relies on these results, we include their proofs here. **Proposition 2.2**.: _Let \(A\) satisfy Conjecture 1.1, let \(\lambda,\mu\in\mathbb{C}\) and denote the transpose of \(A\) by \(A^{T}\). Then_ (a) \(B=\mu A+\lambda I_{d}\) _satisfies the conjecture,_ (b) \(A^{T}\) _satisfies the conjecture._ Proof.: (a) For any polynomial \(p(z)\), define \(q(z)=p(\mu z+\lambda)\), which means \(q(A)=p(\mu A+\lambda I_{d})=p(B)\). Since Conjecture 1.1 holds for \(A\), \(\|q(A)\|\leq 2\sup_{z\in W(A)}|q(z)|\) and so \(\|p(B)\|\leq 2\sup_{z\in W(A)}|p(\mu z+\lambda)|\).
Since \(W(B)=\mu W(A)+\lambda\), this implies \(\|p(B)\|\leq 2\sup_{z\in W(B)}|p(z)|\). (b) A short computation (see [22, Chapter 1]) shows \(W(A)=W(A^{T})\), and as taking matrix transposes is norm invariant, for each polynomial \(p\) \[\|p(A^{T})\|=\|p(A)^{T}\|=\|p(A)\|\leqslant 2\sup_{z\in W(A)}|p(z)|=2\sup_{z\in W (A^{T})}|p(z)|.\] For matrices \(A_{1},A_{2},\ldots,A_{n}\) of dimensions \(a_{1}\times a_{1},a_{2}\times a_{2},\ldots,a_{n}\times a_{n}\) respectively, let the \((a_{1}+a_{2}+\cdots+a_{n})\times(a_{1}+a_{2}+\cdots+a_{n})\) matrix \(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{n}\) be defined by \[A_{1}\oplus A_{2}\oplus\cdots\oplus A_{n}=\begin{pmatrix}A_{1}&0&0&\cdots&0\\ 0&A_{2}&0&\cdots&0\\ 0&0&A_{3}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\cdots&A_{n}\end{pmatrix}, \tag{2.1}\] where each \(0\) denotes a block matrix of appropriate size. If a matrix \(M\) is unitarily equivalent to a matrix of the form (2.1) for \(n>1\), then we say \(M\) is _reducible_. Otherwise we say \(M\) is _irreducible_. **Proposition 2.3**.: _If matrices \(A_{1},A_{2},\ldots,A_{n}\) satisfy Conjecture 1.1 then so does the matrix \(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{n}\)._ Proof.: If \(A_{1},A_{2},\ldots,A_{n}\) satisfy Conjecture 1.1 then for any polynomial \(p\), \[\|p(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{n})\|\] \[=\|p(A_{1})\oplus p(A_{2})\oplus\cdots\oplus p(A_{n})\|\] \[=\max\{\|p(A_{1})\|,\|p(A_{2})\|,\ldots,\|p(A_{n})\|\}\] \[\leqslant 2\max\left\{\sup_{z\in W(A_{1})}|p(z)|,\sup_{z\in W(A_{2 })}|p(z)|,\ldots,\sup_{z\in W(A_{n})}|p(z)|\right\}\] \[\leqslant 2\sup_{z\in W(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{n})} |p(z)|,\] where the final inequality holds because \[\bigcup_{i=1}^{n}W(A_{i})\subseteq\text{conv}\{W(A_{1}),W(A_{2}),\ldots,W(A_{ n})\}=W(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{n}),\] and so \(A_{1}\oplus A_{2}\oplus\cdots\oplus A_{n}\) also satisfies the conjecture. The recent thesis [24] contains many new results (some of which are also contained in a paper by the same author [23]) centred on norm attaining vectors of \(\|f(A)\|\), where \(f\in\mathbf{A}(W(A))\). By Proposition 2.1 we see that Conjecture 1.1 holds if and only if \[\sup_{f\in\mathbf{A}(W(A))}\frac{\|f(A)\|}{\max_{z\in W(A)}|f(z)|}\leqslant 2. \tag{2.2}\] It was first observed in [9, Theorem 2.1] that there are functions \(\widehat{f}\) which attain the supremum in (2.2) and that such functions are of the form \(\mu B\circ\varphi\), where \(\mu\in\mathbb{C}\), \(\varphi\) is any conformal mapping from \(W(A)^{o}\) to \(\mathbb{D}\) and \[B(z)=\exp(i\gamma)\prod_{j=1}^{m}\frac{z-\alpha_{j}}{1-\bar{\alpha}_{j}z}, \quad m\leq n-1,\quad|\alpha_{j}|<1\] is a Blaschke product of degree \(m\). Such functions are called _extremal functions_ for \(A\) and we will denote the extremal function for \(A\) by \(\widehat{f}\) throughout. If one assumes by linearity that \(\max_{z\in W(A)}|\widehat{f}(z)|=1\), then \(\mu=1\), so that \(\widehat{f}\) is a function of the form \(B\circ\varphi\). We summarise some results in [24, Section 2.2] related to the uniqueness of extremal functions in the following theorem. **Theorem 2.4**.: 1. _For an_ \(n\times n\) _nilpotent Jordan block_ \[J=\begin{pmatrix}0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ 0&0&0&\ddots&0\\ 0&0&0&\cdots&1\\ 0&0&0&\cdots&0\end{pmatrix},\] _the unique (up to scalar multiplication) extremal function for_ \(J\) _is_ \(\widehat{f}(z)=z^{n-1}\)_._ 2.
_The extremal function for a matrix_ \[\begin{pmatrix}0&1&0\\ 0&0&\frac{1}{\sqrt{3}}\\ 0&0&0\end{pmatrix}\] _is not unique._ Through the use of extremal functions, an alternate proof that certain \(2\times 2\) matrices satisfy Conjecture 1.1 is presented in [24, Section 2.3.1]. The following result is [2, Theorem 2.1], and although we don't later use this result, we expect it to prove complementary to our study. In the following, for a matrix \(A\), we denote by \(r(A):=\sup_{z\in W(A)}|z|\) the numerical radius of \(A\). **Theorem 2.5**.: _Conjecture 1.1 holds for \(A\) if and only if for each polynomial \(p\), we have \(r(p(A))\leqslant\frac{5}{4}\sup_{z\in W(A)}|p(z)|\)._ ## 3. New classes of matrices which satisfy Conjecture 1.1 This section presents new classes of matrices for which Conjecture 1.1 is satisfied. First, for \(u,v\in\mathbb{C}^{n}\), let \(u\otimes v:\mathbb{C}^{n}\to\mathbb{C}^{n}\) be the rank one map defined by \(u\otimes v(x)=\langle x,v\rangle u\). **Proposition 3.1**.: _Every rank one matrix satisfies Conjecture 1.1._ Proof.: Every \(n\times n\) rank one matrix \(A\) is of the form \(A=u\otimes v\) for some \(u,v\in\mathbb{C}^{n}\). Without loss of generality we may assume \(u\notin\operatorname{span}v\), since if \(u\in\operatorname{span}v\) then \(u\otimes v\) is a scalar multiple of an orthogonal projection, which is normal and hence satisfies Conjecture 1.1. Let \(\mathcal{X}=\operatorname{span}\{u,v\}\). Then \(A\mathcal{X}\subseteq\mathcal{X}\) and \(A\mathcal{X}^{\perp}=\{0\}\). Let \(x_{1},x_{2}\) be an orthonormal basis for \(\mathcal{X}\) and let \(x_{3},x_{4},...,x_{n}\) be an orthonormal basis for \(\mathcal{X}^{\perp}\); then with respect to the orthonormal basis \(x_{1},x_{2},...,x_{n}\), \(A\) has the matrix representation \[A_{1}\oplus Z=\begin{pmatrix}A_{1}&0\\ 0&Z\end{pmatrix},\] where \(A_{1}\) is a \(2\times 2\) matrix and where \(Z\) denotes the \((n-2)\times(n-2)\) matrix with each entry equal to \(0\). By [9, Theorem 1.1] every \(2\times 2\) matrix satisfies Conjecture 1.1, and it is readily verifiable that \(Z\) satisfies Conjecture 1.1. So by Proposition 2.3, \(A_{1}\oplus Z\) satisfies Conjecture 1.1, and thus by unitary equivalence, so does \(A\). Denote the \(n\times n\) matrix with \(1\) in the \((i,j)\) entry and \(0\) in all other entries by \(e_{ij}\). As \(\lambda e_{ij}\) is rank one for each \(\lambda\in\mathbb{C}\), by the proposition above \(\lambda e_{ij}\) satisfies Conjecture 1.1. This leads us to pose the following question. **Question 3.2**.: _Is the set of all \(n\times n\) matrices which satisfy Conjecture 1.1 closed under addition?_ Since the matrices \(e_{ij}\) for \(i,j=1,2,\ldots,n\) form a basis for the space of all \(n\times n\) matrices, if one can provide a positive solution to the question above, this would prove Conjecture 1.1. For matrices \(A_{1},A_{2},\ldots,A_{n}\) of dimensions \(a_{1}\times a_{1},a_{2}\times a_{2},\ldots,a_{n}\times a_{n}\), respectively, the tensor product of \(A_{1},A_{2},\ldots,A_{n}\), \[A_{1}\otimes A_{2}\otimes\cdots\otimes A_{n}:\mathbb{C}^{a_{1}}\otimes \mathbb{C}^{a_{2}}\otimes\cdots\otimes\mathbb{C}^{a_{n}}\to\mathbb{C}^{a_{1}} \otimes\mathbb{C}^{a_{2}}\otimes\cdots\otimes\mathbb{C}^{a_{n}},\] is the unique linear map defined by \[x_{1}\otimes x_{2}\otimes\cdots\otimes x_{n}\mapsto A_{1}x_{1}\otimes A_{2}x_ {2}\otimes\cdots\otimes A_{n}x_{n}.\] Further details of the construction of the tensor product of operators may be found in [15].
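In coordinates the tensor product of matrices is realised by the Kronecker product; the following minimal sketch (ours, assuming NumPy) checks the defining identity above on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((3, 3))
x, y = rng.standard_normal(2), rng.standard_normal(3)

# (A tensor B)(x tensor y) = (Ax) tensor (By), with the tensor product
# realised coordinate-wise by np.kron
lhs = np.kron(A, B) @ np.kron(x, y)
rhs = np.kron(A @ x, B @ y)
print(np.allclose(lhs, rhs))  # True
```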
**Proposition 3.3**.: _Let \(A\) be an \(a\times a\) matrix which satisfies Conjecture 1.1 and suppose that \(N_{1},N_{2},\ldots,N_{n}\) are normal matrices of dimension \(a_{1}\times a_{1},a_{2}\times a_{2},\ldots,a_{n}\times a_{n}\), respectively. Then_ 1. \(A^{*}\) _(the adjoint of_ \(A\)_) satisfies Conjecture_ 1.1_,_ 2. \(N_{1}\otimes N_{2}\otimes\cdots\otimes A\otimes\cdots\otimes N_{n}\) _satisfies Conjecture_ 1.1_._ Proof.: For a polynomial \(p\), we write \(p(z)=\sum_{k=0}^{N}p_{k}z^{k}\), where \(p_{k}\in\mathbb{C}\). (a) Note that \(p(A^{*})=(\tilde{p}(A))^{*}\), where \(\tilde{p}(z)=\sum_{k=0}^{N}\overline{p_{k}}z^{k}\). So as Conjecture 1.1 holds for the given matrix \(A\), \[\|p(A^{*})\| =\|(\tilde{p}(A))^{*}\|=\|\tilde{p}(A)\|\leqslant 2\sup_{z\in W(A)}| \tilde{p}(z)|\] \[\underbrace{=}_{*}2\sup_{z\in W(A^{*})}|\tilde{p}(\overline{z})|= 2\sup_{z\in W(A^{*})}|\overline{\tilde{p}(\overline{z})}|=2\sup_{z\in W(A^{*} )}|p(z)|,\] where the starred equality holds because \(z\in W(A)\) if and only if \(\overline{z}\in W(A^{*})\). (b) First consider the case when \(A\) lies in the last entry of the tensor product, i.e., \(N_{1}\otimes\cdots\otimes N_{n}\otimes A\). For each \(i=1,2,\ldots n\), we have \(U_{i}^{*}N_{i}U_{i}=D_{i}\) for some unitary \(U_{i}\) and diagonal \(D_{i}\). It is known (see, for example, [15, Section 3]) that \[(U_{1}\otimes\cdots\otimes U_{n}\otimes I_{d})^{*}=U_{1}^{*}\otimes\cdots \otimes U_{n}^{*}\otimes I_{d}.\] Thus \(U_{1}\otimes\cdots\otimes U_{n}\otimes I_{d}\) is unitary and we have the unitary equivalence \[(U_{1}\otimes\cdots\otimes U_{n}\otimes I_{d})^{*}N_{1}\otimes \cdots\otimes N_{n}\otimes A(U_{1}\otimes\cdots\otimes U_{n}\otimes I_{d})\] \[=D_{1}\otimes\cdots\otimes D_{n}\otimes A.\] Using the Kronecker product representation of \(D_{n}\otimes A\) we see \(D_{n}\otimes A\) is a direct sum of scalar multiples of \(A\). By the same argument, \(D_{n-1}\otimes(D_{n}\otimes A)\) is a direct sum of scalar multiples of \(A\) and so recursively using associativity of the tensor product we see that \(D_{1}\otimes\cdots\otimes D_{n}\otimes A\) will be a direct sum of scalar multiples of \(A\). Thus, by Propositions 2.2 and 2.3, \(D_{1}\otimes\cdots\otimes D_{n}\otimes A\) will satisfy Conjecture 1.1 and by unitary equivalence, so will \(N_{1}\otimes\cdots\otimes N_{n}\otimes A\). We now prove the general case, when \(A\) lies in the \(j\)'th entry of the tensor products, where \(j\in\{1,2,\ldots n\}\). 
Let \[\sigma_{j,n+1}:\mathbb{C}^{a_{1}}\otimes\mathbb{C}^{a_{2}}\otimes\ldots \otimes\mathbb{C}^{a_{n}}\otimes\mathbb{C}^{a}\rightarrow\underbrace{\mathbb{ C}^{a_{1}}\otimes\mathbb{C}^{a_{2}}\otimes\ldots\otimes\mathbb{C}^{a} \otimes\ldots\otimes\mathbb{C}^{a_{n}}}_{\mathbb{C}^{a}\text{ lies in the $j$'th entry}}\] be the linear map which permutes the \(j\)'th and the \(n+1\)'th entry, i.e., for \(x_{1}\otimes x_{2}\otimes\ldots\otimes x_{n}\otimes x_{n+1}\in\mathbb{C}^{a_ {1}}\otimes\mathbb{C}^{a_{2}}\otimes\ldots\otimes\mathbb{C}^{a_{n}}\otimes \mathbb{C}^{a}\), we have \[\sigma_{j,n+1}(x_{1}\otimes x_{2}\otimes\ldots\otimes x_{n}\otimes x_{n+1})= \underbrace{x_{1}\otimes x_{2}\otimes\ldots\otimes x_{n+1}\otimes\ldots \otimes x_{n}}_{x_{n+1}\text{ lies in the $j$'th entry}}.\] Then \(\sigma_{j,n+1}\) is a unitary map (see [15, Section 2]) and a computation reveals \[\sigma_{j,n+1}^{*}(N_{1}\otimes\cdots\otimes N_{j-1}\otimes A\otimes N_{j} \otimes\cdots\otimes N_{n})\sigma_{j,n+1}=N_{1}\otimes\cdots\otimes N_{n} \otimes A.\] As we know Conjecture 1.1 holds for \(N_{1}\otimes\cdots\otimes N_{n}\otimes A\), by unitary equivalence it must also hold for \(N_{1}\otimes\cdots\otimes N_{j-1}\otimes A\otimes N_{j}\otimes\cdots\otimes N _{n}\). We now present proofs of Conjecture 1.1 for several classes of matrices. As we expect the methodology of our approach may lead to a proof of Conjecture 1.1 for new classes of matrices we have not considered, we present the core idea as a theorem and discuss each of the classes of matrices which we show to satisfy Conjecture 1.1 as corollaries to this theorem. **Theorem 3.4**.: _Let \(A\) be an \(n\times n\) matrix._ (a) _If for each polynomial_ \(p\) _there exists a matrix_ \(B\) _(not necessarily of the same dimension as_ \(A\)_) such that:_ (i) \(B\) _satisfies Conjecture 1.1,_ (ii) \(\|p(A)\|\leqslant\|p(B)\|\)_,_ (iii) \(W(B)\subseteq W(A)\)_,_ _then \(A\) satisfies Conjecture 1.1._ (b) _If for an extremal function,_ \(\widehat{f}\)_, there exists a matrix_ \(B\) _(not necessarily of the same dimension as_ \(A\)_) such that:_ (i) \(B\) _satisfies Conjecture 1.1,_ (ii) \(\|\widehat{f}(A)\|\leqslant\|\widehat{f}(B)\|\)_,_ (iii) \(W(B)\subseteq W(A)\)_,_ _then_ \(A\) _satisfies Conjecture 1.1._ Proof.: We first prove (a). Let \(p\) be a polynomial and let \(B\) be such that the assumptions of part (a) of the theorem hold; then \[\|p(A)\|\leqslant\|p(B)\|\leqslant 2\sup_{z\in W(B)}|p(z)|\leqslant 2\sup_{z \in W(A)}|p(z)|.\] To prove part (b), similarly observe that \[\sup_{f\in\mathbf{A}(W(A))}\frac{\|f(A)\|}{\max_{z\in W(A)}|f(z)|}=\frac{\| \widehat{f}(A)\|}{\max_{z\in W(A)}|\widehat{f}(z)|}\leqslant\frac{\|\widehat{ f}(B)\|}{\max_{z\in W(B)}|\widehat{f}(z)|}\leqslant 2,\] so \(A\) satisfies the conjecture. Previously matrices have been identified with the property that whenever \(B\) is a matrix that satisfies \(W(B)\subseteq W(A)\), one can write down an expression for \(B\) [7, 25, 26]. Unfortunately, the only such \(A\) which have been identified in this context are \(2\times 2\) matrices and reducible \(3\times 3\) matrices, and for both of these forms of matrices Conjecture 1.1 is known to hold. **Corollary 3.5**.: _If Conjecture 1.1 holds for all \(N\times N\) matrices then it holds for \(n\times n\) matrices where \(n<N\)._ Proof.: It suffices to prove the statement in the case that \(n+1=N\), as then inductively it will follow that the statement holds for all \(n<N\). Let \(A\) be an \(n\times n\) matrix and let \(p\) be a polynomial.
If \[\|p(A)\|<\inf_{z\in W(A)}|p(z)|,\] then \(\|p(A)\|\leqslant 2\sup_{z\in W(A)}|p(z)|\). So we can assume without loss of generality that there exists a \(d\in W(A)\) such that \(|p(d)|\leqslant\|p(A)\|\). Set \(B=\begin{pmatrix}A&0\\ 0&d\end{pmatrix}.\) By assumption Conjecture 1.1 holds for all \((n+1)\times(n+1)\) matrices, so in particular it holds for \(B\). Also, \(\|p(B)\|=\|p(A)\|\) and \(W(B)=\text{conv}\{W(A),d\}=W(A)\), so by the previous theorem, Conjecture 1.1 holds for \(A\). For an \(n\times n\) matrix \(A\) and a column vector \(y\in\mathbb{C}^{n}\), the cyclic subspace generated by \(y\) is \(\operatorname{span}\{y,Ay,A^{2}y,\ldots\}\). We say that \(y\) is cyclic for \(A\) if the cyclic subspace generated by \(y\) is \(\mathbb{C}^{n}\). We say \(A\) is cyclic if there exists a \(y\in\mathbb{C}^{n}\) which is cyclic for \(A\). Studying cyclicity properties of matrices is an active area of research and the following corollary establishes a link between Conjecture 1.1 and cyclicity. For an extremal function \(\widehat{f}\) for \(A\), if \(x\) is such that \(\|\widehat{f}(A)x\|=\|\widehat{f}(A)\|\|x\|\), then \(x\) is called a corresponding _extremal vector_ for \(A\). **Corollary 3.6**.: _Let \(A\) be an \(n\times n\) matrix, let \(\mathcal{V}\subseteq\mathbb{C}^{n}\) be a subspace such that \(\dim\mathcal{V}\leqslant 2\) and let \(A_{\mathcal{V}}:=P_{\mathcal{V}}A_{|\mathcal{V}}:\mathcal{V}\to\mathcal{V}\) (where \(P_{\mathcal{V}}\) denotes the orthogonal projection onto \(\mathcal{V}\)) be the compression of \(A\) to \(\mathcal{V}\)._ (a) _If_ \(\|\widehat{f}(A)\|\leqslant\|\widehat{f}(A_{\mathcal{V}})\|\)_, then_ \(A\) _satisfies Conjecture 1.1._ (b) _If the cyclic subspace generated by an extremal vector for_ \(A\)_,_ \(x\)_, is one or two dimensional, then_ \(A\) _satisfies Conjecture 1.1._ (c) _If_ \(A\) _is a_ \(3\times 3\) _matrix and_ \(x\) _is not cyclic for_ \(A\)_, then_ \(A\) _satisfies Conjecture 1.1. In particular_ \(3\times 3\) _non-cyclic matrices satisfy Conjecture 1.1._ Proof.: \((a)\) As \(A_{\mathcal{V}}\) is unitarily equivalent to a \(1\times 1\) matrix (i.e. a constant) or a \(2\times 2\) matrix, and both of these satisfy Conjecture 1.1 (see [9]), \(A_{\mathcal{V}}\) satisfies Conjecture 1.1, and it is readily checked that \(W(A_{\mathcal{V}})\subseteq W(A)\). So the assumptions of part \((b)\) of Theorem 3.4 are satisfied with \(B=A_{\mathcal{V}}\). \((b)\) Set \(\mathcal{V}\) to be the cyclic subspace generated by \(x\). Since \(A(\mathcal{V})\subseteq\mathcal{V}\), we have \[\|\widehat{f}(A)\|=\|\widehat{f}(A)_{|\mathcal{V}}\|=\|\widehat{f}(A_{ \mathcal{V}})\|.\] Thus by part \((a)\), the matrix \(A\) satisfies Conjecture 1.1. \((c)\) If \(A\) is a \(3\times 3\) matrix and \(x\) is not cyclic for \(A\), then clearly the cyclic subspace generated by \(x\) is one or two dimensional. Thus the result follows from part \((b)\). **Remark 3.7**.: _In view of the two formulations of Theorem 3.4, one could restate the corollary above so that if for each polynomial \(p\), there exists a two dimensional \(\mathcal{V}\) (which can depend on \(p\)) such that \(\|p(A)\|\leqslant\|p(A_{\mathcal{V}})\|\), then \(A\) satisfies Conjecture 1.1. This formulation will be more applicable when one does not know an extremal function for \(A\)._ In the article [11], Crouzeix gives a detailed analysis of \(3\times 3\) nilpotent matrices and ultimately proves that \(3\times 3\) nilpotent matrices satisfy Conjecture 1.1.
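Since cyclicity is the operative hypothesis in Corollary 3.6, it is worth noting that it is cheap to test numerically: \(y\) is cyclic for an \(n\times n\) matrix \(A\) exactly when the Krylov matrix \([y\;Ay\;\cdots\;A^{n-1}y]\) has rank \(n\). A hedged sketch (ours, assuming NumPy), tried on the \(3\times 3\) nilpotent Jordan block:

```python
import numpy as np

def is_cyclic(A, y, tol=1e-10):
    """y is cyclic for the n x n matrix A iff the Krylov matrix
    [y, Ay, ..., A^{n-1} y] has full rank n."""
    n = A.shape[0]
    K = np.column_stack([np.linalg.matrix_power(A, k) @ y for k in range(n)])
    return np.linalg.matrix_rank(K, tol=tol) == n

J = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])      # 3 x 3 nilpotent Jordan block
print(is_cyclic(J, np.array([0., 0., 1.])))  # True:  e3, J e3, J^2 e3 span C^3
print(is_cyclic(J, np.array([1., 0., 0.])))  # False: J e1 = 0
```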
For a 2-nilpotent matrix, as the cyclic subspace generated by any vector is one or two dimensional, we immediately deduce the following corollary to part \((b)\) of Corollary 3.6. **Corollary 3.8**.: _Let \(A\) be an \(n\times n\) matrix such that \(A^{2}=0\). Then the matrix \(A\) satisfies Conjecture 1.1._ **Remark 3.9**.: _Corollary 3.8 provides a swift alternative algebraic proof of [24, Theorem 6], which proves Conjecture 1.1 holds for 2-nilpotent matrices. By combining [30] with [9] one can show that all matrices with minimal polynomial of degree \(2\) satisfy Conjecture 1.1, which provides another alternative proof of Corollary 3.8._ Corollary 3.6 may lead one to consider if every \(3\times 3\) matrix has an extremal function with a corresponding extremal vector which is non-cyclic (if this were true, this would prove Conjecture 1.1 in the positive for \(3\times 3\) matrices). However, the following example shows this is not the case. **Example 3.10**.: _Let_ \[J=\begin{pmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{pmatrix}\] _be a nilpotent Jordan block. Then by Theorem 2.4, the (unique up to scalar multiplication) extremal function is \(\widehat{f}(z)=2z^{2}\). The (unique up to scalar multiplication) extremal vector for_ \[2J^{2}=\begin{pmatrix}0&0&2\\ 0&0&0\\ 0&0&0\end{pmatrix}\] _is \(\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}\), which is cyclic for \(J\). Thus the only extremal vectors for \(J\) are cyclic._ Nonetheless there are examples of cyclic matrices for which we can still apply Corollary 3.6, as the following example shows. **Example 3.11**.: _Let_ \[A=\begin{pmatrix}0&1&0\\ 0&0&1-t\\ 0&0&0\end{pmatrix},\] _where \(1-\frac{1}{\sqrt{3}}\leqslant t\leqslant\sqrt{3}-1\). It is readily checked that \(\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}\) is a cyclic vector for \(A\). As highlighted in [24, Section 2.2.2], the extremal function for \(A\) is \(\widehat{f}(z)=z\), and the corresponding extremal vector for \(\widehat{f}(A)=A\) is \(\begin{pmatrix}0\\ 1\\ 0\end{pmatrix}\), which is not cyclic. Hence by part \((b)\) of Corollary 3.6, \(A\) satisfies Conjecture 1.1._ Our workings lead us to ask the following questions. **Question 3.12**.: _For which matrices \(A\) does there exist an extremal function for \(A\), \(\widehat{f}\), such that \(\|\widehat{f}(A)\|\leqslant\|\widehat{f}(A_{\mathcal{V}})\|\) for some two dimensional subspace \(\mathcal{V}\)?_ **Question 3.13**.: _Which matrices \(A\) have the property that for each polynomial \(p\) one can find a subspace \(\mathcal{V}\) such that \(\dim\mathcal{V}\leqslant 2\) and \(\|p(A)\|\leqslant\|p(A_{\mathcal{V}})\|\)?_ Any matrices with either of the properties above will satisfy Conjecture 1.1. ## 4. Symmetric matrices and truncated Toeplitz operators Complex symmetric operators are infinite dimensional generalisations of symmetric matrices. The past fifteen years have seen an explosion of research interest in complex symmetric operators. In pursuit of a proof of Conjecture 1.1 for symmetric matrices, in this section we investigate what role truncated Toeplitz operators play when studying the numerical ranges of symmetric matrices.
The Hardy space \(H^{2}\) consists of all analytic functions on \(\mathbb{D}\) whose Taylor coefficients are square summable, that is, \[H^{2}:=\left\{f(z)=\sum_{n\in\mathbb{N}_{0}}a_{n}z^{n}:\sum_{n\in\mathbb{N}_{0 }}|a_{n}|^{2}<\infty\right\},\] which is a Hilbert space with the inner product defined by \[\left\langle\sum_{n\in\mathbb{N}_{0}}a_{n}z^{n},\sum_{n\in\mathbb{N}_{0}}b_{n}z^{ n}\right\rangle=\sum_{n\in\mathbb{N}_{0}}a_{n}\overline{b_{n}}.\] It is well known that, for \(f\in H^{2}\), the limits \[\tilde{f}(e^{it}):=\lim_{r\to 1}f(re^{it})\] exist for almost every \(t\), and \(\tilde{f}\in L^{2}(\mathbb{T})\) (here \(\mathbb{T}=\partial\mathbb{D}\) denotes the unit circle). If we set \(\widetilde{H}^{2}:=\{\tilde{f}:f\in H^{2}\}\subset L^{2}(\mathbb{T})\), then \(H^{2}\) and \(\widetilde{H}^{2}\) are isometrically isomorphic, and \(\widetilde{H}^{2}=\{f\in L^{2}(\mathbb{T}):f_{n}=0\text{ for }n<0\}\), where \(f_{n}\) denotes the \(n\)th Fourier coefficient of \(f\). We refer the reader to [13, 27] for a detailed background on the Hardy space. We say a function \(\theta\in H^{2}\) is _inner_ if \(|\theta|=1\) a.e. on \(\mathbb{T}\). For an inner function \(\theta\), we define the _model space_, \(K_{\theta}^{2}\), by \(K_{\theta}^{2}=(\theta H^{2})^{\perp}\cap H^{2}\). For example, if \(\theta(z)=z^{n}\), then \(K_{\theta}^{2}=\text{span}\{1,z,...,z^{n-1}\}\). If \(\theta(z)=\prod_{i=1}^{n}\frac{z-a_{i}}{1-\overline{a_{i}}z}\) for distinct \(a_{1},...,a_{n}\) lying in the unit disc, then \(K_{\theta}^{2}=\text{span}\{k_{a_{1}},...,k_{a_{n}}\}\) where \(k_{a_{i}}=\frac{1}{1-\overline{a_{i}}z}\in H^{2}\) is the reproducing kernel at \(a_{i}\). For further details on model spaces, see [14]. The _truncated Toeplitz operator_ (which we will abbreviate to TTO), \(A_{g}^{\theta}:K_{\theta}^{2}\to K_{\theta}^{2}\), having symbol \(g\in L^{\infty}(\mathbb{T})\) is defined by \[A_{g}^{\theta}(f)=P_{\theta}(gf)\] where \(P_{\theta}\) is the orthogonal projection \(L^{2}(\mathbb{T})\to K_{\theta}^{2}\). In the special case when \(\theta(z)=z^{n}\), \(A_{g}^{\theta}\) is an \(n\times n\) Toeplitz matrix. Truncated Toeplitz operators have gained considerable interest from the operator theory community in the past fifteen years. We refer the reader to [5, 17] for survey articles on the topic and to [6, 19] for articles concerning the numerical ranges of truncated Toeplitz operators. **Lemma 4.1**.: _For any \(3\times 3\) matrix \(A\), there exist inner functions \(\theta_{1},\theta_{2}\) and \(g_{1},g_{2}\in L^{\infty}(\mathbb{T})\) such that \(W(A)=W(A_{g_{1}}^{\theta_{1}}\oplus A_{g_{2}}^{\theta_{2}})\) (summands may be 0)._ Proof.: The article [21] shows that for any matrix \(A\) there exists a symmetric matrix, \(S\), of the same dimensions as \(A\) such that \(W(A)=W(S)\). If \(S\) is normal, then it is unitarily equivalent to a TTO [8, Theorem 5.6]. If \(S\) is unitarily equivalent to a direct sum of a \(2\times 2\) and \(1\times 1\) matrix, then since every \(2\times 2\) matrix is unitarily equivalent to a TTO [8, Theorem 5.2], \(S\) is unitarily equivalent to a direct sum of two TTOs. If \(S\) is irreducible, then \(S\) is unitarily equivalent to a TTO [16, Theorem 5.2]. Thus, in all cases for the matrix \(S\) we can find (up to) two TTOs \(A_{g_{1}}^{\theta_{1}},A_{g_{2}}^{\theta_{2}}\) such that \(W(S)=W(A_{g_{1}}^{\theta_{1}}\oplus A_{g_{2}}^{\theta_{2}})\). The lemma above shows that TTOs serve as a building block for the numerical ranges of \(3\times 3\) matrices.
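When \(\theta(z)=z^{n}\), the matrix of \(A_{g}^{\theta}\) with respect to the basis \(1,z,\ldots,z^{n-1}\) has \((j,k)\) entry \(\hat{g}(j-k)\), the \((j-k)\)th Fourier coefficient of the symbol. A hedged sketch (ours, assuming NumPy; the symbol is an arbitrary illustrative trigonometric polynomial):

```python
import numpy as np

def tto_matrix(g_hat, n):
    """Matrix of the TTO A_g on K^2_{z^n} in the basis 1, z, ..., z^{n-1}:
    <A_g z^k, z^j> = g_hat(j - k), so the matrix is Toeplitz."""
    return np.array([[g_hat(j - k) for k in range(n)] for j in range(n)],
                    dtype=complex)

# illustrative symbol g(e^{it}) = e^{-it} + 2 + e^{it}
g_hat = lambda m: {-1: 1.0, 0: 2.0, 1: 1.0}.get(m, 0.0)
print(tto_matrix(g_hat, 3))   # constant along diagonals, as expected
```

Such a matrix can be fed straight into the numerical range sketch given earlier.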
An open conjecture listed in [17] states that every symmetric matrix is unitarily equivalent to a direct sum of TTOs. If this conjecture were proved, one could make straightforward adaptations to the lemma above, and show that TTOs serve as a building block for the numerical ranges of all matrices. Related to this, we also have the following proposition. **Proposition 4.2**.: _The following statements are equivalent:_ (a) _Conjecture 1.1 holds for all truncated Toeplitz operators on three dimensional model spaces with analytic symbols,_ (b) _Conjecture 1.1 holds for all \(3\times 3\) symmetric matrices,_ (c) _Conjecture 1.1 holds for all truncated Toeplitz operators on three dimensional model spaces._ Proof.: We first prove \((a)\implies(b)\). By [16, Theorem 5.2], any \(3\times 3\) symmetric matrix \(S\) is unitarily equivalent to one of the following: (i) a direct sum of matrices of smaller dimensions, (ii) a rank one matrix, (iii) a TTO with an analytic symbol. As Conjecture 1.1 holds for \(2\times 2\) matrices ([9, Theorem 1.1]), if (i) holds, by Proposition 2.3, \(S\) will satisfy the conjecture. If (ii) holds, then Proposition 3.1 shows \(S\) will satisfy the conjecture. If (iii) holds, then clearly by assumption \(S\) will satisfy the conjecture. Thus in all cases \(S\) satisfies the conjecture. The implication \((b)\implies(c)\) follows from the fact that every TTO is unitarily equivalent to a symmetric matrix (see [28, Section 2] for details). The implication \((c)\implies(a)\) is immediate.
2301.09262
AttMEMO : Accelerating Transformers with Memoization on Big Memory Systems
Transformer models gain popularity because of their superior inference accuracy and inference throughput. However, the transformer is computation-intensive, causing a long inference time. The existing works on transformer inference acceleration have limitations caused by either the modification of transformer architectures or the need of specialized hardware. In this paper, we identify the opportunities of using memoization to accelerate the self-attention mechanism in transformers without the above limitations. Built upon a unique observation that there is rich similarity in attention computation across inference sequences, we build a memoization database that leverages the emerging big memory system. We introduce a novel embedding technique to find semantically similar inputs to identify computation similarity. We also introduce a series of techniques such as memory mapping and selective memoization to avoid memory copy and unnecessary overhead. We enable 22% inference-latency reduction on average (up to 68%) with negligible loss in inference accuracy.
Yuan Feng, Hyeran Jeon, Filip Blagojevic, Cyril Guyot, Qing Li, Dong Li
2023-01-23T04:24:26Z
http://arxiv.org/abs/2301.09262v2
# AttMEMO: Accelerating Transformers with Memoization ###### Abstract Transformers gain popularity because of their superior prediction accuracy and inference throughput. However, the transformer is computation-intensive, causing a long inference time. The existing work to accelerate transformer inference has limitations because of the changes to transformer architectures or the need for specialized hardware. In this paper, we identify the opportunities of using memoization to accelerate the attention mechanism in transformers without the above limitations. Built upon a unique observation that there is rich similarity in attention computation across inference sequences, we build an attention database upon the emerging big memory system. We introduce an embedding technique to find semantically similar inputs to identify computation similarity. We also introduce a series of techniques such as memory mapping and selective memoization to avoid memory copy and unnecessary overhead. We enable 21% performance improvement on average (up to 68%) with a TB-scale attention database and with negligible loss in inference accuracy. ## 1 Introduction Transformers [5, 10, 22] have recently gained fame in various computing fields such as natural language processing (NLP) and computer vision (CV). Transformers provide superior prediction accuracy and throughput through the parallelized attention mechanism [39]. To understand the meaning of a given sentence or predict the next words, earlier language models such as recurrent neural networks (RNNs) [34, 11] capture the temporal order of given inputs (e.g., the sequence of words in a sentence) by processing each input word sequentially. Transformers instead extract the relations and relative importance of individual input tokens over a whole sentence or a large chunk of words in parallel through attention computation. However, this good performance is achieved at the cost of computational intensity. The highly parallel attention mechanism takes the largest portion of the total inference time of a transformer model. Figure 1 shows the inference time breakdown between attention computation and the other operations when testing three popular transformer models - BERT, RoBERTa, and DeBERTa - with different input sequence lengths, 256 and 512. In all cases, self-attention is the most time-consuming operation, taking from 43% up to over 80% of the total inference time. The portion of self-attention computation time increases with the input sequence length in all three models. To reduce computations, several recent works [9, 43, 23, 38] proposed algorithms that exclude unimportant tokens from the computation. However, these studies achieve better performance at the cost of accuracy. The accuracy drop is significant, especially in complex tasks that require full contextual information, such as conversation generation and image generation. Furthermore, they require a specialized hardware accelerator to achieve the expected performance enhancement. Other works [8, 48, 17] proposed analytical models that exploit the architectural variances of diverse transformer models and introduced sparsity into self-attention computations to reduce the computation cost. Note that the self-attention computation has \(\mathcal{O}(L^{2})\) asymptotic time complexity to process a sequence with length \(L\). Thus, the attention computation cost increases quadratically as the input document length increases.
To mitigate the scalability problem, the aforementioned studies leveraged sparsity-based optimizations by introducing different content-based methods such as locality-sensitive hashing (LSH) [18], early estimation [9] and cascaded filter [43]. They showed that the asymptotic time complexity can be reduced from \(\mathcal{O}(L^{2}D)\) to \(\mathcal{O}(KLD)\), where \(K\) is the selection hyper-parameter as in Top-K. For these solutions, the practicality is questionable because these optimizations require either significant changes in the model architecture or specialized hardware to achieve the expected performance improvement. In this paper, we propose a more scalable method to reduce attention computation overhead that does not require model architecture reconstruction or costly specialized accelerators. All of our evaluations are conducted on a CPU that is equipped with a big memory device, which is a common setting in data centers [7] and commodity computing systems [14]. Our proposed method significantly reduces attention computations by exploiting attention computation similarities. To reduce redundant computations that only generate very similar attention scores, we memoize the results of the most time-consuming tensor computations in the attention function and replace the expensive tensor computations with a lightweight database search. Our intuition for using memoization comes from the similarities commonly found in natural language sentences [16]. For example, _"I like apple."_ and _"I like banana."_ have different meanings, but the important words (those with higher attention scores) in these two sentences are obviously in the same position, namely _apple_ and _banana_. Once the absolute values of the tensors are close enough between the two sentences (as a human perceives the two sentences as very similar), we can reuse the tensor calculated for the first sentence for the second sentence. However, this seemingly intuitive idea faces several challenges in practical implementation. The first challenge is finding a proper data representation to extract similarities in attention computations. Though humans can recognize similarities among sentences, it is unclear whether those similarities are reflected numerically in the attention tensors. To identify similarities from the 2-dimensional attention matrix, we need to design a proper data representation through embedding. The embedding algorithm should be lightweight, because otherwise the highly parallel attention computations can easily make the search overshadow the benefits of memoization. The second challenge is the expensive memory accesses for storing and fetching pre-populated (memoized) tensors. Even for one attention head, the tensor values can be diverse depending on the input sequences. Therefore, a larger database is better for increasing the search hit rate. However, as the attention tensors do not have spatial locality (e.g., neighbouring attention computations do not follow a pattern that generates similar tensors), the large database search leads to highly sparse memory accesses. To make matters worse, modern deep learning frameworks like PyTorch require tensors to be placed in consecutive memory addresses to enable vectorized data accesses (SIMD operations). Therefore, once the tensors are fetched from the pre-populated memoization database, they are copied to another consecutive memory space and then loaded into the processor registers to be used by the attention function.
As a result, one tensor fetch generates two memory reads and one write, which may diminish the performance gain of memoization. Therefore, it is critical to cut the chain of cascaded memory accesses for individual tensor searches. The third challenge is finding the optimal memoization level by considering the performance and accuracy tradeoffs. If performance improvement is the first priority, we can enforce the same amount of computations to be replaced with memoized tensors in all layers (e.g., all layers replace 50% of computations with memoized tensors). But, in that case, some attention results might be replaced with less similar tensors, and hence the inference accuracy cannot be guaranteed. On the other hand, if there is a strict accuracy requirement, tensors should replace computations only when the expected similarity is high enough. Therefore, the memoization opportunity might be imbalanced across layers. The problem is that the performance penalty in the layers with lower chances of memoization is significant, because the tensor search must be performed to know whether there is a similar tensor in the database. If the search fails to find a tensor, the attention computation must be executed anyway. In other words, the performance can be even worse than when memoization is not used at all, because a tensor search is performed regardless. Therefore, there should be a way to determine the memoization effectiveness for individual attention computations without an actual tensor search. To tackle these challenges, we propose an efficient memoization framework (named _AttMEMO_) that 1) uses a novel, vision-inspired multi-layer perceptron (MLP)-based embedding method that generates similar embedding vectors for the attention inputs that are likely to generate similar attention tensors, 2) finds the best matching tensor for each embedding vector by leveraging approximate nearest-neighbour search, 3) eliminates expensive tensor copies through lightweight memory mapping between a consecutive virtual memory space and the scattered physical addresses of individual tensors, and 4) runs a performance model that enables selective memoization and reduces the burden of search overhead. According to our experiments, the proposed embedding and approximate nearest-neighbour search methods together derive a near-optimal search result while making the search 300\(\times\) faster than the exhaustive search. With the proposed memory mapping, we can eliminate memory copies completely and achieve a more than 500\(\times\) speedup over the unoptimized tensor fetch. Selective memoization ensures that we only pay the search overhead when the performance gain can cover it. With all the methods together, AttMEMO improves the end-to-end transformer inference performance by 1.21\(\times\) on average (up to 1.68\(\times\)).

Figure 1: Inference Time Breakdown

AttMEMO trades memory capacity for computation efficiency to improve self-attention computation by memoization. Particularly, we replace expensive self-attention matrix multiplications with nearest-neighbour search and lookups in big memory. Such a big memory system can be built based on an Intel Optane DC persistent memory module (PMM) [14] or a CXL-enabled disaggregated memory system [1], which provides tens of terabytes of memory in a single machine. The contributions of this paper are as follows. * We shed light on attention memoization, which has not been successfully demonstrated by earlier studies.
We show that the attention similarities can be extracted through a proper embedding method. * We design an end-to-end framework AttMEMO for attention memoization. Our proposed designs, such as memory mapping, selective memoization, and performance modeling, enable 21% performance improvement on average (up to 68%) with a TB-scale attention database and with negligible loss in inference accuracy (less than 5%). * AttMEMO is highly scalable. It does not require transformer model reconstruction or a specialized accelerator. We achieve significant speedup on a general-purpose CPU with a commodity big memory system. ## 2 Background ### 2.1 Self-Attention Mechanism **Function description.** The self-attention mechanism is a core building block in a transformer model. Given an input sequence of entities (e.g., words), self-attention aims to find correlations between different entities of the input in order to indicate the syntactic and contextual structure of the input sequence. Figure 2 depicts the computation involved in a typical self-attention block. The computation includes four major steps. The input sequence of the transformer model is tokenized and embedded into a set of vectors (i.e., the input hidden states). In step 1, the vectors are multiplied with three weight matrices (\(W_{Q}\), \(W_{K}\) and \(W_{V}\)) to generate three intermediate tensors \(Q\), \(K\), and \(V\). In step 2, \(Q\) and \(K\) perform an inner product, whose result is called the _attention matrix_ (AM). The dimensionality of the AM is \(L\times L\), where \(L\) is the length of the input sequence. In step 3, each row of the attention matrix is softmax-normalized. The normalization result is the _attention probability matrix_ (APM). In step 4, the APM is multiplied with \(V\) to get the final output. Figure 2 shows the workflow of one self-attention layer. Typically, one self-attention layer is followed by other post-attention layers (e.g., a feed-forward network or CNNs), and a transformer model can have many such combinations stacked on top of each other. For example, the BERT-base model has 12 self-attention layers, and each of them is followed by a feed-forward layer and a residual connection. Furthermore, in a self-attention layer, the self-attention mechanism is split into multiple sub-domains to capture different aspects of the information, which is called multi-head attention. **Self-attention similarity.** In essence, self-attention is a mechanism to reveal the relationship between the entities within a given sequence. Each entity is matched with the most relevant other entities, and their relationships are quantified with high attention scores through the multiplication of \(Q\) and \(K\) and normalization by softmax. Since the result of the softmax function is a probability distribution, if the syntactic structure and entity relationships in two input sequences are similar, then it is possible that the probability distributions (i.e., the self-attention results) of the two input sequences are nearly the same. **Transformer model inference.** Transformer models have been widely deployed on the CPU for model inference [6, 12, 42, 30]. Common machine learning (ML) frameworks (such as PyTorch and TensorFlow) all provide infrastructure support for transformer model inference on the CPU. Compared with a GPU, using a CPU can reduce the production cost by 16 times [32]. In this work, we focus on reducing the inference time of the transformer model on the CPU.
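The four steps translate directly into a few lines of code. The following single-head sketch (ours, in NumPy; the \(1/\sqrt{H}\) scaling is the usual convention and all names are illustrative) makes the intermediate APM — the object this paper memoizes — explicit:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention following the four steps of Figure 2.
    X: (L, H) input hidden states; returns the output and the APM."""
    # Step 1: project hidden states into Q, K, V
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Step 2: attention matrix (AM), one score per token pair -- O(L^2 * H)
    AM = Q @ K.T / np.sqrt(K.shape[1])
    # Step 3: row-wise softmax gives the attention probability matrix (APM)
    AM -= AM.max(axis=1, keepdims=True)          # for numerical stability
    APM = np.exp(AM) / np.exp(AM).sum(axis=1, keepdims=True)
    # Step 4: weight the value vectors by the attention probabilities
    return APM @ V, APM

L, H = 128, 64
rng = np.random.default_rng(2)
X = rng.standard_normal((L, H))
Wq, Wk, Wv = (rng.standard_normal((H, H)) for _ in range(3))
out, APM = self_attention(X, Wq, Wk, Wv)
print(out.shape, APM.shape)   # (128, 64) (128, 128)
```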
### 2.2 Big memory system The recent emergence of memory technologies (such as Compute Express Link (CXL) [1] or persistent memory [14]) enables big memory systems at the terabyte scale. Such a big memory system can not only accommodate highly memory-consuming applications but also provide opportunities to enable new programming and computation paradigms [47, 25, 13].

Figure 2: Self-Attention Mechanism

## 3 Related Work **Applying memoization for performance optimization.** Memoization techniques have been studied in many fields. Silfa et al. [35] proposed a binary neural network (BNN)-based memoization scheme to adaptively decide when to use memoization on neurons to speed up RNN training. Xie et al. [47] proposed to build an efficient two-phase LSM tree as a lookup table by leveraging the capacity advantages of Intel Optane Persistent Memory to speed up Molecular Dynamics (MD) simulation. Some studies [29, 45, 28] empirically proved that similar computations exist in CNN training and inference and proposed to project similar CNN computation results into buckets by locality-sensitive hashing (LSH) to speed up CNN training and inference. **Similarity in DNNs.** Prior works in [29, 28, 45, 35] report that high similarity exists in convolution computations within a single image or in RNN neuron activations across consecutive time steps. Cao et al. [4] proposed a decoupled transformer architecture dedicated to Question-Answering (QA) tasks. It reuses the same encoding results in shallow layers for the question part to speed up the inference. However, the proposed method is limited to QA, where there are explicit duplications of texts. Srinadh et al. [2] studied the impact of reusing attention probabilities in consecutive self-attention layers for the same input texts. However, none of them revealed the high similarity in attention probabilities for general text inputs, or in the key intermediate variables of self-attention layers, across different input sequences. **Optimizing performance of the transformer.** Recently, a few studies have been proposed to improve task performance as well as the computation efficiency of self-attention mechanisms. Some of those efforts led to variants of attention. Google BERT [5] is a bidirectional transformer for language modelling. ALBERT [20] reduces memory consumption by sharing weights across self-attention blocks. In [37], each token only performs self-attention with its neighbouring tokens within an adaptive window. Reformer [18] utilizes random projection-based LSH to project query-key pairs into hash buckets and then compute the attention score of each bucket. Wu et al. [46] modeled the attention computation as a maximum inner product search problem and designed a recommendation system based on a single-layer self-attention utilizing the histogram of the encoded items in the sequence. Ham et al. [9] proposed a random projection-based transformer approximation scheme and showed good performance with specialized hardware. Unlike these solutions that either require significant model reconstruction or specialized hardware, we replace the costly attention computation with lightweight database lookups for better transformer inference performance. ## 4 Motivation ### 4.1 Cost of Self-Attention Mechanism As explained in Section 2.1, the self-attention mechanism consists of four key steps.
The first step (the \(Q\), \(K\) and \(V\) projections) requires \(3\times L\times H^{2}\) multiply-and-accumulate (MAC) computations (where \(H\) is the dimension of the token embedding, also referred to as the hidden states). The second step (\(Q\cdot K^{\top}\)) requires \(L^{2}\times H\) MAC computations. The third step (softmax) requires \(L^{2}\) exponent computations. The fourth step (multiplying the APM with \(V\)) requires \(L^{2}\times H\) MAC computations.

We measure the execution time of self-attention in the context of end-to-end transformer model inference. We use the SST-2 dataset from the GLUE benchmark suite [41] and evaluate three transformers (BERT, RoBERTa, and ALBERT). The input sequence lengths are 128, 256, and 512. Our evaluation is performed on an Intel Xeon Gold 6252 CPU (24 cores in total). More detailed discussions on the transformer models can be found in Section 6.1. Figure 1 reveals that the self-attention mechanism takes more than 40% of the total inference time in all three models. As we increase the input sequence length, self-attention takes a larger portion (up to 83%). We conclude that the self-attention mechanism is the major performance bottleneck in transformer model inference.

### Opportunity in Similarities

The success of memoization depends on the existence of computation redundancy, manifested as the similarity between computation results. In the self-attention mechanism, we identify that the attention probability matrices often show similarity across input sequences. To quantify the similarity, we use a metric based on the total variation (TV) distance [2]. Our metric (named the _similarity score_ (SC)) is defined in Equation 1; a short code sketch of this computation appears at the end of this subsection.

\[\begin{split} SC(A,A^{\prime})&=1-\frac{1}{L}\sum_{p=1}^{L}TV(A[p,:],A^{\prime}[p,:])\\ &=1-\frac{1}{L}\sum_{p=1}^{L}\frac{1}{2}||(A[p,:]-A^{\prime}[p,:])||_{1},\end{split} \tag{1}\]

where \(A\) and \(A^{\prime}\) are two input matrices; \(L\) denotes the input sequence length; \(||.||_{1}\) denotes the \(L1\) norm; and \(A[p,:]\) denotes the \(p\)th row of the APM \(A\), i.e., the attention distribution of the \(p\)th token in the input sequence. Since the attention probability follows a probability distribution for each token in the sequence, the total variation distance falls in [0,1], and so does the similarity score. A similar definition is used in prior work [2].

We use the BERT model [5] and SST-2 (from the GLUE benchmark suite [41]) as a case study. There are 12 self-attention layers in BERT. We collect the APMs from all of them by feeding them with 60K sequences from the SST-2 training set. Those matrices are used to build an _attention database_. We use another 600 sequences from the SST-2 testing set for BERT inferences. During each inference, once we calculate the APM for the first attention head, we search the attention database to find the most similar record using the similarity score.

Figure 3: Distribution of Similarity Scores in BERT

Figure 3 shows the distribution of similarity scores in the APMs for four layers in BERT. The similarity for each APM is calculated using the most similar matrix found in the attention database. We have two observations:

* _A large percentage of the APMs can find similar records in the attention database with a high similarity score (e.g., between 0.7 and 0.9)_. For example, in Layer 0, 49.21% of APMs have high similarity.
* _The similarity distribution is diverse across layers in BERT._ Hence, we must apply memoization differently to maintain high accuracy.
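For reference, Equation 1 and the exhaustive lookup used in this case study admit a direct NumPy rendering (a sketch of the metric only; the linear scan over the database shown here is what Section 5 replaces with an indexed search):

```python
import numpy as np

def similarity_score(A, A_prime):
    """Equation 1: one minus the mean per-row total-variation distance
    between two L x L attention probability matrices."""
    tv = 0.5 * np.abs(A - A_prime).sum(axis=1)  # per-row TV distance, in [0, 1]
    return 1.0 - tv.mean()

def best_match(A, database):
    """Exhaustive search: return the most similar APM and its score."""
    scores = [similarity_score(A, B) for B in database]
    i = int(np.argmax(scores))
    return database[i], scores[i]
```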
### Impact of Applying Memoization on Accuracy

As a preliminary study, a similarity threshold is used to control whether memoization should be used. We use the same attention database, BERT model, and similarity score as in Section 4.2. In this study, we reveal the impact of applying memoization on BERT model accuracy. The search result from the attention database is used only when the similarity score is larger than the threshold. In this study, we search using only the first attention head in each layer of BERT. We change the similarity threshold from 1 (i.e., no memoization) to 0 (i.e., all memoization), and then measure the BERT inference accuracy. Figure 4 shows the results. We have two observations.

* As we reduce the threshold, the memoization rate increases. When the threshold is 0.8, 42% of APMs can be replaced with the results from the attention database.
* The accuracy loss can be very small even when the memoization rate is high. For example, when the threshold is 0.8 and the memoization rate is 42%, the accuracy loss is less than 2%, which is negligible for the BERT inference task in this study.

Figure 4: Impact of Memoization on BERT Accuracy

## 5 System Design

### Overview

MEMO has three major components: the memoization database, the offline profiler, and the online inference engine, depicted in Figure 5. The memoization database consists of an attention database where pre-computed APMs are stored and an index database where indices to APMs are stored and searched (Section 5.3). Given a transformer inference request, the online inference engine embeds the input (a hidden state) to each attention layer based on a lightweight embedding model (Section 5.2), and then uses the embedding result (a feature vector in Figure 5) as a key to query the index database. Using embedding, MEMO is able to find semantically similar hidden states, hence increasing the memoization opportunities. Once an APM of the similar hidden state is returned, the inference engine uses a memory mapping technique (Section 5.3) to avoid the memory copy overhead. The offline profiler is used to build a performance model for selectively applying memoization (Section 5.4). Selective memoization is necessary because memoization cannot always be _successfully_ applied, yet its overhead must always be paid. The offline profiler builds a performance model on top of the transformer training process to predict whether using memoization on a specific layer can lead to a performance benefit. We describe the three components in detail as follows.

Figure 5: End-to-end Workflow of MEMO

### Hidden State Embedding

The input to self-attention is a hidden state, which is the output of the immediately preceding layer of self-attention.

**Why embedding?** To match a hidden state with a pre-recorded hidden state in the attention database, we could decide the match based on the direct application of the similarity score defined in Equation 1. However, such a match ignores semantic similarity. In particular, two hidden states may be very different in terms of the similarity score but lead to similar APMs. We call such two hidden states _semantically similar_. The semantically similar hidden states bring more opportunities for memoization. To quantify the semantic similarity, we use embedding. When the embeddings of two hidden states are similar (in terms of the similarity score), the two hidden states are matched during a search in the database.
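As a contrived but runnable illustration of this point (our own toy example, not taken from MEMO's evaluation): negating a hidden state negates both \(Q\) and \(K\), so \(Q\cdot K^{\top}\) and hence the APM are unchanged, even though the two hidden states are elementwise as different as possible. A direct similarity score on the hidden states would never match such a pair:

```python
import numpy as np

rng = np.random.default_rng(1)
L, H = 16, 32
Wq, Wk = rng.standard_normal((H, H)), rng.standard_normal((H, H))

def apm_of(hidden):
    """APM of a single (bias-free) attention head for a hidden state of shape (L, H)."""
    am = (hidden @ Wq) @ (hidden @ Wk).T
    e = np.exp(am - am.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

h1 = rng.standard_normal((L, H))
h2 = -h1                                        # maximally different elementwise
print(np.abs(apm_of(h1) - apm_of(h2)).max())    # ~0: the APMs are identical
```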
The embedding is essentially an internal representation of input features, and an embedding layer learns this representation during training such that features that have similar semantics have similar embeddings. Embedding has been used in retrieval problems [44, 36]. Our objective is to search for hidden states producing similar APMs. Besides identifying hidden states with semantic similarity, embedding allows us to map the hidden state from a higher-dimensional space to a lower one and reduces the computational complexity of measuring the similarity. For an input sequence of a transformer with length \(L\) and hidden-state dimension \(H\), the shape of the hidden state tensor is \(L\times H\), where each dimension is typically on the scale of \(O(10^{3})\). With appropriate embedding, we can project the hidden states into a lower dimension (e.g., 128), which significantly reduces the computational complexity and the search space.

**Embedding network structure** is critical to the accuracy and efficiency of the search process. We use a Multi-layer Perceptron (MLP) as the embedding model. The MLP is a neural network model with multiple layers of fully-connected nodes. Our MLP has two layers (one input layer and one output layer) with 256 neurons and a hidden dimension size of 128. All the neurons are linear neurons (\(y=wx+b\)). Such an MLP is lightweight. Besides the MLP, we explored other models for embedding, such as a convolutional neural network (CNN) or a transformer, which are reported to achieve higher accuracy in some retrieval problems [19]. However, the CNN and the transformer have higher computational complexity and require a much longer inference time, which easily cancels the performance benefit of using memoization. For example, with similar accuracy, on an Intel Xeon Gold 6252 CPU, the inference of our MLP for a 64-sequence batch with an input length of 128 takes only 5 ms, while a two-layer CNN- or single-layer transformer-based embedding takes about 100 ms and 150 ms, respectively.

**Training** the embedding model is challenging in our case because of data annotation. Given a big memory system with billion-scale hidden states in the attention database, deciding the similarity between hidden states to label them is prohibitively expensive. To address this problem, we use the Siamese network [19], a training technique shown in Figure 6. The Siamese network is a neural network containing two or more identical sub-networks which share the same weights and train on the same dataset. In our context, the Siamese network contains two identical embedding models (two MLPs). Once the Siamese network is trained, one of the embedding models is used for memoization. The Siamese network is trained so that hidden states that produce similar APMs are mapped to nearby embedding results (feature vectors in Figure 6), while hidden states with different semantic similarities are mapped far apart. During the training, in each iteration, two hidden states are used as input to the Siamese network. After embedding, the Siamese network calculates the Euclidean distance between the two embedding results. In addition, the Siamese network measures the similarity score between the first-head APMs corresponding to the two input hidden states. This similarity score is used as the ground truth, while the Euclidean distance is used as the network output. The training iteratively optimizes the embedding-model parameters to minimize the difference between the ground truth and the Euclidean distance.

Figure 6: Siamese Network in Embedding Model Training
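A minimal PyTorch sketch of this training loop is shown below. The layer sizes follow our reading of the description above, the pooling of the \(L\times H\) hidden state into a single vector is left out, and the loss -- regressing the Euclidean distance onto the dissimilarity \(1-SC\) -- is our assumption about how "the difference between the ground truth and the Euclidean distance" is minimized:

```python
import torch
import torch.nn as nn

# Two identical (weight-shared) Siamese branches reduce to one embedding model.
embed = nn.Sequential(nn.Linear(768, 256), nn.Linear(256, 128))
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

def train_step(h1, h2, sc):
    """h1, h2: batches of (pooled) hidden states, shape (B, 768);
    sc: Equation-1 similarity of their first-head APMs (ground truth)."""
    d = torch.norm(embed(h1) - embed(h2), dim=-1)  # network output: distance
    loss = ((d - (1.0 - sc)) ** 2).mean()          # distance vs. dissimilarity (assumed loss)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```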
With the above training process, we do not need to label hidden states.

**Embedding quality evaluation.** We evaluate the search results based on embedding and compare them with an exhaustive search. Given a hidden state as input, the exhaustive search returns a hidden state in the attention database whose corresponding APM is the most similar to that of the input hidden state. We use a 12-layer BERT model. We build the attention database using 60,000 input sequences from the SST-2 dataset of the GLUE benchmark suite and use 640 input sequences for validation. The embedding-based search is effective, as shown in Figure 7. The average similarity-score difference between the exhaustive search and the embedding-based NN search is less than 0.1. Furthermore, the exhaustive search takes 1.5 s on average for each search, while the embedding-based search takes only 5 ms, which is about a 300\(\times\) speedup over the exhaustive search.

Figure 7: Exhaustive Search vs. Embedding-based Search

### APM Construction

After obtaining the embedding model, we build and populate the attention database, storing the pre-computed APMs together with the embedding results of the hidden states collected from a training dataset. To avoid an expensive search over hidden states in the attention database due to expensive memory accesses, we build an index database where indices to the hidden states are stored. Each index includes an embedding result for a hidden state. Having the embedding result is necessary to compute similarity for future queries to the index database. The indices in the index database are organized in a hierarchical structure to accelerate the search. We use Faiss [15], a library for efficient similarity search, with Hierarchical Navigable Small World [24] (an approximate nearest-neighbour search algorithm) to build the index database. With Faiss, searching 100K vectors with a dimension size of 128 takes less than 0.5 ms, which is 360\(\times\) and 10\(\times\) shorter than the time taken for self-attention and embedding, respectively. Hence, the search does not create a performance bottleneck for memoization.

Figure 8: Code for APM Retrieval from Attention Database

Figure 8 shows the process of retrieving APMs from the attention database. A batch of hidden states is embedded (Line 6). The embedding results (feature_vectors) are used to query the index database (Line 7). Once the query finishes, the indices with the highest similarity scores are returned. If a similarity score is larger than a threshold (Line 9), the corresponding index is used to retrieve an APM from the attention database. APMs are retrieved as a batch (Line 10) and then used in the following computation.

**Performance problem due to memory copy.** The above process introduces a performance problem due to memory copy. In particular, in the attention database, the APMs are stored in large memory spaces. During the APM retrieval, APMs are sliced from this space and then gathered into another contiguous memory space as a tensor for the following computation. The APM gathering is necessary in order to exploit SIMD parallelism for high performance. However, the gathering copies scattered APMs from the attention database into a contiguous memory space, which introduces nontrivial overhead. For example, using BERT, when the input sequence length is 512, gathering 64 scattered APMs as a batch for a self-attention layer takes 731 ms on an Intel Optane-based big-memory platform (see Section 6.1 for hardware details), which is 1.45\(\times\) the performance cost of self-attention itself.
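For intuition, the criticized copy-based gathering corresponds to something like the following toy NumPy fragment (our illustration; in the real system the APMs are scattered across the terabyte-scale database rather than a Python list):

```python
import numpy as np

L, batch = 512, 64
# APMs scattered in the database's memory (toy stand-in).
scattered = [np.random.rand(L, L).astype(np.float32) for _ in range(batch)]
# Packing a batch into one contiguous (batch, L, L) tensor copies every APM:
# 64 x 512 x 512 x 4 bytes = 64 MiB copied per self-attention layer.
gathered = np.stack(scattered)
```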
Hence, we must address this memory-copy problem in order to enable the performance benefit of memoization. To address this problem, one could change the ML framework to work on tensors with nonconsecutive memory layouts. However, this requires bookkeeping of tensor allocations and careful manipulation of tensor pointers, which is difficult and loses the performance benefit of SIMD.

**Solution based on memory mapping.** We introduce a solution based on memory mapping. When constructing the attention database, each APM is stored as a file object in memory. When retrieving APMs as a batch, the APM file objects are mapped into a contiguous virtual-memory space as a tensor without causing a memory copy. After self-attention, the file objects are unmapped. Figure 9 depicts the difference between the traditional memory-copy-based approach and our memory-mapping-based approach when the pages mapped to two APMs (green and yellow boxes) are gathered into consecutive memory.

Figure 9: APM Gathering: Copy-based vs. Mapping-based

**Performance analysis.** Our solution replaces the expensive memory copy with page table mapping. Behind the scenes, the OS inserts or updates page table entries (PTEs) such that the consecutive virtual addresses can be translated to the individual APMs' physical addresses. As different APMs are used at different layers of the transformer, the mapping is updated at each layer. However, the PTEs can be reused across layers. Once the initial mapping happens in a prior layer, the same PTEs mapped to the consecutive virtual addresses can simply be updated with the physical addresses of the APMs of the following layers, without additional PTE insertion or removal. The page table update does not incur extra overhead, because the APMs found in the attention database have to be mapped in the page table anyway, even when the ML framework (such as PyTorch) does not mandate a memory copy. Also, as each APM is typically several tens of MBs, this tensor-wise memory mapping does not cause page fragmentation. The TLB might be thrashed; however, as APM accesses do not have high locality (see Section 6.4), we cannot expect a benefit from the TLB in memoization anyway. With the dynamic memory mapping, we remove the memory copy completely, and our experimental results show more than a 500\(\times\) speedup over the unoptimized APM fetch.

### Selective Memoization

Applying memoization successfully to self-attention (i.e., finding a similar APM in the attention database and mapping the APM into self-attention) leads to performance benefits. However, if a similar APM cannot be found and the memoization cannot be successfully applied, then there is no performance benefit, and the embedding and search overhead cannot be recovered, which causes a performance loss. Among all self-attention layers in a transformer, there should be enough opportunities to apply memoization, such that the overhead can be offset by the performance benefit. If there is not enough opportunity, then the original self-attention computation should happen instead of attempting memoization with embedding and searching, such that there is no performance loss. Our observation in Section 4.2 (Figure 3) reveals that the memoization opportunity differs from one layer to another. We study how to use performance modelling to guide the application of memoization, such that there is no performance loss (if there is no performance benefit).
We apply memoization at the granularity of a layer: a layer either uses memoization for all of its attention heads or uses no memoization at all. We do not choose a finer granularity (e.g., an individual attention head) to apply memoization, for two reasons: the attention heads in the same layer are reported to have high modelling redundancy [2, 3, 40] and tend to make similar decisions on using memoization; also, embedding and search for multiple attention heads can lead to larger performance overhead. In the following discussion, we use the term _memoization rate_, defined as follows. Assume that there are \(N\) input sequences for a transformer model with \(L\) layers. During the inferences of those \(N\) inputs, we count the number of times memoization is _successfully_ applied to individual layers, denoted as \(M\). Then, the memoization rate is defined as \(M/(N\times L)\).

**Performance model** quantifies the performance benefit of memoization, excluding the overhead (including embedding and searching). Given a layer \(i\) in a transformer and a batch of input sequences (the batch size is \(N\)), the performance benefit \(PB^{i}\) of applying memoization is formulated in Equation 2.

\[PB^{i}=T^{i}_{Att}\times\alpha^{i}-T^{i}_{overhead} \tag{2}\]

where \(T^{i}_{Att}\) and \(T^{i}_{overhead}\) are the execution time of the attention mechanism without using memoization in layer \(i\) and the overhead of memoization, respectively, for all \(N\) input sequences. \(\alpha^{i}\) is the memoization rate in layer \(i\) (calculated with \(L=1\)). We want \(PB^{i}>0\).

**How to build the performance model.** The performance model is constructed during the training of the transformer model. Given a training dataset with \(N\) input sequences, for each input sequence, memoization is applied to each attention layer. Then, we record whether memoization is successfully applied in each layer to calculate \(\alpha^{i}\) for each layer.

**How to use the performance model.** We use \(\alpha\) during online transformer-model inference to guide the application of memoization. The training dataset and the online inference dataset of the transformer should have similar properties in order to ensure that the transformer model has meaningfully high inference accuracy. Hence, \(\alpha\) measured during the training process can be used online to guide memoization. During the online inference, a batch of inference requests (or a batch of sequences) is fed into the transformer model. Before inferences happen, we estimate \(T_{Att}\) and \(T_{overhead}\) based on the lengths of those input sequences. The estimation is based on scaling the \(T_{Att}\) and \(T_{overhead}\) measured with the training dataset. The scaling factor is the ratio of the total length of the inference sequences to the total length of the training sequences. Given \(T_{Att}\), \(T_{overhead}\), and \(\alpha\), we use Equation 2 to calculate the performance benefit (\(PB\)) for each layer.

**Impact of memoization on inference accuracy.** The performance model does not consider the impact of memoization on transformer inference accuracy. Besides the performance reason, using memoization or not is also controlled by a threshold (shown in Figure 8) to guard the inference accuracy relative to the inference without memoization. When the threshold-based control and the performance-based control make divergent decisions on whether memoization should be used, we do not use memoization.
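Putting Equation 2 and the two controls together, the per-layer decision can be sketched as follows (the profile numbers and names are illustrative, not measured values):

```python
def perf_benefit(t_att, t_overhead, alpha):
    """Equation 2 for one layer: PB = T_att * alpha - T_overhead."""
    return t_att * alpha - t_overhead

def memoize_layer(layer, score, threshold, profile):
    """Memoize only when both the performance model predicts a benefit
    and the similarity score clears the accuracy threshold."""
    t_att, t_overhead, alpha = profile[layer]
    return perf_benefit(t_att, t_overhead, alpha) > 0 and score >= threshold

# Illustrative profile: layer 0 has a high memoization rate, layer 1 does not.
profile = {0: (100.0, 20.0, 0.5), 1: (100.0, 20.0, 0.1)}
assert memoize_layer(0, score=0.85, threshold=0.8, profile=profile)
assert not memoize_layer(1, score=0.85, threshold=0.8, profile=profile)
```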
We expect the user to specify the threshold to guard inference accuracy, similar to how the user specifies other model hyperparameters (such as the learning rate and batch size), but an autotuner [26, 33, 21] can be employed to decide an appropriate threshold automatically.

## 6 Evaluation

### Experimental Setup

**Hardware Platform.** We evaluate MEMO on a server equipped with two Intel Xeon Gold 6252N 24-core processors running Linux 5.15.0. Each socket has 12 DIMM slots, six of which hold 16-GB DDR4 DRAM modules and the other six hold 128-GB Intel Optane DC modules. The platform provides up to 1.6 TB of heterogeneous memory.

**Software Platform.** We implement our design on top of PyTorch 1.11 with oneDNN (previously known as MKL-DNN) support. In the evaluation, all 48 cores are used. We refer to the pure computation-based transformer inference (without memoization) with all 48 cores and oneDNN as the baseline.

**Evaluation models.** We evaluate our design with four different transformer-based models: BERT [5], RoBERTa [22], DeBERTa [10] and GPT-2 [31], where BERT, RoBERTa and DeBERTa are built with transformer encoders, and GPT-2 is built with transformer decoders. BERT and RoBERTa have the same model architecture. DeBERTa uses an improved self-attention, namely disentangled self-attention, for better accuracy, although it is more computationally expensive, as shown in Figure 1. The numbers of model parameters and the model sizes are listed in Table 1. The input datasets used in the evaluation are the SST-2 dataset from the GLUE [41] benchmark suite for the encoder-based transformers and WikiText-2 [27] for the decoder-based transformer.

### End-to-End Performance

We evaluate the performance and stability of MEMO in three ways. First, we evaluate the performance of the proposed memoization with three typical input lengths: 256 and 512 for BERT, RoBERTa and DeBERTa, and 1024 for GPT-2. For each input length, we evaluate three typical batch sizes: 1, 32 and 64. For each batch size, we evaluate the performance of three different levels of memoization: _conservative_, _moderate_ and _aggressive_ with respect to the memoization threshold, as depicted in Table 3. We collect APMs while feeding 40K, 20K and 10K input sequences of sequence length 256 from the training dataset to the transformer models; 8K, 6K, and 4K inputs of sequence length 512; and 2K, 1.5K, and 1K inputs of sequence length 1024. The pre-populated database size, the embedding training time, and the index database building time of each configuration are summarized in Table 2. The attention database size ranges from 410 GB to 1.4 TB.

Figure 10 presents the end-to-end performance improvement over the baseline with 40K sequences as the attention database for input sequence length 256; 8K sequences as the attention database for input sequence length 512; and 2K sequences as the attention database for input sequence length 1024. In the appendix, we have results for the other attention database sizes (i.e., 20K and 10K for sequence length 256, 6K and 4K for sequence length 512, and 1.5K and 1K for sequence length 1024). The performance benefit of MEMO is more significant when the input sequence length is longer and the batch size is larger. The reason is that the memoization opportunity is higher in that case, and hence more computations can be replaced with database lookups. However, when the sequence length reaches 1024 or the batch size is larger than 32, the embedding model takes more time to embed the hidden states into the feature vectors.
Therefore, the performance benefit of memoization is diminished. When a database is populated with more sequences, as can be compared across the figures (i.e., 40K, 20K, and 10K for sequence length 256; 8K, 6K, and 4K for sequence length 512; and 2K, 1.5K, and 1K for sequence length 1024; see Figure 10, Figure 11 and Figure 12 in the appendix), the memoization rate increases even under the same aggressiveness level. The memoization rate is plotted with green triangles in all graphs. Thus, we observe a higher speedup with a larger database.

Among the tested models, DeBERTa shows the most speedup. This is because DeBERTa uses modified (optimized) self-attention layers for better accuracy, which are more computationally expensive and take a larger portion of the overall inference time. Thus, memoization improves its performance more significantly. This sheds light on co-optimizing a transformer model for both accuracy and performance by applying our proposed memoization to such highly optimized models.

Unless a higher threshold (more conservative) is set, a high memoization rate is expected, which leads to a high speedup, but the accuracy loss might be increased. However, in some models like DeBERTa, we find that a noticeably high memoization rate can be achieved with even improved accuracy (over the baseline) when conservative memoization is used (the red stars are below the 0% accuracy-drop line when the sequence length is 256). Hence, finding a more favourable transformer model for memoization is also important to reduce the burden of tradeoff exploration. The pre-populated database size also contributes to the accuracy loss. A larger database leads to a higher possibility of finding similar records, which leads to higher accuracy (evidenced by the red stars at lower levels in Figure 12 (a) than in (b)).

Figure 12: Inference Speedup w.r.t. Memoization Levels, Batch Sizes, and Input Sequence Lengths

### Attention Database Analysis

We analyze the attention database in terms of record access patterns. If one record were reused by multiple attention computations, keeping the hot records in the database could effectively reduce the size overhead. Unfortunately, from our experiments with several databases pre-populated with different numbers of sequences, we could not observe _hot_ records that are reused many times. Figure 13 shows the number of accesses of individual records in an attention database that is pre-populated from the first layer of the BERT model with 10,000 sequences from the SST-2 dataset and queried during inference with 640 sequences from the validation set. In the figure, it is clear that most records are reused only once or twice. The most frequently accessed record is reused only five times. This indicates that there are no hot records and hence every record is almost equally important to memoization. In other words, to increase the memoization opportunity, it is necessary to keep as many records in the database as possible. Therefore, a big memory system is crucial to provide both large capacity and low latency for the memoization.

Figure 13: Per-APM Reuse Count

### Efficiency of Mapping-based APM Gathering

We measure the efficiency of the proposed mapping-based APM gathering by comparing the APM fetch latency with that of the baseline copy-based approach. As plotted in Figure 14, our proposed mapping-based APM gathering is 300\(\times\) faster than the baseline copy-based approach. Due to the significantly lower latency, the bars for the mapping-based approach look missing, but their latency results are annotated near the x-axis of the graph.

Figure 14: APM Fetch Latency
The superior performance of the mapping-based approach stems from the completely removed data copy. Though our approach requires PTE updates, the PTEs are likely to be kept in the page table throughout the model inference and reused by all layers as the tensor storage, so the performance overhead is negligible compared to the data copy.

\begin{table} \begin{tabular}{l l l l l l l|l l l} \hline Sequence Length & \multicolumn{3}{c}{256} & \multicolumn{3}{c}{512} & \multicolumn{3}{c}{1024} \\ \hline \# of Sequences & 10K & 20K & 40K & 4K & 6K & 8K & 1K & 1.5K & 2K \\ Pre-populated DB Size (GB) & 410 & 723 & 1400 & 575 & 855 & 1130 & 630 & 940 & 1250 \\ Embed. Training Time (h) & \(\sim\)0.5 & \(\sim\)1 & \(\sim\)3 & \(\sim\)1 & \(\sim\)1.5 & \(\sim\)3 & \(\sim\)1.2 & \(\sim\)1.4 & \(\sim\)2 \\ Vector DB Build Time (s) & 300 & 852 & 2124 & 192 & 278 & 454 & 128 & 176 & 384 \\ \hline \end{tabular} \end{table}
Table 2: Pre-populated Database Size, Embedding Training Time, and Index Database Building Time

Figure 11: GPT with 2K Sequences Database w.r.t. Memoization Levels, Batch Sizes.

\begin{table} \begin{tabular}{l l l l l} \hline Model & BERT & RoBERTa & DeBERTa & GPT-2 \\ \hline Conservative & 0.98 & 0.97 & 0.9995 & 0.99950 \\ Moderate & 0.97 & 0.96 & 0.9993 & 0.99935 \\ Aggressive & 0.96 & 0.95 & 0.9990 & 0.99920 \\ \hline \end{tabular} \end{table}
Table 3: Memoization Threshold Settings

Figure 10: Speedup with Database Populated from 40K and 8K Sequences w.r.t. Memoization Levels, Batch Sizes.

### Impact of Selective Memoization

We evaluate the impact of selective memoization by measuring the average speedup of model-guided selective memoization and the corresponding average reuse-rate drop, as shown in Table 4. We used sequence length 256 in this experiment. Memoization selection improves the performance of all tested models, with speedups ranging from 3.0% up to 12.3%. The results show that even when we give up some memoization (on purpose), we can achieve better performance, because unnecessary APM search overhead can be removed by excluding, through performance modelling, the layers that cannot benefit from memoization. Interestingly, in GPT-2, even when we apply selective memoization, the memoization rate increases. This is because unsuccessful memoization is replaced with actual computation, and the succeeding layers find more memoization opportunities from the computed results.

### Sensitivity of Dataset

The memoization opportunity depends on the input dataset, because some datasets by nature have high semantic similarities across texts. Thus, we evaluate the impact of the dataset on memoization performance. We tested two more datasets, CoLA and QNLI [41]. CoLA is a sentence-level classification dataset with 8,551 training samples and 1,037 test samples. QNLI is a sentence-pair classification dataset with 104,743 training samples and 10,000 test samples. We use 8,000 samples from each training set to set up the attention database. As can be seen in Figure 15, we achieve 1.14\(\times\) and 1.08\(\times\) speedup even with conservative memoization on CoLA and QNLI, respectively. The accuracy drops by less than 1% compared to the evaluation loss of the baseline (star plots in the first columns of the two datasets in Figure 15).
With aggressive memoization, we achieve 1.15\(\times\) and 1.14\(\times\) speedup with less than a 5% accuracy drop. This shows that the memoization generalizes well across datasets.

### Sensitivity of Input Lengths on Similarity

Similar to the dataset sensitivity, the input sequence length might influence the memoization performance, because longer sequences might have more similarities across them. To evaluate the impact of the sequence length, we use 10,000 sequences from the SST-2 dataset and build an attention database while running a BERT model. While running the inference, if an input sequence is longer than the pre-defined length threshold, we truncate the input to match the length limit. In Figure 16, we present the similarity histogram of the APMs for different input sequence lengths. We cannot observe a clear pattern in the similarity distribution across diverse sequence lengths. Therefore, the input sequence length does not induce biased performance. More importantly, high similarity scores exist for all input lengths, which indicates that memoization is effective for all sequence lengths.

## 7 Conclusions

Emerging big memory systems bring new performance optimization opportunities. In this paper, we accelerate transformers on big memory systems based on memoization. Our work is based on the unique observation that there is similarity in self-attention computation across transformer inferences. We introduce the framework MEMO, which uses embedding, memory mapping, and performance modelling to make memoization feasible for accelerating transformer inferences. MEMO brings a large performance improvement of 21% on average (up to 68%) compared with no memoization.
2302.10263
Cosine and Sine addition and subtraction law with an automorphism
Let $S$ be a semigroup. Our main result is that we describe the complex-valued solutions of the following functional equations \[g(x\sigma (y)) = g(x)g(y)+f(x)f(y),\ x,y\in S,\] \[f(x\sigma (y)) = f(x)g(y)+f(y)g(x),\ x,y\in S,\] and \[f(x\sigma (y)) = f(x)g(y)-f(y)g(x),\ x,y\in S,\] where $\sigma :S\rightarrow S$ is an automorphism that need not be involutive. As a consequence we show that the first two equations are equivalent to their variants. We also give some applications.
Youssef Aserrar, Elhoucien Elqorachi
2023-02-20T19:53:23Z
http://arxiv.org/abs/2302.10263v1
###### Abstract

Let \(S\) be a semigroup. Our main result is that we describe the complex-valued solutions of the following functional equations
\[g(x\sigma(y)) =g(x)g(y)+f(x)f(y),\ x,y\in S,\]
\[f(x\sigma(y)) =f(x)g(y)+f(y)g(x),\ x,y\in S,\]
and
\[f(x\sigma(y)) =f(x)g(y)-f(y)g(x),\ x,y\in S,\]
where \(\sigma:S\to S\) is an automorphism that need not be involutive. As a consequence we show that the first two equations are equivalent to their variants. We also give some applications.

Keywords: Functional equation, semigroup, addition law, automorphism

Cosine and Sine addition and subtraction law with an automorphism

Youssef Aserrar and Elhoucien Elqorachi

## 1 Introduction

Let \(S\) be a semigroup and \(\sigma:S\to S\) an automorphism, i.e. \(\sigma(xy)=\sigma(x)\sigma(y)\) for all \(x,y\in S\). The cosine subtraction formula, the sine addition formula and the sine subtraction formula are respectively the functional equations
\[g(x\sigma(y)) =g(x)g(y)+f(x)f(y),\quad x,y\in S, \tag{1.1}\]
\[f(x\sigma(y)) =f(x)g(y)+f(y)g(x),\quad x,y\in S, \tag{1.2}\]
\[f(x\sigma(y)) =f(x)g(y)-f(y)g(x),\quad x,y\in S, \tag{1.3}\]
for unknown functions \(f,g:S\to\mathbb{C}\). These functional equations generalize, respectively, the trigonometric identities
\[\cos{(x-y)} =\cos(x)\cos(y)+\sin(x)\sin(y),\quad x,y\in\mathbb{R},\]
\[\sin{(x+y)} =\sin(x)\cos(y)+\sin(y)\cos(x),\quad x,y\in\mathbb{R},\]
\[\sin{(x-y)} =\sin(x)\cos(y)-\sin(y)\cos(x),\quad x,y\in\mathbb{R},\]
and have been investigated by many authors. In the case of an involutive automorphism \(\sigma\), i.e. an automorphism satisfying \(\sigma(\sigma(x))=x\) for all \(x\in S\), equation (1.1) was solved on abelian groups by Vincze [13], and on general groups by Chung et al. [7]. Poulsen and Stetkaer [11] described the continuous solutions of (1.1), (1.2) and (1.3) on topological groups. In [3, 4] Ajebbar and Elqorachi obtained the solutions of (1.1), (1.2) and (1.3) on semigroups generated by their squares. The solutions of (1.2) with \(\sigma=id\) are also described in [9, Theorem 3.1] on a semigroup not necessarily generated by its squares. In [8], Ebanks solved (1.1) and (1.3) on general monoids. Recently the authors [5, 6] solved (1.1), (1.2) and (1.3) on semigroups. We also refer to [1, Section 3.2.3], [2, Chapter 13] and [12, Chapter 4] for further contextual and historical discussions.

The purpose of this paper is to solve the functional equations (1.1), (1.2) and (1.3) on a semigroup \(S\), where \(\sigma:S\to S\) is an automorphism that is not necessarily involutive. Our results are natural extensions of previous results about the solutions of (1.1), (1.2) and (1.3). The involutive character of the automorphism \(\sigma\) is used in the proofs of [12, Theorem 4.12, Theorem 4.16], [8, Theorem 4.1, Corollary 4.3] and [6, Theorem 4.2, Theorem 5.1]. The present paper shows that this condition is not crucial for the given proofs, even in the setting of semigroups. The contributions of the present work to the knowledge about solutions of (1.1), (1.2) and (1.3) are the following:

(1) With the help of Theorem 2.2 and Theorem 2.3 (see Section 2) we find all complex-valued solutions of (1.1), (1.2) and (1.3) on a semigroup \(S\) in terms of multiplicative functions and solutions \(\phi\) of the special sine addition law
\[\phi(xy)=\phi(x)\chi(y)+\phi(y)\chi(x),\ \ x,y\in S, \tag{1.4}\]
where \(\chi:S\to\mathbb{C}\) is a multiplicative function. The new result here is that the automorphism \(\sigma\) is not involutive.
That makes the exposition more involved and explains why our proofs are longer than those of previous papers about the same functional equations.

(2) We consider the variants of (1.1) and (1.2), respectively
\[g(\sigma(y)x)=g(x)g(y)+f(x)f(y),\ \ \ x,y\in S, \tag{1.5}\]
\[f(\sigma(y)x)=f(x)g(y)+f(y)g(x),\ \ \ x,y\in S. \tag{1.6}\]
We show that (1.5) is equivalent to (1.1), and (1.6) is equivalent to (1.2).

(3) As an application, we determine the complex-valued solutions of the new functional equations
\[g(x+\beta y)=g(x)g(y)+f(x)f(y),\ \ \ x,y\in\mathbb{R},\]
\[f(x+\beta y)=f(x)g(y)+f(y)g(x),\ \ \ x,y\in\mathbb{R},\]
where \(\beta\in\mathbb{R}\backslash\{0,-1,1\}\). Obviously, these equations generalize the functional equations
\[g(x\pm y)=g(x)g(y)+f(x)f(y),\ \ \ x,y\in\mathbb{R},\]
\[f(x\pm y)=f(x)g(y)+f(y)g(x),\ \ \ x,y\in\mathbb{R},\]
which have been studied by many authors (see for example [10, Corollary 3.56.a, Corollary 3.56.c] and [12, Corollary 4.17]).

The outline of the paper is as follows: In the next section we give some notations and terminology. The complete solution of (1.1) is given in Section 3. In Section 4 we solve the functional equation (1.2). The sine subtraction formula (1.3) is solved in Section 5. Section 6 contains some applications.

## 2 Notations and terminology

In this section we give some notations and notions that are essential in our discussion. Throughout this paper \(S\) denotes a semigroup, that is, a set equipped with an associative binary operation. A multiplicative function on \(S\) is a function \(\mu:S\to\mathbb{C}\) satisfying \(\mu(xy)=\mu(x)\mu(y)\) for all \(x,y\in S\). A function \(f:S\to\mathbb{C}\) is central if \(f(xy)=f(yx)\) for all \(x,y\in S\), and \(f\) is abelian if \(f\) is central and \(f(xyz)=f(xzy)\) for all \(x,y,z\in S\). We define the set \(S^{2}:=\{xy\ |\ x,y\in S\}\). Let \(\sigma:S\to S\) be an automorphism; for any function \(f:S\to\mathbb{C}\) we define the function \(f^{*}:=f\circ\sigma\). A topological semigroup is a pair consisting of a semigroup \(S\) and a topology on \(S\) such that the product \((x,y)\mapsto xy\) is continuous from \(S\times S\) to \(S\), when \(S\) is given the product topology. If \(S\) is a topological semigroup, let \(C(S)\) denote the set of continuous functions mapping \(S\) into \(\mathbb{C}\).

**Notation 2.1**.: _Let \(\chi\) be a non-zero multiplicative function. The symbol \(\phi_{\chi}\) shall denote a solution of the special sine addition law (1.4), i.e._
\[\phi_{\chi}(xy)=\phi_{\chi}(x)\chi(y)+\phi_{\chi}(y)\chi(x),\ \ x,y\in S.\]

The following are respectively [8, Theorem 3.2] and [9, Theorem 3.1], but for brevity some solution formulas are expressed with the use of Notation 2.1.

**Theorem 2.2**.: _The solutions \(g,f:S\to\mathbb{C}\) of the functional equation_
\[g(xy)=g(x)g(y)-f(x)f(y),\ \ x,y\in S,\]
_are the following pairs:_
1. \(g=f=0\)_._
2. \(g=\frac{\delta^{-1}\chi_{1}+\delta\chi_{2}}{\delta^{-1}+\delta}\) _and_ \(f=\frac{\chi_{1}-\chi_{2}}{i\left(\delta^{-1}+\delta\right)}\)_, where_ \(\chi_{1},\chi_{2}:S\to\mathbb{C}\) _are two different multiplicative functions and_ \(\delta\in\mathbb{C}\setminus\{0,i,-i\}\)_._
3. \(f\) _is any non-zero function such that_ \(f=0\) _on_ \(S^{2}\) _and_ \(g=\pm f\)_._
4.
\(g=\chi\pm\phi_{\chi}\) _and_ \(f=\phi_{\chi}\)_, where_ \(\chi:S\to\mathbb{C}\) _is a non-zero multiplicative function._

**Theorem 2.3**.: _The solutions \(g,f:S\to\mathbb{C}\) of the functional equation_
\[f(xy)=f(x)g(y)+f(y)g(x),\ \ x,y\in S,\]
_with \(f\neq 0\) can be listed as follows:_
1. \(f=c\left(\chi_{1}-\chi_{2}\right)\) _and_ \(g=\dfrac{\chi_{1}+\chi_{2}}{2}\)_, where_ \(\chi_{1},\chi_{2}:S\to\mathbb{C}\) _are two different multiplicative functions and_ \(c\in\mathbb{C}\backslash\left\{0\right\}\)_._
2. \(f\) _is any non-zero function such that_ \(f=0\) _on_ \(S^{2}\) _and_ \(g=0\)_._
3. \(f=\phi_{\chi}\) _and_ \(g=\chi\)_, where_ \(\chi:S\to\mathbb{C}\) _is a non-zero multiplicative function._

The following lemma will be used throughout the paper without explicit mention.

**Lemma 2.4**.: _Let \(f:S\to\mathbb{C}\) be a non-zero function satisfying_
\[f(x\sigma(y))=\beta f(x)f(y),\quad\text{for all}\quad x,y\in S, \tag{2.1}\]
_where \(\beta\in\mathbb{C}\backslash\{0\}\) is a constant. Then there exists a non-zero multiplicative function \(\chi:S\to\mathbb{C}\) such that \(\beta f=\chi\) and \(\chi^{*}=\chi\)._

Proof.: By using the associativity of the semigroup operation we compute \(f(x\sigma(y)\sigma(z))\) using the identity (2.1) first as \(f((x\sigma(y))\sigma(z))\) and then as \(f(x(\sigma(y)\sigma(z)))\) and compare the results to obtain
\[\beta^{2}f(x)f(y)f(z)=\beta f(x)f(yz),\quad\text{for all}\quad x,y,z\in S. \tag{2.2}\]
Since \(f\neq 0\) and \(\beta\neq 0\), Eq. (2.2) can be written as
\[f(yz)=\beta f(y)f(z),\quad\text{for all}\quad y,z\in S. \tag{2.3}\]
This implies that the function \(\chi:=\beta f\) is multiplicative. That is, \(f=\dfrac{1}{\beta}\chi\), and then Eq. (2.1) becomes
\[\dfrac{1}{\beta}\chi(x)\chi^{*}(y)=\dfrac{1}{\beta}\chi(x)\chi(y). \tag{2.4}\]
Since \(\chi\neq 0\) and \(\beta\neq 0\), we deduce from (2.4) that \(\chi^{*}=\chi\). This completes the proof of Lemma 2.4.

## 3 The cosine subtraction formula (1.1)

The most recent result on the cosine subtraction formula (1.1), namely
\[g(x\sigma(y))=g(x)g(y)+f(x)f(y),\quad x,y\in S,\]
on semigroups is [6, Theorem 4.2]. By using similar computations to those of [6, Theorem 4.2] we will solve (1.1) on general semigroups, but here \(\sigma\) is not assumed to be involutive. It should be mentioned that in the case of an involutive automorphism \(\sigma\), the general solution of (1.1) on monoids can be found in [8, Theorem 4.1].

**Lemma 3.1**.: _Let \(f,g:S\to\mathbb{C}\) be a solution of Eq. (1.1), and suppose that \(f\) and \(g\) are linearly independent. Then \(g^{*}=g\), and \(f^{*}=f\) or \(f^{*}=-f\)._

Proof.: By using the associativity of the semigroup operation we compute \(g(x\sigma(y)\sigma(z))\) with the help of Eq. (1.1) first as \(g((x\sigma(y))\sigma(z))\) and then as \(g(x(\sigma(y)\sigma(z)))\) and compare the results. We obtain after some rearrangement that
\[f(x)\left[f(yz)-f(y)g(z)\right]+g(x)\left[g(yz)-g(y)g(z)\right]=f(z)f(x\sigma(y)). \tag{3.1}\]
Since \(f\neq 0\), there exists \(z_{0}\in S\) such that \(f(z_{0})\neq 0\), and then
\[f(x)h(y)+g(x)k(y)=f(x\sigma(y)), \tag{3.2}\]
where
\[h(y)=\frac{f(yz_{0})-f(y)g(z_{0})}{f(z_{0})},\]
and
\[k(y)=\frac{g(yz_{0})-g(y)g(z_{0})}{f(z_{0})}.\]
By using (1.1) and the fact that \(\sigma\) is a bijection, we obtain
\[k=c_{1}g+c_{2}f, \tag{3.3}\]
for some constants \(c_{1},c_{2}\in\mathbb{C}\).
Substituting (3.2) into (3.1), we obtain
\[\begin{split} f(x)\left[f(yz)-f(y)g(z)\right]+g(x)\left[g(yz)-g(y)g(z)\right]\\ =f(x)f(z)h(y)+g(x)f(z)k(y).\end{split} \tag{3.4}\]
Since \(f\) and \(g\) are linearly independent we deduce from (3.4) that
\[g(yz)=g(y)g(z)+f(z)k(y), \tag{3.5}\]
and
\[f(yz)=f(y)g(z)+f(z)h(y). \tag{3.6}\]
Substituting (3.3) into (3.5), we get
\[g(yz)=\left[g(z)+c_{1}f(z)\right]g(y)+c_{2}f(z)f(y). \tag{3.7}\]
So, by applying (3.7) to the pair \((y,\sigma(z))\), we obtain
\[g(y\sigma(z))=\left[g^{*}(z)+c_{1}f^{*}(z)\right]g(y)+c_{2}f^{*}(z)f(y). \tag{3.8}\]
By comparing (3.8) and (1.1), and using the linear independence of \(f\) and \(g\), we get
\[g=g^{*}+c_{1}f^{*}, \tag{3.9}\]
\[f=c_{2}f^{*}. \tag{3.10}\]
Since \(f\neq 0\), we deduce from (3.10) that \(c_{2}\neq 0\).

First case: \(c_{1}=0\). That is, \(g^{*}=g\). By applying Eq. (1.1) to the pair \((\sigma(x),y)\) we find that
\[g^{*}(xy)=g^{*}(x)g(y)+f^{*}(x)f(y).\]
So since \(f^{*}=\frac{1}{c_{2}}f\) and \(g^{*}=g\), we get that for all \(x,y\in S\)
\[g(xy)=g(x)g(y)+\frac{1}{c_{2}}f(x)f(y). \tag{3.11}\]
Now if we apply Eq. (3.11) to the pair \((x,\sigma(y))\), we get
\[g(x\sigma(y))=g(x)g(y)+\frac{1}{c_{2}^{2}}f(x)f(y). \tag{3.12}\]
Comparing (1.1) and (3.12) and using that \(f\neq 0\), we deduce that \(c_{2}^{2}=1\). So \(c_{2}=\pm 1\), and then \(f=f^{*}\) or \(f=-f^{*}\).

Second case: \(c_{1}\neq 0\). We get from (3.9) and (3.10) that \(g^{*}=g-\frac{c_{1}}{c_{2}}f\), and then by applying Eq. (1.1) to the pair \((\sigma(x),y)\) we get
\[g(xy)-\frac{c_{1}}{c_{2}}f(xy)=g(x)g(y)-\frac{c_{1}}{c_{2}}f(x)g(y)+\frac{1}{c_{2}}f(x)f(y). \tag{3.13}\]
Then by using Eq. (3.7), we get from (3.13) after some rearrangement that
\[f(xy)=f(y)\left(c_{2}g(x)+\left(\frac{c_{2}^{2}-1}{c_{1}}\right)f(x)\right)+g(y)f(x). \tag{3.14}\]
Comparing Eq. (3.6) and Eq. (3.14), and using that \(f\neq 0\), we deduce that
\[h=c_{2}g+\left(\frac{c_{2}^{2}-1}{c_{1}}\right)f. \tag{3.15}\]
Taking (3.15) and (3.3) into account, Eq. (3.2) becomes
\[f(x\sigma(y))=g(x)\left(c_{2}f(y)+c_{1}g(y)\right)+f(x)\left(\left(\frac{c_{2}^{2}-1}{c_{1}}\right)f(y)+c_{2}g(y)\right). \tag{3.16}\]
Now, if we apply Eq. (3.14) to the pair \((x,\sigma(y))\) we get
\[f(x\sigma(y))=f(x)\left(g(y)+\left(\frac{c_{2}^{2}-c_{1}^{2}-1}{c_{1}c_{2}}\right)f(y)\right)+g(x)f(y). \tag{3.17}\]
Comparing Eq. (3.16) and Eq. (3.17) and using the linear independence of \(f\) and \(g\), we get
\[g=\left(\frac{1-c_{2}}{c_{1}}\right)f.\]
This is a contradiction since \(f\) and \(g\) are linearly independent. So this case does not occur. This completes the proof of Lemma 3.1.

**Remark 3.2**.: _The result of Lemma 3.1 is also true for the variant (1.5) of equation (1.1)._

Proof.: Let \(f,g:S\to\mathbb{C}\) be a solution of Eq. (1.5) such that \(f\) and \(g\) are linearly independent. If we compute \(g(\sigma(yz)x)\) in two different ways, we get, by using the linear independence of \(f\) and \(g\), that
\[f(\sigma(z)x)=f(x)h(z)+g(x)\left[a_{1}g(z)+a_{2}f(z)\right], \tag{3.18}\]
\[f(yz)=g(y)f(z)+f(y)h(z), \tag{3.19}\]
\[g(yz)=g(z)\left[g(y)+a_{1}f(y)\right]+a_{2}f(y)f(z), \tag{3.20}\]
for some constants \(a_{1},a_{2}\in\mathbb{C}\) and some function \(h\). If we apply Eq. (3.20) to \((\sigma(y),z)\) and compare the resulting equation with Eq. (1.5), we obtain, since \(f\) and \(g\) are linearly independent, that \(g=g^{*}+a_{1}f^{*}\) and \(f=a_{2}f^{*}\). Since \(f\neq 0\), we have \(a_{2}\neq 0\).

First case: \(a_{1}=0\). In this case \(g^{*}=g\), so if we apply Eq.
(1.5) to the pair \((\sigma(x),y)\) and then apply the resulting equation to \((x,\sigma(y))\), we obtain
\[g(\sigma(y)x)=g(x)g(y)+\frac{1}{a_{2}^{2}}f(x)f(y). \tag{3.21}\]
Comparing Eq. (3.21) with (1.5) and using that \(f\neq 0\), we deduce that \(a_{2}^{2}=1\). That is, \(a_{2}=\pm 1\), so \(f=f^{*}\) or \(f=-f^{*}\).

Second case: \(a_{1}\neq 0\). If we apply Eq. (1.5) to \((\sigma(x),y)\) and use Eq. (3.20), we find that
\[f(yx)=f(y)\left(a_{2}g(x)+\left(\frac{a_{2}^{2}-1}{a_{1}}\right)f(x)\right)+g(y)f(x). \tag{3.22}\]
Comparing Eq. (3.19) and Eq. (3.22) and using that \(f\neq 0\), we get that \(h=a_{2}g+\frac{a_{2}^{2}-1}{a_{1}}f\). Now Eq. (3.18) becomes
\[f(\sigma(y)x)=g(x)\left(a_{2}f(y)+a_{1}g(y)\right)+f(x)\left(\left(\frac{a_{2}^{2}-1}{a_{1}}\right)f(y)+a_{2}g(y)\right). \tag{3.23}\]
By applying Eq. (3.22) to the pair \((x,\sigma(y))\), we find that
\[f(\sigma(y)x)=f(x)\left(g(y)+\left(\frac{a_{2}^{2}-a_{1}^{2}-1}{a_{1}a_{2}}\right)f(y)\right)+g(x)f(y). \tag{3.24}\]
Comparing these last two identities and using the linear independence of \(f\) and \(g\), we get \(g=\frac{1-a_{2}}{a_{1}}f\), which contradicts the linear independence of \(f\) and \(g\). This completes the proof of Remark 3.2.

The next result gives the general solution of (1.1) on semigroups.

**Theorem 3.3**.: _The solutions \(f,g:S\to\mathbb{C}\) of the functional equation (1.1) are the following:_
1. \(g=0\) _and_ \(f=0\)_._
2. \(g\) _is any non-zero function such that_ \(g=0\) _on_ \(S^{2}\)_, and_ \(f=cg\)_, where_ \(c\in\{i,-i\}\)_._
3. \(g=\frac{1}{1+\alpha^{2}}\chi\) _and_ \(f=\frac{\alpha}{1+\alpha^{2}}\chi\)_, where_ \(\alpha\in\mathbb{C}\backslash\{i,-i\}\) _is a constant and_ \(\chi:S\to\mathbb{C}\) _is a non-zero multiplicative function such that_ \(\chi^{*}=\chi\)_._
4. \(g=\frac{\delta^{-1}\chi_{1}+\delta\chi_{2}}{\delta^{-1}+\delta}\) _and_ \(f=\frac{\chi_{2}-\chi_{1}}{\delta^{-1}+\delta}\)_, where_ \(\delta\in\mathbb{C}\backslash\{0,i,-i\}\) _and_ \(\chi_{1},\chi_{2}:S\to\mathbb{C}\) _are two different multiplicative functions such that_ \(\chi_{1}^{*}=\chi_{1}\) _and_ \(\chi_{2}^{*}=\chi_{2}\)_._
5. \(g=\frac{\chi+\chi^{*}}{2}\) _and_ \(f=\frac{\chi-\chi^{*}}{2i}\)_, where_ \(\chi:S\to\mathbb{C}\) _is a multiplicative function such that_ \(\chi^{*}\neq\chi\) _and_ \(\chi\circ\sigma^{2}=\chi\)_._
6. \(f=-i\phi_{\chi}\) _and_ \(g=\chi\pm\phi_{\chi}\)_, where_ \(\chi:S\to\mathbb{C}\) _is a non-zero multiplicative function such that_ \(\chi^{*}=\chi\) _and_ \(\phi_{\chi}^{*}=\phi_{\chi}\)_._

_Note that \(f\) and \(g\) are Abelian in each case._

_Furthermore, if \(S\) is a topological semigroup and \(g\in C(S)\), then \(f,\chi,\chi^{*},\chi_{1},\chi_{2},\phi_{\chi}\in C(S)\)._

Proof.: If \(g=0\) then \(f=0\). This is case (1). So from now on we assume that \(g\neq 0\). Suppose that \(g=0\) on \(S^{2}\). Then we get from equation (1.1) that
\[g(x)g(y)+f(x)f(y)=0. \tag{3.25}\]
Since \(g\neq 0\) we obtain from equation (3.25) that \(f=cg\), where \(c\in\mathbb{C}\) is a constant. Then if we take this into account in equation (3.25) we get that \((c^{2}+1)g(x)g(y)=0\). This implies that \(c^{2}+1=0\) because \(g\neq 0\), so \(c\in\{i,-i\}\). This occurs in part (2) of Theorem 3.3. If \(f=0\), then equation (1.1) can be written as \(g(x\sigma(y))=g(x)g(y)\). So \(g=:\chi\) is multiplicative and \(\chi^{*}=\chi\). This occurs in part (3) of Theorem 3.3 with \(\alpha=0\).
Now we assume that \(g\neq 0\) on \(S^{2}\) and \(f\neq 0\), and we discuss two cases according to whether \(f\) and \(g\) are linearly dependent or not.

First case: \(g\) and \(f\) are linearly dependent. There exists a constant \(\alpha\in\mathbb{C}\) such that \(f=\alpha g\), so equation (1.1) can be written as
\[g(x\sigma(y))=(1+\alpha^{2})g(x)g(y),\quad x,y\in S. \tag{3.26}\]
Since \(g\neq 0\) on \(S^{2}\) and \(f\neq 0\), we deduce from (3.26) that \(\alpha\notin\{0,i,-i\}\), and then \(\chi:=(1+\alpha^{2})g\) is multiplicative and \(\chi^{*}=\chi\). This occurs in case (3) with \(\alpha\neq 0\).

Second case: \(g\) and \(f\) are linearly independent. According to Lemma 3.1, \(g^{*}=g\) and \(f=f^{*}\) or \(f^{*}=-f\).

Subcase A: \(f=f^{*}\). By applying Eq. (1.1) to the pair \((\sigma(x),y)\), we obtain
\[g(xy)=g(x)g(y)+f(x)f(y),\quad x,y\in S. \tag{3.27}\]
Defining \(l:=if\), equation (3.27) can be written as
\[g(xy)=g(x)g(y)-l(x)l(y),\quad x,y\in S.\]
According to Theorem 2.2, and taking into account that \(f\) and \(g\) are linearly independent, we have the following possibilities:

(i) \(g=\frac{\delta^{-1}\chi_{1}+\delta\chi_{2}}{\delta^{-1}+\delta}\) and \(l=\frac{\chi_{1}-\chi_{2}}{i(\delta^{-1}+\delta)}\), where \(\delta\in\mathbb{C}\backslash\{0,i,-i\}\) is a constant and \(\chi_{1},\chi_{2}:S\rightarrow\mathbb{C}\) are two multiplicative functions such that \(\chi_{1}\neq\chi_{2}\). Since \(g=g^{*}\), \(f=f^{*}\) and \(l=if\), we deduce that \(f=\frac{\chi_{2}-\chi_{1}}{\delta^{-1}+\delta}\), \(\chi_{1}=\chi_{1}^{*}\) and \(\chi_{2}=\chi_{2}^{*}\). This is case (4).

(ii) \(g=\chi\pm l\) and \(l=\phi_{\chi}\), where \(\chi:S\rightarrow\mathbb{C}\) is a non-zero multiplicative function. Since \(f^{*}=f\) and \(g^{*}=g\), we see that \(\chi^{*}=\chi\) and \(\phi_{\chi}^{*}=\phi_{\chi}\). In addition, \(l=if\) implies that \(f=-i\phi_{\chi}\). This occurs in part (6).

Subcase B: \(f^{*}=-f\). Equation (1.1) can be written as
\[g(xy)=g(x)g(y)-f(x)f(y),\quad x,y\in S.\]
Similarly to the previous case, according to Theorem 2.2 and taking into account that \(f\) and \(g\) are linearly independent, we get the two cases:

(i) \(g=\frac{\delta^{-1}\chi_{1}+\delta\chi_{2}}{\delta^{-1}+\delta}\) and \(f=\frac{\chi_{1}-\chi_{2}}{i(\delta^{-1}+\delta)}\), where \(\delta\in\mathbb{C}\backslash\{0,i,-i\}\) is a constant and \(\chi_{1},\chi_{2}:S\to\mathbb{C}\) are two different multiplicative functions. Since \(f^{*}=-f\) and \(g^{*}=g\), we get
\[\delta^{-1}(\chi_{1}-\chi_{1}^{*})+\delta(\chi_{2}-\chi_{2}^{*})=0, \tag{3.28}\]
\[\chi_{1}+\chi_{1}^{*}=\chi_{2}+\chi_{2}^{*}. \tag{3.29}\]
Since \(\chi_{1}\neq\chi_{2}\), we obtain with the help of [12, Corollary 3.19] that \(\chi_{1}=\chi_{2}^{*}\) and \(\chi_{2}=\chi_{1}^{*}\). Then (3.28) reduces to
\[\left(\delta^{-1}-\delta\right)(\chi_{1}-\chi_{1}^{*})=0.\]
This implies that \(\delta^{-1}-\delta=0\) since \(\chi_{1}\neq\chi_{2}\). That is, \(\delta=\pm 1\). This occurs in case (5) with \(\chi_{1}=\chi\) and \(\chi_{2}=\chi^{*}\). In addition, \(\chi_{1}=\chi_{2}^{*}\) implies that \(\chi\circ\sigma^{2}=\chi\).

(ii) \(g=\chi\pm f\) and \(f=\phi_{\chi}\), where \(\chi:S\to\mathbb{C}\) is a non-zero multiplicative function. If \(g=\chi+f\), we obtain, since \(f^{*}=-f\) and \(g^{*}=g\), that \(g=\chi^{*}-f\). Adding and subtracting this from \(g=\chi+f\) we get that
\[g=\frac{\chi+\chi^{*}}{2}\quad\text{and}\quad f=\frac{\chi^{*}-\chi}{2}.\]
By assumption \(f\neq 0\), so \(\chi\neq\chi^{*}\).
By substituting the forms of \(f\) and \(g\) into (1.1) we find that \(\chi=\chi^{*}\). This case does not occur. Now if \(g=\chi-f\), we show in the same way that \(g=\frac{\chi+\chi^{*}}{2}\) and \(f=\frac{\chi-\chi^{*}}{2}\), which leads by substitution to \(\chi=\chi^{*}\) (\(f=0\)). This case does not occur.

Conversely, we check by elementary computations that the forms (1), (2), (3), (4), (5) and (6) satisfy (1.1).

Finally, suppose that \(S\) is a topological semigroup and \(g\in C(S)\). In case (1), \(f=0\in C(S)\). In case (2), \(f=\pm ig\in C(S)\). Now if \(f\neq 0\), the continuity of \(f\) follows easily from the continuity of \(g\) and the functional equation (1.1). Let \(y_{0}\in S\) be such that \(f(y_{0})\neq 0\); we get from (1.1) that
\[f(x)=\frac{g(x\sigma(y_{0}))-g(y_{0})g(x)}{f(y_{0})}\text{ for }x\in S.\]
The function \(x\mapsto g(x\sigma(y_{0}))\) is continuous, since the right translation \(x\mapsto x\sigma(y_{0})\) from \(S\) into \(S\) is continuous, so \(f\) is continuous as a linear combination of continuous functions. In cases (4) and (5) we get the continuity of \(\chi_{1},\chi_{2},\chi,\chi^{*}\) with the help of [12, Theorem 3.18]. In case (6), \(\phi_{\chi}=if\in C(S)\) and \(\chi=g\mp\phi_{\chi}\in C(S)\). This completes the proof of Theorem 3.3. \(\Box\)

Now we relate the solution of the variant (1.5) to the functional equation (1.1).

**Proposition 3.4**.: _The functional equation (1.1) and its variant (1.5), namely_
\[g(\sigma(y)x)=g(x)g(y)+f(x)f(y),\quad x,y\in S,\]
_have the same solutions._

Proof.: Theorem 3.3 shows that if \((g,f)\) is a solution of (1.1), then \(g\) is abelian, in particular central. So any solution \((g,f)\) of (1.1) is also a solution of (1.5). Now let \((g,f)\) be a solution of (1.5). We show that \(g\) is central.

First case: \(g\) and \(f\) are linearly dependent. That is, \(f=\delta g\) for some constant \(\delta\in\mathbb{C}\). Eq. (1.5) can be written as
\[g(\sigma(y)x)=(1+\delta^{2})g(x)g(y),\quad x,y\in S.\]
If \(\delta\in\{i,-i\}\), then \(g=0\) on \(S^{2}\), so \(g\) is central and we are done. If \(\delta\neq\pm i\), we get that \((1+\delta^{2})g\) is multiplicative. Then \(g\) is central.

Second case: \(g\) and \(f\) are linearly independent. By Remark 3.2 we have \(g=g^{*}\) and \(f=f^{*}\) or \(f=-f^{*}\). By applying Eq. (1.5) to \((\sigma(x),y)\) we obtain
\[g(yx)=g(x)g(y)\pm f(x)f(y),\quad x,y\in S.\]
This implies that \(g\) is central. This completes the proof of Proposition 3.4.

## 4 The sine addition formula (1.2)

In this section we solve the functional equation (1.2) on semigroups. The following lemma gives some key properties of the solutions of Eq. (1.2). We use similar computations to those of [6, Theorem 5.1], but of course here \(\sigma\) is not involutive.

**Lemma 4.1**.: _Suppose \(f,g:S\to\mathbb{C}\) satisfy Eq. (1.2) and that \(f\) and \(g\) are linearly independent. Then \(f^{*}=f\) and \(g^{*}=g\)._

Proof.: Computing \(f(x\sigma(yz))\) in two different ways using equation (1.2), we obtain after some rearrangement that
\[f(x)\left[g(yz)-g(y)g(z)\right]+g(x)\left[f(yz)-f(y)g(z)\right]=f(z)g(x\sigma(y)). \tag{4.1}\]
Since \(f\neq 0\), there exists \(z_{0}\in S\) such that \(f(z_{0})\neq 0\), and hence
\[f(x)h(y)+g(x)k(y)=g(x\sigma(y)), \tag{4.2}\]
where
\[h(y)=\frac{g(yz_{0})-g(y)g(z_{0})}{f(z_{0})},\]
and
\[k(y)=\frac{f(yz_{0})-f(y)g(z_{0})}{f(z_{0})}.\]
By using Eq. (1.2) and the fact that \(\sigma\) is a bijection, we get
\[k=\alpha f+\beta g, \tag{4.3}\]
for some constants \(\alpha,\beta\in\mathbb{C}\).
Now by using (4.2), equation (4.1) becomes \[\begin{gathered} f(x)\left[g(yz)-g(y)g(z)\right]+g(x)\left[f(yz)-f(y)g(z)\right]\\ =f(x)f(z)h(y)+g(x)f(z)k(y).\end{gathered} \tag{4.4}\] Since \(f\) and \(g\) are linearly independent, we deduce from (4.4) that for all \(y,z\in S\) \[g(yz)=g(y)g(z)+f(z)h(y), \tag{4.5}\] and \[f(yz)=f(y)g(z)+f(z)k(y). \tag{4.6}\] By using (4.3), equation (4.6) can be written as follows \[f(yz)=\left(g(z)+\alpha f(z)\right)f(y)+\beta f(z)g(y).\] This implies that \[f(y\sigma(z))=\left(g^{*}(z)+\alpha f^{*}(z)\right)f(y)+\beta f^{*}(z)g(y).\] By comparing this last identity with Eq. (1.2) and using the linear independence of \(f\) and \(g\) we deduce that \[g=g^{*}+\alpha f^{*}, \tag{4.7}\] \[f=\beta f^{*}. \tag{4.8}\] Since \(f\neq 0\) we get from (4.8) that \(\beta\neq 0\), and from (4.7) that \(g^{*}=g-\dfrac{\alpha}{\beta}f\). So, for all \(x,y\in S\) we have \[f(x\sigma(y)) =\beta f^{*}(x\sigma(y))=\beta f(\sigma(x)\sigma(\sigma(y)))\] \[=\beta f^{*}(x)g^{*}(y)+\beta f^{*}(y)g^{*}(x)\] \[=f(x)\left[g(y)-\dfrac{\alpha}{\beta}f(y)\right]+f(y)\left[g(x)-\dfrac{\alpha}{\beta}f(x)\right]\] \[=f(x)g(y)+f(y)g(x)-\dfrac{2\alpha}{\beta}f(x)f(y)\] \[=f(x\sigma(y))-\dfrac{2\alpha}{\beta}f(x)f(y).\] So \(\alpha=0\) since \(f\neq 0\), and then \(g^{*}=g\). Now, applying Eq. (1.2) to the pair \((\sigma(x),y)\) and multiplying the resulting equation by \(\beta\), we get \[f(xy)=f(x)g(y)+\beta f(y)g(x). \tag{4.9}\] Computing \(f(xyz)\) in two different ways, we obtain from (4.9) by using Eq. (4.5) after some rearrangement that \[\left(\beta^{2}-\beta\right)g(x)g(y)=\beta f(y)h(x)-f(x)h(y). \tag{4.10}\] Since \(f\neq 0\), we deduce from Eq. (4.10) that \[h=af+bg, \tag{4.11}\] for some constants \(a,b\in\mathbb{C}\). Taking Eq. (4.11) into account, Eq. (4.10) becomes \[\left(\beta^{2}-\beta\right)g(x)g(y)=f(x)\left((a\beta-a)f(y)-bg(y)\right)+b\beta g(x)f(y). \tag{4.12}\] Since \(f\) and \(g\) are linearly independent, we deduce from Eq. (4.12) that \[\left(\beta^{2}-\beta\right)g=b\beta f.\] This implies that \(\beta=1\) and \(b=0\), since \(\beta\neq 0\) and \(f\) and \(g\) are linearly independent. So \(f=f^{*}\). This completes the proof of Lemma 4.1. **Remark 4.2**.: _Similar computations show that the result of Lemma 4.1 holds for the variant (1.6) of equation (1.2)._ Now we are ready to solve the functional equation (1.2). **Theorem 4.3**.: _The solutions \(f,g:S\to\mathbb{C}\) of Eq. (1.2) are the following pairs:_ 1. \(f=0\) _and_ \(g\) _is arbitrary._ 2. \(f\) _is any non-zero function such that_ \(f=0\) _on_ \(S^{2}\)_, while_ \(g=0\)_._ 3. \(f=\frac{1}{2\alpha}\chi\) _and_ \(g=\frac{1}{2}\chi\)_, where_ \(\chi:S\to\mathbb{C}\) _is a non-zero multiplicative function such that_ \(\chi^{*}=\chi\) _and_ \(\alpha\in\mathbb{C}\backslash\{0\}\)_._ 4. \(f=c\left(\chi_{1}-\chi_{2}\right)\) _and_ \(g=\frac{\chi_{1}+\chi_{2}}{2}\)_, where_ \(\chi_{1},\chi_{2}:S\to\mathbb{C}\) _are two different multiplicative functions such that_ \(\chi_{1}^{*}=\chi_{1}\)_,_ \(\chi_{2}^{*}=\chi_{2}\) _and_ \(c\in\mathbb{C}\backslash\{0\}\)_._ 5. 
\(f=\phi_{\chi}\) _and_ \(g=\chi\)_, where_ \(\chi:S\to\mathbb{C}\) _is a non-zero multiplicative function such that_ \(\chi^{*}=\chi\) _and_ \(\phi_{\chi}^{*}=\phi_{\chi}\)_._ _Note that, except for the exceptional case (1),_ \(f\) _and_ \(g\) _are Abelian._ _Furthermore, except for the exceptional case (1), if_ \(S\) _is a topological semigroup and_ \(f\in C(S)\)_, then_ \(g,\chi\)_,_ \(\chi_{1},\chi_{2},\phi_{\chi}\in C(S)\)_._ Proof.: If \(f=0\) then \(g\) will be arbitrary. This occurs in case (1). From now on we assume that \(f\neq 0\). Suppose that \(f=0\) on \(S^{2}\). For all \(x,y\in S\), we get from equation (1.2) that \[f(x)g(y)+f(y)g(x)=0. \tag{4.13}\] Since \(f\neq 0\), we deduce from equation (4.13) according to [12, Exercise 1.1(b)] that \(g=0\). This occurs in part (2) of Theorem 4.3. Now we assume that \(f\neq 0\) on \(S^{2}\) and we discuss two cases according to whether \(f\) and \(g\) are linearly dependent or not. First case: \(f\) and \(g\) are linearly dependent. There exists a constant \(\alpha\in\mathbb{C}\) such that \(g=\alpha f\), so equation (1.2) can be written as \(f(x\sigma(y))=2\alpha f(x)f(y)\). This implies that \(\alpha\neq 0\), since \(f\neq 0\) on \(S^{2}\). So the function \(\chi:=2\alpha f\) is multiplicative and \(\chi^{*}=\chi\). This is case (3). Second case: \(f\) and \(g\) are linearly independent. According to Lemma 4.1 we have \(f=f^{*}\) and \(g=g^{*}\). So equation (1.2) becomes \[f(xy)=f(x)g(y)+f(y)g(x). \tag{4.14}\] According to Theorem 2.3, and taking into account that \(f\neq 0\), \(g\neq 0\), \(f^{*}=f\) and \(g^{*}=g\), we have the following possibilities: (i) \(f=c\left(\chi_{1}-\chi_{2}\right)\) and \(g=\frac{\chi_{1}+\chi_{2}}{2}\), for some constant \(c\in\mathbb{C}\backslash\{0\}\), where \(\chi_{1},\chi_{2}:S\to\mathbb{C}\) are two different multiplicative functions such that \(\chi_{1}^{*}=\chi_{1}\) and \(\chi_{2}^{*}=\chi_{2}\). This is case (4). (ii) \(f=\phi_{\chi}\) and \(g=\chi\), where \(\chi:S\to\mathbb{C}\) is a non-zero multiplicative function such that \(\phi_{\chi}^{*}=\phi_{\chi}\) and \(\chi^{*}=\chi\). This occurs in part (5) of Theorem 4.3. Conversely, we check by elementary computations that if \(f,g\) have one of the forms (1)-(5) then \((f,g)\) is a solution of equation (1.2). For the continuity statements, the continuity of \(g\) follows easily from the continuity of \(f\) and the functional equation (1.2). In case (4) we get the continuity of \(\chi_{1}\) and \(\chi_{2}\) with the help of [12, Theorem 3.18]. This completes the proof of Theorem 4.3. At this point of our discussion about solutions of (1.2), a natural question comes up: can we derive the solution of the variant (1.6) of (1.2) from Theorem 4.3? The next result gives a positive answer. **Proposition 4.4**.: _The functional equation (1.2) and its variant (1.6), namely_ \[f(\sigma(y)x)=f(x)g(y)+f(y)g(x),\quad x,y\in S,\] _have the same solutions._ Proof.: Theorem 4.3 proves that if \((f,g)\) is a solution of Eq. (1.2), then \(f\) is abelian, in particular central. So \((f,g)\) is a solution of the variant (1.6). Now let \(f,g:S\to\mathbb{C}\) be a solution of (1.6). It suffices to show that \(f\) is central. First case: \(f\) and \(g\) are linearly dependent. There exists a constant \(\gamma\in\mathbb{C}\) such that \(g=\gamma f\). Equation (1.6) becomes \[f(\sigma(y)x)=2\gamma f(x)f(y),\quad x,y\in S.\] If \(\gamma=0\), then \(f=0\) on \(S^{2}\), so \(f\) is central. If \(\gamma\neq 0\), then \(2\gamma f\) is multiplicative, and then \(f\) is central. 
Second case: \(f\) and \(g\) are linearly independent. According to Remark 4.2 we have \(f=f^{*}\) and \(g=g^{*}\). If we apply Eq. (1.6) to \((\sigma(x),y)\) we get \[f(yx)=f(x)g(y)+f(y)g(x),\quad x,y\in S,\] which implies that \(f\) is central. This completes the proof of Proposition 4.4. ## 5 The sine subtraction formula (1.3) In this section we solve the functional equation (1.3). The following lemma will be used later. **Lemma 5.1**.: _Let \(f,g:S\to\mathbb{C}\) be a solution of Eq. (1.3) such that \(f\) and \(g\) are linearly independent. Then \(f^{*}=-f\) and \(g^{*}=g+\beta f\) for some constant \(\beta\in\mathbb{C}\)._ Proof.: By using computations similar to those of the proof of Lemma 4.1 we find that for all \(y,z\in S\) \[g(yz)=g(y)g(z)-f(z)h(y), \tag{5.1}\] and that \[g=g^{*}+af^{*},\] \[f=-bf^{*},\] for some constants \(a,b\in\mathbb{C}\) and some function \(h\). Since \(f\neq 0\) we can see that \(b\neq 0\), and then \(g^{*}=g-af^{*}=g+\frac{a}{b}f\). Choosing \(\beta=\frac{a}{b}\), we get \(g^{*}=g+\beta f\). Now, replacing \(x\) by \(\sigma(x)\) in (1.3), we get that \[f(xy)=f(x)g(y)+bf(y)g(x)+af(x)f(y),\quad x,y\in S.\] Computing \(f(xyz)\) in two different ways and using Eq. (5.1), we get after some simplifications that \[g(y)\left((a-ab)f(x)+(b-b^{2})g(x)\right)=bf(y)h(x)-f(x)h(y). \tag{5.2}\] Since \(f\neq 0\) we get from Eq. (5.2) that \(h=\delta f+\gamma g\) for some constants \(\delta,\gamma\in\mathbb{C}\). Taking this into account, Eq. (5.2) becomes \[g(y)\left((a-ab)f(x)+(b-b^{2})g(x)\right)=f(y)\left((b\delta-\delta)f(x)+b\gamma g(x)\right)-\gamma g(y)f(x).\] Since \(f\) and \(g\) are linearly independent we deduce that \[(a-ab)f(x)+(b-b^{2})g(x)=-\gamma f(x).\] Then \(b=1\) since \(b\neq 0\). That is, \(f^{*}=-f\). This completes the proof of Lemma 5.1. The next theorem generalizes the results about solutions of (1.3) found in [11, Proposition 3.1], [4, Theorem 5.1], [5, Proposition 3.2] and [8, Corollary 4.3]. **Theorem 5.2**.: _The solutions \(f,g:S\to\mathbb{C}\) of Eq. (1.3) are the following pairs:_ 1. \(f=0\) _and_ \(g\) _is arbitrary._ 2. \(f\) _is any non-zero function such that_ \(f=0\) _on_ \(S^{2}\) _and_ \(g=\alpha f\)_, where_ \(\alpha\in\mathbb{C}\)_._ 3. \(f=c(\chi-\chi^{*})\) _and_ \(g=\dfrac{\chi+\chi^{*}}{2}+c_{1}\dfrac{\chi-\chi^{*}}{2}\)_, where_ \(\chi:S\to\mathbb{C}\) _is a multiplicative function such that_ \(\chi\neq\chi^{*}\)_,_ \(\chi\circ\sigma^{2}=\chi\)_,_ \(c_{1}\in\mathbb{C}\) _and_ \(c\in\mathbb{C}\backslash\{0\}\)_._ 4. \(f=\phi_{\chi}\) _and_ \(g=\chi+c_{2}\phi_{\chi}\)_, where_ \(\chi:S\to\mathbb{C}\) _is a non-zero multiplicative function such that_ \(\chi^{*}=\chi\)_,_ \(\phi_{\chi}^{*}=-\phi_{\chi}\) _and_ \(c_{2}\in\mathbb{C}\) _is a constant._ _Note that, except for the exceptional case (1), \(f\) and \(g\) are Abelian._ _Moreover, except for the exceptional case (1), if \(S\) is a topological semigroup and \(f\in C(S)\), then \(g,\chi,\chi^{*},\phi_{\chi}\in C(S)\)._ Proof.: If \(f=0\), it is easy to see that \(g\) is arbitrary. This is case (1). Now we split the discussion into two cases according to whether \(f\) and \(g\) are linearly dependent or not. First case: \(f\) and \(g\) are linearly dependent. That is, \(g=\alpha f\) for some constant \(\alpha\in\mathbb{C}\). So equation (1.3) becomes \[f(x\sigma(y))=\alpha f(x)f(y)-\alpha f(y)f(x)=0,\quad x,y\in S.\] This implies that \(f=0\) on \(S^{2}\). This occurs in case (2). Second case: \(f\) and \(g\) are linearly independent. 
According to Lemma 5.1, we have \(f^{*}=-f\) and \(g^{*}=g+\beta f\), where \(\beta\in\mathbb{C}\) is a constant. Then if we apply Eq. (1.3) to the pair \((\sigma(x),y)\) we obtain \[f(xy)=f(x)g(y)+f(y)g(x)+\beta f(x)f(y),\quad x,y\in S.\] That is, \[f(xy)=f(x)\left[g(y)+\dfrac{\beta}{2}f(y)\right]+f(y)\left[g(x)+\dfrac{\beta}{2}f(x)\right],\quad x,y\in S.\] According to Theorem 2.3, and taking into account that \(f\) and \(g\) are linearly independent, we have the following possibilities: (i) \(f=c\left(\chi_{1}-\chi_{2}\right)\) and \(g+\dfrac{\beta}{2}f=\dfrac{\chi_{1}+\chi_{2}}{2}\), for some constant \(c\in\mathbb{C}\backslash\{0\}\), where \(\chi_{1},\chi_{2}:S\rightarrow\mathbb{C}\) are two different multiplicative functions. Since \(f=-f^{*}\), we get \[\chi_{1}+\chi_{1}^{*}=\chi_{2}^{*}+\chi_{2}.\] This implies that \(\chi_{1}=\chi_{2}^{*}\) and \(\chi_{2}=\chi_{1}^{*}\). This occurs in case (3) with \(\chi=\chi_{1}\), \(\chi^{*}=\chi_{2}\) and \(c_{1}=\dfrac{-\beta c}{2}\). In addition, \(\chi_{2}^{*}=\chi_{1}\) implies that \(\chi\circ\sigma^{2}=\chi\). (ii) \(f=\phi_{\chi}\) and \(g+\dfrac{\beta}{2}f=\chi\), where \(\chi:S\rightarrow\mathbb{C}\) is a non-zero multiplicative function such that \(\phi_{\chi}^{*}=-\phi_{\chi}\). By applying Eq. (1.3) to the pair \((\sigma(x),y)\) we obtain \[\phi_{\chi}(xy)=\phi_{\chi}(x)\chi(y)+\phi_{\chi}(y)\chi^{*}(x).\] On the other hand we have \[\phi_{\chi}(xy)=\phi_{\chi}(x)\chi(y)+\phi_{\chi}(y)\chi(x).\] Comparing these last two identities, we can see that \(\chi=\chi^{*}\) since \(f\neq 0\). This occurs in part (4) of Theorem 5.2 with \(c_{2}=\dfrac{-\beta}{2}\). For the converse, we can easily check that the forms (1)-(4) satisfy Eq. (1.3). Finally, if \(S\) is a topological semigroup, the continuity statements are easy to verify. This completes the proof of Theorem 5.2. In the next section we shall apply our theory to two different types of groups. The first one is abelian and the second one is not. ## 6 Applications **Application 6.1**.: _Let \(S=(\mathbb{R},+)\), let \(\beta\in\mathbb{R}\backslash\{0\}\) be a fixed element and let \(\sigma(x)=\beta x\) for all \(x\in\mathbb{R}\). The functional equations (1.1) and (1.2) can be written respectively as follows:_ \[g(x+\beta y)=g(x)g(y)+f(x)f(y),\quad x,y\in\mathbb{R}, \tag{6.1}\] \[f(x+\beta y)=f(x)g(y)+f(y)g(x),\quad x,y\in\mathbb{R}. \tag{6.2}\] _We note that equation (6.1) with \(\beta=-1\) is [12, Example 4.18], and equation (6.2) with \(\beta=1\) is [12, Example 4.5]. We are interested in determining the solutions of (6.1) and (6.2) when \(\beta\in\mathbb{R}\backslash\{0,-1,1\}\). For this we apply Theorem 3.3 to Eq. (6.1) and Theorem 4.3 to Eq. (6.2). Let \(\chi:S\rightarrow\mathbb{C}\) be a non-zero multiplicative function such that_ \[\chi(\beta x)=\chi(x),\text{ for all }x\in\mathbb{R}.\] _Since \(S\) is a group, \(\chi\) is a character. So we get \(\chi\left((\beta-1)x\right)=1\) for all \(x\in\mathbb{R}\). Since \(\beta\neq 1\), we obtain \(\chi=1\). In the same way we show that the only non-zero multiplicative function \(\chi\) satisfying \(\chi(\beta^{2}x)=\chi(x)\) for all \(x\in\mathbb{R}\) is \(\chi=1\), because \(\beta\neq\pm 1\). So the special sine addition law (1.4) becomes_ \[\phi(x+y)=\phi(x)+\phi(y),\quad x,y\in\mathbb{R}.\] _That is, \(\phi\) is additive. 
In addition, if \(\phi(\beta x)=\phi(x)\) for all \(x\in\mathbb{R}\), then \(\phi=0\) since \(\beta\neq 1\)._ _The solutions \(f,g:S\rightarrow\mathbb{C}\) of Eq. (6.1) are the following: 1) \(f=0\) and \(g=0\). 2) \(f=\dfrac{\alpha}{1+\alpha^{2}}\) and \(g=\dfrac{1}{1+\alpha^{2}}\), where \(\alpha\in\mathbb{C}\backslash\{i,-i\}\). 3) \(f=0\) and \(g=1\)._ _The solutions \(f,g:S\rightarrow\mathbb{C}\) of Eq. (6.2) can be listed as follows: 1) \(f=0\) and \(g\) is arbitrary. 2) \(f=\dfrac{1}{2\alpha}\) and \(g=\dfrac{1}{2}\), where \(\alpha\in\mathbb{C}\backslash\{0\}\)._ **Application 6.2**.: _Let \(G\) be the \((ax+b)\)-group defined by_ \[G:=\left\{\begin{pmatrix}a&b\\ 0&1\end{pmatrix}\mid a>0,\quad b\in\mathbb{R}\right\},\] _and let \(X=\begin{pmatrix}a&b\\ 0&1\end{pmatrix}\) for all \(a,b\in\mathbb{R}\) such that \(a>0\). We consider the following automorphism on \(G\):_ \[\sigma\left(X\right)=\begin{pmatrix}a&2023b\\ 0&1\end{pmatrix}.\] _So \(\sigma\) is not involutive. According to [12, Example 2.10, Example 3.13], the continuous additive and the non-zero multiplicative functions on \(G\) have respectively the forms_ \[A_{c}\left(X\right)=c\log(a),\] _and_ \[\chi_{\lambda}\left(X\right)=a^{\lambda},\] _where \(c,\lambda\in\mathbb{C}\). We can see that \(\chi_{\lambda}\circ\sigma=\chi_{\lambda}\) and \(A_{c}\circ\sigma=A_{c}\), and it is well known that the non-zero continuous solution \(\phi\) of (1.4) on the group \(G\) is of the form \(\phi=\chi_{\lambda}A_{c}\)._ _The non-zero solutions \(f,g\in C(G)\) of Eq. (1.1) are the following: 1) \(f\left(X\right)=\dfrac{\alpha a^{\lambda}}{1+\alpha^{2}}\) and \(g\left(X\right)=\dfrac{a^{\lambda}}{1+\alpha^{2}}\), where \(\alpha\in\mathbb{C}\backslash\{0,i,-i\}\) and \(\lambda\in\mathbb{C}\). 2) \(f\left(X\right)=-ica^{\lambda}\log(a)\) and \(g\left(X\right)=a^{\lambda}\pm ca^{\lambda}\log(a)\), where \(c\in\mathbb{C}\backslash\{0\}\) and \(\lambda\in\mathbb{C}\)._ _The non-zero solutions \(f,g\in C(G)\) of Eq. (1.2) are the following: 1) \(f\left(X\right)=\dfrac{a^{\lambda}}{2\alpha}\) and \(g\left(X\right)=\dfrac{a^{\lambda}}{2}\), where \(\alpha\in\mathbb{C}\backslash\{0\}\) and \(\lambda\in\mathbb{C}\). 2) \(f\left(X\right)=ca^{\lambda}\log(a)\) and \(g\left(X\right)=a^{\lambda}\), where \(c\in\mathbb{C}\backslash\{0\}\) and \(\lambda\in\mathbb{C}\)._ **Declarations** **Ethical Approval** Not Applicable. **Competing interests** None. **Author contributions** The authors confirm contribution to the paper as follows: study conception and design: Y. Aserrar, E. Elqorachi; data collection: Y. Aserrar; analysis and interpretation of results: Y. Aserrar, E. Elqorachi; draft manuscript preparation: Y. Aserrar. All authors reviewed the results and approved the final version of the manuscript. **Funding** None. **Availability of data and materials** Not applicable.
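The closed forms above lend themselves to a quick symbolic cross-check. The following minimal sympy sketch (an editorial verification, not part of the original paper) checks case (4) of Theorem 3.3 and the constant solution of (6.1); the values \(\chi_{i}(x),\chi_{i}(y)\) are treated as free symbols, using only multiplicativity and \(\chi_{i}^{*}=\chi_{i}\), so that \(\chi_{i}(x\sigma(y))=\chi_{i}(x)\chi_{i}(y)\).

```python
import sympy as sp

# chi_i(x), chi_i(y) as free symbols; multiplicativity plus chi_i* = chi_i
# gives chi_i(x sigma(y)) = chi_i(x) chi_i(y).
d = sp.symbols('delta')
c1x, c1y, c2x, c2y = sp.symbols('chi1x chi1y chi2x chi2y')

g = lambda u, v: (u / d + d * v) / (1 / d + d)  # g = (delta^-1 chi1 + delta chi2)/(delta^-1 + delta)
f = lambda u, v: (v - u) / (1 / d + d)          # f = (chi2 - chi1)/(delta^-1 + delta)

# Case (4) of Theorem 3.3: g(x sigma(y)) = g(x)g(y) + f(x)f(y).
lhs = g(c1x * c1y, c2x * c2y)
rhs = g(c1x, c2x) * g(c1y, c2y) + f(c1x, c2x) * f(c1y, c2y)
print(sp.simplify(lhs - rhs))   # -> 0

# Constant solution of (6.1): f = alpha/(1+alpha^2), g = 1/(1+alpha^2).
a = sp.symbols('alpha')
gc, fc = 1 / (1 + a**2), a / (1 + a**2)
print(sp.simplify(gc - (gc**2 + fc**2)))   # -> 0
```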
2301.06151
Multiverse in Karch-Randall Braneworld
In this paper, we propose a model based on wedge holography that can describe the multiverse. In wedge holography, we consider two gravitating baths, one of which has strong gravity and the other one has weak gravity. To describe a multiverse, we consider $2n$ Karch-Randall branes, and we propose that various $d$-dimensional universes are localized on these branes. These branes are embedded in $(d+1)$-dimensional spacetime. The model is useful in obtaining the Page curve of black holes with multiple horizons and in the resolution of the ``grandfather paradox''. We explicitly obtain the Page curves of eternal AdS black holes for $n=2$ multiverse and Schwarzschild de-Sitter black hole with two horizons.
Gopal Yadav
2023-01-15T18:00:05Z
http://arxiv.org/abs/2301.06151v5
# Multiverse in Karch-Randall Braneworld ###### Abstract In this paper, we propose a model based on wedge holography that can describe the multiverse. In wedge holography, we consider two gravitating baths, one of which has strong gravity and the other one has weak gravity. To describe a multiverse, we consider \(2n\) Karch-Randall branes, and we propose that various \(d\)-dimensional universes are localized on these branes. These branes are embedded in \((d+1)\)-dimensional spacetime. The model is useful in obtaining the Page curve of black holes with multiple horizons and in the resolution of the "grandfather paradox". We explicitly obtain the Page curves of eternal AdS black holes for \(n=2\) multiverse and Schwarzschild de-Sitter black hole with two horizons. ###### Contents * 1 Introduction * 2 Brief Review of Wedge Holography * 3 Emerging Multiverse from Wedge Holography * 3.1 Anti de-Sitter Background * 3.2 de-Sitter Background * 3.3 Braneworld Consists of Anti de-Sitter and de-Sitter Spacetimes * 4 Application to Information Paradox * 4.1 Page Curve of Eternal AdS Black Holes in \(n=2\) Multiverse * 4.2 Page Curve of Schwarzschild de-Sitter Black Hole * 4.2.1 Schwarzschild patch * 4.2.2 de-Sitter patch * 5 Application to Grandfather Paradox * 6 Conclusion ## 1 Introduction Recently, the doubly holographic setup has drawn the attention of many researchers studying the information paradox [1]. One version of the resolution of the information paradox is to obtain the Page curve [2]. The AdS/CFT conjecture states that bulk gravity is dual to the quantum field theory on the AdS boundary [3]. The doubly holographic setup is an extended version in which one considers two copies of AdS/BCFT-like systems [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. The idea originated from the Karch-Randall model, where one chops off the AdS boundary with a Karch-Randall brane [26, 27]. Let us discuss three equivalent descriptions of the doubly holographic setup which are used to obtain the Page curve. * A BCFT lives on the \(d\)-dimensional boundary of the AdS spacetime. The BCFT has a \((d-1)\)-dimensional boundary, known as the defect. * Gravity on the \(d\)-dimensional Karch-Randall brane is coupled to the BCFT at the defect via a transparent boundary condition. * The \(d\)-dimensional BCFT has a gravity dual, which is Einstein gravity on \(AdS_{d+1}\). In this setup, the Karch-Randall brane contains a black hole whose Hawking radiation is collected by the BCFT bath. One can define the radiation region on the BCFT bath, and the entanglement entropy of Hawking radiation can be obtained using the semiclassical formula in the second description [28]. The advantage of a doubly holographic setup is that we can compute the entanglement entropy very easily using the classical Ryu-Takayanagi formula [29] in the third description. In this kind of setup, there exist two types of extremal surfaces: the Hartman-Maldacena surface [30], which starts at the defect, crosses the black hole horizon, and goes to its thermofield double partner; in this process the volume of the Einstein-Rosen bridge grows. The other extremal surface is the island surface, which starts at the BCFT bath and lands on the Karch-Randall brane. It turns out that initially the entanglement entropy of the Hartman-Maldacena surface dominates; after the Page time the island surface takes over, and hence one gets the Page curve. The problem with this setup is that gravity becomes massive on the Karch-Randall brane, which is not physical [31, 32, 33, 34]. 
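The competition just described — linear growth of the Hartman-Maldacena entropy against a constant island entropy — is what produces the Page curve. The following toy sketch is purely illustrative (the growth rate and entropy scale are arbitrary placeholders, not quantities from this paper): it plots the minimum of the two contributions.

```python
import numpy as np
import matplotlib.pyplot as plt

S_thermal = 1.0   # black-hole thermal entropy (arbitrary units)
rate = 0.1        # assumed linear growth rate of S_HM (arbitrary)
t = np.linspace(0.0, 40.0, 400)

S_HM = rate * t                                # Hartman-Maldacena branch
S_island = 2.0 * S_thermal * np.ones_like(t)   # island branch (constant)
S_page = np.minimum(S_HM, S_island)            # entanglement entropy

print("Page time (toy units):", 2.0 * S_thermal / rate)

plt.plot(t, S_HM, "--", label=r"$S_{\rm HM}\propto t$")
plt.plot(t, S_island, ":", label=r"$2S_{\rm thermal}$ (island)")
plt.plot(t, S_page, label="Page curve")
plt.xlabel("t"); plt.ylabel("S"); plt.legend(); plt.show()
```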
See [5, 23, 35, 36] for computations of the Page curve with massless gravity on the Karch-Randall brane. Massless gravity on the Karch-Randall brane in [35] arises due to the inclusion of the Dvali-Gabadadze-Porrati term [37] on the same. In [23], we explicitly showed that a normalizable graviton wave function requires a massless graviton. Another reason is that we implemented the Dirichlet boundary condition on the graviton wave function at the black hole horizon, which quantized the graviton mass and allowed a massless graviton. Further, the tension of the Karch-Randall brane (in our case it was a fluxed hyper-surface) is inversely proportional to the black hole horizon, and we obtained a "volcano"-like potential; hence one can localize massless gravity on the Karch-Randall brane. Despite massless gravity on the Karch-Randall brane, we had comparable entanglement entropies coming from the Hartman-Maldacena and island surfaces. Therefore we obtained the Page curve of an eternal neutral black hole from a top-down approach. In [36], the authors imposed Dirichlet boundary conditions on gravitating branes in wedge holography, where they obtained the Page curve even in the presence of massless gravity. Islands with massless gravity were also present in [5] because of the geometrical construction of the critical Randall-Sundrum II model. The information paradox of flat space black holes was discussed in [38, 39, 40]1, where one defines the subregions on the holographic screen to compute the holographic entanglement entropy. The setup in which the bath is also gravitating is known as "wedge holography" [41, 42, 48]. See [43, 44, 45, 46] for work on quantum entanglement, complexity, and entanglement negativity in de-Sitter space2. Footnote 2: We thank S. Choudhury for bringing his works to our attention. In wedge holography, we consider two Karch-Randall branes, \(Q_{1}\) and \(Q_{2}\), of tensions \(T_{1}\) and \(T_{2}\) such that \(T_{1}<T_{2}\). In this setup, \(Q_{2}\) contains a black hole whose Hawking radiation is collected by \(Q_{1}\). Literature on wedge holography can be found in [47, 48, 49, 50]. It is easy to obtain the Page curve for black holes with a single horizon. In this paper, we address the following issues: we construct a multiverse using the idea of wedge holography and use this setup to obtain the Page curve of black holes with multiple horizons. The multiverse in this paper will be constructed by localizing Einstein's gravity on various Karch-Randall branes. These branes will be embedded into one higher dimension. Further, we propose that it is possible to travel between different universes because all of them communicate with each other. We suspect that the "grandfather paradox" can be resolved in this setup. The paper is organized as follows. In section 2, we briefly review wedge holography. In section 3, we discuss the existence of the multiverse in the Karch-Randall braneworld with anti de-Sitter geometry, de-Sitter geometry, and the issues that arise when we mix de-Sitter and anti de-Sitter spacetimes, in subsections 3.1, 3.2 and 3.3. In section 4, we discuss the application of the multiverse to the information paradox, where we obtain the Page curve of eternal AdS black holes for the \(n=2\) multiverse in 4.1 and the Page curve of the Schwarzschild de-Sitter black hole in 4.2 via 4.2.1 and 4.2.2. Section 5 is on the application of this model to the grandfather paradox. Finally, we discuss our results in section 6. ## 2 Brief Review of Wedge Holography In this section, let us review wedge holography [41, 42, 48]. 
Consider the following action. \[S=-\frac{1}{16\pi G_{N}^{(d+1)}}\int d^{d+1}x\sqrt{-g_{\rm bulk}}\left(R[g_{\rm bulk}]-2\Lambda\right)-\frac{1}{8\pi G_{N}^{(d+1)}}\int_{\alpha=1,2}d^{d}x\sqrt{-h_{\alpha}}\left(\mathcal{K}_{\alpha}-T_{\alpha}\right), \tag{1}\] where the first term is the Einstein-Hilbert term with a negative cosmological constant \(\left(\Lambda=-\frac{d(d-1)}{2}\right)\), and the second term corresponds to the boundary terms on the Karch-Randall branes of tensions \(T_{\alpha=1,2}\). The Einstein equation for the bulk action (1) turns out to be: \[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{d(d-1)}{2}g_{\mu\nu}. \tag{2}\] The solution to the Einstein equation is [42]: \[ds_{(d+1)}^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+\cosh^{2}(r)h_{ij}^{\alpha}dy^{i}dy^{j}, \tag{3}\] where \(h_{ij}^{\alpha}\) are the induced metrics on the Karch-Randall branes. The Neumann boundary condition is obtained by varying (1) with respect to \(h^{\alpha}_{ij}\) and is given as: \[\mathcal{K}^{\alpha}_{ij}-(\mathcal{K}^{\alpha}-T^{\alpha})h^{\alpha}_{ij}=0. \tag{4}\] For the consistent construction of wedge holography, the metric (3) should be the solution of (2) provided \(h^{\alpha}_{ij}\) satisfies the Einstein equation with a negative cosmological constant in \(d\) dimensions, \[R^{\alpha}_{ij}-\frac{1}{2}h^{\alpha}_{ij}R[h_{ij}]^{\alpha}=\frac{(d-1)(d-2)}{2}h^{\alpha}_{ij}, \tag{5}\] and it should satisfy the Neumann boundary condition (4) at \(r=\pm\rho\). See Fig. 1 for a pictorial representation of wedge holography. Figure 1: Description of wedge holography. Two \(d\)-dimensional Karch-Randall branes joined at the \((d-1)\)-dimensional defect; the Karch-Randall branes are embedded in the \((d+1)\)-dimensional bulk. One can also choose \(-\rho_{1}\leq r\leq\rho_{2}\) with \(\rho_{1}\neq\rho_{2}\)[42]; in this range, the tensions of the branes will be different. This is useful in obtaining the Page curve. There are three descriptions of wedge holography, summarised below: * **Boundary description:** \(CFT_{d-1}\) living on the wedge of common boundaries of two \(AdS_{d}\)'s. * **Intermediate description:** Two Karch-Randall branes of geometry \(AdS_{d}\) (\(Q_{1}\) and \(Q_{2}\)) glued to each other at the interface point by a transparent boundary condition. 
\[S_{gen}(\mathcal{R}\cup\mathcal{I})=\frac{A(\gamma)}{4G_{N}^{(d+1)}}, \tag{8}\] where \(\gamma\) is the minimal surface in bulk. In wedge holography, there is one more extremal surface, Hartman-Maldacena surface [30], which starts at the defect, crosses the horizons, and meets its thermofield double. By plotting the entanglement entropies contributions of these surfaces, we can get the Page curve [2]. ## 3 Emerging Multiverse from Wedge Holography In this section, we discuss how one can describe multiverse from wedge holography. ### Anti de-Sitter Background In this subsection, we construct a multiverse from \(AdS\) spacetimes. Let us first start with the simplest case discussed in 2. To describe multiverse, we need multiple Karch-Randall branes located at \(r=\pm n\rho\) such that bulk metric should satisfy Neumann boundary condition at the aforementioned locations. Extrinsic curvature on the Karch-Randall brane and its trace is computed as: \[\mathcal{K}^{\alpha}_{ij}=\frac{1}{2}\left(\partial_{r}g_{ij}\right) \left|{}_{r=\pm n\rho}=\tanh(r)g_{ij}\right|_{r=\pm n\rho}=\tanh(\pm n\rho)h^{ \alpha}_{ij},\] \[\mathcal{K}^{\alpha}=h^{ij}_{\alpha}K^{\alpha}_{ij}=d\tanh(\pm n \rho). \tag{9}\] We can see that Neumann boundary condition (4) is satisfied at \(r=\pm n\rho\) provided \(T^{\alpha}_{\text{AdS}}=(d-1)\tanh(\pm n\rho)\)3, where \(\alpha=-n,...,n\). Further, bulk metric (3) is also satisfying the Einstein equation (2), and hence, this guarantees the existence of \(2n\) Karch-Randall branes in our setup. These \(2n\)-branes are analogs of universes that are embedded in \(AdS_{d+1}\). Defect is described as: \(P=Q_{\alpha}\cap Q_{\beta}\), where \(\alpha,\beta=-n,-n+1,..,1,...,n-1,n\). Now, we include the DGP term in the gravitational action, which can describe massless gravity [35]. Footnote 3: It seems that some of branes have negative tension. Let us discuss the case when branes are located at \(-n\rho_{1}\) and \(n\rho_{2}\) with \(\rho_{1}\neq\rho_{2}\). In this case tensions of branes are \((d-1)\tanh(-n\rho_{1})\) and \((d-1)\tanh(n\rho_{2})\). Negative tension issue can be resolved when we consider \(\rho_{1}<0\) and \(\rho_{2}>0\) similar to [48]. Therefore this fixes the brains stability issue in our setup. This discussion is also applicable to the case when \(\rho_{1}=\rho_{2}\). \[\small S=\frac{1}{16\pi G_{N}^{(d+1)}}\bigg{[}\int_{M}d^{d+1}x\sqrt{-g}\left( R[g]+d(d-1)\right)+2\int_{\partial M}d^{d}x\sqrt{-h}K+2\int_{\partial_{\alpha}}d^{d}x \sqrt{-h_{\alpha}}\left(\mathcal{K}_{\alpha}-T_{\alpha}+\lambda_{\alpha}R_{h _{\alpha}}\right)\bigg{]}, \tag{10}\] where the first term is the bulk Einstein-Hilbert term with negative cosmological constant, the second term is the Gibbons-Hawking-York boundary term for conformal boundary \(\partial M\), and the third term corresponds to the difference of extrinsic curvature scalar and tensions of \(2n\) Karch-Randall branes, \(R_{h_{\alpha}}\) are intrinsic curvature scalars for \(2n\) Karch-Randall branes. DGP is understood as the Dvali-Gabadadze-Porrati term [37]. In this case, bulk metric satisfies the following Neumann boundary condition at \(r=\pm n\rho\)4 Footnote 4: When we discuss multiverse then \(\alpha\) and \(\beta\) will take \(2n\) values whereas when we discuss wedge holography then \(\alpha,\beta=1,2\). \[\mathcal{K}_{\alpha,ij}-(\mathcal{K}_{\alpha}-T_{\alpha}+\lambda_{\alpha}R_{h _{\alpha}})h_{\alpha,ij}+2\lambda_{\alpha}R_{\alpha,ij}=0. 
\tag{11}\] The Einstein equation for the bulk action (10) is the same as (2), and hence the solution is: \[ds^{2}_{(d+1)}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+\cosh^{2}(r)h^{\alpha,\text{AdS}}_{ij}dy^{i}dy^{j}, \tag{12}\] with \(-n\rho_{1}\leq r\leq n\rho_{2}\). The induced metric \(h^{\alpha}_{ij}\) satisfies the Einstein equation on the brane, \[R^{\alpha}_{ij}-\frac{1}{2}h^{\alpha}_{ij}R[h_{ij}]^{\alpha}=\frac{(d-1)(d-2)}{2}h^{\alpha}_{ij}. \tag{13}\] The above equation can be derived from the following Einstein-Hilbert term including a negative cosmological constant on the brane: \[S^{\text{EH}}_{\text{AdS}}=\lambda^{\text{AdS}}_{\alpha}\int d^{d}x\sqrt{-h_{\alpha}}\left(R[h_{\alpha}]-2\Lambda^{\text{AdS}}_{\text{brane}}\right), \tag{14}\] where \(\Lambda^{\rm AdS}_{\rm brane}=-\frac{(d-1)(d-2)}{2}\), \(\lambda^{\rm AdS}_{\alpha}\left(\equiv\frac{1}{16\pi G_{N}^{d,~{}\alpha}}=\frac{1}{16\pi G_{N}^{(d+1)}}\int_{0}^{\alpha\rho}\cosh^{d-2}(r)dr~{};(\alpha=1,2,...,n)\right)^{5}\) is related to the effective Newton's constant in \(d\) dimensions, and (14) can be obtained by substituting (12) into (1) and using the value of \(\mathcal{K}^{\alpha}\) from (9) and the brane tensions \(T^{\alpha}_{\rm AdS}=(d-1)\tanh(\pm n\rho)\). The three descriptions of our setup are as follows: * **Boundary description:** \(d\)-dimensional boundary conformal field theory with a \((d-1)\)-dimensional boundary. * **Intermediate description:** All \(2n\) gravitating systems are connected at the interface point by a transparent boundary condition. * **Bulk description:** Einstein gravity in the \((d+1)\)-dimensional bulk. We see that in the intermediate description there is a transparent boundary condition at the defect; therefore the multiverse constructed in this setup consists of communicating universes localized on the Karch-Randall branes (see Figs. 2, 3). Figure 2: \(2n\) Karch-Randall branes, \(Q_{-n,-n+1,...,1,2,...,n-1,n}\), embedded in \(AdS_{d+1}\). P is the defect. The multiverse is described by \(2n\) Karch-Randall branes, which are \(d\)-dimensional objects, and the defect is a \((d-1)\)-dimensional object. The wedge holography dictionary for the "multiverse" with \(2n\) AdS branes can be stated as follows. _Classical gravity in \((d+1)\)-dimensional anti de-Sitter spacetime \(\equiv\) (Quantum) gravity on \(2n\)\(d\)-dimensional Karch-Randall branes with metric \(AdS_{d}\) \(\equiv\) CFT living on \((d-1)\)-dimensional defect._ The second and third lines exist due to braneworld holography [26, 27] and the usual AdS/CFT correspondence [3] applied to the gravity on the brane. Therefore, _classical gravity in \(AdS_{d+1}\) is dual to \(CFT_{d-1}\) at the defect_. ### de-Sitter Background In this subsection, we study the realization of the multiverse in such a way that the geometry of the Karch-Randall branes is de-Sitter spacetime. Wedge holography with a de-Sitter metric on Karch-Randall branes was discussed in [42], where the bulk spacetime is AdS spacetime, and in [52] with a flat space bulk metric. Before going into the details of the construction of the "multiverse" with de-Sitter geometry on Karch-Randall branes, first let us summarise some key points of [52]. Figure 3: Cartoon picture of the multiverse for \(n=3\) in AdS spacetimes. P is the \((d-1)\)-dimensional defect and Karch-Randall branes are denoted by \(Q_{-1/1,-2/2,-3/3}\). The authors in [52] constructed wedge holography in \((d+1)\)-dimensional flat spacetime with Lorentzian signature. Karch-Randall branes in their construction have either the geometry of 
Since our interest lies in the de-Sitter space therefore we only discuss the results related to the same. Geometry of the defect is \(S^{d-1}\). Wedge holography states that _Classical gravity in \((d+1)\)-dimensional flat spacetime_ \(\equiv\) _(Quantum) gravity on two \(d\)-dimensional Karch-Randall branes with metric \(dS_{d}\)_ \(\equiv\) _CFT living on \((d-1)\)-dimensional defect \(S^{d-1}\)._ Third line in the above duality is coming from dS/CFT correspondence [53, 54]. Authors in [52] explicitly calculated the central charge of dual CFT which was imaginary and hence CFT living at the defect is non-unitary. The above discussion also applies to the AdS bulk as well. In this case one can state the wedge holographic dictionary as: _Classical gravity in \((d+1)\)-dimensional anti de-Sitter spacetime_ \(\equiv\) _(Quantum) gravity on two \(d\)-dimensional Karch-Randall branes with metric \(dS_{d}\)_ \(\equiv\) _non-unitary CFT living at the \((d-1)\)-dimensional defect._ Now to discuss the existence of multiverse, we start with the bulk metric [42]: \[ds^{2}_{(d+1)}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+\sinh^{2}(r)h^{\beta,\text{dS }}_{ij}dy^{i}dy^{j}, \tag{15}\] (15) is the solution of (2) with a negative cosmological constant provided induced metric on Karch-Randall brane (\(h^{\beta}_{ij}\)) is the solution of Einstein equation with a positive cosmological constant on Karch-Randall branes: \[R^{\beta}_{ij}-\frac{1}{2}h^{\beta}_{ij}R[h_{ij}]^{\beta}=-\frac{(d-1)(d-2)}{ 2}h^{\beta}_{ij}. \tag{16}\] One can derive Einstein-Hilbert terms with positive cosmological constant on Karch-Randall branes by using Neumann boundary condition (4) for de-Sitter branes and substituting (15) in (1), the resulting action is given by the following expression \[S^{\text{EH}}_{\text{dS}}=\lambda^{\text{dS}}_{\beta}\int d^{d}x\sqrt{-h_{ \beta}}\left(R[h_{\beta}]-2\Lambda^{\text{dS}}_{\text{brane}}\right), \tag{17}\] where \(\lambda^{\text{dS}}_{\beta}\left(\equiv\frac{1}{16\pi G^{d,\ \beta}_{N}}=\frac{1}{16\pi G^{(d+1)}_{N}}\int_{0}^{\beta\rho}\sinh^{d-2}(r)dr \;;(\beta=1,2,...,n)\right)^{6}\) represents relationship with effective Newton's constant on the branes and \(\Lambda^{\text{dS}}_{\text{brane}}=\frac{(d-1)(d-2)}{2}\). For the de-Sitter embeddings in bulk AdS spacetime (15), extrinsic curvature and trace of the same on the Karch-Randall branes are obtained as: \[\mathcal{K}^{\beta}_{ij}=\frac{1}{2}\left(\partial_{r}g_{ij}\right) |_{r=\pm n\rho}=\coth(r)g_{ij}|_{r=\pm n\rho}=\coth(\pm n\rho)h^{\beta}_{ij},\] \[\mathcal{K}^{\beta}=h^{ij}_{\beta}K^{\beta}_{ij}=d\coth(\pm n\rho). \tag{18}\] Using (18), we can see that (15) satisfy Neumann boundary condition (4) at \(r=\pm n\rho\) if the tensions of branes are \(T^{\beta}_{\text{dS}}=(d-1)\coth\left(\pm n\rho\right)\), where \(\beta=-n,...,n\). Therefore we can obtain \(2n\) copies of Karch-Randall branes with metric de-Sitter spacetime on each of the brane. In this case, _the multiverse consists of \(2n\) universes localized on Karch-Randall branes whose geometry is \(dS_{d}\), and these \(2n\) copies are embedded in \(AdS_{d+1}\)_. Pictorial representation of the same for \(n=3\) is given in the Fig. 4. Now let us discuss the three descriptions of multiverse with de-Sitter geometries on Karch-Randall branes. * **Boundary description:**\(d\)-dimensional BCFT with \((d-1)\)-dimensional defect. * **Intermediate description:**\(2n\) gravitating systems with de-Sitter geometry connected to each other at the \((d-1)\)-dimensional defect. 
* **Bulk description:** \((d+1)\)-dimensional Einstein gravity with a negative cosmological constant in the bulk. Figure 4: Cartoon picture of the multiverse for \(n=3\) with de-Sitter metric on Karch-Randall branes. P is the \((d-1)\)-dimensional defect and Karch-Randall branes are denoted by \(Q_{-1/1,-2/2,-3/3}\). The first and third descriptions are related to each other via the AdS/BCFT correspondence, and the \((d-1)\)-dimensional defect, which is a non-unitary CFT, exists because of the dS/CFT correspondence [53, 54]. A de-Sitter space exists for a finite time and then disappears; another de-Sitter space is born after the disappearance of the previous one [55]. Therefore it is possible to have a "multiverse" (say \(M_{1}\)) with de-Sitter branes provided all of them are created at the same "creation time"7; such a multiverse exists for a finite time, and then \(M_{1}\) disappears. After the disappearance of \(M_{1}\), another multiverse (say \(M_{2}\)), consisting of many de-Sitter branes, is born, with the same creation time for all of its de-Sitter branes. Footnote 7: Creation time is defined as the "time" when any universe is born [55]. ### Braneworld Consists of Anti de-Sitter and de-Sitter Spacetimes Based on the discussion in 3.1 and 3.2, we can construct two copies of the multiverse, \(M_{1}\) and \(M_{2}\), in such a way that the Karch-Randall branes in \(M_{1}\) have the structure of \(AdS_{d}\) spacetime and the Karch-Randall branes in \(M_{2}\) have the geometry of de-Sitter spaces in \(d\) dimensions. The bulk metric (3) of \(M_{1}\) and (15) of \(M_{2}\) satisfy Einstein's equation with a negative cosmological constant in the bulk (2). In this scenario, \(M_{1}\) consists of \(2n_{1}\) Karch-Randall branes located at \(r=\pm n_{1}\rho\) with induced metric \(h_{ij}^{\alpha,\text{AdS}}\) and tensions \(T_{\text{AdS}}^{\alpha}=(d-1)\tanh(\pm n_{1}\rho)\), and \(M_{2}\) contains \(2n_{2}\) Karch-Randall branes located at \(r=\pm n_{2}\rho\) with induced metric \(h_{ij}^{\beta,\text{dS}}\) and tensions \(T_{\text{dS}}^{\beta}=(d-1)\coth(\pm n_{2}\rho)\), where \(\alpha=-n_{1},...,n_{1}\) and \(\beta=-n_{2},...,n_{2}\). One can ask why we are interested in a setup that contains a mixture of anti de-Sitter and de-Sitter branes. The answer is that this model will be helpful in the study of the information paradox of the Schwarzschild de-Sitter black hole with two horizons from wedge holography. To do so, one has to replace the AdS branes in \(M_{1}\) with flat-space branes8 with \(n_{1}=1\). Overall, we have \(n_{1}=n_{2}=1\), such that we have two flat-space branes and two de-Sitter branes. Footnote 8: In this case, the warp factor will be different in the bulk metric. The exact metric is given in (45). Now the question is whether this description makes sense or not. When \(d\)-dimensional AdS spacetimes are embedded in \(AdS_{d+1}\), these branes intersect on a time-like surface of the \(AdS_{d+1}\) boundary, whereas when \(dS_{d}\) Karch-Randall branes are embedded in \(AdS_{d+1}\), they intersect on a space-like surface of the \(AdS_{d+1}\) boundary9. In Fig. 5, as long as \(M_{1}\) and \(M_{2}\) are disconnected from each other, there is no problem. This is what has been followed in 4.2 to get the Page curve of the Schwarzschild de-Sitter black hole by treating the Schwarzschild and de-Sitter patches independently of each other. Footnote 9: We thank J. Maldacena for comments on this. In this subsection, we have discussed the embedding of different types of Karch-Randall branes in different bulks that are disconnected from each other. 
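Before turning to [55], a one-line consistency sketch of the tensions quoted in 3.1 and 3.2 may be useful. For a bulk of the form \(dr^{2}+A(r)^{2}h_{ij}dy^{i}dy^{j}\) one has \(\mathcal{K}_{ij}=(A^{\prime}/A)h_{ij}\) and \(\mathcal{K}=d\,A^{\prime}/A\), so the Neumann condition (4) fixes \(T=(d-1)A^{\prime}/A\) on a brane at fixed \(r\). The sketch below (editorial, assuming this warped ansatz) evaluates it for both warp factors.

```python
import sympy as sp

r, d = sp.symbols('r d', positive=True)

# T = (d-1) A'(r)/A(r) from the Neumann condition for a warped metric
# dr^2 + A(r)^2 h_ij dy^i dy^j, evaluated on a brane at fixed r.
for A, label in [(sp.cosh(r), "AdS branes"), (sp.sinh(r), "dS branes")]:
    T = (d - 1) * sp.diff(A, r) / A
    print(label, "->", sp.simplify(T))
# AdS branes -> (d - 1)*tanh(r);  dS branes -> (d - 1)*cosh(r)/sinh(r),
# i.e. (d-1) coth(r) -- matching T_AdS and T_dS quoted above.
```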
The authors in [55] discussed various possibilities for embedding different types of branes, e.g., Minkowski, de-Sitter and anti de-Sitter branes, in the same bulk. The existence of the various branes is characterized by the creation time \(\tau_{*}\). There is a finite creation time for Minkowski and de-Sitter branes, and no creation time for anti de-Sitter branes. Among the various possibilities discussed in [55], the authors pointed out that one can see Minkowski, de-Sitter and anti de-Sitter branes at the same time, with creation time \(\tau_{*}=-\pi/2\), in a specific bulk. In this case, the branes have time-dependent positions. First we will summarise this result10 and then comment on its realization from wedge holography. Footnote 10: For more details, see [55]. The bulk \(AdS_{5}\) metric has the following form: \[ds^{2}=\frac{1}{z^{2}}\left(-dt_{h}^{2}+t_{h}^{2}dH_{3}^{2}+dz^{2}\right), \tag{19}\] where \(dH_{3}^{2}=d\theta^{2}+\sinh^{2}(\theta)d\omega_{2}^{2}\). In this bulk, the Minkowski Randall-Sundrum brane is located at \(z_{M}(t_{h})=z_{0}\), where \(z_{0}\) is some constant; the \(AdS_{4}\) slices are located at \(z_{\text{AdS},1}(t_{h})=\sqrt{l^{2}+t_{h}^{2}}-\sqrt{l^{2}-1}\) (when \(X_{4}>0\)) and \(z_{\text{AdS},2}(t_{h})=\sqrt{l^{2}+t_{h}^{2}}+\sqrt{l^{2}-1}\) (when \(X_{4}<0\)), on both sides of the turnaround point \(X_{4}=0\) (\(X_{4}\) being one of the parametrization coordinates of \(AdS_{5}\) defined in [55]). At \(t_{h}=0\), \(z_{\text{AdS},\text{min}}=l\mp\sqrt{l^{2}-1}\). The Minkowski and AdS branes can coexist for a fixed value of \(z\) beyond \(z_{\text{AdS},\text{min}}\). Figure 5: Braneworld consists of \(d\)-dimensional anti de-Sitter and de-Sitter spacetimes. AdS spacetimes are embedded in the bulk (3), whereas de-Sitter spacetimes are embedded in the bulk spacetime with metric (15). We have used \(n_{1}=n_{2}=3\) to draw this figure. The metric 
As discussed in detail in appendix **A** of [55] and summarised in this subsection that one can also have de-Sitter and Minkowski branes in this particular coordinate system (19). If one works with de-Sitter metric (21) on end-of-the-world brane then we expect defect CFT to be non-unitary. Due to dynamical nature of gravity on Karch-Randall brane, holographic dictionary is not well understood in the braneworld scenario. Footnote 11: We thank K. Skenderis to clarify this to us and pointing out his interesting paper [56]. Now let us discuss what is the issue in describing wedge holography with "mismatched branes". Wedge holography has "defect CFT" which comes due to dynamical gravity on Karch-Randall branes. Suppose we have two Karch-Randall branes with different geometry, one of them is AdS brane and the other one is de-Sitter brane. Then due to AdS brane, defect CFT should be unitary and due to de-Sitter brane, defect CFT should be non-unitary. It seems that we have two different CFTs at the same defect. This situation will not change even one considers four branes or in general \(2n\) branes. Hence, one may not be able to describe "multiverse" with mismatched branes from wedge holography. That was just an assumption. Common boundary of multiverses \(M_{1}\) and \(M_{2}\) (described in Fig. 5) can't be the same even when geometry is (19) due to "time-dependent" position of branes. All the AdS branes in \(M_{1}\) can communicate with each other via transparent boundary conditions at the defect and similarly all the de-Sitter branes in \(M_{2}\) are able to communicate with each other. But there is no communication between \(M_{1}\) and \(M_{2}\) even in (19). Therefore we conclude that we can create multiverse of same branes(AdS or de-Sitter) but not the mixture of two. Hence issue of mismatched branes do not alter from wedge holography perspective too. Multiverse of AdS branes exists forever whereas multiverse of de-Sitter branes has finite lifetime12. Footnote 12: We thank A. Karch for very helpful discussions on the existence of de-Sitter branes and issue of mismatched branes in wedge holography. ## 4 Application to Information Paradox Multiverse consists of \(2n\) Karch-Randall branes embedded in the bulk \(AdS_{d+1}\). Therefore there will be a single Hartman-Maldacena surface connecting the defect CFTs between thermofield double partner and \(n\) island surfaces (\(\mathcal{I}_{1}\),\(\mathcal{I}_{2}\),.....,\(\mathcal{I}_{n}\)). \(n\) Island surfaces will be stretching between corresponding branes of the same locations with opposite sign (\(r=\pm n\rho\)), see Fig. 6. Let us make the precise statement of a wedge holographic dictionary. _Classical gravity in \((d+1)\)-dimensional AdS bulk_ _(Quantum) gravity on \(2n\)\(d\)-dimensional Karch-Randall branes with metric \(AdS_{d}/dS_{d}\)_ _(CFT living on \((d-1)\)-dimensional defect._ If the metric on Karch-Randall branes will be the de-Sitter metric then CFT will be non-unitary. Therefore this description is the same as the usual wedge holography with two Karch-Randall branes, the only difference is that we have \(2n\) Karch-Randall branes now. Now let us write the explicit formula for entanglement entropies. We consider \(Q_{1,2,...,n}\) as black holes which emit Hawking radiation, the radiation is collected by gravitating baths \(Q_{-1,-2,....,-n}\) (Fig. 6). 
In this setup, the entanglement entropy for the island surfaces will be: \[S_{\text{Island}}=S_{Q_{-1}-Q_{1}}^{\mathcal{I}_{1}}+S_{Q_{-2}-Q_{2}}^{\mathcal{I}_{2}}+\cdots+S_{Q_{-n}-Q_{n}}^{\mathcal{I}_{n}}. \tag{22}\] If the entanglement entropy corresponding to the Hartman-Maldacena surface grows linearly, i.e., \(S_{\text{HM}}\propto t\), and \(S_{\text{Island}}=2S_{\text{BH}}^{i=1,2,\ldots,n,\text{ thermal}}\), then we can get the Page curve, where \(S_{\text{Island}}\) and \(S_{\text{HM}}\) can be calculated using the Ryu-Takayanagi formula [29]. Following are the three descriptions of the multiverse: * **Boundary Description:** BCFT living at the \(AdS_{d+1}\) boundary with a \((d-1)\)-dimensional boundary. * **Intermediate Description:** \(2n\) gravitating systems interacting with each other via transparent boundary conditions at the \((d-1)\)-dimensional defect. * **Bulk Description:** The gravity dual of the BCFT is Einstein gravity in the bulk. **Consistency Check:** Let us check the formula given in (22) for \(n=2\). Figure 6: In this figure, we assume that \(n\) black holes contained in \(Q_{1,2,\ldots,n}\) emit Hawking radiation which is collected by baths \(Q_{-1,-2,\ldots,-n}\). Green and yellow curves represent island surfaces between \(Q_{-n}\) and \(Q_{n}\), \(Q_{-1}\) and \(Q_{1}\) respectively. The red curve represents the Hartman-Maldacena surface starting at the defect and meeting its thermofield double partner. \(\delta M\) is the AdS boundary. ### Page Curve of Eternal AdS Black Holes in \(n=2\) Multiverse First, we will calculate the thermal entropies of the black holes. The metric of the black holes in the \(AdS\) background is: \[ds^{2}_{(d+1)}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+\cosh^{2}(r)\Bigg{(}\frac{\frac{dz^{2}}{f(z)}-f(z)dt^{2}+\sum_{i=1}^{d-2}dy_{i}^{2}}{z^{2}}\Bigg{)}, \tag{23}\] where \(f(z)=1-\frac{z^{d-1}}{z_{h}^{d-1}}\). For \(z=z_{h}\), the thermal entropy has the following form (we set \(z_{h}=1\) throughout the calculation for simplicity and focus on \(d=4\)): \[S_{\rm AdS}^{\rm thermal}=\frac{A_{z=z_{h}}^{\rm BH}}{4G_{N}^{(d+1)}}=\frac{1}{4G_{N}^{(5)}}\int dr\cosh^{2}(r)\int dy_{1}\int dy_{2}=\frac{V_{2}}{4G_{N}^{(5)}}\int dr\cosh^{2}(r), \tag{24}\] where \(V_{2}=\int\int dy_{1}dy_{2}\). Let us consider the \(n=2\) case, in which the pairs of Karch-Randall branes in \(-2\rho\leq r\leq 2\rho\) and \(-\rho\leq r\leq\rho\) act as black hole and bath systems. Therefore the total thermal entropy for the two eternal AdS black holes will be: \[S_{\rm AdS}^{\rm thermal,\ total} = \frac{V_{2}}{4G_{N}^{(5)}}\int_{-2\rho}^{2\rho}dr\cosh^{2}(r)+\frac{V_{2}}{4G_{N}^{(5)}}\int_{-\rho}^{\rho}dr\cosh^{2}(r) \tag{25}\] \[= \frac{V_{2}}{4G_{N}^{(5)}}\left(\frac{1}{2}(6\rho+\sinh(2\rho)+\sinh(4\rho))\right).\] Now let us obtain the Page curve using the formula given in (22) for the two eternal black holes. **Entanglement entropy contribution from Hartman-Maldacena surface**: The bulk metric (23), in terms of the infalling Eddington-Finkelstein coordinate \(dv=dt-\frac{dz}{f(z)}\), simplifies as follows. \[ds^{2}_{(4+1)}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+\cosh^{2}(r)\Bigg{(}\frac{-f(z)dv^{2}-2dvdz+\sum_{i=1}^{2}dy_{i}^{2}}{z^{2}}\Bigg{)}. \tag{26}\] The induced metric for the Hartman-Maldacena surface, parametrized by \(r\equiv r(z)\) and \(v\equiv v(z)\), is obtained as: \[ds^{2}=\Bigg{(}r^{\prime}(z)^{2}-\frac{\cosh^{2}(r(z))v^{\prime}(z)}{z^{2}}\left(2+f(z)v^{\prime}(z)\right)\Bigg{)}dz^{2}+\frac{\cosh^{2}(r(z))}{z^{2}}\sum_{i=1}^{2}dy_{i}^{2}, \tag{27}\] where \(r^{\prime}(z)=\frac{dr}{dz}\) and \(v^{\prime}(z)=\frac{dv}{dz}\). 
From (27), the area of the Hartman-Maldacena surface is obtained as: \[A_{\rm HM}^{\rm AdS}=V_{2}\int_{z_{1}}^{z_{\rm max}}dz\Bigg{(}\frac{\cosh^{2}(r(z))}{z^{2}}\sqrt{r^{\prime}(z)^{2}-\frac{\cosh^{2}(r(z))v^{\prime}(z)}{z^{2}}\left(2+f(z)v^{\prime}(z)\right)}\Bigg{)}, \tag{28}\] where \(z_{1}\) is the point on the gravitating bath, \(z_{\rm max}\) is the turning point of the Hartman-Maldacena surface, and \(V_{2}=\int\int dy_{1}dy_{2}\). For large time, i.e., \(t\rightarrow\infty\), \(r(z)\to 0\)[35]. Therefore, \[A_{\rm HM}^{\rm AdS}=V_{2}\int_{z_{1}}^{z_{\rm max}}dz\Bigg{(}\frac{\sqrt{-v^{\prime}(z)\left(2+f(z)v^{\prime}(z)\right)}}{z^{3}}\Bigg{)}. \tag{29}\] The equation of motion for the embedding \(v(z)\) is \[\frac{d}{dz}\left(\frac{\partial L}{\partial v^{\prime}(z)}\right) =0,\] \[\implies \frac{\partial L}{\partial v^{\prime}(z)}=E,\] \[\implies v^{\prime}(z)=\frac{-E^{2}z^{6}-\sqrt{E^{4}z^{12}+E^{2}f(z)z^{6}}-f(z)}{E^{2}f(z)z^{6}+f(z)^{2}}. \tag{30}\] Since \(v^{\prime}(z)|_{z=z_{\rm max}}=0\), where \(z_{\rm max}\) is the turning point, we have \(E=\frac{i\sqrt{f(z_{\rm max})}}{z_{\rm max}^{3}}\), and \(\frac{dE}{dz_{\rm max}}=0\) implies \(z_{\rm max}=2^{1/3}z_{h}\) (i.e., the turning point of the Hartman-Maldacena surface lies outside the horizon). The time on the bath can be obtained as given below: \[t_{1}=t(z_{1})=-\int_{z_{1}}^{z_{\rm max}}\left(v^{\prime}(z)+\frac{1}{f(z)}\right)dz. \tag{31}\] Now let us analyze the late-time behavior of the area of the Hartman-Maldacena surface: \[\lim_{t\rightarrow\infty}\frac{dA_{\rm HM}^{\rm AdS}}{dt}=\lim_{t\to\infty}\Bigg{(}\frac{\frac{dA_{\rm HM}^{\rm AdS}}{dz_{\rm max}}}{\frac{dt}{dz_{\rm max}}}\Bigg{)}=\frac{L(z_{\rm max},v^{\prime}(z_{\rm max}))+\int_{z_{1}}^{z_{\rm max}}\frac{\partial L}{\partial z_{\rm max}}dz}{-v^{\prime}(z_{\rm max})-\frac{1}{f(z_{\rm max})}-\int_{z_{1}}^{z_{\rm max}}\frac{\partial v^{\prime}(z)}{\partial z_{\rm max}}}. \tag{32}\] Since \[\lim_{t\rightarrow\infty}\frac{\partial v^{\prime}(z)}{\partial z_{\rm max}}=\lim_{t\rightarrow\infty}\frac{\partial v^{\prime}(z)}{\partial E}\frac{\partial E}{\partial z_{\rm max}}=0,\] \[\lim_{t\rightarrow\infty}\frac{\partial L(z,v^{\prime}(z))}{\partial z_{\rm max}}=\frac{\partial L(z,v^{\prime}(z))}{\partial v^{\prime}(z)}\frac{\partial v^{\prime}(z)}{\partial z_{\rm max}}=0, \tag{33}\] therefore, \[\lim_{t\rightarrow\infty}\frac{dA_{\rm HM}^{\rm AdS}}{dt}=\frac{L(z_{\rm max},v^{\prime}(z_{\rm max}))}{-v^{\prime}(z_{\rm max})-\frac{1}{f(z_{\rm max})}}=\frac{\frac{\sqrt{-v^{\prime}(z_{\rm max})(2+f(z_{\rm max})v^{\prime}(z_{\rm max}))}}{z_{\rm max}^{3}}}{-v^{\prime}(z_{\rm max})-\frac{1}{f(z_{\rm max})}}=constant. \tag{34}\] The above equation implies that \(A_{\rm HM}^{\rm AdS}\propto t_{1}\), and hence the entanglement entropy for the Hartman-Maldacena surface has the following form: \[S_{\rm HM}^{\rm AdS}\propto t_{1}. \tag{35}\] This corresponds to an infinite amount of Hawking radiation when \(t_{1}\rightarrow\infty\), i.e., at late times, and hence leads to the information paradox. **Entanglement entropy contribution from Island surfaces**: Now consider the island surfaces parametrized as \(t=constant\) and \(z\equiv z(r)\). The entanglement entropy of the two eternal AdS black holes for the island surfaces can be obtained using (22). 
There are two island surfaces (\(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\)) stretching between the Karch-Randall branes located at \(r=\pm\rho\) (\(\mathcal{I}_{1}\)) and \(r=\pm 2\rho\) (\(\mathcal{I}_{2}\)), and hence we can write (22) for them as given below: \[S_{\text{AdS}}^{\text{Island}}=S_{Q_{-1}-Q_{1}}^{\mathcal{I}_{1}}+S_{Q_{-2}-Q_{2}}^{\mathcal{I}_{2}}=\frac{(\mathcal{A}_{\mathcal{I}_{1}}+\mathcal{A}_{\mathcal{I}_{2}})}{4G_{N}^{(5)}}=\frac{\int d^{3}x\sqrt{h_{1}}+\int d^{3}x\sqrt{h_{2}}}{4G_{N}^{(5)}}. \tag{36}\] First we calculate \(\mathcal{A}_{\mathcal{I}_{1}}\). The induced metric on the Karch-Randall branes can be obtained from (23) by using the parametrization of the island surface, \(t=constant\) and \(z=z(r)\), and restricting to \(d=4\) with \(f(z)=1-z^{3}\) (since \(z_{h}=1\)): \[ds^{2}=\Bigg{(}1+\frac{\cosh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}(1-z(r)^{3})}\Bigg{)}dr^{2}+\frac{\cosh^{2}(r)}{z(r)^{2}}\sum_{i=1}^{2}dy_{i}^{2}. \tag{37}\] The area of the island surface \(\mathcal{I}_{1}\) from (37) is obtained as \[\mathcal{A}_{\mathcal{I}_{1}}=V_{2}\int_{-\rho}^{\rho}dr\mathcal{L}_{\mathcal{I}_{1}}\left(z(r),z^{\prime}(r)\right)=V_{2}\int_{-\rho}^{\rho}dr\Bigg{(}\frac{\cosh^{2}(r)}{z(r)^{2}}\sqrt{1+\frac{\cosh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}(1-z(r)^{3})}}\Bigg{)}, \tag{38}\] where we have chosen \(z_{h}=1\) and hence \(0<z<1\) for \(f(z)\geq 0\). Let us discuss the variation of the action (38). \[\delta\mathcal{A}_{\mathcal{I}_{1}}=V_{2}\int_{-\rho}^{\rho}dr\Bigg{[}\Bigg{(}\frac{\delta\mathcal{L}_{\mathcal{I}_{1}}\left(z(r),z^{\prime}(r)\right)}{\delta z(r)}\Bigg{)}\,\delta z(r)+\Bigg{(}\frac{\delta\mathcal{L}_{\mathcal{I}_{1}}\left(z(r),z^{\prime}(r)\right)}{\delta z^{\prime}(r)}\Bigg{)}\,\delta z^{\prime}(r)\Bigg{]}\] \[=V_{2}\int_{-\rho}^{\rho}dr\left(\frac{\delta\mathcal{L}_{\mathcal{I}_{1}}\left(z(r),z^{\prime}(r)\right)}{\delta z^{\prime}(r)}\right)\delta z(r)-\int_{-\rho}^{\rho}dr\Bigg{[}\frac{d}{dr}\left(\frac{\delta\mathcal{L}_{\mathcal{I}_{1}}\left(z(r),z^{\prime}(r)\right)}{\delta z^{\prime}(r)}\right)-\Bigg{(}\frac{\delta\mathcal{L}_{\mathcal{I}_{1}}\left(z(r),z^{\prime}(r)\right)}{\delta z(r)}\Bigg{)}\Bigg{]}\delta z(r). \tag{39}\] The variational principle will be meaningful only if the first term of the above equation vanishes. The second term gives the EOM for the embedding \(z(r)\). Let us see what this implies: \[\int_{-\rho}^{\rho}dr\left(\frac{\delta\mathcal{L}_{\mathcal{I}_{1}}\left(z(r),z^{\prime}(r)\right)}{\delta z^{\prime}(r)}\right)\delta z(r)=\int_{-\rho}^{\rho}dr\Bigg{(}\frac{\cosh^{4}(r)z^{\prime}(r)}{z(r)^{4}f(z(r))\sqrt{\frac{\cosh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}}+1}}\Bigg{)}\delta z(r). \tag{40}\] (40) vanishes if we impose either the Dirichlet boundary condition on the branes, i.e., \(\delta z(r=\pm\rho)=0\), or the Neumann boundary condition, i.e., \(z^{\prime}(r=\pm\rho)=0\). For gravitating baths, the Neumann boundary condition allows RT surfaces to move along the branes. In this case, the minimal surface is the black hole horizon [33]. 
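As a cross-check (an editorial sketch, not part of the paper), the Euler-Lagrange equation quoted next can be generated symbolically from the Lagrangian in (38), with \(z_{h}=1\) and \(f(z)=1-z^{3}\):

```python
import sympy as sp

r = sp.symbols('r')
z = sp.Function('z')

# Island-surface Lagrangian from (38) with z_h = 1 (the overall factor V_2
# does not affect the equation of motion and is dropped).
f = 1 - z(r)**3
L = sp.cosh(r)**2 / z(r)**2 * sp.sqrt(1 + sp.cosh(r)**2 * sp.diff(z(r), r)**2 / (z(r)**2 * f))

# Euler-Lagrange equation for the embedding z(r); up to overall
# normalization this reproduces the structure of eq. (41) below.
eom = sp.euler_equations(L, [z(r)], [r])[0]
print(sp.simplify(eom.lhs))
```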
The Euler-Lagrange equation of motion for the embedding \(z(r)\) following from the action (38) turns out to be: \[\frac{\cosh^{2}(r)}{2z(r)^{4}\left(z(r)^{3}-1\right)\left(-\cosh^{2 }(r)z^{\prime}(r)^{2}+z(r)^{5}-z(r)^{2}\right)\sqrt{\frac{\cosh^{2}(r)z^{ \prime}(r)^{2}}{z(r)^{2}-z(r)^{5}}+1}}\] \[\times\bigg{(}z(r)^{4}\cosh^{2}(r)z^{\prime}(r)^{2}+2z(r)\cosh^{2 }(r)z^{\prime}(r)^{2}+6\sinh(r)\cosh^{3}(r)z^{\prime}(r)^{3}-2z(r)^{5}\cosh(r) \left(\cosh(r)z^{\prime\prime}(r)+4\sinh(r)z^{\prime}(r)\right)\] \[\qquad\qquad+2z(r)^{2}\cosh(r)\left(\cosh(r)z^{\prime\prime}(r)+4 \sinh(r)z^{\prime}(r)\right)+4z(r)^{9}-8z(r)^{6}+4z(r)^{3}\bigg{)}=0. \tag{41}\] Interestingly, the solution of (41) is \(z(r)=1\), which is the black hole horizon, and it satisfies the Neumann boundary condition on the branes. The same can be seen from the structure of (41): the terms inside the large bracket of (41) mostly contain \(z^{\prime}(r)\) and \(z^{\prime\prime}(r)\), but there is a particular combination independent of \(z^{\prime}(r)\) and \(z^{\prime\prime}(r)\), namely \(\left(4z(r)^{9}-8z(r)^{6}+4z(r)^{3}\right)=4z(r)^{3}\left(z(r)^{3}-1\right)^{2}\), which vanishes for \(z(r)=1\); hence \(z(r)=1\) is a solution of (41). This implies that the Ryu-Takayanagi surface is the black hole horizon, because \(z_{h}=1\).13 On substituting \(z(r)=1\) in (38), we obtain the minimal area of the island surface \(\mathcal{I}_{1}\) as Footnote 13: It was discussed in [33] that the Neumann boundary condition on gravitating branes implies that the Ryu-Takayanagi surface in wedge holography is the black hole horizon. The same was also obtained in [35] by using an inequality condition on the area of the island surface. We obtain the same result throughout the paper wherever we discuss the entanglement entropy of island surfaces. \[\mathcal{A}_{\mathcal{I}_{1}}=V_{2}\int_{-\rho}^{\rho}dr{\cosh^{2 }(r)}. \tag{42}\] The minimal area of the second island surface \(\mathcal{I}_{2}\) is the same as (42) but with different limits of integration, due to the different locations of the corresponding Karch-Randall branes (\(r=\pm 2\rho\)): \[\mathcal{A}_{\mathcal{I}_{2}}=V_{2}\int_{-2\rho}^{2\rho}dr{\cosh^ {2}(r)}. \tag{43}\] Substituting (42) and (43) into (36), we obtain the total entanglement entropy of the island surfaces: \[S_{\text{AdS}}^{\text{Island}}=\frac{2V_{2}}{4G_{N}^{(5)}} \Bigg{(}\int_{-\rho}^{\rho}dr{\cosh^{2}(r)}+\int_{-2\rho}^{2\rho}dr{\cosh^{2 }(r)}\Bigg{)}=2S_{\text{AdS}}^{\text{thermal, total}}. \tag{44}\] The prefactor 2 in (44) arises from the two extra island surfaces of the thermofield double partner. From (35) and (44), we obtain the Page curve for the \(n=2\) multiverse, as shown in Fig. 7.

Figure 7: Page curve of eternal AdS black holes for the \(n=2\) multiverse.

### Page Curve of Schwarzschild de-Sitter Black Hole In this section, we study the information paradox of the Schwarzschild de-Sitter black hole. As discussed in section 3.3, we cannot have mismatched branes connected at the same defect. Therefore, we study this problem in two parts, by first calculating the Page curve of the Schwarzschild patch and then the Page curve of the de-Sitter patch, similar to the non-holographic model [58]. This can be done as follows. We study the Schwarzschild patch in subsection 4.2.1, where we consider two flat-space branes embedded in the bulk, and the de-Sitter patch in subsection 4.2.2, with two de-Sitter branes. We show the setup in Fig. 8. The setup consists of two copies of wedge holography, with flat-space and de-Sitter branes in the Schwarzschild and de-Sitter patches, respectively.
#### 4.2.1 Schwarzschild patch Since \(\Lambda=0\) for the Schwarzschild black hole, to realize a Schwarzschild black hole on a Karch-Randall brane we need to consider flat-space black holes. It was shown in [42] that one can get flat-space black holes on Karch-Randall branes provided the bulk metric has the following form: \[ds^{2}_{(d+1)}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+e^{2r}h_{ij}dy^{i}dy^{j}=dr^{ 2}+e^{2r}\Bigg{(}\frac{\frac{dz^{2}}{f(z)}-f(z)dt^{2}+\sum_{i=1}^{d-2}dy_{i}^{ 2}}{z^{2}}\Bigg{)}. \tag{45}\] The induced metric \(h_{ij}\) on the brane, given in (45), obeys the following Einstein equation on the brane: \[R_{ij}-\frac{1}{2}h_{ij}R[h_{ij}]=0. \tag{46}\] (46) is the equation of motion of the following Einstein-Hilbert term on the brane: \[S_{\rm FS}^{\rm EH}=\lambda^{\rm FS}\int d^{d}x\sqrt{-h}R[h], \tag{47}\] where \(\lambda^{\rm FS}\left(\equiv\frac{1}{16\pi G_{N}^{d}}=\frac{1}{16\pi G_{N}^{(d +1)}}\frac{e^{(d-2)a_{1}}}{(d-2)}\right)\) encodes information about the effective Newton's constant in \(d\) dimensions; (47) is obtained by substituting (45) into (1). For a Schwarzschild black hole in \(d\) dimensions, \(f(z)=1-\frac{z_{h}^{d-3}}{z^{d-3}}\)[57]. Further, the metric (45) satisfies the Neumann boundary condition at \(r=constant\) with brane tension \(T_{\rm flat\ space}=|d-1|\). The Schwarzschild black hole and its bath are given by two Karch-Randall branes located at \(r=\pm a_{1}\). The thermal entropy of the Schwarzschild patch can be obtained from (45) by evaluating the horizon area at \(z=z_{h}\), with the final result \[S_{\rm thermal}^{\rm Schwarzschild}=\frac{A_{z=z_{h}}}{4G_{N}^{(5)}}=\frac{V_{2}\int_{-a_{1}}^{a_{1}}dr\,e^{2r}}{4G_{N}^{(5)}z_{h}^{2}}=\frac{V_{2}\sinh(2a_{1})}{4G_{N}^{(5)}z_{h}^{2}}. \tag{48}\] **Hartman-Maldacena Surface**: Defining the infalling Eddington-Finkelstein coordinate \(dv=dt-\frac{dz}{f(z)}\), the flat-space metric (45) simplifies to: \[ds^{2}=dr^{2}+\frac{e^{2r}}{z^{2}}\left(-f(z)dv^{2}-2dvdz+\sum_{i=1}^{2}dy_{i}^{ 2}\right). \tag{49}\] The induced metric for the Hartman-Maldacena surface, parametrized by \(r=r(z)\) and \(v=v(z)\), is \[ds^{2}=\Bigg{(}r^{\prime}(z)^{2}-\frac{e^{2r(z)}v^{\prime}(z)}{z^{2}}\left(2+f(z)v^{\prime} (z)\right)\Bigg{)}dz^{2}+\frac{e^{2r(z)}}{z^{2}}\sum_{i=1}^{2}dy_{i}^{2}. \tag{50}\] The area of the Hartman-Maldacena surface obtained using (50) is: \[A_{\rm HM}^{\rm Schwarzschild}=V_{2}\int_{z_{1}}^{z_{\rm max}}dz\Bigg{(}\frac{e ^{2r(z)}}{z^{2}}\sqrt{r^{\prime}(z)^{2}-\frac{e^{2r(z)}v^{\prime}(z)}{z^{2}} \left(2+f(z)v^{\prime}(z)\right)}\Bigg{)}. \tag{51}\] At late times, i.e., \(t\rightarrow\infty\), \(r(z)\to 0\)14 [35].
Therefore, Footnote 14: One can show this by following steps analogous to those given in detail in (63)-(67), with the warp factor \(\sinh(r(z))\) replaced by \(e^{r(z)}\). \[A_{\rm HM}^{\rm Schwarzschild}=V_{2}\int_{z_{1}}^{z_{\rm max}}dz\Bigg{(}\frac{ \sqrt{-v^{\prime}(z)\left(2+f(z)v^{\prime}(z)\right)}}{z^{3}}\Bigg{)}. \tag{52}\] Since the area of the Hartman-Maldacena surface coincides with (29) up to the volume factor (here we are restricted to \(d=4\)), for the Schwarzschild patch too \(A_{\rm HM}^{\rm Schwarzschild}\propto t_{1}\). Therefore, the entanglement entropy contribution from the Hartman-Maldacena surface of the Schwarzschild patch has a linear time dependence: \[S_{\rm HM}^{\rm Schwarzschild}\propto t_{1}. \tag{53}\] **Island Surface**: The island surface is parametrized by \(t=constant\) and \(z=z(r)\). The area of the island surface can be obtained from the induced metric, written in terms of the embedding \(z(r)\) and its derivative using the bulk metric (45): \[ds^{2}=\Bigg{(}1+\frac{e^{2r}z^{\prime}(r)^{2}}{z(r)^{2}\left(1-\frac{1}{z(r) }\right)}\Bigg{)}dr^{2}+\frac{e^{2r}}{z(r)^{2}}\sum_{i=1}^{2}dy_{i}^{2}, \tag{54}\] where we have used \(f(z)=\left(1-\frac{1}{z}\right)\). Using (54), the area of the island surface for the Schwarzschild patch is obtained as \[A_{\rm IS}^{\rm Schwarzschild}=V_{2}\int_{-a_{1}}^{a_{1}}dr\Bigg{(}\frac{e^{2r} }{z(r)^{2}}\sqrt{1+\frac{e^{2r}z^{\prime}(r)^{2}}{z(r)^{2}\left(1-\frac{1}{z( r)}\right)}}\Bigg{)}. \tag{55}\] In the above equation, we have set \(z_{h}=1\) for simplicity, and hence \(f(z)\geq 0\) requires \(z>1\). Substituting the Lagrangian of (55) into (39), the boundary term of (39) becomes \[\frac{e^{4r}z^{\prime}(r)}{\left(1-\frac{1}{z(r)}\right)z(r)^{4} \sqrt{\frac{e^{2r}z^{\prime}(r)^{2}}{\left(1-\frac{1}{z(r)}\right)z(r)^{2}}+1}}=0, \tag{56}\] evaluated at \(r=\pm a_{1}\). Therefore, we have a well-defined variational principle for (55) provided the embedding function satisfies the Neumann boundary condition on the branes, i.e., \(z^{\prime}(r=\pm a_{1})=0\), and hence the minimal surface is the black hole horizon, i.e., \(z(r)=1\), similar to [33, 35]. The same can be obtained from the equation of motion for \(z(r)\), worked out as follows: \[\frac{e^{2r}\sqrt{\frac{e^{2r}z^{\prime}(r)^{2}+z(r)^{2}-z(r)}{(z (r)-1)z(r)}}}{2z(r)^{2}\left(e^{2r}z^{\prime}(r)^{2}+z(r)^{2}-z(r)\right)^{2}} \Bigg{(}3e^{2r}z^{\prime}(r)^{2}\left(2e^{2r}z^{\prime}(r)-1\right)+2z(r)^{2} \left(e^{2r}z^{\prime\prime}(r)+4e^{2r}z^{\prime}(r)-4\right)\] \[+2z(r)\left(-e^{2r}z^{\prime\prime}(r)+e^{2r}z^{\prime}(r)^{2}-4e ^{2r}z^{\prime}(r)+2\right)+4z(r)^{3}\Bigg{)}=0. \tag{57}\] The solution of (57) is the black hole horizon, i.e., \(z(r)=1\),15 consistent with the Neumann boundary condition on the branes [33]. Therefore, the minimal area of the island surface is obtained by substituting \(z(r)=1\) in (55), with the final result: Footnote 15: Among the terms inside the large bracket of (57), all carry derivatives of \(z(r)\) except the particular combination \(\left(4z(r)^{3}-8z(r)^{2}+4z(r)\right)=4z(r)\left(z(r)-1\right)^{2}\), which vanishes for \(z(r)=1\). \[A^{\rm Schwarzschild}_{\rm IS}=V_{2}\int_{-a_{1}}^{a_{1}}dre^{2r}= V_{2}\sinh(2a_{1}). \tag{58}\] Therefore, the entanglement entropy for the island surface of the Schwarzschild patch is \[S^{\rm Schwarzschild}_{\rm IS}=\frac{A^{\rm Schwarzschild}_{\rm IS }}{4G_{N}^{(5)}}=\frac{2V_{2}\int_{-a_{1}}^{a_{1}}dre^{2r}}{4G_{N}^{(5)}}= \frac{2V_{2}\sinh(2a_{1})}{4G_{N}^{(5)}}=2S^{\rm Schwarzschild}_{\rm thermal}.
\tag{59}\] The numerical factor 2 in the above equation appears because of the second island surface in the thermofield double partner (see Fig. 8). Therefore, we can get the Page curve of the Schwarzschild patch by plotting (53) and (59), as shown in Fig. 9.

Figure 9: Page curve of Schwarzschild patch.

#### 4.2.2 de-Sitter patch The de-Sitter black hole and its bath are located at \(r=\pm\rho\). The metric of the bulk containing de-Sitter branes is \[ds^{2}_{(d+1)}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+\sinh^{2}(r)h^{ \rm dS}_{ij}dy^{i}dy^{j}=dr^{2}+\sinh^{2}(r)\Bigg{(}\frac{\frac{dz^{2}}{f(z)}- f(z)dt^{2}+\sum_{i=1}^{d-2}dy_{i}^{2}}{z^{2}}\Bigg{)}, \tag{60}\] where, in \(d=4\), for de-Sitter space \(f(z)=1-\frac{\Lambda}{3}z^{2}=1-\left(\frac{z}{z_{s}}\right)^{2}\), with \(z_{s}=\sqrt{\frac{3}{\Lambda}}\). The thermal entropy of the de-Sitter patch can be obtained from (60) by setting \(z_{s}=1\)16 in the same, and the result is Footnote 16: We use \(z_{s}=1\) only to simplify the calculation. Since the cosmological constant is very small, in reality \(z_{s}\gg 1\); however, its precise value does not affect our qualitative results. \[S_{\rm dS}^{\rm thermal}=\frac{A_{z=z_{s}}}{4G_{N}^{(5)}}=\frac{V_{2}\int_{- \rho}^{\rho}dr\sinh^{2}(r)}{4G_{N}^{(5)}}=\frac{V_{2}\left(\sinh(\rho)\cosh( \rho)-\rho\right)}{4G_{N}^{(5)}}, \tag{61}\] where \(V_{2}=\int\int dy_{1}dy_{2}\). **Hartman-Maldacena Surface**: Similar to the Schwarzschild patch, we define \(dv=dt-\frac{dz}{f(z)}\), and hence (60) becomes: \[ds^{2}=dr^{2}+\sinh^{2}(r)\left(\frac{-f(z)dv^{2}-2dvdz+\sum_{i=1}^{2}dy_{i}^ {2}}{z^{2}}\right). \tag{62}\] The Hartman-Maldacena surface is parametrized by \(r=r(z)\) and \(v=v(z)\); its area, obtained from (62) with this parametrization, is \[A_{\rm HM}^{\rm de-Sitter}=V_{2}\int_{z_{1}^{\rm dS}}^{z_{\rm max }^{\rm dS}}dz\mathcal{L}_{\rm HM}^{\rm dS}=V_{2}\int_{z_{1}^{\rm dS}}^{z_{\rm max }^{\rm dS}}dz\Bigg{(}\frac{\sinh^{2}(r(z))}{z^{2}}\sqrt{r^{\prime}(z)^{2}- \frac{\sinh^{2}(r(z))v^{\prime}(z)}{z^{2}}\left(2+f(z)v^{\prime}(z)\right)} \Bigg{)}, \tag{63}\] where \(z_{1}^{\rm dS}\) and \(z_{\rm max}^{\rm dS}\) are the point on the gravitating bath and the turning point of the Hartman-Maldacena surface for the de-Sitter geometry. In (63), \(v(z)\) is cyclic; therefore, the momentum conjugate to \(v(z)\) is constant, i.e., \(\frac{\partial\mathcal{L}_{\rm HM}^{\rm dS}}{\partial v^{\prime}(z)}=C\) (\(C\) being the constant), which implies \[v^{\prime}(z)=\frac{-Cz^{3}{\rm csch}(r(z))\sqrt{32C^{2}z^{6}+15f (z)\cosh(2r(z))-6f(z)\cosh(4r(z))+f(z)\cosh(6r(z))-10f(z)}}{8\left(C^{2}z^{6}f( z)+f(z)^{2}\sinh^{6}(r(z))\right)}\] \[\qquad\qquad\times\left(\sqrt{2z^{2}f(z)r^{\prime}(z)^{2}+\cosh(2 r(z))-1}-8C^{2}z^{6}-8f(z)\sinh^{6}(r(z))\right).
\tag{64}\] The Euler-Lagrange equation of motion for \(r(z)\) obtained from (63) is \[\frac{\sinh^{2}(r(z))}{2z^{4}\left(z^{2}r^{\prime}(z)^{2}-\sinh^{2 }(r(z))v^{\prime}(z)\left(f(z)v^{\prime}(z)+2\right)\right)\sqrt{r^{\prime}(z) ^{2}-\frac{\sinh^{2}(r(z))v^{\prime}(z)(f(z)v^{\prime}(z)+2)}{z^{2}}}}\left(zr ^{\prime}(z)\sinh^{2}(r(z))\right.\] \[\left(\left(zf^{\prime}(z)+2f(z)\right)v^{\prime}(z)^{2}+2v^{ \prime}(z)\left(zf(z)v^{\prime\prime}(z)+2+2zv^{\prime\prime}(z)\right)-\sinh^ {2}(r(z))v^{\prime}(z)\left(f(z)v^{\prime}(z)+2\right)\right.\] \[\left.\left(3f(z)\sinh(2r(z))v^{\prime}(z)^{2}+2z^{2}r^{\prime \prime}(z)+6\sinh(2r(z))v^{\prime}(z)\right)+4z^{2}r^{\prime}(z)^{2}\sinh(2r(z ))v^{\prime}(z)\left(f(z)v^{\prime}(z)+2\right)-4z^{3}r^{\prime}(z)^{3}\right) =0. \tag{65}\] Substituting \(v^{\prime}(z)\) from (64) into (65) and using \(f(z)=1-z^{2}\) (we set \(z_{s}=1\) for simplification), the EOM (65) simplifies to the following form: \[\frac{\sinh^{2}(r(z))}{2z^{4}\left(C^{2}z^{6}-(z^{2}-1)\sinh^{6}( r(z))\right)\left(z^{2}\left(z^{2}-1\right)r^{\prime}(z)^{2}-\sinh^{2}(r(z)) \right)\sqrt{\frac{\left(z^{2}-z^{4}\right)r^{\prime}(z)^{2}\sinh^{6}(r(z))+ \sinh^{8}(r(z))}{C^{2}z^{8}+\left(z^{2}-z^{4}\right)\sinh^{6}(r(z))}}\] \[\times\left(-2z^{2}r^{\prime\prime}(z)\sinh^{2}(r(z))\left(C^{2} z^{6}-\left(z^{2}-1\right)\sinh^{6}(r(z))\right)+r^{\prime}(z)\left(2z\sinh^{8}(r(z) )-4C^{2}z^{7}\sinh^{2}(r(z))\right)\right.\] \[\left.+r^{\prime}(z)^{2}\left(C^{2}z^{8}\sinh(2r(z))-8z^{2}\left(z ^{2}-1\right)\sinh^{7}(r(z))\cosh(r(z))\right)+r^{\prime}(z)^{3}\left(4z^{3} \left(z^{2}-1\right)^{2}\sinh^{6}(r(z))-2C^{2}z^{9}\right)\right.\] \[\qquad\left.+6\sinh^{9}(r(z))\cosh(r(z))\right)=0. \tag{66}\] The above equation is difficult to solve in general. One trivial solution of (66) is \[r(z)=0. \tag{67}\] From equation (63), we see that when \(r(z)=0\),17 the integrand vanishes identically, and hence the area of the Hartman-Maldacena surface is18 \[A_{\rm HM}^{\rm de-Sitter}=0, \tag{68}\] so that \[S_{\rm HM}^{\rm de-Sitter}=\frac{A_{\rm HM}^{\rm de-Sitter}}{4G _{N}^{(5)}}=0. \tag{69}\] Footnote 17: The same solution, \(r(z)=0\), also appeared in [35] in the computation of the area of the Hartman-Maldacena surface. See [33] for a similar solution; in our case the embedding is \(r(z)\), whereas in [33] the embedding is \(r(\mu)\), \(\mu\) being the angle. Footnote 18: See [59] for a discussion of the complexity of de-Sitter spaces. **Cosmological Island Surface Entanglement Entropy**: The area of the island surface, parametrized by \(t=constant\) and \(z=z(r)\), can be obtained from the induced metric written in terms of the embedding \(z(r)\) and its derivative using (60); the final result is \[A_{\rm IS}^{\rm de-Sitter}=V_{2}\int_{-\rho}^{\rho}dr\Bigg{(}\frac{ \sinh^{2}(r)}{z(r)^{2}}\sqrt{1+\frac{\sinh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}(1-z( r)^{2})}}\Bigg{)}. \tag{70}\] For the de-Sitter patch, \(f(z)=1-\left(\frac{z}{z_{s}}\right)^{2}\); we have taken \(z_{s}=1\) in (70) for computational simplicity. Therefore, \(f(z)\geq 0\) for \(0<z<1\).
The Euler-Lagrange equation of motion for the embedding \(z(r)\) following from (70) turns out to be: \[\sinh^{2}(r)\sqrt{\frac{-\sinh^{2}(r)z^{\prime}(r)^{2}+z(r)^{4}-z (r)^{2}}{z(r)^{2}\left(z(r)^{2}-1\right)}}\Bigg{(}z(r)\sinh^{2}(r)z^{\prime}(r)^ {2}+3\sinh^{3}(r)\cosh(r)z^{\prime}(r)^{3}-z(r)^{4}\sinh(r)\left(\sinh(r)z^{ \prime\prime}(r)+4\cosh(r)z^{\prime}(r)\right)\] \[+z(r)^{2}\sinh(r)\left(\sinh(r)z^{\prime\prime}(r)+4\cosh(r)z^{ \prime}(r)\right)+2z(r)^{7}-4z(r)^{5}+2z(r)^{3}\Bigg{)}=0. \tag{71}\] In general, it is not easy to solve the above equation. Interestingly, there is a \(z(r)=1\) solution of this differential equation, which is nothing but the de-Sitter horizon assumed earlier \((z_{s}=1)\),19 and it satisfies the Neumann boundary condition on the branes; hence the solution for the cosmological island surface is Footnote 19: This can also be verified from the terms inside the large bracket of (71). Apart from \(\left(2z(r)^{7}-4z(r)^{5}+2z(r)^{3}\right)=2z(r)^{3}\left(z(r)^{2}-1\right)^{2}\), every term contains a derivative of \(z(r)\). For \(z(r)=1\), this combination vanishes, and hence \(z(r)=1\) satisfies (71). There are two other possibilities, \(z(r)=-1\) and \(z(r)=0\), but the entanglement entropy (70) is negative for \(z(r)=-1\) and divergent for \(z(r)=0\), and hence these are unphysical. \[z(r)=1. \tag{72}\] One can arrive at the same conclusion by requiring a well-defined variational principle for (70) and imposing the Neumann boundary condition on the branes, similar to the discussion in section 4.1, which requires \[\frac{\sinh^{4}(r)z^{\prime}(r)}{z(r)^{4}\left(1-z(r)^{2} \right)\sqrt{\frac{\sinh^{2}(r)z^{\prime}(r)^{2}}{z(r)^{2}(1-z(r)^{2})}+1}}=0. \tag{73}\] When we impose \(z^{\prime}(r=\pm\rho)=0\), the minimal surface is the horizon, i.e., \(z(r)=1\)[33]. On substituting \(z(r)=1\) in (70), we obtain the minimal area of the cosmological island surface for the de-Sitter patch as given below: \[A_{\rm IS}^{\rm de-Sitter}=V_{2}\int_{-\rho}^{\rho}dr\sinh^{2}(r)= V_{2}\left(\sinh(\rho)\cosh(\rho)-\rho\right). \tag{74}\] The entanglement entropy contribution of the cosmological island surfaces is \[S_{\rm IS}^{\rm dS}=\frac{2A_{\rm IS}^{\rm de-Sitter}}{4G_{N}^{ (5)}}=2S_{\rm dS}^{\rm thermal}. \tag{75}\] The additional numerical factor of 2 comes from the second cosmological island surface on the thermofield double partner side (shown in Fig. 8). We get the Page curve of the de-Sitter patch by plotting (69) and (75) from wedge holography; it is flat in this case, similar to [33]. Let us summarize the results of this section. It was argued in [33, 35] that in wedge holography without a DGP term, the black hole horizon is the only extremal surface and the Hartman-Maldacena surface does not exist, and hence one expects a flat Page curve. We also see that when we compute the entanglement entropies of the island surfaces of the AdS, Schwarzschild, and de-Sitter black holes, the minimal surfaces turn out to be the horizons of the AdS, Schwarzschild, or de-Sitter black holes, respectively. As a curiosity, we computed the entanglement entropies of the Hartman-Maldacena surfaces for the parametrization \(r(z)\) and \(v(z)\) used in the literature, and we found a non-trivial linear time dependence for the AdS and Schwarzschild black holes, whereas the Hartman-Maldacena entanglement entropy turns out to be zero for the de-Sitter black hole. Therefore, we obtain a flat Page curve for the de-Sitter black hole, but not for the AdS and Schwarzschild black holes, due to the non-zero entanglement entropy of their Hartman-Maldacena surfaces.
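Two small computational claims underlie the flat Page curve of the de-Sitter patch: the factorization of the \(z^{\prime}\)-independent combination in (71) (footnote 19) and the closed form of the area integral appearing in (61) and (74). A minimal sympy sketch verifying both (illustrative, not from the original paper):

```python
import sympy as sp

zv, r, rho = sp.symbols("z r rho", positive=True)

# z'-independent combination in the bracket of (71): vanishes only at z = 0, +-1;
# z(r) = 1 is the de-Sitter horizon (z_s = 1), cf. footnote 19
print(sp.factor(2*zv**7 - 4*zv**5 + 2*zv**3))   # 2*z**3*(z - 1)**2*(z + 1)**2

# Minimal island area (74) = thermal-entropy integral of (61)
print(sp.simplify(sp.integrate(sp.sinh(r)**2, (r, -rho, rho))))
# -> sinh(rho)*cosh(rho) - rho

# With S_HM = 0 from (69) and constant S_IS from (75), the Page curve is flat.
```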
The theme of the paper is not to discuss whether we get a flat Page curve or not. The aim of the paper was to construct a "multiverse" in the Karch-Randall braneworld, which we did in section 3, and to check the formula given in (22). We saw in subsection 4.1 that (22) gives consistent results. **Comment on the Wedge Holographic Realization of Schwarzschild de-Sitter Black Hole with Two Karch-Randall Branes**: In subsection 4.2, we performed our computation for the Schwarzschild and de-Sitter patches separately. There is one more way by which we might get the Page curve of the Schwarzschild de-Sitter black hole. We summarize the idea below: * Consider two Karch-Randall branes \(Q_{1}\) and \(Q_{2}\) such that one of them contains the Schwarzschild de-Sitter black hole and the other acts as a bath to collect the radiation20. Footnote 20: In this case, "Hawking radiation" will not be a suitable term, because when the Schwarzschild de-Sitter black hole emits radiation as a whole, an observer may not be able to distinguish between the Hawking radiation emitted by the Schwarzschild patch and the Gibbons-Hawking radiation emitted by the de-Sitter patch [60]. * Suppose the bulk metric has the following form: \[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=dr^{2}+g(r)h^{\rm SdS}_{ij}dy^{i}dy^{j}=dr^ {2}+g(r)\Bigg{(}\frac{\frac{dz^{2}}{f(z)}-f(z)dt^{2}+\sum_{i=1}^{2}dy_{i}^{2}} {z^{2}}\Bigg{)},\] (76) where \(f(z)=1-\frac{2M}{z}-\frac{\Lambda}{3}z^{2}\) in \(d=4\). * The next step is to find \(g(r)\) by solving the Einstein equation (2). * After obtaining the solution, one needs to ensure that the bulk metric (76) satisfies the Neumann boundary condition (4) at \(r=\pm\rho\). * One also needs to check what kind of theory exists at the defect, i.e., whether it is a CFT or not, in order to apply the Ryu-Takayanagi formula. * If all the above points are successfully checked, then one can obtain the Page curve of the Schwarzschild de-Sitter black hole by computing the areas of the Hartman-Maldacena and island surfaces21. Footnote 21: In this setup, the notion of "island" may become problematic, because we would be talking about an island in the interior of the Schwarzschild de-Sitter black hole. Since the SdS black hole has two horizons, it may be troublesome to say whether the "island" is located inside the black hole horizon or the de-Sitter horizon. Therefore, it is preferable to follow the setup with two black holes and two baths. See [58] for a non-holographic approach. The above discussion is just a "mathematical idea". Since there are only three possible brane geometries, Minkowski, de-Sitter, and anti-de-Sitter [55], there is no brane with the induced metric defined in the bracket of (76). Further, we have the AdS/CFT correspondence, the dS/CFT correspondence, and flat-space holography, but there is no duality that relates a CFT to a bulk with a Schwarzschild de-Sitter-like structure. For the aforementioned reason there is no defect description, and hence no "intermediate description" of wedge holography. Therefore, we conclude that one can model the Schwarzschild de-Sitter black hole in wedge holography with two copies of wedge holography, in such a way that one part describes the Schwarzschild patch and the other part describes the de-Sitter patch22. Footnote 22: See [58, 62] for a non-holographic model. ## 5 Application to Grandfather Paradox This section states the "grandfather paradox" and its resolution in our setup. The "grandfather paradox" says that Bob cannot travel back in time, because if he could, he could land in another universe where he could kill his grandfather.
If Bob's grandfather is dead in another universe, then Bob will not exist in the present [61]. Now let us see how this problem can be avoided in our setup. We discussed in sections 3.1 and 3.2 that a multiverse consists of \(2n\) Karch-Randall branes, which we call "universes". The geometry of these branes is AdS and de-Sitter spacetime in sections 3.1 and 3.2, respectively. In all the setups, all "universes" are connected at the "defect" via transparent boundary conditions, which guarantee that all these universes communicate with each other. Suppose Bob lives on \(Q_{1}\) and his grandfather lives on \(Q_{2}\). Then, to avoid the paradox, Bob cannot travel to \(Q_{2}\), but he can travel to \(Q_{-2}\), \(Q_{-3}\), etc., where he can meet Robert and Alice (see Fig. 10). Hence the "grandfather paradox" can be resolved in this setup. Further, a traversable wormhole solution is also possible [63]. This discussion is consistent with the "many-worlds theory", where the "grandfather paradox" has been resolved using a similar idea.

Figure 10: Different universes \(Q_{-1,-2,-3,1,2,3}\) where different people are living.

## 6 Conclusion In this work, we propose the existence of a multiverse in the Karch-Randall braneworld using the idea of wedge holography. The multiverse is described in the sense that, if we talk about \(2n\) universes, those are represented by Karch-Randall branes embedded in the bulk. Whether or not these branes contain black holes is controlled by the gravitational action. We studied three cases: * We constructed the multiverse from \(d\)-dimensional Karch-Randall branes embedded in \(AdS_{d+1}\) in section 3.1. The geometry of these branes is \(AdS_{d}\). In this case, the multiverse consists of \(2n\) anti-de-Sitter branes, all connected to each other at the defect via transparent boundary conditions. The multiverse consisting of AdS branes, once created, exists forever. * We constructed the multiverse from \(d\)-dimensional de-Sitter spaces on Karch-Randall branes embedded in the \((d+1)\)-dimensional bulk \(AdS_{d+1}\) in section 3.2. The multiverse made up of \(2n\) de-Sitter branes has a short lifetime: all the de-Sitter branes in this setup should be created and annihilated at the same time. The defect CFT is a non-unitary conformal field theory because of the dS/CFT correspondence.
As a consistency check of the proposal, we calculated the Page curves of two black holes for \(n=2\) multiverse. We assumed that black hole and bath systems between \(-2\rho\leq r\leq 2\rho\) and \(-\rho\leq r\leq\rho\). In this case, we found that entanglement entropy contribution from the Hartman-Maldacena surfaces has a linear dependence on time for the AdS and Schwarzschild black holes and it is zero for the de-Sitter black hole, whereas island surfaces contributions are constant. Therefore this reproduces the Page curve. Using this idea, we obtain the Page curve of Schwarzschild de-Sitter black hole and one can also do the same for Reissner-Nordstrom de-Sitter black hole. This proposal is helpful in the computation of the Page curve of black holes with multiple horizons from wedge holography. We also discussed the possibility of getting a Page curve of these black holes using two Karch-Randall branes, one as a black hole and the other as a bath. In this case, there will be an issue in defining the island surface and identifying what kind of radiation we are getting. For example, when a Karch-Randall brane consists of black hole and cosmological event horizons, i.e., Schwarzschild de-Sitter black hole on the brane, the observer collecting the radiation will not be able to identify clearly whether it is Hawking radiation or Gibbons-Hawking radiation. We checked our proposal for very simple examples without DGP term on the Karch-Randall branes, but one can also talk about massless gravity by adding the DGP term on the Karch-Randall branes [35]. In this case, tensions of the branes will recieve correction from the extra term in (11). Further, we argued that one could resolve the "grandfather paradox" using this setup where all universes communicate via transparent boundary conditions at the interface point. To avoid the paradox, one can travel to another universe where his grandfather is not living, so he can't kill his grandfather. We have given a qualitative idea to resolve the "grandfather paradox" but detailed analysis requires more research in this direction using wedge holography. ## Acknowledgements The author is supported by a Senior Research Fellowship (SRF) from the Council of Scientific and Industrial Research (CSIR), Govt. of India. It is my pleasure to thank Aalok Misra, who motivated me to work on the entanglement stuff, and for his blessings. We would also like to thank Juan Maldacena, Andreas Karch, Kostas Skenderis and Tadashi Takayanagi for very helpful discussions and comments. This research was also supported in part by the International Centre for Theoretical Sciences (ICTS) for the program "Nonperturbative and Numerical Approaches to Quantum Gravity, String Theory and Holography" (code:ICTS/numstrings-2022/8). Various conferences/workshops; e.g., _Mysteries of Universe-I (Institute Lecture Series)_ and _Indian Strings Meeting 2021_ at Indian Institute of Technology Roorkee, Roorkee, India; _Applications of Quantum Information in QFT and Cosmology_ at the University of Lethbridge, Canada; _Kavli Asian Winter School (KAWS) on Strings, Particles and Cosmology (Online)_ at International Centre for Theoretical Sciences (ICTS) Bangalore, India (code:ICTS/kaws2022/1); _Reconstructing the Gravitational Hologram with Quantum Information_ at Galileo Galilei Institute for Theoretical Physics, Florence, Italy; _Quantum Information in QFT and AdS/CFT-III_ at Indian Institute of Technology Hyderabad, India; helped me to learn about the information paradox and related stuff. 
I am very thankful to the speakers and organizers of these conferences, from which I learned much about the subject.
2303.08705
Intrinsic optical absorption in Dirac metals
A Dirac metal is a doped (gated) Dirac material with the Fermi energy ($E_\text{F}$) lying either in the conduction or valence bands. In the non-interacting picture, optical absorption in gapless Dirac metals occurs only if the frequency of incident photons ($\Omega$) exceeds the direct (Pauli) frequency threshold, equal to $2E_\text{F}$. In this work, we study, both analytically and numerically, the role of electron-electron ($ee$) and electron-hole ($eh$) interactions in optical absorption of two-dimensional (2D) and three-dimensional (3D) Dirac metals in the entire interval of frequencies below $2E_\text{F}$. We show that, for $\Omega\ll E_\text{F}$, the optical conductivity, $\Re\sigma(\Omega)$, arising from the combination of $ee$ and certain $eh$ scattering processes, scales as $\Omega^2\ln\Omega$ in 2D and as $\Omega^2$ in 3D, respectively, both for short-range (Hubbard) and long-range (screened Coulomb) interactions. Another type of $eh$ processes, similar to Auger-Meitner (AM) processes in atomic physics, starts to contribute for $\Omega$ above the direct threshold, equal to $E_\text{F}$. Similar to the case of doped semiconductors with parabolic bands studied in prior literature, the AM contribution to $\Re\sigma(\Omega)$ in Dirac metals is manifested by a threshold singularity, $\Re\sigma(\Omega)\propto (\Omega-E_\text{F})^{d+2}$, where $d$ is the spatial dimensionality and $0<\Omega-E_\text{F}\ll E_\text{F}$. In contrast to doped semiconductors, however, the AM contribution in Dirac metals is completely overshadowed by the $ee$ and other $eh$ contributions. Numerically, $\Re\sigma(\Omega)$ happens to be small in almost the entire range of $\Omega<2E_\text{F}$. This finding may have important consequences for collective modes in Dirac metals lying below $2E_\text{F}$.
Adamya P. Goyal, Prachi Sharma, Dmitrii L. Maslov
2023-03-15T15:38:48Z
http://arxiv.org/abs/2303.08705v1
# Intrinsic optical absorption in Dirac metals ###### Abstract A Dirac metal is a doped (gated) Dirac material with the Fermi energy (\(E_{\rm F}\)) lying either in the conduction or valence bands. In the non-interacting picture, optical absorption in gapless Dirac metals occurs only if the frequency of incident photons (\(\Omega\)) exceeds the direct (Pauli) frequency threshold, equal to \(2E_{\rm F}\). In this work, we study, both analytically and numerically, the role of electron-electron (_ee_) and electron-hole (_eh_) interactions in optical absorption of two-dimensional (2D) and three-dimensional (3D) Dirac metals in the entire interval of frequencies below \(2E_{\rm F}\). We show that, for \(\Omega\ll E_{\rm F}\), the optical conductivity, \(\Re\sigma(\Omega)\), arising from the combination of _ee_ and certain _eh_ scattering processes, scales as \(\Omega^{2}\ln\Omega\) in 2D and as \(\Omega^{2}\) in 3D, respectively, both for short-range (Hubbard) and long-range (screened Coulomb) interactions. Another type of _eh_ processes, similar to Auger-Meitner (AM) processes in atomic physics, starts to contribute for \(\Omega\) above the indirect threshold, equal to \(E_{\rm F}\). Similar to the case of doped semiconductors with parabolic bands studied in prior literature, the AM contribution to \(\Re\sigma(\Omega)\) in Dirac metals is manifested by a threshold singularity, \(\Re\sigma(\Omega)\propto(\Omega-E_{\rm F})^{d+2}\), where \(d\) is the spatial dimensionality and \(0<\Omega-E_{\rm F}\ll E_{\rm F}\). In contrast to doped semiconductors, however, the AM contribution in Dirac metals is completely overshadowed by the _ee_ and other _eh_ contributions. Numerically, \(\Re\sigma(\Omega)\) happens to be small in almost the entire range of \(\Omega<2E_{\rm F}\). This finding may have important consequences for collective modes in Dirac metals lying below \(2E_{\rm F}\). ## I Introduction The characteristic feature of Dirac materials is the presence of symmetry-protected band-touching points which, in certain cases, is accompanied by the eponymous Dirac dispersion near these points. Realizations of these systems include monolayer graphene [1] and the surface state of a three-dimensional topological insulator [2] in two dimensions (2D), and Weyl/Dirac semi-metals [3; 4; 5] in three dimensions (3D).1 Owing to the zero band gap, these materials exhibit semi-metallic behavior at charge neutrality. Footnote 1: For the purposes of the present discussion, the topological distinction between Weyl and Dirac materials is irrelevant, and we will be referring to both of them as "Dirac materials". At the level of non-interacting (NI) electrons, a pristine 2D Dirac material is characterized by a frequency-independent and universal optical conductivity [1] \[\Re\sigma_{\rm NI2}(\Omega)=\frac{Ne^{2}}{16\hbar}, \tag{1}\] whereas the conductivity of a pristine 3D Dirac material scales linearly with frequency [6; 7] \[\Re\sigma_{\rm NI3}(\Omega)=\frac{Ne^{2}\Omega}{24\pi\hbar v_{\rm D}}, \tag{2}\] where \(N\) is the total (spin times valley) degeneracy and \(v_{\rm D}\) is the Dirac velocity.2 These predictions were corroborated by multiple experiments; see, e.g., the reviews [3; 4; 5; 8; 9; 10; 11]. Footnote 2: Throughout the paper, we set \(\hbar=1\) in the intermediate results but display it in the final results for the conductivity. Also, without loss of generality, we take \(\Omega>0\) and assume that the Fermi energy lies in the conduction band.
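For orientation, Eq. (1) with \(N=4\) (the spin times valley degeneracy of graphene) gives the familiar universal sheet conductance \(e^{2}/4\hbar\approx 6.1\times 10^{-5}\,\mathrm{S}\). A one-line numerical evaluation (an illustrative aside, not part of the original text):

```python
from scipy.constants import e, hbar

# Universal interband conductivity of a 2D Dirac material, Eq. (1),
# for spin x valley degeneracy N = 4 (graphene): Re(sigma) = N e^2 / (16 hbar)
N = 4
sigma_2d = N * e**2 / (16 * hbar)
print(f"Re(sigma_NI2) = {sigma_2d:.3e} S")            # ~6.085e-05 S
print(f"              = 1/({1/sigma_2d/1e3:.2f} kOhm) per square")
```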
The effect of electron-electron (_ee_) interactions on the optical conductivity of Dirac materials was studied extensively in 2D, see, e.g., reviews [8; 9; 10] and references therein, and also in 3D [12; 13]. As the Coulomb interaction is marginally irrelevant both in 2D and 3D, it leads to a logarithmic renormalization of the Dirac velocity and thus of the coupling constant, \(e^{2}/v_{\rm D}\)[14; 10]. Consequently, \(\Re\sigma_{\rm NI2}(\Omega)\) acquires a multiplicative renormalization factor, which varies with \(\Omega\) logarithmically and, at \(\Omega\to 0\), approaches a constant equal to 1 [10] or \(1+1/(N+1)\)[13] in 2D and 3D, respectively. Note that this renormalization starts already at first order in the bare Coulomb potential, which implies that it does not involve collisions between particles in the intermediate states (the latter start at second order). On the other hand, a short-range (Hubbard) interaction is irrelevant in both 2D and 3D. In a typical experiment, Dirac materials are doped (gated) away from charge neutrality, either intentionally or unintentionally. From now on, we will be referring to such systems as "Dirac metals". In this case, the Pauli principle dictates that the optical conductivity of an ideal Dirac metal is strictly zero below the "direct" (or Pauli) threshold, \[\omega_{\rm D}=2E_{\rm F}, \tag{3}\] where \(E_{\rm F}\) is the Fermi energy, measured from the Dirac point. Experimentally, however, one observes significant absorption for frequencies above the Drude tail but below \(\omega_{\rm D}\)[15; 16; 17; 18; 19] and a significant Raman response in the same frequency range [20], both of which indicate a deviation from the single-particle picture. Absorption below the Pauli threshold in doped graphene due to the combined effect of disorder, electron-phonon and electron-electron interactions has also been addressed theoretically in Refs. [21; 22; 23; 24]. In this paper, we focus on intrinsic absorption due to \(ee\) and electron-hole (\(eh\)) interactions for \(\Omega<\omega_{\rm D}\). Absorption due to \(ee\) interaction in a Dirac metal was studied in Refs. [25; 26]. For \(\Omega\ll E_{\rm F}\), the conductivity was found to scale as \(\Omega^{2}\ln\Omega\) and \(\Omega^{2}\) in 2D and 3D, respectively [26].3 A quadratic scaling of the conductivity can be understood as the consequence of partially broken Galilean invariance in a Dirac-Fermi liquid (DFL). Indeed, the optical conductivity can be cast into a Drude-like form Footnote 3: An earlier result of Ref. [25] was missing a logarithmic factor in the 2D case. \[\Re\sigma(\Omega)\propto\frac{1}{\Omega^{2}\tau_{j}(\Omega)}, \tag{4}\] where \(\tau_{j}(\Omega)\) is the current relaxation time. If Galilean invariance is broken completely, e.g., by umklapp scattering, \(\tau_{j}(\Omega)\) is of the same order as the quasiparticle lifetime in a Fermi liquid (FL): \(\tau_{j}(\Omega)\sim\tau_{\rm qp}(\Omega)\propto\Omega^{-2}\). In this case, Eq. (4) produces a familiar "FL foot": \(\Re\sigma(\Omega)=\text{const}\). On the other hand, if Galilean invariance is intact, current cannot be relaxed in \(ee\) collisions: although \(\tau_{\rm qp}(\Omega)\) is finite, \(\tau_{j}(\Omega)=\infty\) and thus \(\Re\sigma(\Omega)=0\). A DFL occupies an intermediate niche between the two limits described above.
On one hand, its non-parabolic spectrum allows for current relaxation; on the other hand, the spectrum is still isotropic (at low doping) and current relaxation is impossible for electrons right on the Fermi surface (FS) [26]. For states away from the FS, the current relaxation time is finite but long, \(\tau_{j}(\Omega)\propto\Omega^{-4}\) (modulo a \(\ln\Omega\) factor in 2D), while \(\tau_{\rm qp}(\Omega)\) still scales in a FL way, i.e., as \(\Omega^{-2}\). According to Eq. (4), the quartic scaling of \(1/\tau_{j}(\Omega)\) translates into the quadratic scaling of the conductivity. In this paper, we extend the results of Ref. [26] to the entire interval of frequencies below \(\omega_{\rm D}\). Such an extension necessarily requires accounting for both \(ee\) and \(eh\) interaction processes. We consider 2D and 3D Dirac metals with two types of interaction: Hubbard and Coulomb. Our analytic results follow from the analysis of the Kubo formula and are applicable in two regions: i) for \(\Omega\ll\omega_{\rm I}\), where \[\omega_{\rm I}=E_{\rm F}, \tag{5}\] is the "indirect" threshold, and ii) just above the indirect threshold, i.e., for \(\Omega\gtrapprox\omega_{\rm I}\). In the rest of the interval \(0<\Omega<\omega_{\rm D}\), the conductivity is calculated numerically, but only for a Dirac metal with Hubbard interaction. For \(\Omega\ll\omega_{\rm I}\), we show that the \(eh\) contribution to the conductivity scales as \(\Omega^{2}\), i.e., it is comparable to the \(ee\) one found in Ref. [26] in 3D and is subleading to the \(ee\) one in 2D, but only in the leading-logarithm sense. This \(\Omega^{2}\)-scaling of the \(eh\) contribution to the conductivity can also be understood in terms of the Drude formula (4). Current relaxation due to \(eh\) scattering is not limited by (partially broken) Galilean invariance, so that \(\tau_{j}(\Omega)\sim\tau_{\rm qp}(\Omega)\propto\Omega^{-2}\). [Unlike \(\tau_{\rm qp}(\Omega)\), \(\tau_{j}(\Omega)\) does not have an extra logarithmic factor in 2D.] However, the energies of electrons and holes differ now by \(E_{\rm F}\) rather than \(\Omega\); therefore, the factor of \(\Omega^{2}\) in Eq. (4) is replaced by \(E_{\rm F}^{2}\), and the conductivity scales as \(\Omega^{2}\). Another channel of absorption due to \(eh\) interaction opens up when \(\Omega\) exceeds the indirect threshold \(\omega_{\rm I}\) [Eq. (5)]. Since the seminal 1969 paper by Gavoret et al. [27], absorption of light by degenerate semiconductors due to a particular type of \(eh\) interaction processes, similar to Auger-Meitner (AM) processes in atomic physics [28; 29; 30], has been studied by a large number of researchers, see, e.g., Refs. [31; 32; 33; 34]. Although we consider only gapless systems, our result for the AM contribution just above \(\omega_{\rm I}\) exhibits a threshold singularity of the same type as found for a gapped spectrum [31; 32; 33; 34; 27], i.e., \[\Re\sigma(\Omega)\propto\theta(\delta\Omega)\delta\Omega^{\beta_{\rm A}}, \tag{6}\] where \(\delta\Omega\equiv\Omega-\omega_{\rm I}\ll\omega_{\rm I}\), \(\beta_{\rm A}=d+2\) with \(d\) being the spatial dimensionality, and \(\theta(x)\) is the Heaviside step function. Equation (6) can be obtained by estimating the conductivity as \(\Re\sigma(\Omega)\propto\delta\Omega\mathcal{N}(\delta\Omega)/\tau_{\rm qp}( \delta\Omega)\), where \(\mathcal{N}(\epsilon)\propto\epsilon^{d-1}\) is the density of states of a gapless Dirac metal and \(1/\tau_{\rm qp}(\epsilon)\propto\epsilon^{2}\).
More important, however, is the fact that for a non-parabolic spectrum the AM contribution occurs against the background of \(ee\) and other \(eh\) contributions, which start at the lowest frequencies (as \(\Omega^{2}\) and \(\Omega^{2}\ln\Omega\) in 3D and 2D, respectively) and are still present both near and above \(\omega_{\rm I}\). Therefore, the AM threshold singularity is masked by these other contributions. These competing contributions were not taken into account in the previous work on AM processes [31; 32; 33; 34; 27], which considered two strictly parabolic bands separated by a gap (\(2\Delta\)).

Figure 1: Band diagram of a Dirac metal with symmetric conduction and valence bands, showing the direct (Pauli) threshold \(\omega_{\rm D}=2E_{\rm F}\) for single-particle inter-band transitions. Also shown is the indirect threshold for many-body Auger-Meitner transitions, \(\omega_{\rm I}=E_{\rm F}\).

To clarify the difference in absorption by materials with parabolic and Dirac bands, we temporarily invoke a gapped Dirac spectrum, \(\epsilon_{\mathbf{k}}=\pm\sqrt{v_{\mathrm{D}}^{2}k^{2}+\Delta^{2}}\). A gapped semiconductor with parabolic conduction and valence bands can be viewed as the \(\Delta\to\infty\) limit of this spectrum. In this case, intra-band \(ee\) interaction does not affect the conductivity due to Galilean invariance, as we already discussed above. Moreover, inter-band absorption accompanied by electron-hole conversion processes, i.e., processes that do not conserve the numbers of electrons and holes separately, is also forbidden in the parabolic limit, because the corresponding eigenstates are either purely electron-like or purely hole-like, with zero overlap between the two. Therefore, the interaction part of the corresponding Hamiltonian conserves the numbers of electrons and holes separately, and absorption is absent for \(\Omega<\omega_{\mathrm{I}}\). For a strongly non-parabolic, e.g., gapless Dirac spectrum, the \(ee\) contribution is not suppressed by Galilean invariance, while electron-hole conversion processes are generically as important as other processes. As far as the interval of \(\Omega>\omega_{\mathrm{D}}\) is concerned, Abedinpour et al. [35] showed that the conductivity of doped graphene (a 2D Dirac metal, in our terminology) with Coulomb interaction exhibits a logarithmic renormalization which, for \(\Omega\gg\omega_{\mathrm{D}}\), is reduced to the well-studied case of undoped graphene, and is logarithmically enhanced for \(\Omega\gtrapprox\omega_{\mathrm{D}}\) both for Coulomb and Hubbard interactions. Both of these effects arise already at first order in the corresponding interaction and reflect renormalization of the Dirac velocity and, consequently, of the coupling constant. To the best of our knowledge, the interval of \(\Omega\gg\omega_{\mathrm{D}}\) has not been studied for a 3D Dirac metal but, in analogy with the results for the undoped 3D case [12; 13], we would also expect a logarithmic renormalization starting at first order. On the other hand, the absorption processes studied in our paper correspond to real collisions between electrons, and between electrons and holes, which occur starting from second order in the interaction. Therefore, these processes are subleading to the first-order effects described above and studied in Ref. [35], and we will not extend our results above \(\omega_{\mathrm{D}}\).
Our numerical results agree with the analytic ones, where applicable, and allow one to trace the behavior of the conductivity over almost the entire frequency range of interest, \(0<\Omega<\omega_{\mathrm{D}}\), except for a narrow interval of width \(\mathcal{O}(\alpha_{\mathrm{H,C}}^{2}E_{\mathrm{F}})\) around \(\omega_{\mathrm{D}}\), where \(\alpha_{\mathrm{H,C}}\ll 1\) is the dimensionless coupling constant of Hubbard and Coulomb interactions, respectively. In this interval, our perturbative expansion breaks down and one needs to re-sum the diagrammatic series. The rest of the paper is organized as follows. In Sec. II, we set up the model Hamiltonians for 2D and 3D Dirac metals. In Sec. III, we outline the formalism for calculating the optical conductivity via the Kubo formula. In Sec. IV, we identify the \(ee\) and \(eh\) scattering processes that contribute to the conductivity in a given frequency range. In Sec. IV.2, we analyze the general structure of the contributions to the conductivity from the self-energy and vertex diagrams, which serve as archetypes for other contributions. In Secs. V and VI, we present our analytical and numerical results for the optical conductivity of 3D and 2D Dirac metals, respectively. Our conclusions are given in Sec. VII. ## II Model Hamiltonians of Dirac metals In this section, we define our model Hamiltonians for 2D and 3D Dirac metals. ### 3D Hamiltonian We model a 3D Dirac metal by a \(4\times 4\) low-energy Hamiltonian with two orbital degrees of freedom per spin that describes a single Dirac point [36; 37; 38]: \[\hat{\mathcal{H}}_{\mathrm{3D}} =\hat{\mathcal{H}}_{0}+\hat{\mathcal{H}}_{\mathrm{int}}, \tag{7a}\] \[\hat{\mathcal{H}}_{0} =\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}\left[v_{\mathrm{D} }\hat{\sigma}_{x}\otimes(\mathbf{\hat{\varsigma}}\cdot\mathbf{k})-E_{\mathrm{ F}}\hat{\sigma}_{0}\otimes\hat{\varsigma}_{0}\right]\Psi_{\mathbf{k}},\] (7b) \[\hat{\mathcal{H}}_{\mathrm{int}} =\frac{1}{2\mathcal{V}}\sum_{\mathbf{q}}V_{\mathrm{3D}}(\mathbf{ q})\hat{n}_{\mathbf{q}}\hat{n}_{-\mathbf{q}}, \tag{7c}\] where \(v_{\mathrm{D}}\) is the Dirac velocity, \(\Psi_{\mathbf{k}}\) is the \(4\times 1\) Dirac spinor, Pauli matrices \(\mathbf{\hat{\varsigma}}=(\hat{\varsigma}_{x},\hat{\varsigma}_{y},\hat{ \varsigma}_{z})\) and \(\hat{\mathbf{\sigma}}=(\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z})\) represent (real) spin and pseudospin, respectively, \(\hat{\varsigma}_{0}\) and \(\hat{\sigma}_{0}\) are the identity matrices in the corresponding subspaces, \(\hat{n}_{\mathbf{q}}=\sum\limits_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}\Psi_{ \mathbf{k}+\mathbf{q}}\) is the density operator, \(V_{\mathrm{3D}}(\mathbf{q})\) is the interaction potential, and \(\mathcal{V}\) is the system volume. In general, we assume that there are \(N\) identical Dirac points. The eigenvalues and orthonormal eigenfunctions of \(\hat{\mathcal{H}}_{0}\) in Eq. (7b) are given by \[\xi_{\mathbf{k}}^{s}=s\epsilon_{\mathbf{k}}-E_{\mathrm{F}},\;\epsilon_{ \mathbf{k}}=v_{\mathrm{D}}k \tag{8}\] and \[\left|\mathbf{k},+\right\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}\psi_{1 }\\ \left(\mathbf{\hat{\varsigma}}\cdot\hat{k}\right)\psi_{1}\end{bmatrix},\qquad\left| \mathbf{k},-\right\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}-\left(\mathbf{\hat{ \varsigma}}\cdot\hat{k}\right)\psi_{2}\\ \psi_{2}\end{bmatrix}, \tag{9}\] respectively.
Here \(\hat{k}=\mathbf{k}/k\), \(s=\pm 1\) is the helicity index, and \(\psi_{1,2}\) are the \(2\times 1\) spinor states such that \(\psi_{1,2}^{\dagger}\psi_{1,2}=1\). We choose \(\psi_{1}=\psi_{2}=(0,1)^{T}\). The Green's function of \(\hat{\mathcal{H}}_{0}\) is given by \[\hat{G}(\mathbf{k},i\omega)=\frac{1}{2}\sum_{s=\pm}\hat{M}_{ \mathbf{k}}^{s}g_{s}(\mathbf{k},i\omega), \tag{10a}\] \[\hat{M}_{\mathbf{k}}^{s}=\hat{\sigma}_{0}\otimes\hat{\varsigma}_{0 }+s\left(\hat{\sigma}_{x}\otimes(\mathbf{\hat{\varsigma}}\cdot\hat{k})\right),\] (10b) \[g_{s}(\mathbf{k},i\omega)=\frac{1}{i\omega-\xi_{\mathbf{k}}^{s}}. \tag{10c}\] For the sake of brevity, we will be omitting the index \(n\) in Matsubara frequencies, which will be distinguished from real ones by a factor of the imaginary unit, \(i\). For example, \(\omega\) in Eq. (10c) stands for a Matsubara frequency. We will be referring to the bands with helicity \(s=\pm 1\) as the "conduction" and "valence" bands, respectively. The density of states at the Fermi level per spin per valley is equal to \(\mathcal{N}_{\text{F,3}}=E_{\text{F}}^{2}/2\pi^{2}v_{\text{D}}^{3}\). The velocity operator corresponding to \(\hat{\mathcal{H}}_{0}\) in (7b) is \[\hat{\mathbf{v}}=v_{\text{D}}\hat{\sigma}_{x}\otimes\mathbf{\hat{\varsigma}} \tag{11}\] with matrix elements \[\mathbf{v}_{\mathbf{k}}^{s,s^{\prime}}=\left\langle\mathbf{k},s\right|\hat{ \mathbf{v}}\left|\mathbf{k},s^{\prime}\right\rangle. \tag{12}\] In what follows, we will need explicit expressions for the intra- and inter-band matrix elements of the velocity operator, which are given by \[\mathbf{v}_{\mathbf{k}}^{s,s}=\left\langle\mathbf{k},s\right|\hat{\mathbf{v}} \left|\mathbf{k},s\right\rangle=sv_{\text{D}}\hat{k} \tag{13}\] and \[\mathbf{v}_{\mathbf{k}}^{+,-} =\left(\mathbf{v}_{\mathbf{k}}^{-,+}\right)^{*}=\left\langle \mathbf{k},+\right|\hat{\mathbf{v}}\left|\mathbf{k},-\right\rangle\] \[=\frac{v_{\text{D}}}{2}\psi_{1}^{\dagger}\left[\mathbf{\hat{\varsigma}}-\left(\mathbf{\hat{\varsigma}} \cdot\hat{k}\right)\mathbf{\hat{\varsigma}}\left(\mathbf{\hat{\varsigma}}\cdot\hat{k}\right)\right]\psi_{2}, \tag{14}\] respectively. We now turn to the interaction part of the Hamiltonian. In what follows, we will consider two models for the interaction \(V_{\text{3D}}(\mathbf{q})\): \[V_{\text{3D}}(\mathbf{q})=\left\{\begin{aligned} &\lambda_{3},&& \text{(3D, Hubbard)}\\ &\frac{4\pi e^{2}}{q^{2}},&&\text{(3D, Coulomb)}\end{aligned}\right. \tag{15a}\] where \(\lambda_{3}>0\) is a constant and \(e\) is the magnitude of the electron charge. We focus on the case of low doping, when \(k_{\text{F}}\) is much smaller than the distance between the nearby Dirac points, \(b\). By "Hubbard interaction" we then mean an interaction that is constant for \(q\) less than or comparable to \(k_{\text{F}}\) and falls off rapidly in the interval \(k_{\text{F}}\ll q\ll b\). In that case, one can neglect scattering processes that swap electrons between the Dirac points. The Hubbard model, though not completely realistic, captures the essential physics and allows one to obtain both analytic results for the optical conductivity in certain frequency regimes and numerical results for all frequencies. Thus, we focus most of our discussion on the Hubbard model. The Coulomb model allows one to obtain analytic results in certain frequency regimes but is very expensive computationally for arbitrary frequencies, and we will restrict our analysis of this model to analytic results only. We discuss both Hubbard and Coulomb interactions in more detail in Section III.2.
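Closing this subsection, here is a compact numerical check (an illustrative sketch, not from the original paper) that the spinors (9) diagonalize (7b) and that the velocity matrix elements obey (13) and (14); in particular, the inter-band velocity is transverse to \(\hat{k}\):

```python
import numpy as np

# Pauli matrices and a random unit vector k_hat
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
pauli = np.array([sx, sy, sz])

rng = np.random.default_rng(0)
khat = rng.normal(size=3); khat /= np.linalg.norm(khat)
vD = 1.0

sk = np.einsum("i,ijk->jk", khat, pauli)               # (varsigma . k_hat)
H = vD * np.kron(sx, sk)                               # Eq. (7b) at E_F = 0, k = 1

psi = np.array([0, 1], complex)                        # psi_1 = psi_2 = (0,1)^T
ket_p = np.concatenate([psi, sk @ psi]) / np.sqrt(2)   # |k,+>, Eq. (9)
ket_m = np.concatenate([-sk @ psi, psi]) / np.sqrt(2)  # |k,->, Eq. (9)

assert np.allclose(H @ ket_p, +vD * ket_p)             # helicity +1 band
assert np.allclose(H @ ket_m, -vD * ket_m)             # helicity -1 band

v_op = vD * np.array([np.kron(sx, s) for s in pauli])  # Eq. (11)
v_intra = np.real([ket_p.conj() @ v @ ket_p for v in v_op])
assert np.allclose(v_intra, vD * khat)                 # Eq. (13): v^{s,s} = s vD k_hat

v_inter = np.array([ket_p.conj() @ v @ ket_m for v in v_op])
assert abs(v_inter @ khat) < 1e-12                     # inter-band velocity transverse to k_hat
rhs = 0.5 * vD * np.array([psi.conj() @ (s - sk @ s @ sk) @ psi for s in pauli])
assert np.allclose(v_inter, rhs)                       # Eq. (14)
print("3D eigenstates and velocity matrix elements check out")
```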
Note that in the basis of electron and hole creation/annihilation operators, which diagonalizes \(\hat{\mathcal{H}}_{0}\), the Hamiltonian (7c) accounts for _all_ possible interaction processes, including those that do not conserve the number of electrons and holes. As mentioned in Sec. I, our approach is more general in this regard than the one in prior studies of optical absorption in doped semiconductors [31, 32, 33, 34, 27]. These studies considered a model Hamiltonian which allows only for the density-density interaction between electrons and holes: \[\hat{\mathcal{H}}_{\text{int}}^{\prime}=\frac{1}{2\mathcal{V}} \sum_{\begin{subarray}{c}\mathbf{k},\mathbf{p},\mathbf{q},\\ \varsigma,\varsigma^{\prime}=\pm,s=\pm\end{subarray}}V_{\text{int}}(\mathbf{q})\hat{d}_{\mathbf{k }+\mathbf{q},\varsigma,s}^{\dagger}\hat{d}_{\mathbf{p}-\mathbf{q},\varsigma^{ \prime},-s}^{\dagger}\hat{d}_{\mathbf{p},\varsigma^{\prime},-s}\hat{d}_{ \mathbf{k},\varsigma,s}, \tag{16}\] where \(\hat{d}_{\mathbf{k},\varsigma,\pm}^{\dagger}\) is the operator creating an electron/hole with momentum \(\mathbf{k}\) and spin \(\varsigma\). Such a Hamiltonian is correct for a parabolic spectrum, in which case intra-band absorption is forbidden by Galilean invariance while processes of electron-hole conversion are absent due to the vanishing overlap of the electron and hole states. However, it is not applicable to the gapless Dirac spectrum studied in this paper. ### 2D Hamiltonian As an example of a 2D Dirac metal, we consider monolayer graphene, described by the standard Hamiltonian [1]: \[\hat{\mathcal{H}}_{\text{2D}} =\hat{\mathcal{H}}_{0}+\hat{\mathcal{H}}_{\text{int}}, \tag{17a}\] \[\hat{\mathcal{H}}_{0} =\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}\left[v_{\text{D}} \left(\tau_{z}\hat{\sigma}_{x}k_{x}+\hat{\sigma}_{y}k_{y}\right)-\hat{\sigma}_ {0}E_{\text{F}}\right]\Psi_{\mathbf{k}},\] (17b) \[\hat{\mathcal{H}}_{\text{int}} =\frac{1}{2\mathcal{V}}\sum_{\mathbf{q}}V_{\text{2D}}(\mathbf{q}) \hat{n}_{\mathbf{q}}\hat{n}_{-\mathbf{q}}, \tag{17c}\] where \(\tau_{z}=\pm 1\), \(\Psi_{\mathbf{k}}\) is a \(2\times 1\) Dirac spinor, the set of Pauli matrices \(\hat{\mathbf{\sigma}}=(\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z})\) describes pseudospin, \(\hat{\sigma}_{0}\) is the identity matrix in the same subspace, \(\hat{n}_{\mathbf{q}}=\sum\limits_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}\Psi_ {\mathbf{k}+\mathbf{q}}\) is the density operator, and \(\mathcal{V}\) is the system area. To use the large-\(N\) approximation afterwards, we assume that fermions carry spin \(\varsigma\), such that the total degeneracy is \(N=2(2\varsigma+1)\). The eigenvalues of \(\hat{\mathcal{H}}_{0}\) in Eq. (17b) are the same as in Eq. (8), while its orthonormal eigenfunctions are given by \[\left|\mathbf{k},+\right\rangle =\frac{1}{\sqrt{2}}\begin{bmatrix}1\\ \tau_{z}e^{i\tau_{z}\phi_{\mathbf{k}}}\end{bmatrix},\left|\mathbf{k},-\right\rangle =\frac{1}{\sqrt{2}}\begin{bmatrix}-\tau_{z}e^{-i\tau_{z}\phi_{\mathbf{k}}} \\ 1\end{bmatrix} \tag{18}\] where \(\phi_{\mathbf{k}}\) is the azimuthal angle of \(\mathbf{k}\). The Green's function of \(\hat{\mathcal{H}}_{0}\) is given by \[\hat{G}(\mathbf{k},i\omega)=\frac{1}{2}\sum_{s=\pm}\hat{M}_{ \mathbf{k}}^{s}g_{s}(\mathbf{k},i\omega), \tag{19a}\] \[\hat{M}_{\mathbf{k}}^{s}=\hat{\sigma}_{0}+s\left(v_{\text{D}} \frac{\hat{\sigma}_{x}\tau_{z}k_{x}+\hat{\sigma}_{y}k_{y}}{\epsilon_{\mathbf{k}}} \right), \tag{19b}\] where \(g_{s}(\mathbf{k},i\omega)\) is the same as in Eq. (10c).
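A similarly compact numerical check (illustrative, not from the paper) that the spinors (18) diagonalize (17b) and that \(\hat{M}_{\mathbf{k}}^{s}/2\) in (19b) are the band projectors \(\left|\mathbf{k},s\right\rangle\left\langle\mathbf{k},s\right|\):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
s0 = np.eye(2)

vD, tau, k, phi = 1.0, +1, 0.7, 0.3           # valley tau_z = +1; k, phi arbitrary
kx, ky = k*np.cos(phi), k*np.sin(phi)

H = vD*(tau*sx*kx + sy*ky)                    # Eq. (17b) at E_F = 0

ket_p = np.array([1, tau*np.exp(1j*tau*phi)]) / np.sqrt(2)    # |k,+>, Eq. (18)
ket_m = np.array([-tau*np.exp(-1j*tau*phi), 1]) / np.sqrt(2)  # |k,->, Eq. (18)
assert np.allclose(H @ ket_p, +vD*k*ket_p)
assert np.allclose(H @ ket_m, -vD*k*ket_m)

# (1/2) M_k^s of Eq. (19b) are the band projectors |k,s><k,s|
for s, ket in [(+1, ket_p), (-1, ket_m)]:
    M = s0 + s*vD*(sx*tau*kx + sy*ky)/(vD*k)
    assert np.allclose(M/2, np.outer(ket, ket.conj()))
print("2D eigenstates and band projectors check out")
```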
The density of states at the Fermi level per spin per valley is equal to \(\mathcal{N}_{\text{F,2}}=E_{\text{F}}/2\pi v_{\text{D}}^{2}\). The velocity operator corresponding to \(\hat{\mathcal{H}}_{0}\) is \[\hat{\mathbf{v}}=v_{\mathrm{D}}\left(\tau_{z}\hat{\sigma}_{x},\hat{\sigma}_{y} \right), \tag{20}\] with its intra-band matrix element being the same as in Eq. (13), while the inter-band matrix element is given by \[\mathbf{v}_{\mathbf{k}}^{+,-} =\left(\mathbf{v}_{\mathbf{k}}^{-,+}\right)^{*}=\left\langle \mathbf{k},+\right|\hat{\mathbf{v}}\left|\mathbf{k},-\right\rangle\] \[=iv_{\mathrm{D}}e^{-i\tau_{z}\phi_{\mathbf{k}}}\left(\hat{k} \times\hat{z}\right), \tag{21}\] where \((\hat{x},\hat{y},\hat{z})\) are the Cartesian unit vectors. As in 3D, the intra- and inter-band velocities are orthogonal to each other. Lastly, similar to the 3D case, we consider two models of the interaction: \[V_{\mathrm{2D}}(\mathbf{q})=\left\{\begin{aligned} &\lambda_{2},&&\text{(2D, Hubbard)}\\ &\frac{2\pi e^{2}}{q},&&\text{(2D, Coulomb)}\end{aligned}\right. \tag{22a}\] where \(\lambda_{2}>0\) is a constant. As in 3D, by "Hubbard" interaction we mean an interaction with a radius shorter than the Fermi wavelength but longer than the lattice constant, which cannot transfer electrons between the valleys. As in 3D, we will present both the analytical and numerical results for the Hubbard case, and only the analytical results for the Coulomb case. ## III Optical Conductivity: General Formalism ### Kubo formula In linear response, the real part of the optical conductivity is given by the Kubo formula \[\Re\sigma_{\alpha\beta}(\Omega)=-\frac{1}{\Omega}\Im\Pi_{\alpha\beta,\mathrm{ R}}(\mathbf{Q}=\mathbf{0},\Omega), \tag{23}\] where \(\alpha,\beta=x,y(z)\) in 2D and 3D, respectively, and \(\Pi_{\alpha\beta,\mathrm{R}}(\mathbf{Q}=\mathbf{0},\Omega)\) is the retarded current-current correlation function (denoted by the subscript "R"), which is obtained by analytic continuation of its Matsubara counterpart: \[\Pi_{\alpha\beta,\mathrm{R}}(\mathbf{Q},\Omega)=\Pi_{\alpha\beta }(\mathbf{Q},i\Omega\rightarrow\Omega+i0^{+}),\] \[\Pi_{\alpha\beta}(\mathbf{Q},i\Omega)=-\frac{1}{\mathcal{V}}\int \limits_{0}^{1/k_{\mathrm{B}}T}d\tau e^{i\Omega\tau}\left\langle T_{\tau} \hat{j}_{\alpha}^{\dagger}(\mathbf{Q},\tau)\hat{j}_{\beta}(\mathbf{Q},0) \right\rangle. \tag{24}\] In the basis of conduction/valence bands, the current operator is written as \[\hat{\mathbf{j}}(\mathbf{Q},\tau)=-e\sum_{\mathbf{k},s,s^{\prime}}\mathbf{v}_{ \mathbf{k}}^{s,s^{\prime}}\quad\hat{d}_{\mathbf{k}-\frac{\mathbf{Q}}{2},s}^{ \dagger}(\tau)\hat{d}_{\mathbf{k}+\frac{\mathbf{Q}}{2},s^{\prime}}(\tau), \tag{25}\] where \(\mathbf{v}_{\mathbf{k}}^{s,s^{\prime}}\) is given by Eq. (12). For the isotropic systems considered in this paper, the conductivity tensor is diagonal and symmetric. In this case, we define \[\Pi(\mathbf{Q},i\Omega) \equiv\frac{1}{d}\sum_{\alpha}\Pi_{\alpha\alpha}(\mathbf{Q},i \Omega),\] \[\Pi_{\mathrm{R}}(\mathbf{Q},\Omega) \equiv\frac{1}{d}\sum_{\alpha}\Pi_{\alpha\alpha,\mathrm{R}}( \mathbf{Q},\Omega),\] \[\Re\sigma(\Omega) =-\frac{1}{\Omega}\Im\Pi_{\mathrm{R}}(\mathbf{Q}=\mathbf{0}, \Omega). \tag{26}\] We also assume that temperature is much smaller than any other energy scale of the problem and consider only the \(T=0\) limit. ### Relevant diagrams In Dirac metals, optical absorption occurs already for non-interacting particles if the frequency of incident light exceeds the direct threshold, \(\omega_{\mathrm{D}}=2E_{\mathrm{F}}\).
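As a sanity check of (23), consider non-interacting electrons. Using the golden-rule form of the Kubo formula with the inter-band matrix element (21), for which \(|v_{x}^{+,-}|^{2}=v_{\rm D}^{2}\sin^{2}\phi_{\mathbf{k}}\), one finds at \(T=0\) (a standard exercise, spelled out here for completeness rather than taken from the original text):

\[\Re\sigma(\Omega)=\frac{\pi e^{2}N}{\Omega}\int\frac{d^{2}k}{(2\pi)^{2}}\,\overline{|v_{x}^{+,-}|^{2}}\,\theta(\epsilon_{\mathbf{k}}-E_{\rm F})\,\delta(\Omega-2\epsilon_{\mathbf{k}})=\frac{\pi e^{2}N}{\Omega}\,\frac{v_{\rm D}^{2}}{2}\,\frac{\Omega}{8\pi v_{\rm D}^{2}}\,\theta(\Omega-2E_{\rm F})=\frac{Ne^{2}}{16\hbar}\,\theta(\Omega-2E_{\rm F}),\]

where the overline denotes the angular average, \(\overline{\sin^{2}\phi_{\mathbf{k}}}=1/2\), the factor \(\theta(\epsilon_{\mathbf{k}}-E_{\rm F})=n_{\rm F}(\xi_{\mathbf{k}}^{-})-n_{\rm F}(\xi_{\mathbf{k}}^{+})\) enforces Pauli blocking, and \(\hbar\) has been restored in the last step. This reproduces Eq. (1) for \(\Omega>\omega_{\rm D}\) and vanishes identically below the Pauli threshold, consistent with the statement above.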
The main focus of this paper is the range \(0<\Omega<2E_{\mathrm{F}}\), where absorption occurs only if electrons interact with other degrees of freedom, in particular among themselves and with holes. Dissipation occurs only if the interaction is dynamic, i.e., if the bare interaction, either Hubbard or Coulomb, is dressed by particle-hole pairs. Diagrammatically, this corresponds to renormalizing the interaction lines either by particle-hole polarization bubbles or by "Aslamazov-Larkin triangles" (cf. Fig. 2).

#### iii.2.1 Hubbard interaction

To make the analysis tractable, we assume that the number of identical Dirac points is large (\(N\gg 1\)) and also adopt the weak-coupling approximation, i.e., we assume that \(\alpha_{\mathrm{H}}N\ll 1\), where \[\alpha_{\mathrm{H}}=\lambda_{d}\mathcal{N}_{\mathrm{F},d} \tag{27}\] is the dimensionless coupling constant. The first assumption allows us to retain only diagrams with the highest number of fermion loops, while the second one allows us to keep the lowest order in the interaction at which dissipation occurs, to wit: the second. The relevant diagrams for the current-current correlation function are shown in Fig. 2. For the Hubbard case, the solid and broken interaction lines are identical and denote the Hubbard coupling \(\lambda_{d}\).

#### iii.2.2 Coulomb interaction

Within the random-phase approximation (RPA), the dynamically screened Coulomb interaction is given by \[V(\mathbf{q},i\nu)=\frac{1}{V_{0}^{-1}(\mathbf{q})+\pi_{0}(\mathbf{q},i\nu)}, \tag{28}\] where \[\pi_{0}(\mathbf{q},i\nu)=-\int_{\mathcal{K}}\mathrm{Tr}\left[\hat{G}(\mathbf{k}+\mathbf{q},i\omega+i\nu)\hat{G}(\mathbf{k},i\omega)\right] \tag{29}\] is the polarization bubble, \(\int_{\mathcal{K}}\) is a short-hand for \((2\pi)^{-(d+1)}\int\mathrm{d}^{d}k\int\mathrm{d}\omega\), and \(\hat{G}\) is the free-electron Green's function given by Eqs. (10a) and (19a) in 3D and 2D, respectively. Since only the dynamic interaction contributes to dissipation, it is convenient to subtract off the static part of the interaction and treat the remaining dynamic part as the effective interaction. The dynamic part is given by \[V_{\mathrm{dyn}}(\mathbf{q},i\nu)\equiv V(\mathbf{q},i\nu)-V(\mathbf{q},0)=-V(\mathbf{q},i\nu)V(\mathbf{q},0)\pi_{0,\mathrm{dyn}}(\mathbf{q},i\nu), \tag{30}\] where \[\pi_{0,\mathrm{dyn}}(\mathbf{q},i\nu)=\pi_{0}(\mathbf{q},i\nu)-\pi_{0}(\mathbf{q},0) \tag{31}\] is the dynamic part of the polarization bubble. The lowest, two-loop order diagrams in \(V_{\mathrm{dyn}}(\mathbf{q},i\nu)\) are shown in Fig. 2, where now the solid and broken wavy lines depict the dynamic and static parts of the interaction, respectively. As opposed to the Hubbard case, the Coulomb one has an additional energy scale, \[\omega_{\mathrm{pd}}=v_{\mathrm{D}}\kappa_{d}, \tag{32}\] where \[\kappa_{3}=\left(4\pi e^{2}N\mathcal{N}_{\mathrm{F},3}\right)^{1/2} \tag{33a}\] and \[\kappa_{2}=2\pi e^{2}N\mathcal{N}_{\mathrm{F},2} \tag{33b}\] are the inverse screening radii in 3D and 2D, respectively. For \(d=3\), \(\omega_{\mathrm{p3}}\) is on the order of the plasmon frequency at \(q=0\). For \(d=2\), \(\omega_{\mathrm{p2}}\) is on the order of the plasmon dispersion evaluated at \(q\sim\kappa_{2}\). The condition for the Coulomb interaction to be treated within RPA is \(\kappa_{d}\ll k_{\mathrm{F}}\), which implies that \(\omega_{\mathrm{pd}}\ll E_{\mathrm{F}}\).
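To make this scale explicit, a worked estimate (added here; using \(\hbar=1\)): combining Eqs. (32) and (33b) with \(\mathcal{N}_{\mathrm{F},2}=E_{\mathrm{F}}/2\pi v_{\mathrm{D}}^{2}\) gives, in 2D, \[\omega_{\mathrm{p2}}=v_{\mathrm{D}}\kappa_{2}=2\pi e^{2}N\mathcal{N}_{\mathrm{F},2}v_{\mathrm{D}}=\frac{Ne^{2}}{v_{\mathrm{D}}}E_{\mathrm{F}},\] so the RPA condition \(\omega_{\mathrm{p2}}\ll E_{\mathrm{F}}\) is equivalent to the smallness of the dimensionless Coulomb coupling, \(Ne^{2}/v_{\mathrm{D}}\ll 1\).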
Correspondingly, the frequency region \(0<\Omega\ll E_{\mathrm{F}}\) is divided into two subregions: \(0<\Omega\ll\omega_{\mathrm{pd}}\) and \(\omega_{\mathrm{pd}}\ll\Omega\ll E_{\mathrm{F}}\). In the first subregion, a typical energy transfer, \(\nu\), is on the order of \(\Omega\), while a typical momentum transfer, \(q\), is on the order of \(\kappa_{d}\). Therefore, \(\nu\ll v_{\mathrm{D}}q\sim\omega_{\mathrm{pd}}\). In this case, one can set \(\nu=0\) in the first factor on the RHS of Eq. (30) with the result \[V_{\mathrm{dyn}}(\mathbf{q},i\nu)\approx-V^{2}(\mathbf{q},0)\pi_{0,\mathrm{dyn}}(\mathbf{q},i\nu). \tag{34}\] Diagrammatically, this amounts to replacing all the solid wavy lines by broken wavy ones in Fig. 2. Because \(q\ll k_{\mathrm{F}}\), the static screened potential is described by the usual Thomas-Fermi form: \[V(\mathbf{q},0)=\frac{4\pi e^{2}}{q^{2}+\kappa_{3}^{2}}\quad\text{(3D)}, \tag{35a}\] \[V(\mathbf{q},0)=\frac{2\pi e^{2}}{q+\kappa_{2}}\quad\text{(2D)}. \tag{35b}\] In the second subregion (\(\omega_{\mathrm{pd}}\ll\Omega\ll E_{\mathrm{F}}\)), typical energy and momentum transfers are \(\nu\sim v_{\mathrm{D}}q\sim\Omega\gg\omega_{\mathrm{pd}}\). In this case, screening is irrelevant and the effective dynamic interaction is given by \[V_{\mathrm{dyn}}(\mathbf{q},i\nu)=-V_{0}^{2}(\mathbf{q})\pi_{0,\mathrm{dyn}}(\mathbf{q},i\nu), \tag{36}\] where \(V_{0}(\mathbf{q})\) is the bare Coulomb potential.

Figure 2: Leading-order diagrams for the current-current correlation function. Thick solid lines depict the matrix Green’s functions, given by Eqs. (10a) and (19a) in 3D and 2D, respectively. For Hubbard interaction, the solid and broken wavy lines are identical and depict the Hubbard interaction \(\lambda_{d}=\mathrm{const}\) in \(d\) dimensions, and the displayed diagrams are the leading ones in the large-\(N\) approximation. For Coulomb interaction, the solid and broken wavy lines depict the dynamically and statically screened Coulomb potentials, respectively [Eqs. (28), (35a), (35b)], and the displayed diagrams are the leading ones within the random-phase approximation. The external momentum has only the frequency component: \(\mathcal{W}=(\mathbf{0},i\Omega)\). From top to bottom: self-energy (SE\({}_{1}\) and SE\({}_{2}\)), vertex (V), parallel (PAL) and crossed Aslamazov-Larkin (CAL) diagrams. Indices \(s_{1}^{\prime}\ldots s_{6}^{\prime}=\pm 1\) indicate helicities that are being summed over.

Note that we do not need to use the large-\(N\) approximation for the Coulomb case; it is enough to require that the dimensionless coupling constant of the Coulomb interaction \[\alpha_{\mathrm{C}}=\frac{v_{\mathrm{D}}\kappa_{d}}{E_{\mathrm{F}}} \tag{37}\] is small, which is the condition for the validity of RPA. For the Coulomb case, therefore, we will restrict our analysis to the actual value of \(N\) for a specific system.

### Current-current correlation function on the Matsubara axis

In this section, we describe the general structure of the diagrams for the current-current correlation function. The set of diagrams in Fig. 2 includes two self-energy (SE) diagrams, \(\mathrm{SE}_{1}\) and \(\mathrm{SE}_{2}\), a vertex correction diagram (V), and two Aslamazov-Larkin (AL) diagrams in the particle-particle and particle-hole channels, labelled as PAL ("parallel AL") and CAL ("crossed AL"), respectively.
The contributions of individual diagrams to the current-current correlation function at the external \(d+1\) momentum \(\mathcal{W}\equiv(\mathbf{0},i\Omega)\) are given by \[\Pi^{\mathrm{SE}_{1}}(\mathcal{W})=\frac{1}{d}\int_{\mathcal{K}}\mathrm{Tr}\left[\hat{\mathbf{v}}\hat{S}(\mathcal{K}+\mathcal{W})\cdot\hat{\mathbf{v}}\hat{G}(\mathcal{K})\right], \tag{38a}\] \[\Pi^{\mathrm{SE}_{2}}(\mathcal{W})=\frac{1}{d}\int_{\mathcal{K}}\mathrm{Tr}\left[\hat{\mathbf{v}}\hat{G}(\mathcal{K}+\mathcal{W})\cdot\hat{\mathbf{v}}\hat{S}(\mathcal{K})\right],\] (38b) \[\Pi^{\mathrm{V}}(\mathcal{W})=\frac{1}{d}\int_{\mathcal{K}^{\prime}}\mathrm{Tr}\left[\hat{\mathbf{\Gamma}}\left(\mathcal{K}^{\prime};\mathcal{W}\right)\hat{G}(\mathcal{K}^{\prime}+\mathcal{W})\cdot\hat{\mathbf{v}}\hat{G}(\mathcal{K}^{\prime})\right],\] (38c) \[\Pi^{\mathrm{PAL}}(\mathcal{W})=-\frac{1}{d}\int_{\mathcal{Q}}V_{\mathrm{st}}^{2}(\mathbf{q})\mathbf{A}(\mathcal{Q},\mathcal{W})\cdot\mathbf{B}(\mathcal{Q},\mathcal{W}),\] (38d) \[\Pi^{\mathrm{CAL}}(\mathcal{W})=-\frac{1}{d}\int_{\mathcal{Q}}V_{\mathrm{st}}^{2}(\mathbf{q})\mathbf{A}(\mathcal{Q},\mathcal{W})\cdot\mathbf{C}(\mathcal{Q},\mathcal{W}), \tag{38e}\] where \[\hat{S}(\mathcal{L})=\hat{G}(\mathcal{L})\hat{\Sigma}(\mathcal{L})\hat{G}(\mathcal{L}), \tag{39a}\] \[\hat{\Sigma}(\mathcal{L})=-\int_{\mathcal{Q}}\tilde{V}(\mathcal{Q})\hat{G}(\mathcal{L}+\mathcal{Q}),\] (39b) \[\tilde{V}(\mathcal{Q})=-V_{\mathrm{st}}^{2}(\mathbf{q})\pi_{0}(\mathcal{Q}),\] (39c) \[\hat{\mathbf{\Gamma}}\left(\mathcal{K}^{\prime};\mathcal{W}\right)=-\int_{\mathcal{K}}\tilde{V}(\mathcal{K}^{\prime}-\mathcal{K})\hat{G}(\mathcal{K})\hat{\mathbf{v}}\hat{G}(\mathcal{K}+\mathcal{W}),\] (39d) \[\mathbf{A}(\mathcal{Q},\mathcal{W})=-\int_{\mathcal{K}}\mathrm{Tr}\left[\hat{G}(\mathcal{K})\hat{\mathbf{v}}\hat{G}(\mathcal{K}+\mathcal{W})\hat{G}(\mathcal{K}-\mathcal{Q})\right],\] (39e) \[\mathbf{B}(\mathcal{Q},\mathcal{W})=-\int_{\mathcal{P}}\mathrm{Tr}\left[\hat{G}(\mathcal{P}+\mathcal{W})\hat{\mathbf{v}}\hat{G}(\mathcal{P})\hat{G}(\mathcal{P}-\mathcal{Q})\right],\] (39f) \[\mathbf{C}(\mathcal{Q},\mathcal{W})=-\int_{\mathcal{P}}\mathrm{Tr}\left[\hat{G}(\mathcal{P})\hat{G}(\mathcal{P}+\mathcal{W}+\mathcal{Q})\hat{G}(\mathcal{P}+\mathcal{W})\hat{\mathbf{v}}\right], \tag{39g}\] \(\mathcal{K}\equiv(\mathbf{k},i\omega)\), \(\mathcal{K}^{\prime}\equiv(\mathbf{k}^{\prime},i\omega)\), \(\mathcal{P}\equiv(\mathbf{p},i\omega)\), \(\mathcal{Q}\equiv(\mathbf{q},i\nu)\), \(\pi_{0}(\mathcal{Q})\) is defined by Eq. (29), and \(V_{\mathrm{st}}(\mathbf{q})\) is the static part of the interaction, equal to \(V(\mathbf{q},0)\) [Eqs. (35a) and (35b)] for the Coulomb case and to \(\lambda_{d}\) for the Hubbard case. The expressions above are valid for Coulomb interaction at the lowest frequencies (\(\Omega\ll E_{\mathrm{F}}\)) and for any frequency for Hubbard interaction. Using the free rather than dressed Green's functions is justified for any frequency except for a narrow region near the direct threshold (a precise condition will be formulated later, cf. Sec. V.2). The total current-current correlation function is the sum of all the contributions displayed above: \[\Pi(\mathcal{W})=\sum_{J}\Pi^{J}(\mathcal{W}), \tag{40}\] where \(J\in\{\mathrm{SE},\mathrm{V},\mathrm{PAL},\mathrm{CAL}\}\), and "SE" refers to the two self-energy diagrams collectively. Equations (38a)-(38e) become more transparent if written in the electron-hole basis, in which \(\hat{\mathcal{H}}_{0}\) is diagonal.
Indeed, any diagram contains six Green's functions, each being the sum of electron and hole parts with helicities \(s^{\prime}=\pm 1\), respectively. This gives rise to a set of six helicities \(\mathcal{S}^{\prime}=\{s^{\prime}_{1}\ldots s^{\prime}_{6}\}\) that are to be summed over. Thus, each diagram is the sum of \(2^{6}=64\) terms \[\Pi^{J}(\mathcal{W})=\sum_{\mathcal{S}^{\prime}}\Pi^{J}_{\mathcal{S}^{\prime}}(\mathcal{W}), \tag{41}\] where summation goes over all 64 configurations of \(\mathcal{S}^{\prime}\). Each \(\Pi^{J}_{\mathcal{S}^{\prime}}(\mathcal{W})\) term in the sum contains a product of two integrals over the frequency \[\int d\omega\prod_{l=1}^{L}g_{s_{l}}(\mathbf{k}_{l},i\omega+i\nu_{l})\int d\omega^{\prime}\prod_{l^{\prime}=1}^{L^{\prime}}g_{s_{l^{\prime}}}(\mathbf{k}_{l^{\prime}},i\omega^{\prime}+i\nu_{l^{\prime}}), \tag{42}\] where \(L=4\) and \(L^{\prime}=2\) for the \(\mathrm{SE}_{1,2}\) and V diagrams, \(L=L^{\prime}=3\) for the PAL and CAL diagrams, \(s_{l},s_{l^{\prime}}\in\mathcal{S}^{\prime}\), and \(g_{s}(\mathbf{k},i\omega)\) is the Green's function in the diagonal basis, defined by Eq. (10c). The integrals in Eq. (42) vanish if the poles of the integrands are located in the same halves of the complex plane. Because \(\xi^{s_{l}}_{\mathbf{k}_{l}}<0\) for \(s_{l}=-1\), at least one of the helicities in each of the integrals in Eq. (42) must be positive for a non-zero result. Thus, instead of \(2^{6}=64\) terms we would, in general, have only \(2^{4}=16\) terms in the sum over helicities in Eq. (41).

### Retarded current-current correlation function

Upon analytic continuation, the imaginary part of the retarded current-current correlation function can be written as a sum over the new terms, \(\mathcal{R}^{J}_{\mathcal{S}}(\Omega)\): \[\Im\Pi^{J}_{\mathrm{R}}(\Omega)=\sum_{\mathcal{S}^{\prime}}\Im\Pi^{J}_{\mathcal{S}^{\prime},\mathrm{R}}(\Omega)=\sum_{\mathcal{S}}\mathcal{R}^{J}_{\mathcal{S}}(\Omega), \tag{43}\] where \(\mathcal{S}=\{s_{1}\ldots s_{6}\}\) is another set of helicities, which is different from \(\mathcal{S}^{\prime}\), and the subscript "R" stands for "retarded". Note that while the equality between the sums in Eq. (43) is always valid, there is, in general, no one-to-one correspondence between the individual terms of the two sums.4 Looking ahead, it will be convenient to represent not only the self-energy but also all other diagrams as sums of two terms, which we will distinguish by assigning a label \(u=1,2\) to the diagram index \(J\), i.e., \[\mathcal{R}_{\mathcal{S}}^{J}(\Omega)=\sum_{u=1,2}\mathcal{R}_{\mathcal{S}}^{J_{u}}(\Omega), \tag{44}\] where \(J_{1,2}\in\{\mathrm{SE}_{1,2},\mathrm{V}_{1,2},\mathrm{PAL}_{1,2},\mathrm{CAL}_{1,2}\}\).

Footnote 4: The rationale behind transitioning from \(\Im\Pi^{J}_{\mathcal{S}^{\prime},\mathrm{R}}(\Omega)\) to \(\mathcal{R}_{\mathcal{S}}^{J}(\Omega)\), which differ only in the labeling of the helicities, is mere convenience. Namely, it allows one to systematically collect contributions with similar behaviors into \(\mathcal{R}_{\mathcal{S}}^{J}(\Omega)\).

Note that whereas the subscript \(u\) refers to two topologically distinct diagrams for the SE case, its meaning for the V, PAL and CAL contributions is purely algebraic. For example, the contribution of the vertex diagram is represented by a sum of two terms in Eq. (A46), and similarly for the AL diagrams. We remind the reader that we chose \(\Omega>0\).
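A scalar illustration of the vanishing of same-half-plane pole integrals (a standard \(T=0\) identity, added here for clarity): \[\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,\frac{e^{i\omega 0^{+}}}{(i\omega-\xi_{1})(i\omega-\xi_{2})}=\frac{\theta(-\xi_{1})-\theta(-\xi_{2})}{\xi_{1}-\xi_{2}},\] which is non-zero only if \(\xi_{1}\) and \(\xi_{2}\) have opposite signs, i.e., only if the two poles lie in different halves of the complex \(\omega\) plane.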
With this choice, as shown in Appendix A, any of the \(\mathcal{R}_{\mathcal{S}}^{J_{u}}(\Omega)\) terms has the following structure \[\mathcal{R}_{\mathcal{S}}^{J_{u}}(\Omega)=K^{J_{u}}\int_{\mathbf{ k},\mathbf{p},\mathbf{q}}\int_{\nu} V_{\mathrm{st}}^{2}(\mathbf{q})\mathcal{T}_{\mathcal{S}}^{J_{u}} \left(\mathbf{k},\mathbf{p},\mathbf{q}\right)\mathcal{G}_{\mathcal{S}}^{J_{u }}(\mathbf{k},\mathbf{p},\mathbf{q},\Omega)\] \[\times\theta(\Omega+\xi_{\mathbf{k}}^{s_{3}})\theta(-\xi_{ \mathbf{k}}^{s_{3}})\theta(\xi_{\mathbf{k}+\mathbf{q}}^{s_{5}})\theta(-\xi_{ \mathbf{p}}^{s_{4}})\theta(\xi_{\mathbf{p}+\mathbf{q}}^{s_{6}})\delta(\Omega+ \nu+\xi_{\mathbf{k}}^{s_{3}}-\xi_{\mathbf{k}+\mathbf{q}}^{s_{5}})\delta(\nu+ \xi_{\mathbf{p}+\mathbf{q}}^{s_{6}}-\xi_{\mathbf{p}}^{s_{4}}). \tag{45}\] [A rather complicated form of Eq. (45) will be clarified later by an example of the \(\mathrm{SE}_{1}\) diagram; see Eq. (55) and Sec. IV.2.] Here, \(\int_{\mathbf{n}}\) is a shorthand for \(\int\mathrm{d}^{d}n/(2\pi)^{d}\), \(\int_{\nu}\) stands for \(\int_{-\infty}^{\infty}\mathrm{d}\nu/2\pi\), and \[K^{J_{u}}=\left\{\begin{array}{rl}-\pi^{2}/32,&\mathrm{for}\;J_{u}=\mathrm{ SE}_{1,2},\mathrm{CAL}_{1,2},\\ \pi^{2}/32,&\mathrm{for}\;J_{u}=\mathrm{V}_{1,2},\mathrm{PAL}_{1,2}.\end{array}\right. \tag{46}\] Further, \(\mathcal{T}_{\mathcal{S}}^{J_{u}}\) denote the trace of matrix products coming from the spinor wavefunctions and \(\mathcal{G}_{\mathcal{S}}^{J_{u}}\) are the products of the real parts of the Green's functions, given by \[\mathcal{T}_{\mathcal{S}}^{\mathrm{SE}_{1}}= \frac{1}{d}\operatorname{Tr}\left(\hat{\mathbf{v}}\hat{M}_{\mathbf{ k}}^{s_{1}}\hat{M}_{\mathbf{k}+\mathbf{q}}^{s_{5}}\hat{M}_{\mathbf{k}}^{s_{2}} \cdot\hat{\mathbf{v}}\hat{M}_{\mathbf{k}}^{s_{3}}\right)\] \[\times\operatorname{Tr}\left(\hat{M}_{-\mathbf{p}-\mathbf{q}}^{s_ {6}}\hat{M}_{-\mathbf{p}}^{s_{4}}\right),\] \[\mathcal{T}_{\mathcal{S}}^{\mathrm{SE}_{2}}= \frac{1}{d}\operatorname{Tr}\left(\hat{\mathbf{v}}\hat{M}_{- \mathbf{k}-\mathbf{q}}^{s_{1}}\hat{M}_{-\mathbf{k}}^{s_{3}}\hat{M}_{-\mathbf{ k}-\mathbf{q}}^{s_{5}}\cdot\hat{\mathbf{v}}\hat{M}_{-\mathbf{k}-\mathbf{q}}^{s_{5}}\right)\] \[\times\operatorname{Tr}\left(\hat{M}_{\mathbf{p}}^{s_{4}}\hat{M}_ {\mathbf{p}+\mathbf{q}}^{s_{6}}\right),\] \[\mathcal{G}_{\mathcal{S}}^{\mathrm{SE}_{1}}= \frac{1}{\Omega-\xi_{\mathbf{k}}^{s_{1}}+\xi_{\mathbf{k}}^{s_{3}} }\frac{1}{\Omega-\xi_{\mathbf{k}}^{s_{2}}+\xi_{\mathbf{k}}^{s_{3}}},\] \[\mathcal{G}_{\mathcal{S}}^{\mathrm{SE}_{2}}= \frac{1}{\Omega-\xi_{\mathbf{k}+\mathbf{q}}^{s_{5}}+\xi_{\mathbf{ k}+\mathbf{q}}^{s_{1}}}\frac{1}{\Omega-\xi_{\mathbf{k}+\mathbf{q}}^{s_{ 6}}+\xi_{\mathbf{k}+\mathbf{q}}^{s_{2}}}, \tag{47}\] \[\mathcal{T}_{\mathcal{S}}^{V_{1}}= \frac{1}{d}\operatorname{Tr}\left(\hat{\mathbf{v}}\hat{M}_{ \mathbf{k}}^{s_{1}}\hat{M}_{\mathbf{k}+\mathbf{q}}^{s_{5}}\cdot\hat{\mathbf{v}} \hat{M}_{\mathbf{k}+\mathbf{q}}^{s_{2}}\hat{M}_{\mathbf{k}}^{s_{3}}\right)\] \[\times\operatorname{Tr}\left(\hat{M}_{-\mathbf{p}-\mathbf{q}}^{s_ {6}}\hat{M}_{-\mathbf{p}}^{s_{4}}\right),\] \[\mathcal{T}_{\mathcal{S}}^{V_{2}}= \frac{1}{d}\operatorname{Tr}\left(\hat{\mathbf{v}}\hat{M}_{- \mathbf{k}-\mathbf{q}}^{s_{5}}\hat{M}_{-\mathbf{k}}^{s_{3}}\hat{M}_{-\mathbf{ k}-\mathbf{q}}^{s_{2}}\right)\] \[\times\operatorname{Tr}\left(\hat{M}_{\mathbf{p}}^{s_{4}}\hat{M}_ {\mathbf{p}+\mathbf{q}}^{s_{5}}\hat{M}_{\mathbf{p}}^{s_{4}}\hat{M}_{-\mathbf{ p}-\mathbf{q}}^{s_{5}}\right),\] \[\mathcal{G}_{\mathcal{S}}^{\mathrm{CAL}_{1}}= 
\frac{1}{\Omega-\xi_{\mathbf{p}}^{s_{2}}+\xi_{\mathbf{p}}^{s_{5}}}\frac{1}{\Omega-\xi_{\mathbf{k}}^{s_{1}}+\xi_{\mathbf{k}}^{s_{3}}},\] \[\mathcal{G}_{\mathcal{S}}^{\mathrm{CAL}_{2}}=\frac{1}{\Omega-\xi_{\mathbf{p}+\mathbf{q}}^{s_{6}}+\xi_{\mathbf{p}+\mathbf{q}}^{s_{2}}}\frac{1}{\Omega-\xi_{\mathbf{k}+\mathbf{q}}^{s_{5}}+\xi_{\mathbf{k}+\mathbf{q}}^{s_{1}}}. \tag{50}\] Here, \(\hat{M}_{\mathbf{l}}^{t}\) is the matrix part of the Green's function given by Eqs. (10b) and (19b) in 3D and 2D, respectively. Note that for all diagrams \(\mathcal{T}_{\mathcal{S}}^{J_{u}}\) are separable functions of the momenta \(\mathbf{k}\) and \(\mathbf{p}\), i.e., \[\mathcal{T}_{\mathcal{S}}^{J_{u}}(\mathbf{k},\mathbf{p},\mathbf{q})=\mathcal{T}_{1}^{J_{u}}(\mathbf{k},\mathbf{q})\mathcal{T}_{2}^{J_{u}}(\mathbf{p},\mathbf{q}). \tag{51}\] From the \(\theta\)-functions in Eq. (45), we see that the result is non-zero only if \(\xi_{\mathbf{k}+\mathbf{q}}^{s_{5}}>0\) and \(\xi_{\mathbf{p}+\mathbf{q}}^{s_{6}}>0\), which implies that \[s_{5}=s_{6}=+1, \tag{52}\] i.e., the corresponding solid lines in diagrams describe electrons in the conduction band. This is a particular instance of the general constraint discussed after Eq. (42), thanks to which the sum over helicities now contains only \(2^{4}=16\) instead of \(2^{6}=64\) terms. The remaining helicities form the subset \[\mathcal{S}_{A}\equiv\{s_{1},s_{2},s_{3},s_{4}\}. \tag{53}\] Therefore, the contribution of diagram \(J\) to the sum in Eq. (43) is given by the sum of 16 terms of the type \(\mathcal{R}_{\mathcal{S}_{A}}^{J}(\Omega)\): \[\Im\Pi_{\mathrm{R}}^{J}(\Omega)=\sum_{\mathcal{S}_{A}}\mathcal{R}_{\mathcal{S}_{A}}^{J}(\Omega)=\sum_{\mathcal{S}_{A},u}\mathcal{R}_{\mathcal{S}_{A}}^{J_{u}}(\Omega). \tag{54}\] Thus, the total retarded current-current correlation function, \(\Im\Pi_{\mathrm{R}}(\Omega)\), is given by [cf. Eqs. (40) and (43)]: \[\Im\Pi_{\mathrm{R}}(\Omega)=\sum_{J}\Im\Pi_{\mathrm{R}}^{J}(\Omega)=\sum_{\mathcal{S}_{A},J}\mathcal{R}_{\mathcal{S}_{A}}^{J}(\Omega)=\sum_{\mathcal{S}_{A},J_{u}}\mathcal{R}_{\mathcal{S}_{A}}^{J_{u}}(\Omega)\] \[=\sum_{s_{3},s_{4}}\int_{\mathbf{k},\mathbf{p},\mathbf{q},\nu}\mathcal{D}(\mathbf{k},\mathbf{p},\mathbf{q},\nu,\Omega)\times\sum_{J_{u},s_{1},s_{2}}K^{J_{u}}\mathcal{T}_{\mathcal{S}_{A}}^{J_{u}}(\mathbf{k},\mathbf{p},\mathbf{q})\mathcal{G}_{\mathcal{S}_{A}}^{J_{u}}(\mathbf{k},\mathbf{p},\mathbf{q},\Omega), \tag{55}\] where \[\mathcal{D}(\mathbf{k},\mathbf{p},\mathbf{q},\nu,\Omega)=\theta(\Omega+\xi_{\mathbf{k}}^{s_{3}})\theta(-\xi_{\mathbf{k}}^{s_{3}})\theta(\xi_{\mathbf{k}+\mathbf{q}}^{+})\theta(-\xi_{\mathbf{p}}^{s_{4}})\theta(\xi_{\mathbf{p}+\mathbf{q}}^{+})\times\delta(\Omega+\nu+\xi_{\mathbf{k}}^{s_{3}}-\xi_{\mathbf{k}+\mathbf{q}}^{+})\delta(\nu+\xi_{\mathbf{p}+\mathbf{q}}^{+}-\xi_{\mathbf{p}}^{s_{4}}) \tag{56}\] is the block of kinematic constraints represented by the theta- and delta-functions with Eq. (52) implemented. The \(\theta\)-functions reflect the Pauli principle, while the \(\delta\)-functions enforce energy conservation. Note that \(\mathcal{D}(\mathbf{k},\mathbf{p},\mathbf{q},\nu,\Omega)\) depends only on the helicities \(s_{3},s_{4}\) and is the same for all diagrams; therefore, it can be pulled out of the sum over \(J_{u}\), \(s_{1}\), and \(s_{2}\). From now on, we assume that the constraint Eq. (52) has already been implemented.

## IV Scattering processes

### Frequency thresholds

Different terms in Eq.
(54) start to contribute at frequencies above certain thresholds. These thresholds can be deduced from the kinematic constraints in Eq. (56), which depend only on the helicities \(s_{3},s_{4}\) and are the same for all diagram types. (For the reader's convenience, the helicity sets corresponding to different scattering processes are summarized in Table 1.) Equation (56) gives rise to the following kinematic constraints: \[\xi_{\mathbf{k}+\mathbf{q}}^{+}=\epsilon_{\mathbf{k}+\mathbf{q}}-E_{\mathrm{F}}>0,\ \xi_{\mathbf{p}+\mathbf{q}}^{+}=\epsilon_{\mathbf{p}+\mathbf{q}}-E_{\mathrm{F}}>0, \tag{57a}\] \[\xi_{\mathbf{k}}^{s_{3}}=s_{3}\epsilon_{\mathbf{k}}-E_{\mathrm{F}}<0,\ \xi_{\mathbf{p}}^{s_{4}}=s_{4}\epsilon_{\mathbf{p}}-E_{\mathrm{F}}<0,\] (57b) \[\Omega+\nu+\xi_{\mathbf{k}}^{s_{3}}=\xi_{\mathbf{k}+\mathbf{q}}^{+};\ \xi_{\mathbf{p}}^{s_{4}}-\nu=\xi_{\mathbf{p}+\mathbf{q}}^{+}. \tag{57c}\] The inequalities (57a) and (57c) imply that \[E_{\mathrm{F}}-\Omega-s_{3}\epsilon_{\mathbf{k}}<\nu<s_{4}\epsilon_{\mathbf{p}}-E_{\mathrm{F}}, \tag{58}\] which, in its turn, leads to \[s_{4}\epsilon_{\mathbf{p}}+s_{3}\epsilon_{\mathbf{k}}>2E_{\mathrm{F}}-\Omega. \tag{59}\] Making all possible choices of \(s_{4}=\pm 1\) and \(s_{3}=\pm 1\), we obtain the frequency thresholds which delineate three frequency regimes, as described in the following sections.

#### iv.1.1 All frequencies: \(0<\Omega<\omega_{\mathrm{D}}\)

The choice of \(s_{3}=s_{4}=+1\) corresponds to processes whose contributions start right at \(\Omega>0\) and continue up to \(\omega_{\mathrm{D}}=2E_{\mathrm{F}}\) (and beyond). Combining Eqs. (57b) and (59), we see that the dispersions \(\epsilon_{\mathbf{k}}\) and \(\epsilon_{\mathbf{p}}\) are constrained by the following inequalities: \[0<\epsilon_{\mathbf{k}}<E_{\mathrm{F}},\ 0<\epsilon_{\mathbf{p}}<E_{\mathrm{F}},\ \epsilon_{\mathbf{k}}+\epsilon_{\mathbf{p}}>2E_{\mathrm{F}}-\Omega. \tag{61}\] Geometrically, these constraints are shown in Fig. 3\(a\). At \(\Omega=0\), the slanted line \(\epsilon_{\mathbf{k}}+\epsilon_{\mathbf{p}}=2E_{\mathrm{F}}-\Omega\) touches the corner of the square, which is formed by the horizontal line \(\epsilon_{\mathbf{k}}=E_{\mathrm{F}}\) and the vertical line \(\epsilon_{\mathbf{p}}=E_{\mathrm{F}}\). For \(0<\Omega<2E_{\mathrm{F}}\), the slanted line \(\epsilon_{\mathbf{k}}+\epsilon_{\mathbf{p}}=2E_{\mathrm{F}}-\Omega\) cuts through the square, such that the allowed values of \(\epsilon_{\mathbf{k}}\) and \(\epsilon_{\mathbf{p}}\) lie in the diagonally hatched region. This regime includes processes of pure intra-band absorption (\(s_{1}=s_{2}=+1\)), when all six states are in the conduction band, and dissipation occurs in the same way as in a DFL [26]. In addition, this regime includes scattering processes between electrons and holes. With four out of six helicities chosen positive (\(s_{3}=s_{4}=s_{5}=s_{6}=+1\)), either one of the helicities \(s_{1}\) and \(s_{2}\), or both of them, can be negative. Therefore, such scattering processes involve up to two states in the valence band, while the numbers of electrons and holes are not conserved separately. As we discussed in Sec. I, absorption due to all processes described above is absent within the model of a gapped semiconductor with parabolic bands and the interaction Hamiltonian given in Eq. (16), which was considered in Refs. [27; 31; 32; 33; 34].
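The helicity bookkeeping behind Table 1 can be reproduced with a short script (an illustrative sketch added here, not part of the original analysis). The threshold assignments for the mixed-helicity sets anticipate the next subsection, while the \(s_{3}=s_{4}=-1\) sets are blocked below \(\omega_{\mathrm{D}}\) by Eq. (59):

```python
from itertools import product
from collections import Counter

# Enumerate all 2^6 helicity sets S = (s1, ..., s6);
# Eq. (52) forces s5 = s6 = +1, leaving 2^4 = 16 sets.
surviving = [s for s in product((+1, -1), repeat=6) if s[4] == s[5] == +1]
assert len(surviving) == 16

def threshold(s):
    """Onset frequency of a helicity set, in units of E_F [cf. Eq. (59)]."""
    s3, s4 = s[2], s[3]
    if s3 == s4 == +1:
        return 0.0  # ee, eh1, eh2: contribute for all 0 < Omega
    if s3 == -s4:
        return 1.0  # Auger-Meitner-like: open at omega_I = E_F
    return 2.0      # s3 = s4 = -1: blocked below omega_D = 2 E_F

print(Counter(threshold(s) for s in surviving))
# Counter({1.0: 8, 0.0: 4, 2.0: 4}): 4 sets active at all frequencies,
# 8 AM-like sets opening at omega_I, and 4 sets opening only above omega_D.
```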
#### iv.1.2 Intermediate frequencies: \(\omega_{I}\leq\Omega<\omega_{D}\)

In addition to the still active _ee_ and _eh_ processes, described in the previous section, another type of _eh_ processes contributes to the conductivity in the intermediate-frequency regime, defined as \(\omega_{\rm I}\leq\Omega<\omega_{\rm D}\). This regime corresponds to the following helicity choices: 1) \(s_{3}=-1\), \(s_{4}=+1\) and 2) \(s_{3}=+1\), \(s_{4}=-1\). For the first choice, Eqs. (57b) and (59) imply that \[0<\epsilon_{\bf k}<\infty,\ 0<\epsilon_{\bf p}<E_{\rm F},\] \[\epsilon_{\bf p}-\epsilon_{\bf k}>2E_{\rm F}-\Omega=E_{\rm F}-\delta\Omega, \tag{62}\] where \(\delta\Omega\equiv\Omega-E_{\rm F}\). Geometrically, these constraints are depicted in Fig. 3\(b\). The constraints are satisfied if the line \(\epsilon_{\bf p}-\epsilon_{\bf k}=E_{\rm F}-\delta\Omega\) cuts across the semi-infinite band, defined by the inequalities \(0<\epsilon_{\bf k}<\infty\) and \(0<\epsilon_{\bf p}<E_{\rm F}\), which is only possible if \(\Omega>E_{\rm F}=\omega_{\rm I}\). The contribution from the second choice, \(s_{4}=-1,s_{3}=+1\), can be re-written in terms of the first one via an appropriate re-labelling of helicities, and thus this case does not need to be analyzed separately. The threshold \(\Omega=\omega_{\rm I}\) demarcates the onset of AM-like processes, first introduced in the context of doped semiconductors in Ref. [27] and further studied in Refs. [31; 32; 33; 34]. Figure 4 depicts two kinds of AM processes that occur for \(s_{3}=-s_{4}=-1\) (panel _a_) and \(s_{3}=-s_{4}=+1\) (panel _b_). In Fig. 4\(a\), an incoming photon of energy \(\Omega>\omega_{\rm I}\) creates a hole state and a virtual state at the same momentum. The virtual state decays into an electron and a particle-hole pair, formed by two electron states with energy \(\nu\). The particle-hole pair and the electron then decay into another virtual state, which annihilates the hole, and the photon is emitted back. In Fig. 4\(b\), an incoming photon creates an electron and a virtual state. The virtual state decays into a real electron and a particle-hole pair, formed by the electron in the conduction band and the hole in the valence band. Finally, the virtual state annihilates the electron, and the photon is emitted back.

#### iv.1.3 High frequencies: \(\Omega>\omega_{\rm D}\)

As we said before, absorption for \(\Omega>\omega_{\rm D}\) occurs even in the absence of electron-electron interaction. The corresponding optical conductivity plateaus at the universal value in 2D and increases linearly with frequency in 3D. Electron-electron interaction gives rise to logarithmic renormalizations of the velocity and coupling constant [1; 10; 11], which occur already to first order in the static interaction. Dissipative processes, considered in this paper, contribute only to second order in the interaction (cf. Fig. 2) and thus can be neglected in this frequency range. As a final remark for this section, we note that, in addition to being independent of the diagram type, the frequency thresholds are also independent of a particular form of the dispersion and dimensionality.

### Archetypal contributions to the optical conductivity

We now analyze the structure of \(\mathcal{R}^{J_{u}}_{\mathcal{S}_{A}}\) [Eq. (55)], using one of the self-energy diagrams, namely, SE\({}_{1}\) in Fig. 2, as an example. As follows from Eq.
(55), the contribution of this diagram can be written as \[\sum_{\mathcal{S}_{A}}\mathcal{R}^{\rm SE_{1}}_{\mathcal{S}_{A}}(\Omega)=-\frac{\pi^{2}}{32}\int_{\bf k,p,q}\theta(-\xi_{\bf k}^{s_{3}})\theta(\Omega+\xi_{\bf k}^{s_{3}})\sum_{\mathcal{S}_{A}}\frac{\mathcal{T}^{\rm SE_{1}}_{\mathcal{S}_{A}}(\bf k,p,q)}{(\Omega-\xi_{\bf k}^{s_{1}}+\xi_{\bf k}^{s_{3}})(\Omega-\xi_{\bf k}^{s_{2}}+\xi_{\bf k}^{s_{3}})}\mathcal{U}(\bf k,p,q,\Omega+\xi_{\bf k}^{s_{3}}), \tag{64}\] where we used the third line of Eq. (47) for \(\mathcal{G}^{\rm SE_{1}}_{\mathcal{S}_{A}}\), \(\mathcal{T}^{\rm SE_{1}}_{\mathcal{S}_{A}}\) is given by the first line of the same equation, and \[\mathcal{U}(\bf k,p,q,\omega)=\int_{\nu}V_{\rm st}^{2}(\bf q)\theta(\xi_{\bf k+q}^{+})\theta(-\xi_{\bf p}^{s_{4}})\theta(\xi_{\bf p+q}^{+})\delta(\omega+\nu-\xi_{\bf k+q}^{+})\delta(\nu+\xi_{\bf p+q}^{+}-\xi_{\bf p}^{s_{4}}). \tag{65}\]

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Frequency range & Type & \(s_{1}\) & \(s_{2}\) & \(s_{3}\) & \(s_{4}\) & \(s_{5}\) & \(s_{6}\) \\ \hline \(0<\Omega<\omega_{\rm D}\) & _ee_ & +1 & +1 & +1 & +1 & +1 & +1 \\ \hline \(0<\Omega<\omega_{\rm D}\) & _eh1_ & \(\pm 1\) & \(\mp 1\) & +1 & +1 & +1 & +1 \\ \hline \(0<\Omega<\omega_{\rm D}\) & _eh2_ & \(-1\) & \(-1\) & +1 & +1 & +1 & +1 \\ \hline \(\omega_{\rm I}<\Omega<\omega_{\rm D}\) & AM & \(\pm 1\) & \(\pm 1\) & \(\pm 1\) & \(\mp 1\) & +1 & +1 \\ \hline \end{tabular} \end{table} Table 1: Summary of the helicity sets for different scattering processes. Here \(\omega_{\rm D}=2E_{\rm F}\), \(\omega_{\rm I}=E_{\rm F}\), while _ee_, _eh1_, _eh2_ stand for absorption processes involving electrons only, one hole, and two holes, respectively. Note that for Auger-Meitner (AM) processes, the choices of \(s_{1}=\pm 1\) and \(s_{2}=\pm 1\) are not correlated either to each other or to the choices of \(s_{3}\) and \(s_{4}\), while the choices of \(s_{3}\) and \(s_{4}\) are correlated to each other. Examples of diagrams involving _eh1_ and _eh2_ processes are shown in Fig. 5; examples of diagrams involving AM processes are shown in Fig. 6.

The structure of the expressions above can be understood by comparing them to their counterparts for the scalar case, when the trace part in Eq. (64) is equal to unity. For our choice of \(\Omega>0\), the theta-functions in Eq. (64) come from the difference of the Fermi functions in the current-current bubble of the \(\mathrm{SE}_{1}\) diagram, where the imaginary part of the Green's function at the bottom of the \(\mathrm{SE}_{1}\) diagram was replaced by the \(\delta\)-function, and the ensuing constraint on the frequency (\(\omega=\xi_{\mathbf{k}}^{s_{3}}\)) was resolved. Next, with the trace part replaced by unity, the integral of \(\mathcal{U}(\mathbf{k},\mathbf{p},\mathbf{q},\Omega+\xi_{\mathbf{k}}^{s_{3}})\) in Eq. (65) over \(\mathbf{p}\) and \(\mathbf{q}\) gives the imaginary part of the self-energy at momentum
\(\mathbf{k}\) and frequency \(\omega=\Omega+\xi_{\mathbf{k}}^{s_{3}}\). The denominator of the integrand in Eq. (64) comes from the product of the real parts of two Green's functions adjacent to the self-energy block. As noted earlier, the theta- and delta-function constraints are the same for all diagrams, with the only difference being the scalar factors \(K^{J_{u}}\), the trace factors \(\mathcal{T}_{\mathcal{S}}^{J_{u}}\), and the products \(\mathcal{G}_{\mathcal{S}}^{J_{u}}\). Thus, similarly, the contribution from the V\({}_{1}\) diagram is given by \[\sum_{\mathcal{S}_{A}}\mathcal{R}_{\mathcal{S}_{A}}^{\text{V}_{1}}(\Omega)=\frac{\pi^{2}}{32}\int_{{\bf k},{\bf p},{\bf q}}\theta(-\xi_{{\bf k}}^{s_{3}})\theta(\Omega+\xi_{{\bf k}}^{s_{3}})\sum_{\mathcal{S}_{A}}\frac{\mathcal{T}_{\mathcal{S}_{A}}^{\text{V}_{1}}({\bf k},{\bf p},{\bf q})}{(\Omega-\xi_{{\bf k}}^{s_{1}}+\xi_{{\bf k}}^{s_{3}})(\Omega-\xi_{{\bf k}+{\bf q}}^{+}+\xi_{{\bf k}+{\bf q}}^{s_{2}})}\mathcal{U}({\bf k},{\bf p},{\bf q},\Omega+\xi_{{\bf k}}^{s_{3}}), \tag{66}\] and so on for other values of \(J_{u}\). For the reader's convenience, the analytic results for the optical conductivity are summarized in Tables 2 and 3 for Hubbard and Coulomb interactions, respectively.

## V Optical Conductivity of a 3D Dirac Metal

In this section, we derive the analytic results for the optical conductivity of a 3D Dirac metal.

### Lowest frequencies: \(\Omega\ll E_{\bf F}\)

This is the case with \(s_{3}=s_{4}=+1\) (cf. Sec. IV.1.1). With \(s_{3}\) and \(s_{4}\) being fixed, the only free helicities remaining are \(s_{1}\) and \(s_{2}\). The case \(s_{1}=s_{2}=+1\) corresponds to a purely intra-band absorption, with all states being in the conduction band. The cases of \(s_{1}=-s_{2}=\pm 1\) and \(s_{1}=s_{2}=-1\) correspond to absorption due to scattering processes which involve up to two holes.
#### v.1.1 Intra-band absorption due to electron-electron interaction

We start with purely intra-band absorption due to electron-electron (\(ee\)) interaction, when all the helicities are positive: \(s_{i}=+1\), \(i=1\dots 6\). Because the hole states in this case are totally passive, one can view the system as a FL, which is isotropic yet not Galilean-invariant due to the non-parabolicity of the electron spectrum, i.e., as a DFL. The absorption probability in this case is severely restricted by momentum conservation. In Refs. [26; 39; 40; 41] it was shown that, for the single-band case, momentum conservation brings in a factor of the "velocity imbalance", \((\Delta{\bf v})^{2}\), to the integrand of the expression for the conductivity. Here, \(\Delta{\bf v}\) is the difference between the velocities of the initial and final states of an _ee_ scattering process.

\begin{table} \begin{tabular}{c c c} Frequency range & \(\eta_{3}\) & \(\eta_{2}\) \\ \hline \(\Omega\ll E_{\text{F}}\) & \(\frac{4}{175\pi}\left(\frac{\Omega}{E_{\text{F}}}\right)^{2}\) ; (90) & \(\left(\frac{1}{80\pi^{2}}\ln\frac{E_{\text{F}}}{\Omega}+\frac{5\ln 2+4}{200\pi^{2}}\right)\left(\frac{\Omega}{E_{\text{F}}}\right)^{2}\) ; (107) \\ \hline \(0<\delta\Omega\ll E_{\text{F}}\) & \(\frac{71}{3240\pi}\left(\frac{\delta\Omega}{E_{\text{F}}}\right)^{5}\) ; (94) & \(\frac{5}{108\sqrt{3}\pi}\left(\frac{\delta\Omega}{E_{\text{F}}}\right)^{4}\) ; (108) \\ \end{tabular} \end{table} Table 2: Summary of the analytic results for the optical conductivity of 2D and 3D Dirac metals, \(\Re\sigma(\Omega)\), with Hubbard interaction. Here, \(\eta_{d}=\Re\sigma(\Omega)/\sigma_{0d}\alpha_{\text{H}}^{2}N^{2}\), \(\sigma_{0d}=e^{2}k_{\text{F}}^{d-2}/\hbar\), \(k_{\text{F}}\) is the Fermi momentum, \(d=2,3\) is the spatial dimensionality, \(\alpha_{\text{H}}\) is the dimensionless coupling constant of the Hubbard interaction [Eq. (27)], and \(N\) is the number of flavors. The results are valid in two regions: at the lowest frequencies (first row), \(\Omega\ll E_{\text{F}}\), and just above \(\omega_{\rm I}=E_{\text{F}}\) (second row), where AM processes start to contribute, i.e., for \(0\leq\delta\Omega\equiv\Omega-\omega_{\rm I}\ll E_{\text{F}}\), where \(\omega_{\rm I}=E_{\text{F}}\) is the indirect threshold. Equation numbers after the formulas refer to their locations in the text.

The same factor appears in our case as well. To see this, we first note that in the _ee_ case the denominators of the fractions in Eqs. (64) and (66) are reduced to a factor of \(\Omega^{2}\) (and the same is true for other contributions). Next, as shown in Appendix C, the sum of the trace parts of all diagrams in Fig. 2 is given by \[\mathcal{T}_{\mathcal{S}_{+}}=\sum_{J_{u}}K^{J_{u}}\mathcal{T}_{\mathcal{S}_{+}}^{J_{u}}(\mathbf{k},\mathbf{p},\mathbf{q})=-\frac{\pi^{2}}{64}(\Delta\mathbf{v})^{2}\left|\Phi_{\mathbf{p},\mathbf{p}+\mathbf{q}}^{+,+}\right|^{2}\left|\Phi_{\mathbf{k},\mathbf{k}+\mathbf{q}}^{+,+}\right|^{2}, \tag{67}\] where \(K^{J_{u}}\) is defined in Eq. (46), \(\mathcal{S}_{+}\) denotes the set \(\{s_{1}=+1,s_{2}=+1\ldots s_{6}=+1\}\), \(J_{1,2}\in\{\mathrm{SE}_{1,2},\mathrm{V}_{1,2},\mathrm{PAL}_{1,2},\mathrm{CAL}_{1,2}\}\), \[\Delta\mathbf{v}=\mathbf{v}_{\mathbf{k}}^{+,+}+\mathbf{v}_{\mathbf{p}+\mathbf{q}}^{+,+}-\mathbf{v}_{\mathbf{k}+\mathbf{q}}^{+,+}-\mathbf{v}_{\mathbf{p}}^{+,+}, \tag{68}\] \(\mathbf{v}_{\mathbf{k}}^{+,+}\) is the matrix element of the velocity operator between electron-like states, given by Eq.
(13), and \(\Phi_{\mathbf{k},\mathbf{k}^{\prime}}^{+,+}=\langle\mathbf{k},+|\mathbf{k}^{\prime},+\rangle\) is the overlap matrix element of two electron-like states.5

Footnote 5: For \(\mathbf{k}\to\mathbf{k}^{\prime}\), \(\Phi_{\mathbf{k},\mathbf{k}^{\prime}}^{+,+}\to 1\), and Eq. (67) is reduced to the result of Ref. [26], which considered a Dirac metal with long-range Coulomb interaction.

\(\Delta\mathbf{v}\) in Eq. (68) is the change in the total velocity (proportional to the current) due to a collision between two electrons with initial momenta \(\mathbf{k}\) and \(\mathbf{p}+\mathbf{q}\), and final momenta \(\mathbf{k}+\mathbf{q}\) and \(\mathbf{p}\), respectively. In a Galilean-invariant system, momentum-conserving electron-electron scattering does not lead to current relaxation and thus does not affect the conductivity. Indeed, we see that \(\Delta\mathbf{v}=0\) if \(\mathbf{v}_{\mathbf{k}}^{+,+}=\mathbf{k}/m\) with \(m\) being the electron mass. A Dirac metal has a finite conductivity only inasmuch as it violates Galilean invariance. Furthermore, even if the system is not Galilean-invariant but isotropic, \(\Delta\mathbf{v}\) vanishes if all the momenta in Eq. (68) are projected onto the Fermi surface and, to get a finite conductivity, one needs to expand \(\Delta\mathbf{v}\) near the Fermi surface. For \(\Omega\ll E_{\mathrm{F}}\), a typical deviation of the quasiparticle energy from the Fermi energy is on the order of \(\Omega\). Then \((\Delta\mathbf{v})^{2}\) can be estimated as \[(\Delta\mathbf{v})^{2}\sim w^{2}\left(\frac{\Omega}{k_{\mathrm{F}}}\right)^{2}, \tag{69}\] where the "non-parabolicity coefficient" \[w=1-\frac{1}{2}\frac{\mathrm{d}^{2}\epsilon_{\mathbf{k}}}{\mathrm{d}k^{2}}\frac{\mathrm{d}(k^{2})}{\mathrm{d}\epsilon_{\mathbf{k}}}\Big{|}_{k=k_{\mathrm{F}}} \tag{70}\] quantifies a deviation from Galilean invariance [26]. Introducing a gapped Dirac spectrum, \(\epsilon_{\mathbf{k}}=\sqrt{v_{\mathrm{D}}^{2}k^{2}+\Delta^{2}}\), for a moment, we get \[w=1-\frac{\Delta^{2}}{(\Delta+E_{\mathrm{F}})^{2}}. \tag{71}\] For \(E_{\mathrm{F}}\gg\Delta\), the Dirac spectrum is almost linear, and thus the deviation from the Galilean-invariant case is the strongest. In this case, \(w=1-\Delta^{2}/E_{\mathrm{F}}^{2}\approx 1\). For \(E_{\mathrm{F}}\ll\Delta\), the gapped Dirac spectrum is almost parabolic and, correspondingly, \(w\) is small: \(w\approx 2E_{\mathrm{F}}/\Delta\ll 1\). To obtain an order-of-magnitude estimate for the conductivity due to _ee_ interaction, one can replace the trace part of Eq. (64) by \((\Delta\mathbf{v})^{2}\), and use Eq. (69) with \(w=1\) for \((\Delta\mathbf{v})^{2}\) (gapless Dirac spectrum). This yields the following estimate for the conductivity \[\Re\sigma_{\mathrm{ee}}(\Omega)\sim\frac{e^{2}}{\Omega^{3}}\int_{\mathbf{k}}\theta(-\xi_{\mathbf{k}}^{+})\theta(\Omega+\xi_{\mathbf{k}}^{+})|\Im\Sigma_{\mathrm{ee}}(\mathbf{k},\Omega+\xi_{\mathbf{k}}^{+})|(\Delta\mathbf{v})^{2}. \tag{72}\]

Figure 4: Examples of Auger-Meitner–like processes corresponding to two different helicity sets: \(s_{3}=-1,s_{4}=+1\) (panel _a_) and \(s_{3}=+1,s_{4}=-1\) (panel _b_). The state on the horizontal dashed line is a virtual (off-shell) one.

As discussed just below Eq. (65), the theta-function constraints in the equation above come from the current-current bubble with the choice of \(\Omega>0\).
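As a check of Eqs. (70) and (71) (a short derivation added for clarity): for \(\epsilon_{\mathbf{k}}=\sqrt{v_{\mathrm{D}}^{2}k^{2}+\Delta^{2}}\), \[\frac{\mathrm{d}\epsilon_{\mathbf{k}}}{\mathrm{d}k}=\frac{v_{\mathrm{D}}^{2}k}{\epsilon_{\mathbf{k}}},\quad\frac{\mathrm{d}^{2}\epsilon_{\mathbf{k}}}{\mathrm{d}k^{2}}=\frac{v_{\mathrm{D}}^{2}\Delta^{2}}{\epsilon_{\mathbf{k}}^{3}},\quad\frac{\mathrm{d}(k^{2})}{\mathrm{d}\epsilon_{\mathbf{k}}}=\frac{2\epsilon_{\mathbf{k}}}{v_{\mathrm{D}}^{2}},\] so that Eq. (70) yields \(w=1-\Delta^{2}/\epsilon_{k_{\mathrm{F}}}^{2}\), which reduces to Eq. (71) once the Fermi energy is measured from the bottom of the conduction band, \(\epsilon_{k_{\mathrm{F}}}=\Delta+E_{\mathrm{F}}\).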
Furthermore, \[\Im\Sigma_{\rm ee}(\mathbf{k},\omega)\sim-\int_{\nu}\int_{\mathbf{p},\mathbf{q}}V_{\rm st}^{2}(\mathbf{q})\theta(\xi_{\mathbf{k}+\mathbf{q}}^{+})\theta(-\xi_{\mathbf{p}}^{+})\theta(\xi_{\mathbf{p}+\mathbf{q}}^{+})\delta(\omega+\nu-\xi_{\mathbf{k}+\mathbf{q}}^{+})\delta(\nu+\xi_{\mathbf{p}+\mathbf{q}}^{+}-\xi_{\mathbf{p}}^{+}) \tag{73}\] is the imaginary part of the self-energy due to _ee_ interaction. As long as \(\Omega\ll E_{\rm F}\), typical electronic momenta are close to \(k_{\rm F}\); therefore, \(\xi_{\mathbf{k}}^{+}\sim\Omega\), and the integral over \(\mathbf{k}\) gives a factor of \(\mathcal{N}_{\rm F,3}\Omega\). Thus, \[\Re\sigma_{\rm ee}(\Omega)\sim e^{2}\mathcal{N}_{\rm F,3}\frac{|\Im\Sigma_{\rm ee}(\Omega)|}{\Omega^{2}}\left(\frac{\Omega}{k_{\rm F}}\right)^{2}, \tag{74}\] where \(\Sigma_{\rm ee}(\Omega)\equiv\Sigma_{\rm ee}(k_{\rm F},\Omega)\). For the Hubbard case, the self-energy is of the usual FL form, \[\Im\Sigma_{\rm ee}(\Omega)\sim-(N\alpha_{\rm H})^{2}\frac{\Omega^{2}}{E_{\rm F}}, \tag{75}\] and thus \[\Re\sigma_{\rm ee}(\Omega)\sim e^{2}k_{\rm F}(N\alpha_{\rm H})^{2}\left(\frac{\Omega}{E_{\rm F}}\right)^{2}. \tag{76}\] A detailed calculation presented in Appendix B gives \[\Re\sigma_{\rm ee}(\Omega)=\frac{38}{4725\pi}\frac{e^{2}k_{\rm F}}{\hbar}(N\alpha_{\rm H})^{2}\left(\frac{\Omega}{E_{\rm F}}\right)^{2}, \tag{77}\] which agrees with the estimate (76). The Coulomb case for \(\Omega\ll\omega_{\rm p3}\ll E_{\rm F}\) is similar to the Hubbard one, in the sense that the self-energy is also of the canonical FL form, except for a different coupling constant: \[\Im\Sigma_{\rm ee}(\Omega)\sim-\frac{\kappa_{3}}{k_{\rm F}}\frac{\Omega^{2}}{E_{\rm F}}. \tag{78}\] Consequently, the conductivity is obtained by replacing \((N\alpha_{\rm H})^{2}\) with \(\kappa_{3}/k_{\rm F}\) in Eq. (76), \[\Re\sigma_{\rm ee1}^{\rm C}(\Omega)\sim e^{2}\kappa_{3}\left(\frac{\Omega}{E_{\rm F}}\right)^{2}. \tag{79}\] The actual calculation gives \[\Re\sigma_{\rm ee1}^{\rm C}(\Omega)=\frac{1}{480}\frac{e^{2}k_{\rm F}}{\hbar}\alpha_{\rm C}\left(\frac{\Omega}{E_{\rm F}}\right)^{2}, \tag{80}\] which agrees with the estimate in Eq. (79).6

Footnote 6: We are using this opportunity to point out that the numerical coefficient in the result for the same quantity in Ref. [26] by a subset of the current authors (PS and DLM) is incorrect.

In the range of frequencies \(\omega_{\rm p3}\ll\Omega\ll E_{\rm F}\), electrons interact with their own plasmon modes. In this regime, we can replace the screened Coulomb potential with the bare one, as specified in Eq. (36). Recalling also that the imaginary part of the retarded polarization bubble behaves as \(\Im\pi_{0,\rm R}(\mathbf{q},\omega)\sim\mathcal{N}_{\rm F,3}\omega/v_{\rm F}q\) for \(|\omega|/v_{\rm F}\leq q\ll k_{\rm F}\), we obtain the following estimate for the imaginary part of the self-energy \[\Im\Sigma_{\rm ee}(\Omega)\sim-\frac{\kappa_{3}^{4}}{\mathcal{N}_{\rm F,3}v_{\rm F}^{2}}\int_{0}^{\Omega}\mathrm{d}\nu\,\nu\int_{\max\{\Omega,\Omega-\nu\}/v_{\rm F}}^{\infty}\frac{\mathrm{d}q}{q^{4}}\sim-\frac{\kappa_{3}^{2}}{k_{\rm F}^{2}}\frac{\omega_{\rm p3}^{2}}{\Omega}. \tag{81}\] A crossover between Eqs. (78) and (81) occurs at \(\Omega\sim\omega_{\rm p3}\), as it should. Substituting Eq.
(81) into (74), we obtain \[\Re\sigma_{\rm ee2}^{\rm C}(\Omega)\sim e^{2}\frac{\kappa_{3}^{4}}{k_{\rm F}^{3}}\frac{E_{\rm F}}{\Omega}\sim\frac{e^{6}k_{\rm F}^{2}}{\hbar v_{\rm D}\Omega}, \tag{82}\] and the actual calculation gives \[\Re\sigma_{\rm ee2}^{\rm C}(\Omega)=\frac{(3-4\ln 2)}{24\pi}\frac{e^{2}k_{\rm F}}{\hbar}\alpha_{\rm C}^{4}\left(\frac{E_{\rm F}}{\Omega}\right), \tag{83}\] which matches the estimate (82). Equations (80) and (83) imply that the conductivity exhibits a maximum at \(\Omega\sim\omega_{\rm p3}\).

Figure 5: Examples of single-hole (_a_-_c_) and two-hole (_d_) diagrams. Solid (dashed) lines depict the Green’s functions in the diagonal basis [Eq. (10c)] for positive (negative) helicities. The filled and blank circles denote matrix elements of _ee_ and _eh_ interactions, respectively. The filled and blank squares denote the intra- and inter-band current vertices, respectively.

#### v.1.2 Absorption processes involving up to two holes

We now turn to absorption processes that involve holes. There are two types of such processes: with one hole (_eh1_) and with two holes (_eh2_). Recalling that \(s_{3}=s_{4}=+1\) in the all-frequencies regime (cf. Sec. IV.1.1), we have two choices: \(s_{1}=-s_{2}=\pm 1\), which corresponds to _eh1_, and \(s_{1}=s_{2}=-1\), which corresponds to _eh2_. Examples of _eh1_ and _eh2_ diagrams are shown in Fig. 5, where the solid and dashed lines depict the electron and hole Green's functions, respectively, given by Eq. (10c) with \(s=\pm 1\). We first look at the _eh1_ case, when the sum over helicities in the second line of Eq. (66) contains two terms: one with \(s_{1}=+1,s_{2}=-1\) and another one with \(s_{1}=-1,s_{2}=+1\). In _eh1_ diagrams (Fig. 5_a_-_c_), one of the current vertices is of the intra-band type while the other one is of the inter-band type. In self-energy diagrams \(a\) and \(b\), the current vertices enter at the same momenta and are thus orthogonal to each other, see Eqs. (13) and (14). Therefore, the _eh1_ self-energy diagrams vanish. On the other hand, vertex-type diagrams, e.g., diagram \(c\) in Fig. 5, contain current vertices at different momenta, which are not orthogonal to each other, and thus the vertex contribution is finite. In what follows, we will analyze the _eh1_ vertex diagrams, whose general algebraic structure is given by Eq. (66). As soon as a scattering process involves at least one hole, constraints due to momentum conservation are lifted, and the factor of \((\Delta\mathbf{v})^{2}\) does not bring an additional smallness to the result. However, in contrast to the _ee_ case, typical energies involved are now on the order of \(E_{\mathrm{F}}\) rather than \(\Omega\), and the _eh1_ contribution to the conductivity still scales as \(\Omega^{2}\). Indeed, the sum over \(s_{1}=-s_{2}=\pm 1\) in Eq. (66) gives a factor of \(1/[(\Omega-2\epsilon_{\mathbf{k}+\mathbf{q}})(\Omega+2\epsilon_{\mathbf{k}})]\) which, for \(\Omega\ll E_{\mathrm{F}}\) and \(\epsilon_{\mathbf{k}},\epsilon_{\mathbf{k}+\mathbf{q}}\approx E_{\mathrm{F}}\), is of order \(1/E_{\mathrm{F}}^{2}\), as opposed to \(1/\Omega^{2}\) for the _ee_ case, cf. Eq. (74). Next, the intra- and inter-band matrix elements of the velocity can be estimated as \(v_{\mathrm{D}}\). Finally, a junction between a dashed and a solid line brings in an inter-band matrix element, \(\langle\bar{\mathbf{k}},+|\bar{\mathbf{k}}^{\prime},-\rangle\), where \(\bar{\mathbf{k}}\) and \(\bar{\mathbf{k}}^{\prime}\) are the typical electron momenta.
With all of the above taken into account, the _eh1_ contribution to the conductivity can be estimated as \[\Re\sigma_{\mathrm{eh1}}\sim e^{2}\mathcal{N}_{\mathrm{F},3}v_{\mathrm{D}}^{2}\left|\langle\bar{\mathbf{k}},+|\bar{\mathbf{k}}^{\prime},-\rangle\right|\frac{|\Im\Sigma_{\mathrm{ee}}(\Omega)|}{E_{\mathrm{F}}^{2}}. \tag{84}\] For Hubbard interaction, \(\Im\Sigma_{\mathrm{ee}}(\Omega)\) is given by Eq. (75), while \(|\bar{\mathbf{k}}-\bar{\mathbf{k}}^{\prime}|\sim k_{\mathrm{F}}\) and thus \(\left|\langle\bar{\mathbf{k}},+|\bar{\mathbf{k}}^{\prime},-\rangle\right|\sim 1\). Then \[\Re\sigma_{\mathrm{eh1}}(\Omega)\sim e^{2}k_{\mathrm{F}}(N\alpha_{\mathrm{H}})^{2}\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}, \tag{85}\] which is of the same order as the _ee_ contribution, Eq. (76). The actual calculation of the _eh1_ contribution gives \[\Re\sigma_{\mathrm{eh1}}(\Omega)=\frac{4}{945\pi}\frac{e^{2}k_{\mathrm{F}}}{\hbar}(N\alpha_{\mathrm{H}})^{2}\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}, \tag{86}\] which matches the estimate in Eq. (85). The two-hole case is similar to the single-hole one, except that now there are two matrix elements between electron and hole states, see Fig. 5\(d\). Therefore, the _eh2_ contribution to the conductivity can be estimated as \[\Re\sigma_{\mathrm{eh2}}(\Omega)\sim e^{2}\mathcal{N}_{\mathrm{F},3}v_{\mathrm{D}}^{2}\left|\langle\bar{\mathbf{k}},+|\bar{\mathbf{k}}^{\prime},-\rangle\right|^{2}\frac{|\Im\Sigma_{\mathrm{ee}}(\Omega)|}{E_{\mathrm{F}}^{2}}. \tag{87}\] For Hubbard interaction, the matrix element is on the order of unity, and \[\Re\sigma_{\mathrm{eh2}}(\Omega)\sim\Re\sigma_{\mathrm{eh1}}(\Omega)\sim\Re\sigma_{\mathrm{ee}}(\Omega), \tag{88}\] whereas the actual calculation gives \[\Re\sigma_{\mathrm{eh2}}(\Omega)=\frac{2}{189\pi}\frac{e^{2}k_{\mathrm{F}}}{\hbar}(N\alpha_{\mathrm{H}})^{2}\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}. \tag{89}\] The final result for the conductivity due to Hubbard interaction is the sum of the _ee_, _eh1_, and _eh2_ contributions, given by Eqs. (77), (86), and (89) (indeed, \(38/4725+4/945+2/189=108/4725=4/175\)): \[\Re\sigma(\Omega)=\frac{4}{175\pi}\frac{e^{2}k_{\mathrm{F}}}{\hbar}(N\alpha_{\mathrm{H}})^{2}\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}. \tag{90}\] Note that Eq. (90) is valid for a gapless Dirac spectrum. For an almost parabolic spectrum, e.g., a gapped Dirac spectrum in the limit of \(E_{\mathrm{F}}\ll\Delta\), the _ee_ contribution is suppressed due to a small value of the non-parabolicity coefficient \(w\) [cf. Eq. (70)]. The _eh1_ and _eh2_ contributions are also suppressed because the eigenstates of the Hamiltonians (7b) and (17b) are either electron-like or hole-like and, therefore, the matrix elements \(\langle\mathbf{k},+|\mathbf{k}^{\prime},-\rangle\) are small. In addition, there is a partial cancellation between the diagrams in this case [42]. As a result, the total conductivity for an almost parabolic Dirac spectrum acquires a small factor of \((E_{\mathrm{F}}/\Delta)^{2}\ll 1\). This is why these contributions were neglected in Refs. [27, 31, 32, 33, 34]. For Coulomb interaction, \(|\bar{\mathbf{k}}-\bar{\mathbf{k}}^{\prime}|\sim\kappa_{3}\ll k_{\mathrm{F}}\) and, therefore, the matrix element between almost orthogonal electron and hole states is small: \(\left|\langle\bar{\mathbf{k}},+|\bar{\mathbf{k}}^{\prime},-\rangle\right|\sim\kappa_{3}/k_{\mathrm{F}}\ll 1\). Therefore, the _eh1_ and _eh2_ contributions to the conductivity are smaller than the _ee_ one in Eq.
(79) by a factor of \(\kappa_{3}/k_{\mathrm{F}}\) and \((\kappa_{3}/k_{\mathrm{F}})^{2}\), respectively. Thus, these contributions can be neglected, and the leading contribution to the conductivity for the Coulomb case is still given by Eqs. (80) and (83).

### Intermediate frequencies: \(\omega_{\mathrm{I}}\leq\Omega\leq\omega_{\mathrm{D}}\)

In the intermediate frequency regime, there are eight possible terms contributing for each type of diagram. These terms are specified by the helicities \(s_{3}=-s_{4}\) and the four possibilities of \(s_{1},s_{2}=\pm 1\) therein. As shown in Sec. IV.1.2, these terms start to contribute only for \(\Omega\) above the indirect threshold, \(\omega_{\mathrm{I}}=E_{\mathrm{F}}\), which is below the direct (Pauli) threshold \(\omega_{\rm D}=2E_{\rm F}\). Previous work by Gavoret et al. [27] and others after them [34, 31] has studied only the diagrams allowed by the Hamiltonian (16). For the Dirac spectrum, all diagrams are allowed, and we analyze the leading-order ones, either within the large-\(N\) or RPA approximations.

#### v.3.1 Threshold behavior for \(\Omega\gtrapprox\omega_{\rm I}\)

Analytic results in the intermediate frequency regime can be obtained only for frequencies just above \(\omega_{\rm I}\); for the rest of this regime, we will have to defer to numerical computation, discussed in Sec. V.4. To simplify the analysis, we note that the contributions for the \(s_{3}=+1,s_{4}=-1\) case can be mapped onto the \(s_{3}=-1,s_{4}=+1\) one just by relabelling the helicities. Thus, we need to consider only the \(s_{4}=+1,s_{3}=-1\) case. The sum over \(\mathcal{S}_{B}=\{s_{1},s_{2}\}=\{\pm 1,\pm 1\}\) in Eq. (64) contains terms of three types: \(t_{1}\sim 1/\Omega^{2}\), \(t_{2}\sim 1/\Omega\min\{\Omega,\varepsilon_{\bf k}\}\), and \(t_{3}\sim 1/\min\{\Omega^{2},\varepsilon_{\bf k}^{2}\}\). Near the threshold, \(\Omega\gtrapprox\omega_{\rm I}=E_{\rm F}\) while \(\varepsilon_{\bf k}\approx E_{\rm F}\). Therefore, \(t_{1}\sim t_{2}\sim t_{3}\sim 1/E_{\rm F}^{2}\). Next, the current vertex is \(\sim v_{\rm D}\), and the conductivity is estimated as \[\Re\sigma_{\rm IF}(\Omega)\sim\frac{e^{2}}{\omega_{\rm I}}\frac{v_{\rm D}^{2}}{E_{\rm F}^{2}}\int_{\bf k}\theta(\omega_{\rm I}+\delta\Omega+\xi_{\bf k}^{-})|\Im\Sigma_{\rm ee}\left({\bf k},\omega_{\rm I}+\delta\Omega+\xi_{\bf k}^{-}\right)|, \tag{91}\] where \(\delta\Omega\equiv\Omega-\omega_{\rm I}\ll E_{\rm F}\) and \(\Im\Sigma_{\rm ee}({\bf k},\omega)\) is given by Eq. (73). The theta function imposes a constraint \(\omega_{\rm I}+\delta\Omega+\xi_{\bf k}^{-}>0\) or \(\epsilon_{\bf k}<\delta\Omega\). Therefore, the integral over \(k\) in Eq. (91) is confined to a narrow region near the Dirac point \[k\lesssim k_{0}\equiv\frac{\delta\Omega}{v_{\rm D}}\ll k_{\rm F}. \tag{92}\] Under this condition, \(\Im\Sigma_{\rm ee}\) for Hubbard interaction is still of the FL form, Eq. (75), but with \(\Omega\) replaced by \(\delta\Omega\), i.e., \(\Im\Sigma_{\rm ee}\sim(\delta\Omega)^{2}\). Collecting all the estimates together, we obtain \[\Re\sigma_{\rm IF}(\Omega)\sim e^{2}k_{\rm F}(N\alpha_{\rm H})^{2}\left(\frac{\delta\Omega}{E_{\rm F}}\right)^{5}, \tag{93}\] while the actual calculation gives \[\Re\sigma^{\rm IF}(\Omega)=\frac{71}{3240\pi}\frac{e^{2}k_{\rm F}}{\hbar}(N\alpha_{\rm H})^{2}\theta(\delta\Omega)\left(\frac{\delta\Omega}{E_{\rm F}}\right)^{5}. \tag{94}\]
\tag{94}\] For Coulomb interaction, we have \[\Re\sigma^{\rm IF,C}(\Omega)=\frac{71}{3240\pi}\frac{e^{2}k_{\rm F}}{\hbar}\alpha_{\rm C}^{4}\theta(\delta\Omega)\left(\frac{\delta\Omega}{E_{\rm F}}\right)^{5} \tag{95}\] for \(\delta\Omega\ll\omega_{\rm p3}\ll E_{\rm F}\). The results for the Hubbard and Coulomb cases are identical, up to a different coupling constant, because, close to the indirect threshold, the Coulomb interaction is effectively a constant, equal to \(4\pi e^{2}/k_{\rm F}^{2}\). In fact, the results in Eqs. (94) and (95) can be readily generalized to an arbitrary dimensionality and spectrum. Indeed, the dependence on \(\delta\Omega\) comes from the \((\delta\Omega)^{2}\)-scaling of the self-energy, which does not depend on dimensionality (as long as \(d\geq 2\)), and the factor of \(k_{0}^{d}\), whose dependence on \(\delta\Omega\) is determined both by the dimensionality and the energy spectrum. In particular, for \(\epsilon_{\bf k}\propto k^{a}\), we obtain \[\beta_{\rm A}=d/a+2. \tag{96}\] For \(d=3\) and \(a=1\) this gives \(\beta_{\rm A}=5\), in agreement with Eq. (94), while for \(d=3\) and \(a=2\) we obtain \(\beta_{\rm A}=7/2\), in agreement with Ref. [27]. Note that the threshold singularities occur in the presence of slowly varying contributions from the _ee_, _eh1_, and _eh2_ processes, which were discussed in Sec. V.1. Certainly, the asymptotic forms of these contributions, Eqs. (90) and (83), are no longer valid for \(\Omega\sim\omega_{\rm I}=E_{\rm F}\). However, if we naively extrapolate these expressions to the region of \(\Omega\gtrapprox\omega_{\rm I}\), we would find that the threshold singularities are completely masked by the slowly varying contributions, unless, of course, one differentiates the total conductivity with respect to \(\Omega\) an appropriate number of times.7 This result is confirmed by the numerical calculations presented in Secs. V.4 and VI.4. Only if the spectrum is gapped and almost parabolic, i.e., \(E_{\rm F}\ll\Delta\), can the threshold singularities be detected against the background of other contributions [see the discussion after Eq. (90)]. Footnote 7: We need to use Eq. (83) for the Coulomb case because we are in the regime \(\omega_{\rm p3}\ll E_{\rm F}\lessapprox\Omega\). Figure 6: Examples of diagrams describing Auger-Meitner scattering processes. Diagram \(a\) corresponds to the case of \(s_{3}=-1,s_{4}=+1\), when absorption occurs as depicted in Fig. 4\(a\). Diagram \(b\) corresponds to the case of \(s_{3}=+1,s_{4}=-1\), when absorption occurs as depicted in Fig. 4\(b\). The filled and blank circles denote matrix elements of \(ee\) and \(eh\) interactions, respectively, while the filled and blank squares denote the intra- and inter-band current vertices, respectively. The lines connecting vertices A and B, and C and D can, in general, be of either helicity; the diagrams shown in the figure correspond to specific choices of those helicities. #### V.2.2 Generic frequencies in the interval \(\omega_{\rm I}\lesssim\Omega\lesssim\omega_{\rm D}\) For a generic frequency above \(\omega_{\rm I}=E_{\rm F}\) but below \(\omega_{\rm D}=2E_{\rm F}\) and away from both thresholds, we can obtain only an estimate for the conductivity, by replacing \(\delta\Omega\) in Eqs. (94) and (95) with \(E_{\rm F}\). This yields \[\Re\sigma_{\rm IF}(\Omega)\sim\frac{e^{2}k_{\rm F}}{\hbar}\left\{\begin{array}{c}(N\alpha_{\rm H})^{2},\\ \alpha_{\rm C}^{4},\end{array}\right. \tag{97}\] for the Hubbard and Coulomb cases, respectively.
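As an aside, the exponent in Eq. (96) follows from simple power counting, which also makes it clear why the threshold behavior is insensitive to the type of interaction. Assuming, as above, that the self-energy retains its FL form, \(|\Im\Sigma_{\rm ee}|\sim(\delta\Omega)^{2}\), and that the phase space is confined to \(k\lesssim k_{0}\) with \(\epsilon_{\bf k}\propto k^{a}\), so that \(k_{0}\propto(\delta\Omega)^{1/a}\), we have \[\Re\sigma(\Omega)\sim\int_{k\lesssim k_{0}}\mathrm{d}^{d}k\,|\Im\Sigma_{\rm ee}|\sim k_{0}^{d}\,(\delta\Omega)^{2}\propto(\delta\Omega)^{d/a+2},\] which reproduces \(\beta_{\rm A}=d/a+2\).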
Extrapolating the asymptotic results for the electron-electron and electron-hole contributions by putting \(\Omega\sim E_{\rm F}\) in Eqs. (90) and (83), we see that all the contributions are comparable to each other in this range. The numerical results in this range are discussed in Sec. V.4. ### High frequencies: \(\Omega>\omega_{\rm D}\) At the level of non-interacting electrons, the optical conductivity of an undoped and gapless 3D Dirac metal scales linearly with frequency [see Eq. (2)]. In the doped case, the onset of the linear scaling is shifted to \(\omega_{\rm D}\): \[\Re\sigma_{\rm NI3}(\Omega)=\frac{Ne^{2}\Omega}{24\pi\hbar v_{\rm D}}\theta(\Omega-\omega_{\rm D}). \tag{98}\] To the best of our knowledge, the effects of electron-electron interaction in 3D Dirac systems have been studied only for the undoped case. In this case, the Coulomb interaction is marginally irrelevant and, consequently, the Dirac velocity acquires an upward logarithmic renormalization while the coupling constant is renormalized downward [12, 14]. The optical conductivity also experiences a logarithmic renormalization and, at \(\Omega\to 0\), the slope of the linear scaling approaches a universal limit of \(1+1/(N+1)\) [13]. By analogy with the 2D case, however (see Sec. VI.3), we expect the optical conductivity to exhibit a logarithmic singularity at \(\Omega=\omega_{\rm D}\) both for Coulomb and Hubbard interactions. Renormalization of the optical conductivity is a first-order interaction effect, while the absorption processes studied in this paper are second-order ones. Therefore, the latter should be subleading to the former for \(\Omega>\omega_{\rm D}\). Due to the lack of known first-order results for the doped case in this range, we will model the optical conductivity by its non-interacting value in Eq. (98). ### Numerical results in 3D We evaluate Eq. (45) numerically for each diagram, for frequencies up to \(\omega_{\rm D}=2E_{\rm F}\), assuming Hubbard interaction. [To treat the Coulomb case for \(\Omega\) comparable to \(E_{\rm F}\), we would need to use the exact dynamic interaction in Eq. (30), which is very expensive computationally.] Then we sum the results according to Eqs. (43) and (40) to obtain the total conductivity, Eq. (23). The conductivity in units of \(e^{2}k_{\rm F}\alpha_{\rm H}^{2}N^{2}/\hbar\) for \(\Omega<\omega_{\rm D}\) is plotted in the main panel of Fig. 7, left axis. For the region \(\Omega>\omega_{\rm D}\), where, at least in the weak-coupling limit, absorption by non-interacting Dirac electrons dominates over interaction-induced absorption, we plot the non-interacting result, Eq. (98), normalized by \(e^{2}k_{\rm F}N/\hbar\) (right vertical axis). It is worth pointing out that the rescaled conductivity is numerically small for almost the entire range of \(\Omega<\omega_{\rm D}\), except for a narrow window near \(\omega_{\rm D}\), where the weak-coupling approximation breaks down (see a more detailed discussion at the end of this section). This implies that the interaction effects are numerically weaker than one might expect. For example, for \(\Omega\sim E_{\rm F}\), an order-of-magnitude estimate for the rescaled conductivity is a number of order one, whereas the actual result at, for example, \(\Omega/E_{\rm F}=1.1\) is equal to \(0.00667\). This feature is in agreement with the asymptotic results for \(\Omega\ll E_{\rm F}\) in Table 2, all of which have small numerical coefficients. Also, as expected (see Sec.
V.2.1), the threshold singularity due to AM processes at \(\Omega=\omega_{\rm I}=E_{\rm F}\) is completely masked by the _ee_ and _eh_ contributions that arise due to the non-parabolicity of the Dirac spectrum: there is no trace of the AM singularity in the main panel of Fig. 7. We illustrate this point further in Fig. 8, in which the contributions to the conductivity from all processes except the AM ones and from the AM processes alone are plotted separately. As can be seen from the figure, the AM contribution is smaller by orders of magnitude than the sum of the other contributions near the indirect threshold, and becomes comparable to the latter only near the direct threshold of \(2E_{\rm F}\). While these plots are for a model Hubbard interaction, we expect a similar behavior for the more realistic Coulomb case, because the threshold singularity is not sensitive to the type of interaction. The inset in Fig. 7 shows the numerical results (blue dots) together with the low-frequency analytic result, Eq. (90), on a log-log scale. As we see, the analytic result still works well up to \(\Omega\approx E_{\rm F}\). Lastly, we see an upturn in \(\Re\sigma(\Omega)\) as \(\Omega\) approaches \(2E_{\rm F}\) from below. This indicates that our perturbative approach, in which the Green's functions in all diagrams of Fig. 2 are replaced by the free ones, breaks down near the direct threshold; more precisely, when \(0<\omega_{\rm D}-\Omega\lesssim\alpha_{\rm H}^{2}E_{\rm F}\) for the Hubbard case and when \(0<\omega_{\rm D}-\Omega\lesssim\alpha_{\rm C}^{4}E_{\rm F}\) for the Coulomb case. This breakdown can be seen from, e.g., Eq. (64). Indeed, substituting \(s_{1}=s_{2}=+1,s_{3}=-1\) into the denominators in Eq. (64), we see that the product of the two fractions becomes equal to \(1/(\Omega-2\epsilon_{\bf k})^{2}\). Now, from the paragraph above Eq. (92), we know that \(\epsilon_{\bf k}<\delta\Omega=\Omega-E_{\rm F}\) in the intermediate-frequency regime, i.e., for \(E_{\rm F}\leq\Omega<2E_{\rm F}\). As \(\Omega\) approaches \(2E_{\rm F}\), the maximum value of \(2\epsilon_{\bf k}\) also approaches \(2E_{\rm F}\), and the integral over \({\bf k}\) diverges. In principle, this singularity should be mitigated by a re-summation of the perturbation theory, which is beyond the scope of this work. ## VI Optical conductivity of a 2D Dirac metal Just as in 3D, we first discuss the lowest-frequency regime for 2D, and then the intermediate- and high-frequency regimes. ### Lowest frequencies: \(\Omega\ll E_{\rm F}\) As in the 3D case, this regime corresponds to \(s_{3}=s_{4}=+1\). The conductivity can be split into two contributions: a purely electronic one and a contribution from processes that involve up to two holes. #### VI.1.1 Intra-band absorption due to electron-electron interaction The reasoning about the partial cancellation of diagrams for an isotropic spectrum follows the same lines as for the 3D case, see Sec. V.1.1. We thus have exactly the same expressions for the conductivity as in Eqs. (72) and (73), but now with the momentum integrals being 2D rather than 3D. The self-energy in 2D has an extra logarithmic factor; however, this factor cancels between different diagrams. Nevertheless, the \(q\) integrand in Eq. (73) has an extra factor of \(q\) in the denominator, which does lead to a logarithmic enhancement of the conductivity compared to the 3D case, cf. Ref. [26].
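Schematically, the origin of this logarithm is the \(1/q\) behavior of the momentum integrand: with the transferred momentum bounded from below by \(\sim\Omega/v_{\rm D}\) (energy conservation) and from above by \(\sim k_{\rm F}\), \[\int_{\Omega/v_{\rm D}}^{k_{\rm F}}\frac{\mathrm{d}q}{q}=\ln\frac{v_{\rm D}k_{\rm F}}{\Omega}=\ln\frac{E_{\rm F}}{\Omega},\] which is the logarithm appearing in Eqs. (99) and (100) below; the precise values of the cutoffs affect only the subleading, non-logarithmic term.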
For Hubbard interaction, we estimate the conductivity as \[\Re\sigma_{\rm ee}(\Omega)\sim e^{2}(N\alpha_{\rm H})^{2}\left(\frac{\Omega}{E_{\rm F}}\right)^{2}\ln\left(\frac{E_{\rm F}}{\Omega}\right), \tag{99}\] whereas the actual calculation gives \[\Re\sigma_{\rm ee}(\Omega)=\frac{e^{2}}{\hbar}(N\alpha_{\rm H})^{2}\left(\frac{1}{80\pi^{2}}\ln\frac{E_{\rm F}}{\Omega}+\frac{30\ln 2-1}{1200\pi^{2}}\right)\left(\frac{\Omega}{E_{\rm F}}\right)^{2}. \tag{100}\] Note that Eq. (100) contains not only the leading logarithmic term but also a subleading one. Keeping the subleading term is necessary for comparison with the _eh1_ and _eh2_ contributions, which do not have a logarithmic enhancement. Figure 7: Numerical results for the optical conductivity, \(\Re\sigma(\Omega)\), as a function of \(\Omega\) (in units of \(E_{\rm F}\)) for a gapless 3D Dirac metal with a Hubbard-like interaction. The left vertical axis is in units of \((e^{2}k_{\rm F}/\hbar)N^{2}\alpha_{\rm H}^{2}\), where \(N\) is the total degeneracy, e.g., the number of distinct Dirac points, and \(\alpha_{\rm H}\) is the dimensionless coupling constant of Hubbard interaction. The blue dots are the numerically evaluated values of \(\Re\sigma(\Omega)\), while the continuous blue curve is a guide to the eye. The green line is the non-interacting result, Eq. (98), plotted along the right vertical axis in units of \((e^{2}k_{\rm F}/\hbar)N\). The dashed vertical line demarcates the direct (Pauli) threshold at \(\omega_{\rm D}=2E_{\rm F}\). Inset: The conductivity in the range of \(0\leq\Omega<2E_{\rm F}\) on a log-log scale (blue dots). The red dashed line is the analytic result for \(\Omega\ll E_{\rm F}\), Eq. (90), which is extrapolated beyond the nominal range of its validity. As in 3D, the case of Coulomb interaction in the region \(\Omega\ll\omega_{\mathrm{p2}}\ll E_{\mathrm{F}}\) is similar to the Hubbard one. Explicit calculation shows that \[\Re\sigma_{\mathrm{ee1}}^{\mathrm{C}}(\Omega)=\frac{e^{2}}{\hbar}\left(\frac{1}{80\pi^{2}}\ln\frac{\omega_{\mathrm{p2}}}{\Omega}+\frac{5\ln 2-2}{16\pi^{2}}\right)\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}, \tag{101}\] where \(\omega_{\mathrm{p2}}\) is defined in Eq. (32). The leading logarithmic term in the last equation coincides with the result of Ref. [26]. Note that, in contrast to the 3D case, the conductivity depends on the coupling constant of the Coulomb interaction only via the cutoff of the logarithmic term. In the range of frequencies \(\omega_{\mathrm{p2}}\ll\Omega\ll E_{\mathrm{F}}\), the estimate for the self-energy in Eq. (81) is modified as \[\Im\Sigma_{\mathrm{ee}}(\Omega)\sim-\frac{\kappa_{2}^{2}}{\mathcal{N}_{\mathrm{F,2}}v_{\mathrm{F}}^{2}}\int_{0}^{\Omega}\mathrm{d}\nu\,\nu\int_{\max\{\Omega,\Omega-\nu\}/v_{\mathrm{F}}}^{\infty}\frac{\mathrm{d}q}{q^{3}}\sim-\omega_{\mathrm{p2}}. \tag{102}\] In contrast to the 3D case, the self-energy remains constant in this frequency interval and, according to Eq. (76), the same is true for the conductivity. The actual calculation gives \[\Re\sigma_{\mathrm{ee2}}^{\mathrm{C}}(\Omega)=\frac{5}{576\pi^{2}}\frac{e^{2}}{\hbar}\frac{e^{2}}{v_{\mathrm{D}}}. \tag{103}\] #### VI.1.2 Absorption processes involving up to two holes Now we analyze the scattering processes which involve up to two holes. Again, the general reasoning here is exactly the same as in the 3D _eh_ case.
Namely, there are two types of such processes: with one hole (_eh1_, \(s_{1}=-s_{2}=\pm 1\)) and with two holes (_eh2_, \(s_{1}=s_{2}=-1\)), with the same corresponding conditions on the helicities as in the 3D case. The estimates for the _eh1_ and _eh2_ contributions to the conductivity are the same as in the 3D case, i.e., they are given by Eqs. (85) and (88), modulo a replacement \(e^{2}k_{\mathrm{F}}\to e^{2}\). Therefore, the estimates for the _eh1_ and _eh2_ contributions in 2D read \[\Re\sigma_{\mathrm{eh1}}(\Omega)\sim\Re\sigma_{\mathrm{eh2}}(\Omega)\sim e^{2}\alpha_{\mathrm{H}}^{2}\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}, \tag{104}\] whereas the actual calculation shows that \[\Re\sigma_{\mathrm{eh1}}=\Re\sigma_{\mathrm{eh2}}=\frac{1}{96\pi^{2}}\frac{e^{2}}{\hbar}(N\alpha_{\mathrm{H}})^{2}\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}. \tag{105}\] Adding up the _eh1_ and _eh2_ contributions, we have \[\Re\sigma_{\mathrm{eh}}(\Omega)=\Re\sigma_{\mathrm{eh1}}(\Omega)+\Re\sigma_{\mathrm{eh2}}(\Omega)=\frac{1}{48\pi^{2}}\frac{e^{2}}{\hbar}(N\alpha_{\mathrm{H}})^{2}\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}. \tag{106}\] While the combined _eh_ contribution is smaller than the leading logarithmic term in the _ee_ contribution [cf. Eq. (100)], it is of the same order as the next-to-leading _ee_ term. The total conductivity is then the sum of Eq. (100) and Eq. (106): \[\Re\sigma(\Omega)=\frac{e^{2}}{\hbar}(N\alpha_{\mathrm{H}})^{2}\left(\frac{1}{80\pi^{2}}\ln\frac{E_{\mathrm{F}}}{\Omega}+\frac{5\ln 2+4}{200\pi^{2}}\right)\left(\frac{\Omega}{E_{\mathrm{F}}}\right)^{2}. \tag{107}\] In terms of numbers, the logarithmic term becomes the leading one for \(\Omega/E_{\mathrm{F}}<0.05\). As in the 3D case, the _eh_ contribution for Coulomb interaction is smaller than the _ee_ one by a factor of \(\alpha_{\mathrm{C}}\), and thus Eqs. (101) and (103) are the leading contributions to the conductivity in the corresponding frequency intervals. Note that the logarithmic term in Eq. (101) becomes the leading one only at very low frequencies: \(\Omega/\omega_{\mathrm{p2}}<6.6\times 10^{-4}\). Figure 8: Numerically evaluated all-frequencies contribution, as defined in Sec. IV.1.1, (blue dots) and the intermediate-frequency contribution, as defined in Sec. IV.1.2, (blue triangles) to the optical conductivity as a function of \(\Omega\) (in units of \(E_{\mathrm{F}}\)) for a gapless 3D Dirac metal with Hubbard interaction. The left vertical axis is in units of \((e^{2}k_{\mathrm{F}}/\hbar)N^{2}\alpha_{\mathrm{H}}^{2}\), where \(N\) is the total degeneracy, e.g., the number of distinct Dirac points, and \(\alpha_{\mathrm{H}}\) is the dimensionless coupling constant of Hubbard interaction [Eq. (27)]. Also plotted are the low-frequency asymptotic result for the all-frequency contribution, Eq. (90), (red dashed curve) and the asymptotic result for the AM contribution near \(E_{\mathrm{F}}\), Eq. (94), (black dotted curve). ### Intermediate frequencies: \(\omega_{\mathrm{I}}\leq\Omega<\omega_{\mathrm{D}}\) The optical conductivity of a 2D Dirac metal in the intermediate frequency regime is completely analogous to the 3D case, discussed in Sec. V.2. As in 3D, the analytic results are attainable only for \(\Omega\gtrapprox\omega_{\mathrm{I}}\). In fact, in Sec. V.2.1 we have already derived a general expression for the scaling exponent \(\beta_{\mathrm{A}}\), see Eq. (96). For the case of \(d=2\) and \(a=2\), we obtain \(\beta_{\rm A}=3\), in agreement with Ref. [34].
For our case of \(d=2\) and \(a=1\), this equation gives \(\beta_{\rm A}=4\). Without repeating the same steps as in 3D, we just present the results for the Hubbard case, \[\Re\sigma^{\rm IF}=\frac{5}{108\sqrt{3}\pi}\frac{e^{2}}{\hbar}(N\alpha_{\rm H})^{2}\theta(\delta\Omega)\left(\frac{\delta\Omega}{E_{\rm F}}\right)^{4}, \tag{108}\] and for the Coulomb case, \[\Re\sigma^{\rm IF,C}=\frac{5}{108\sqrt{3}\pi}\frac{e^{2}}{\hbar}\alpha_{\rm C}^{2}\theta(\delta\Omega)\left(\frac{\delta\Omega}{E_{\rm F}}\right)^{4}. \tag{109}\] As in the 3D case, the results for the Hubbard and Coulomb cases are identical, for the reason explained in Sec. V.2.1. Also, as in 3D, the estimates for the conductivity at a generic frequency within the interval \((\omega_{\rm I},\omega_{\rm D})\), away from either of the thresholds, can be obtained by replacing \(\Omega\) with \(E_{\rm F}\) in Eq. (100) and assuming that Eq. (103) continues to be valid within an order of magnitude for \(\Omega\sim E_{\rm F}\). This gives \[\Re\sigma_{\rm IF}(\Omega)\sim\frac{e^{2}}{\hbar}\left\{\begin{array}{c}(N\alpha_{\rm H})^{2},\\ \alpha_{\rm C}^{2},\end{array}\right. \tag{110}\] for the Hubbard and Coulomb cases, respectively. ### High frequencies: \(\Omega>\omega_{\rm D}\) The optical response of 2D Dirac metals, e.g., graphene, has been studied extensively; see, e.g., reviews [8, 9, 10] and references therein. At the non-interacting level, the optical conductivity has a universal form, given by Eq. (1). At finite doping, this result is modified to \[\Re\sigma_{\rm NI2}(\Omega)=\frac{e^{2}N}{16\hbar}\theta(\Omega-\omega_{\rm D}). \tag{111}\] As in 3D, the Coulomb interaction is also marginally irrelevant in 2D, which leads to an upward logarithmic renormalization of the Dirac velocity and, consequently, to a downward renormalization of the coupling constant. On the other hand, Hubbard interaction is irrelevant and can be neglected for frequencies below the ultraviolet cutoff of the model. The optical conductivity of doped graphene was studied by Abedinpour et al. [35] to first order in both the Coulomb and Hubbard interactions. As expected, the results reduce to those for the undoped case in the limit of \(E_{\rm F}\ll\Omega\). Near the direct threshold \(\omega_{\rm D}=2E_{\rm F}\), the conductivity is logarithmically enhanced compared to the non-interacting value for both the Coulomb and Hubbard cases. Because the absorption processes studied in this paper occur to second order in the interaction, they are subleading to those studied in Ref. [35] and, therefore, we will not extend our results to the region \(\Omega>\omega_{\rm D}\). ### Numerical results in 2D We evaluated the optical conductivity numerically for Hubbard interaction in a way similar to the 3D case. The results are shown in Fig. 9. The conductivity in units of \((e^{2}/\hbar)\alpha_{\rm H}^{2}N^{2}\) in the range of \(\Omega<\omega_{\rm D}=2E_{\rm F}\) is plotted on the left axis of the main panel. The inset shows the same data on a log-log scale (blue dots) and the low-frequency analytic result from Eq. (100) (red dashed line). On the right axis of the main panel, we plot the conductivity in units of \(e^{2}N/16\hbar\) for the non-interacting case, given by Eq. (111) (green solid line), and the analytic result to first order in Hubbard interaction from Ref. [35] for \(\alpha_{\rm H}=0.045\) (red solid curve). As in the 3D case, the rescaled conductivity is small compared to unity even for \(\Omega\sim E_{\rm F}\).
Also, as in 3D, the threshold singularity from the onset of AM processes at \(\Omega=\omega_{\rm I}\) is washed out, see Fig. 10. ## VII Conclusions We studied optical absorption in 2D and 3D Dirac metals due to electron-electron (_ee_) and electron-hole (_eh_) interactions. The interactions were described by two models: a Hubbard-like interaction, with a radius shorter than the Fermi wavelength but longer than the lattice spacing, and a dynamically screened Coulomb potential. To keep the perturbation theory under control, both types of interactions were assumed to be weak. The optical conductivity, \(\Re\sigma(\Omega)\), was obtained by computing the leading diagrams for the current-current correlation function, in the large-\(N\) approximation for the Hubbard case and in the random-phase approximation for the Coulomb case. The main focus of this paper is the behavior of \(\Re\sigma(\Omega)\) in the range of frequencies \(0<\Omega<\omega_{\rm D}=2E_{\rm F}\), where absorption is blocked by the Pauli principle in the single-particle picture. This range is further split into two intervals: \(0<\Omega<\omega_{\rm I}=E_{\rm F}\) (I) and \(\omega_{\rm I}<\Omega<\omega_{\rm D}\) (II). In range I, absorption sets in already at the lowest frequencies. The conductivity in this range comes from purely _ee_ scattering, which is allowed to contribute due to broken Galilean invariance, and from certain _eh_ scattering processes, which involve up to two holes. For \(\Omega\ll E_{\rm F}\), we derived the analytic results for the conductivity, which are presented in Tables 2 and 3 for the Hubbard and Coulomb cases, respectively. In both cases, \(\Re\sigma(\Omega)\) scales as \(\Omega^{2}\ln\Omega\) in 2D and as \(\Omega^{2}\) in 3D. In other words, the effective current relaxation rate, \(1/\tau_{j}\equiv(k_{\rm F}/v_{\rm D}ne^{2})\,\Omega^{2}\Re\sigma(\Omega)\), scales as \(\Omega^{4}\ln\Omega\) in 2D and as \(\Omega^{4}\) in 3D. (Here, \(n\) is the carrier number density, \(k_{\rm F}\) is the Fermi momentum, and \(v_{\rm D}\) is the Dirac velocity.) The _ee_ contribution to \(\Re\sigma(\Omega)\) has been derived in Ref. [26] for the Coulomb case by a different method, via the Heisenberg equations of motion for the current operator, and our results for this contribution agree with those of Ref. [26] (modulo a discrepancy in the numerical coefficient in 3D). Remarkably, the _eh_ contribution, studied in this paper, is comparable to the _ee_ one in 3D, and subleading to the _ee_ one in 2D only in the leading-logarithmic sense. For the rest of range I, \(\Re\sigma(\Omega)\) was calculated numerically. In range II, another type of _eh_ scattering processes, similar to the Auger-Meitner (AM) processes in atomic physics [28, 29, 30], starts to contribute to the conductivity. These processes have been studied extensively in the context of doped semiconductors (see, e.g., Refs. [31, 32, 33, 34]), but only in the model of parabolic bands, within which absorption in range I is absent, and the onset of absorption due to AM processes at \(\Omega=\omega_{\rm I}\) is manifested by a threshold singularity in \(\Re\sigma(\Omega)\). We showed that a similar singularity also exists for Dirac metals. However, in contrast to the parabolic-bands case, the AM singularity occurs against the background of absorption due to _ee_ and other _eh_ processes, which set in in range I but continue to contribute in range II as well. Our numerical calculations show that the AM threshold singularity is completely masked by these other processes.
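To make the correspondence between \(\Re\sigma(\Omega)\) and \(1/\tau_{j}\) explicit in the 3D Hubbard case: with \(n\sim k_{\rm F}^{3}\) and \(E_{\rm F}=\hbar v_{\rm D}k_{\rm F}\) (modulo numerical and degeneracy factors), substituting Eq. (90) into the definition of \(1/\tau_{j}\) above gives \[\frac{1}{\tau_{j}}=\frac{k_{\rm F}}{v_{\rm D}ne^{2}}\,\Omega^{2}\,\Re\sigma(\Omega)\sim(N\alpha_{\rm H})^{2}\,\frac{\Omega^{4}}{\hbar E_{\rm F}^{3}},\] i.e., the \(\Omega^{4}\) scaling quoted above, and a rate of order \(gE_{\rm F}/\hbar\) at \(\Omega\sim E_{\rm F}\) if one identifies \(g\) with \((N\alpha_{\rm H})^{2}\).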
In the range of \(\Omega\sim E_{\rm F}\) (but not in the immediate vicinity of either \(E_{\rm F}\) or \(2E_{\rm F}\)), all _ee_ and _eh_ scattering processes give comparable contributions to \(\Re\sigma(\Omega)\). As \(E_{\rm F}\) is the only energy scale in this regime, the effective current relaxation rate is of order \(gE_{\rm F}\), where \(g\) is the dimensionless coupling constant for either type of interaction. However, our analytic and numerical results show that the numerical coefficient \(C\) in the relation \(1/\tau_{j}=CgE_{\rm F}\) is anomalously small, on the order of \(10^{-3}\), i.e., in reality \(1/\tau_{j}\ll E_{\rm F}\) even at \(g=1\). This may explain the observation of well-resolved collective modes below \(2E_{\rm F}\) in the helical surface state of a doped 3D topological insulator [43] (see Ref. [44] for more details). As mentioned in Sec. I, experiments on monolayer graphene find significant optical absorption at frequencies above the Drude tail but below \(2E_{\mathrm{F}}\) [15, 16, 17, 18, 19] and also a significant Raman response in the same frequency range [20]. In real materials, absorption in this frequency range is due not only to \(ee\) and \(eh\) interactions, but also to electron-impurity and electron-phonon scattering. Moreover, it was argued in Ref. [23] that the data of Ref. [15] can be well explained by taking into account only electron-impurity and electron-phonon scattering (with an addition of excitonic effects [24]). We hope that future experiments on samples with higher mobilities will be able to resolve the intrinsic \(ee\) and \(eh\) contributions to absorption. Figure 9: Numerical results for the optical conductivity, \(\Re\sigma(\Omega)\), as a function of \(\Omega\) (in units of \(E_{\rm F}\)) for a gapless 2D Dirac metal with a Hubbard-like interaction. The left vertical axis is in units of \((e^{2}/\hbar)N^{2}\alpha_{\rm H}^{2}\), where \(N\) is the total degeneracy, e.g., the number of distinct Dirac points, and \(\alpha_{\rm H}\) is the dimensionless coupling constant of Hubbard interaction. The blue dots are the numerically evaluated values of \(\Re\sigma(\Omega)\), while the continuous blue curve is a guide to the eye. The dashed vertical line demarcates the direct (Pauli) threshold at \(\omega_{\rm D}=2E_{\rm F}\). The green solid line is the non-interacting result, Eq. (111), plotted along the right vertical axis in units of \(e^{2}N/16\hbar\). The red solid line is the analytic result from Ref. [35] to first order in Hubbard interaction for \(\alpha_{\rm H}=0.045\). Inset: The conductivity in the range of \(0<\Omega<2E_{\rm F}\) on a log-log scale (blue dots). The red dashed line is the analytic result for \(\Omega\ll E_{\rm F}\), Eq. (100), which is extrapolated beyond the nominal range of its validity. ###### Acknowledgements. This paper is dedicated to the memory of Konstantin B. Efetov, an outstanding physicist and a kind human being. We thank D. Basov, A. Jahin, A. Kumar, S. Maiti, and I. Michaloliakos for stimulating discussions. This work was supported by the US National Science Foundation under Grants No. DMR-1720816 and No. DMR-2224000.
2305.13915
DAPR: A Benchmark on Document-Aware Passage Retrieval
The work of neural retrieval so far focuses on ranking short texts and is challenged with long documents. There are many cases where the users want to find a relevant passage within a long document from a huge corpus, e.g. Wikipedia articles, research papers, etc. We propose and name this task \emph{Document-Aware Passage Retrieval} (DAPR). While analyzing the errors of the State-of-The-Art (SoTA) passage retrievers, we find the major errors (53.5\%) are due to missing document context. This drives us to build a benchmark for this task including multiple datasets from heterogeneous domains. In the experiments, we extend the SoTA passage retrievers with document context via (1) hybrid retrieval with BM25 and (2) contextualized passage representations, which inform the passage representation with document context. We find despite that hybrid retrieval performs the strongest on the mixture of the easy and the hard queries, it completely fails on the hard queries that require document-context understanding. On the other hand, contextualized passage representations (e.g. prepending document titles) achieve good improvement on these hard queries, but overall they also perform rather poorly. Our created benchmark enables future research on developing and comparing retrieval systems for the new task. The code and the data are available at https://github.com/UKPLab/arxiv2023-dapr.
Kexin Wang, Nils Reimers, Iryna Gurevych
2023-05-23T10:39:57Z
http://arxiv.org/abs/2305.13915v4
# DAPR: A Benchmark on Document-Aware Passage Retrieval ###### Abstract Recent neural retrieval mainly focuses on ranking short texts and is challenged with long documents. Existing work mainly evaluates either ranking passages or whole documents. However, there are many cases where the users want to find a relevant passage within a long document from a huge corpus, e.g. legal cases, research papers, etc. In this scenario, the passage often provides little document context and thus challenges the current approaches to finding the correct document and returning accurate results. To fill this gap, we propose and name this task Document-Aware Passage Retrieval (DAPR) and build a benchmark including multiple datasets from various domains, covering both DAPR and whole-document retrieval. In experiments, we extend the state-of-the-art neural passage retrievers with document-level context via different approaches including prepending document summary, pooling over passage representations, and hybrid retrieval with BM25. The hybrid-retrieval systems, the overall best, can only improve on the DAPR tasks marginally while significantly improving on the document-retrieval tasks. This motivates further research in developing better retrieval systems for the new task. The code and the data are available1. Footnote 1: [https://github.com/kwang2049/dapr](https://github.com/kwang2049/dapr) ## 1 Introduction Information Retrieval (IR) helps efficiently locate relevant information from a vast resource collection, acting as a central component of many natural language applications. Traditional approaches like BM25 compute simple statistics such as the frequency of matched terms Robertson et al. (1994). Recent approaches apply neural networks to represent queries and passages as vector representations, extending the task modeling from simple term matching to complex semantic matching and improving the effectiveness significantly Xiao et al. (2022); Formal et al. (2021); Santhanam et al. (2022). Despite their success, these neural approaches are usually limited to short passage inputs, e.g. 512 tokens, due to expensive operations such as self-attention Vaswani et al. (2017); Devlin et al. (2019) in their architectures. Such short-passage retrieval faces severe challenges in real-world scenarios, where long documents such as Wikipedia2 articles, scientific papers, etc. can easily go beyond this length limit. Recent work proposes new memory-efficient architectures to allow these neural networks to accept much longer document inputs Dai et al. (2019); Beltagy et al. (2020) and fulfill document-retrieval tasks Chen et al. (2022). However, returning a long document still makes it inefficient for the user to locate the useful information. Kwiatkowski et al. (2019) collects user queries from Google Search3 logs and annotates the relevant passage in Wikipedia pages. We find, for 35.8% of the queries, the position of the relevant passage is 7.6 on average (std. 12.7), indicating a large further-search range. To fill this gap, we propose the _Document-Aware Passage Retrieval_ (DAPR) task, where the retriever is required to consider the document-level context for returning relevant passages. An example is shown in Figure 1. Figure 1: An example instance from DAPR. To find the relevant passage to the query, the retriever needs to utilize the document-level context, which in this case means coreference resolution for the noun _the venue_. See other categories of the document-level context and examples in subsection 4.5.
In this case, the user asks for a musician (or group) that has played at a specific venue. However, the relevant passage does not mention the venue name but only a noun reference, and thus the retriever needs to resolve this reference using the previous context within the document. We collect 5 datasets from heterogeneous domains that provide such annotations of the relevant passage within its belonging document and present them in a new benchmark named DAPR. As a side task, the evaluation of document retrieval is also included. We differentiate these two tasks with the task names Q2P (Query-to-Passage) and Q2D (Query-to-Document) in DAPR, respectively. In the experiments, we focus on extending the state-of-the-art neural passage retrievers by introducing document-level context to them via various approaches, including _prepending document summaries_, _pooling over passage representations_, and _hybrid retrieval with BM25_. Results show that prepending document summaries can severely harm the Q2P performance on some datasets; the simple pooling approaches fail at modeling whole documents, yielding poor Q2D performance; hybrid retrieval can significantly improve the Q2D tasks but hardly the Q2P ones; and an intuitive cascade system performs poorly. These observations show that the DAPR tasks are challenging and that substantial room for improvement remains to be explored. ## 2 The DAPR Benchmark The DAPR benchmark evaluates systems on modeling long documents, with a special focus on passage retrieval given document-level context. It has two tasks: (1) **Query-to-Passage (Q2P)** requires the model to rank for a given query the _passages_ from a collection of documents; (2) **Query-to-Document (Q2D)** requires the model to rank for a given query the _documents_ from a collection of documents. Additionally, zero-shot cross-domain evaluation is also adopted, since retrieval systems are often used in such a setup without domain-specific human annotations for training Thakur et al. (2021). ### Dataset We select and build five datasets to compose DAPR: **MS MARCO**: is originally a Question-Answering (QA) dataset built with queries from the Bing search log and passages from the Bing index Nguyen et al. (2016). During its annotation, the annotator is provided with independent passage candidates (likely from different documents) retrieved by Bing search along with their source-document URLs4. We use the corpus from the MS MARCO Document Ranking task Craswell et al. (2020). Since the passage span locations are not given in the original dataset, we apply regex fuzzy matching with a 5-mismatch allowance between the QA passage and its belonging document in the corpus. The QA pairs whose passage span cannot be located in any document are discarded (around 50% of the cases5). We then build the gold labels by viewing the question/passage in each remaining QA pair as the query/gold-relevant passage in DAPR. Since the documents in the original dataset do not contain passage boundaries, we simply chunk each document into a sequence of chunks as the passages, with a window size of 384. Each chunk with a non-empty intersection with the gold-relevant passage is labeled as relevant (see the sketch below). Footnote 4: Although such document URLs are available, the annotators are suspected not to read these source documents for selecting the relevant passages. Footnote 5: The corpus is crawled after the creation of the QA dataset and the mismatch is mainly because many pages have been updated.
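To make the chunk-and-label step concrete, here is a minimal sketch of one way to implement it. The function and variable names are our own illustrative choices rather than the authors' code, and token offsets are assumed; the paper specifies only the window size of 384 and the non-empty-intersection rule:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    start: int      # token offset where the chunk begins
    end: int        # exclusive token offset where the chunk ends
    relevant: bool  # True iff the chunk overlaps the gold passage span

def chunk_and_label(num_tokens, gold_span, window_size=384):
    """Split a document of `num_tokens` tokens into consecutive windows and
    label a window as relevant iff it has a non-empty intersection with the
    gold passage span `gold_span = (start, end)`, given as token offsets."""
    gold_start, gold_end = gold_span
    chunks = []
    for start in range(0, num_tokens, window_size):
        end = min(start + window_size, num_tokens)
        overlaps = max(start, gold_start) < min(end, gold_end)
        chunks.append(Chunk(start, end, overlaps))
    return chunks

# A 1000-token document whose gold passage spans tokens 380..420: the windows
# [0, 384) and [384, 768) both intersect it and are labeled relevant.
print(chunk_and_label(1000, (380, 420)))
```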
**Natural Questions**: is originally a fact-seeking QA dataset built with queries from the Google search log and documents from Wikipedia pages Kwiatkowski et al. (2019). In its annotation process, the top-5 candidate Wikipedia pages are returned by the Google search engine and the annotator is asked to select the earliest HTML bounding box containing enough information to infer the answer. In the original dataset, the answers are categorized into long/short and paragraph/table/list answers. We keep only the QA pairs with long paragraph answers, since these examples are more challenging. The corpus is built by gathering all the gold-relevant passages. **MIRACL**: is a multilingual information-retrieval dataset Zhang et al. (2022) and we use the English subset. Its corpus is composed of paragraphs from the English Wikipedia dump. Each query is written by human annotators based on the first 100 words in a randomly sampled Wikipedia page. The annotators are asked to write queries that cannot be answered by these first 100 words. The candidate passages are retrieved from Wikipedia paragraphs using an ensemble model of multiple retrievers. The annotators then annotate the top-10 candidates with binary judgments. **Genomics**: is a passage retrieval task for biomedical question answering Hersh et al. (2006, 2007). We combine the TREC 2006 and TREC 2007 Genomics Tracks in DAPR. The queries are biomedical questions about biological objects (e.g. genes, proteins) or processes (e.g. physiological processes or diseases) and their explicit relationships (e.g. _causes_, _contributes to_, etc.). The corpus is composed of scientific articles distributed by Highwire Press6. Expert judges annotate for each query 1000 candidate passages from a pool of submitted runs. Three-level relevance is adopted: definitely relevant, possibly relevant, and not relevant. In general, a passage is definitely/possibly relevant if it contains all/a majority of the required elements of the question and answers/possibly answers the question, respectively. Footnote 6: [https://www.highwirepress.com/](https://www.highwirepress.com/) **COLIEE**: is originally a series of tasks of legal case retrieval Moraes et al. (2021) and legal case entailment Moraes et al. (2021), based on cases mainly from the Federal Court of Canada. In the legal-case-entailment task, given a decision of a new case and a relevant case, a specific paragraph that entails the decision needs to be identified. We take these decisions as queries7 and all the legal-case paragraphs as passages. The corpus is composed of legal cases from both the legal-case retrieval and entailment tasks. Footnote 7: We do not use the queries from the legal case retrieval task, as its queries are entire case documents. The statistics are shown in Table 1. Examples of the data instances are shown in Table 7. All of these datasets contain judged passages, so the Q2P task is defined directly. For the Q2D task, following Hersh et al. (2006), we simply take the belonging documents of the gold-relevant passages as the gold-relevant documents with the same relevance. ### Evaluation Since the relevance can be both binary and three-level in DAPR, we use nDCG@10 and recall@100 as the evaluation metrics.
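As detailed next, the judgments are mapped to integer labels and the metrics are computed with pytrec_eval. Below is a minimal, self-contained example of this computation; the query/passage IDs and scores are invented purely for illustration:

```python
import pytrec_eval

# Graded relevance judgments: binary datasets use {0, 1}, 3-level ones {0, 1, 2}.
qrels = {"q1": {"p1": 2, "p2": 1, "p3": 0}}
# System output: retrieval scores for the candidate passages of each query.
run = {"q1": {"p1": 10.0, "p3": 9.0, "p2": 8.0}}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10", "recall.100"})
per_query = evaluator.evaluate(run)

# Average the per-query scores over all queries.
ndcg10 = sum(m["ndcg_cut_10"] for m in per_query.values()) / len(per_query)
recall100 = sum(m["recall_100"] for m in per_query.values()) / len(per_query)
print(f"nDCG@10 = {ndcg10:.4f}, Recall@100 = {recall100:.4f}")
```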
In detail, we first transform the binary/3-level judgments into 0-1/0-1-2 labels, respectively, and then calculate the metrics with pytrec_eval Van Gysel and de Rijke (2018). Considering the realistic setting where retrieval systems are often used in a zero-shot cross-domain scenario Thakur et al. (2021), we also adopt the zero-shot evaluation fashion of BeIR. That is, the models can be trained on the training split of the MS MARCO dataset and should be tested on the test splits of all five datasets. The evaluation scores on the Q2P and Q2D tasks are reported separately. ### Document-Context Contribution Among these datasets, the importance of modeling document-level context for passage retrieval differs. To understand such document-context contribution, we apply the convex-combination fusion (cf. subsection 3.2) between BM25 document rankings and BM25 passage rankings for each query. We then record the fusion weight (on the document side) and the corresponding passage-retrieval performance. Intuitively, datasets with more document-context contribution are expected to peak at a higher fusion weight. The results are shown in Figure 2. We find the passage-retrieval performance on Genomics, MIRACL, and Natural Questions peaks at 0.3, 0.4, and 0.5 fusion weight, respectively, which implies that more document-level context is required on them to retrieve the relevant passages. On the other hand, the much lower peaking fusion weights on MS MARCO and COLIEE (both 0.1) show that document-level context contributes little to passage retrieval on them.

| **Name** | **Domain** | **#Docs.** | **#Psg. per doc.** | **Psg. len.** | **Titles** | **Rel.** | **Depth of rel. passage** | **Train** | **Dev** | **Test** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MS MARCO | Misc. | 3,201,821 | 4.7 | 162.0 | ✓ | Binary | 1.8±2.5 | 195,933/1.1 | 12,954/1.1 | 12,954/1.1 |
| Natural Questions | Wiki. | 108,626 | 24.7 | 105.5 | ✓ | Binary | 7.6±12.7 | 93,275/1.0 | 3,610/1.0 | 3,610/1.2 |
| MIRACL | Wiki. | 5,758,285 | 5.7 | 105.3 | ✓ | Binary | 13.4±21.3 | 2,064/2.8 | 799/2.7 | 799/2.9 |
| Genomics | Biomed. | 162,259 | 77.9 | 150.5 | ✓ | 3-level | 38.0±38.6 | – | – | 62/121.9 |
| COLIEE | Legal | 5,025 | 47.5 | 138.7 | ✗ | Binary | 22.7±22.9 | – | – | 625/1.2 |

Table 1: Statistics of the datasets in DAPR. Depth of rel. passage indicates the position (starting from 1) of the relevant passage in its belonging document. The last three columns report #queries / #judgements per query for the train, dev, and test splits.

## 3 Experiments In the experiments, we mainly extend the state-of-the-art passage retrievers (subsection 3.1) with document-level context via various approaches (subsection 3.2). ### Base Retrievers We experiment with BM25 and neural passage retrievers. We use PySerini Lin et al. (2021) with the default setting for BM25 retrieval. For a stronger baseline, we also include **BM25 + Doc2Query** in the comparison, where each passage is extended by 20 generated queries from a T5 generator fine-tuned on MS MARCO Nogueira et al. (2019). Following Gospodinov et al. (2023), we filter the generated queries down to the top 30%, scored with a reranker trained on MS MARCO. For the neural retrievers, we use: (1) **RetroMAE8**Xiao et al.
(2022), a dense retriever which is pre-trained with a Masked Auto-Encoder objective on English Wikipedia, BookCorpus Zhu et al. (2015), and the MS MARCO corpus, and then fine-tuned with cross-entropy loss on the MS MARCO training split; (2) **SPLADEv29**Formal et al. (2022), a sparse retriever which is pre-trained with coCondenser Gao and Callan (2022) on the pre-training corpora of RetroMAE and then fine-tuned with knowledge distillation on the MS MARCO training split; (3) **ColBERTv2**Santhanam et al. (2022), a late-interaction retriever trained with knowledge distillation and cross-entropy on the MS MARCO training split. These retrievers represent the three main architectures for neural retrieval, and all of them achieve both strong in-domain performance on the MS MARCO passage ranking task and strong zero-shot out-of-domain performance. In the experiments, we apply exact search over the whole corpus for all three retrievers. Footnote 8: The checkpoint from [https://huggingface.co/Shiato/RetroMAE_BEIR](https://huggingface.co/Shiato/RetroMAE_BEIR). Footnote 9: The checkpoint from [https://huggingface.co/naver/splade-cocondenser-ensembledistil](https://huggingface.co/naver/splade-cocondenser-ensembledistil). The BM25 retriever can index texts of arbitrary length, so we can obtain indices of passages or documents directly. The neural passage retrievers, in contrast, can only encode short passages of at most 512 tokens out of the box. To make them work for document retrieval, the **MaxP** and **FirstP** approaches can be used, where the document score is computed as the maximum passage score or the score of the first passage in that document, respectively Xiong et al. (2021). ### Introducing Document-Level Context We experiment with three ways of introducing document-level context to the neural passage retrievers: **Prepending document summaries** condenses the document-level context into a short piece of text and prepends it to each passage in the document. Three types of such document summaries are compared: (1) the leading 3 sentences of the document, (2) the title of the document, and (3) the keyphrases of the document. The titles are not used by default. For keyphrase extraction, we use the TopicRank algorithm Bougouin et al. (2013) with the default setting to extract the top-10 keyphrases for each document. For Q2P, the passage retrieval is done on the passages with the prepended summary; for Q2D, the MaxP approach is applied to the passage scores from the Q2P task for retrieving documents. **Hybrid retrieval with BM25** fuses the relevance scores from a BM25 retriever and a neural retriever. We compute the fusion as the convex combination of the normalized relevance scores Wang et al. (2021): \[s_{\mathrm{convex}}(q,c)=\alpha\hat{s}_{\mathrm{BM25}}(q,c)+(1-\alpha)\hat{s}_{\mathrm{neural}}(q,c),\] where \(q\) represents the query, \(c\) represents the passage/document candidate, \(\alpha\in[0,1]\) is the fusion coefficient, and \(\hat{s}_{\mathrm{BM25}}\)/\(\hat{s}_{\mathrm{neural}}\) represent the normalized BM25/neural-retrieval relevance scores, respectively. Figure 2: Influence of the fusion weight between BM25 document retrieval and BM25 passage retrieval on the Q2P task performance. The small triangles indicate the peaking points.
For any misaligned candidates, a zero score is taken for the candidate-missing side. There are possibly many combinations of the retrievers to be fused. We mainly experiment with three combinations: (1) **BM25 Doc./Passage + Neural FirstP** where BM25 on documents/passages and a neural retriever with FirstP on documents are fused for Q2D/Q2C, respectively; (2) **BM25 Passage + Neural MaxP** where BM25 on passages and a neural retriever on documents is fused followed by MaxP (over the fused scores) or not for Q2D or Q2C, respectively; (3) **BM25 Doc. + Neural MaxP** where BM25 on documents and a neural retriever on passages with/without MaxP (over the neural scores10) is fused for Q2D and Q2C, respectively. For reference, we also include a baseline system, **BM25 Doc + BM25 Passage**, which simply fuses the document and the passage relevance by BM25 for the Q2P tasks and relies on BM25 document relevance solely for the Q2D tasks. We tune \(\alpha\) for each system on the MS MARCO dev split and fix this best \(\alpha\) value for evaluation on other datasets. Footnote 10: We also try the MaxP approach over the fused scores here and we find the performance is almost identical. **Pooling over passage representations** models the document-level context by a pooling operation over the vector representations of the passages within the document. Three pooling operations are compared: mean, max, and sum. For Q2D, the document retrieval is done on the pooled representations; for Q2C, the passage retrieval is done by fusing the passage relevance score and the document relevance score from the Q2D task. ## 4 Results ### Q2P and Q2D Performance **BM25** Without neural retrieval, BM25 + Doc2Query performs the best overall on both Q2P and Q2D tasks. As the only degeneration case i.e. on Genomics, it implies that query generation is likely to struggle at generating meaningful queries for specialized domains (biomedical in this case). BM25 Doc + BM25 Passage also improves BM25 by up to 1.3 nDCG@10 points (on MIRA of Q2P), which shows BM25 document retrieval can help improve the Q2P tasks. **Document summary** Prepending the title to the passages improves the Q2P performance on Natural Questions (+7.7% nDCG@10 for ColBERTv2) and MIRACL (+1.9% nDCG@10 for ColBERTv2) substantially. This result is in line with our case study in subsection 4.5, where the title as a type of background provision (i.e. the topic of the document) can serve as a necessary document-level context for a large portion of queries. It also improves the Q2D performance on all the datasets for ColBERTv2 by 2.5% nDCG@10 on average. However, all these three types of document summaries harm the performance on the Q2P task of Genomics, implying that the document summary possibly interferes with locating the relevant passage in some cases. We also observe a dramatic Q2P decrease on MIRACL with Lead. + Neural MaxP. We suspect that this is because of the annotation guide of MIRACL subsection 2.1, which deliberately avoids questions that can be answered by the first paragraphs. All these trends above are consistent among different neural retrievers. **Pooling** We find pooling over the passage representations can only improve the Q2P performance marginally, with Mean Pooling being the best. The poor performance on the Q2D task shows that these pooling methods fail at representing long documents for retrieval tasks with a single pooled vector. It also explains the poor Q2P performance in general. 
**Hybrid** On Q2P, BM25 Passage + Neural MaxP and BM25 Doc. + Neural MaxP with ColBERTv2 achieve the best/second-best overall performance, respectively, improving the baseline approach Neural MaxP by 1.2% and 1.9% nDCG@10 in average, respectively. On Q2D, BM25 Doc. + Neural MaxP achieves the best overall performance on all the datasets except Natural Questions. The largest improvement comes to MIRACL by 8.6% nDCG@10 with RetroMAE. We find the MaxP approach are much better than the FirstP approach, which is in line with Xiong et al. (2021). These observations are consistent among different neural retrievers. ### Broadcasting Passage Relevance to Document Relevance As an alternative to using different strategies between the Q2C and Q2D tasks, broadcasting the passage relevance to the document relevance can also perform the Q2D task. In detail, given a top-\(K\) passage ranking of \(s(q,p_{1}),s(q,p_{2}),...,s(q,p_{K})\) for \(q\) and passages \(\{p_{i}\}_{i=1}^{K}\), the query-document relevance can be computed as \(s(q,d_{i})=s(q,p_{i})\), where \(d_{i}\) is the belonging document of \(p_{i}\). We compare the performance with or without broadcasting. The results are shown in Table 3. We find broadcasting can improve BM25, BM25 Doc. + BM25 Passage and BM25 + doc2query by 2.6 to 3.9 nDCG@10 points on average. This is intuitive as the Q2D tasks in DAPR are labeled by exactly broadcasting the passage judgment to its belonging document (cf. subsection 2.1). Interestingly, on the contrary, broadcasting harms the performance of BM25 Doc. + Neural MaxP on all the datasets by up to 2.6 nDCG@10 points (on MIRACL). ### Other Fusion Methods Besides the convex combination mentioned in subsection 3.2, there are also other fusion methods for the hybrid search. Reciprocal Rank Fusion (RRF) [12] fuses the reciprocal ranks of the candidates: \[s_{\rm{RRF}}(q,c)=\frac{1}{\eta+\pi_{\rm{BM25}}(q,c)}+\frac{1}{\eta+\pi_{\rm{ neural}}(q,c)},\] \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline **Group** & **Method** & \multicolumn{6}{c|}{**Q2P (Query-to-Passage)**} & \multicolumn{6}{c|}{**Q2P (Query-to-Document)**} \\ \cline{3-11} & **MS.** & **NO.** & **COLI.** & **Geno.** & **MRLA.** & **Avg.** & **MS.** & **NO.** & **COLI.** & **Gene.** & **MRLA.** & **Avg.** \\ \hline \multirow{3}{*}{BM25} & BM25 & 23.5 & 24.1 & 28.8 & 32.8 & 28.7 & 27.6 & 27.2 & 50.9 & 27.5 & 43.5 & 43.1 & 38.4 \\ & BM25 Doc + BM25 Passage & 23.6 & 25.1 & 29.1 & 33.3 & 30.0 & 28.2 & 27.2 & 50.9 & 27.5 & 43.5 & 43.1 & 38.4 \\ & BM25 + Doc2Query & 27.0 & 30.5 & 29.0 & 31.0 & 34.0 & 30.3 & 31.6 & 60.0 & 26.0 & 48.1 & 49.3 & 43.0 \\ \hline \multicolumn{11}{|c|}{_ReproMME_} \\ \hline \multirow{3}{*}{Doc.} & Neural MaxP & 33.5 & 45.3 & 22.2 & 30.6 & 45.6 & 35.5 & 39.9 & 71.6 & 31.1 & 42.6 & 60.5 & 49.1 \\ & Lead + Neural MaxP & 25.7 & 33.8 & 15.2 & 27.1 & 24.6 & 25.3 & 40.1 & 79.1 & 24.1 & 42.1 & 63.5 & 49.8 \\ & Tittle + Neural MaxP & 33.1 & 52.3 & 22.2 & 27.1 & 51.0 & 37.1 & 41.0 & 77.1 & 31.1 & 43.5 & 67.4 & 52.0 \\ & TopicRank + Neural MaxP & 30.9 & 46.0 & 19.2 & 30.3 & 42.9 & 33.9 & 38.7 & 73.1 & 27.9 & 44.8 & 60.0 & 48.9 \\ \hline \multirow{3}{*}{Pooling} & Max Pooling & 33.9 & 46.6 & 22.5 & 30.8 & 45.0 & 35.8 & 24.3 & 22.3 & 9.3 & 20.3 & 3.4 & 15.9 \\ & Mean Pooling & 33.9 & 46.8 & 22.6 & 31.4 & 45.4 & 36.0 & 28.9 & 46.8 & 17.1 & 22.0 & 10.5 & 25.1 \\ & Sum Pooling & 33.5 & 45.3 & 22.2 & 30.6 & 45.6 & 35.4 & 0.0 & 0.0 & 0.0 & 0.1 & 0.0 & 0.0 \\ \hline \multirow{3}{*}{Hybrid} & BM25 Doc./Passage + Neural FirstP & 34.3 & 36.9 & 
11.4 & 30.8 & 38.1 & 30.3 & 43.5 & 72.6 & 11.3 & 35.1 & 56.4 & 43.8 \\ & BM25 Passage + Neural MaxP & 36.8 & 45.2 & 26.2 & 35.4 & 53.2 & 39.4 & 43.3 & 72.3 & 35.5 & 47.1 & 67.3 & 53.1 \\ & BM25 Doc. + Neural MaxP & 34.9 & 47.1 & 23.4 & 33.0 & 50.6 & 37.8 & 42.6 & 75.0 & 34.3 & 51.0 & 69.1 & 54.4 \\ \hline \multicolumn{11}{|c|}{_SPADE2_} \\ \hline \multirow{3}{*}{Doc.} & Neural MaxP & 35.4 & 46.7 & 29.4 & 39.2 & 50.8 & 40.3 & 41.6 & 72.1 & 38.2 & 48.6 & 63.5 & 52.8 \\ & Lead + Neural MaxP & 26.6 & 36.0 & 22.3 & 28.9 & 26.3 & 28.0 & 42.1 & 80.7 & 32.6 & 47.6 & 65.5 & 53.7 \\ \cline{1-1} & Tittle + Neural MaxP & 34.6 & 53.9 & 29.4 & 31.5 & 53.4 & 40.6 & 43.2 & 78.2 & 38.2 & 48.7 & 68.8 & 55.4 \\ \cline{1-1} & TopicRank + Neural MaxP & 32.6 & 48.6 & 26.4 & 32.7 & 48.9 & 37.8 & 40.9 & 75.2 & 35.7 & 48.3 & 64.8 & 52.9 \\ \hline \multirow{3}{*}{Pooling} & Max Pooling & 35.4 & 46.7 & 29.4 & 39.2 & 50.8 & 40.3 & 26.6 & 37.5 & 47.1 & 16.6 & 34.4 & 19.0 \\ \cline{1-1} & Mean Pooling & 35.9 & 49.0 & 29.9 & 39.4 & 51.1 & 41.1 & 30.4 & 50.2 & 25.2 & 28.6 & 12.6 & 29.4 \\ \cline{1-1} & Sum Pooling & 35.4 & 46.7 & 29.3 & 39.2 & 50.8 & 40.3 & 0.1 & 7.1 & 0.7 & 2.6 & 7.1 & 3.5 \\ \hline \multirow{3}{*}{Hybrid} & BM25 Doc./Passage + Neural FirstP & 35.1 & 36.8 & 7.9 & 31.7 & 37.0 & 29.7 & 44.6 & 72.9 & 12.3 & 38.5 & 57.3 & 45.1 \\ \cline{1-1} & BM25 Passage + Neural MaxP & 37.5 & 45.3 & **30.3** & **43.5** & 57.1 & 42.7 & 44.0 & 71.6 & 39.3 & 52.0 & 69.7 & 55.3 \\ \cline{1-1} & BM25 Doc. + Neural MaxP & 36.4 & 48.6 & 28.9 & 42.5 & 54.7 & 42.2 & 43.7 & 75.2 & 38.8 & **53.8** & **71.1** & 56.5 \\ \hline \multicolumn{11}{|c|}{_CoBERIV2_} \\ \hline \multirow{3}{*}{Doc.} & Neural MaxP & 38.8 & 48.2 & 28.5 & 40.7 & 51.6 & 41.6 & 44.7 & 72.9 & 38.2 & 49.5 & 64.0 & 53.9 \\ \cline{1-1} & Lead + Neural MaxP & 31.6 & 38.4 & 24.6 & 28.6 & 27.1 & 30.1 & 45.3 & **81.4** & 35.5 & 50.1 & 68.4 & 56.2 \\ \cline{1-1} & Tittle + Neural MaxP & 37.0 & **55.9** & 28.5 & 29.3 & 53.5 & 41.1 & **46.0** & 78.2 & 38.2 & 51.0 & 68.7 & 56.4 \\ \cline{1-1} & TopicRank + Neural MaxP & 37.0 & 50.5 & 28.0 & 33.3 & 49.9 & 39.7 & 44.4 & 75.5 & 37.1 & 49.3 & 65.5 & 54.4 \\ \hline \multirow{3}{*}{Hybrid} & BM25 Doc./Passage + Neural where \(\pi(q,c)\) is the rank of \(c\) for query \(q\) and \(\eta\) is a hyper-parameter which is set to 60 by default. Another intuitive method is Reranking as Fusion (RF) where the neural ranking is reranked by BM25 or vice versa. The results with ColBERTv2 are shown in Table 4. We find both RRF and RF yield worse results or on-par results than the convex combination, which is in line with the results in Bruch et al. (2023) for the normal passage-retrieval tasks. Interestingly, the sub-optimal results of RF imply that a trivial cascade system cannot work well on DAPR. ### Fusion-Weighting Generalization In our experiments, the fusion weight is tuned on MS MARCO and transferred directly to other datasets. We are interested in how good such fusion-weight generalization is. The results with ColBERTv2 are shown in Figure 3. We find the fusion weight can generalize much better on BM25 Doc. + Neural MaxP (\(\sigma_{\alpha}=0.07\)) than BM25 Doc./Passage + Neural FirstP (\(\sigma_{\alpha}=0.25\)) and BM25 Passage + Neural MaxP (\(\sigma_{\alpha}=0.17\)), where \(\sigma_{\alpha}\) is the std. value of the best fusion weights on different datasets. As an alternative, Bruch et al. (2023) shows that using the theoretical minimum to replace the running minimum in Equation 1 can help reduce the variance. 
### Document-Context Categories

We conduct an error analysis on the retrieval results to better understand how document-level context contributes to the passage-retrieval task. Since we are mainly interested in the challenging queries which rely on document-level context, the best passage retriever without modeling such context, i.e. BM25 Passage + Neural MaxP (ColBERTv2), is investigated. To further filter out uninteresting cases, following Wang et al. (2021), we tune the fusion weight for each query to make the system achieve its best nDCG@10 performance on that query. We collect the test queries on which the retriever achieves 0.0 nDCG@10 (\(\geq\)25% of the total queries) and sample 100 examples from them (Footnote 11: they correspond to 110 query-passage pairs). We then manually bin them, along with their corresponding gold-relevant passage, into five categories:

(1) **Self-contained:** the gold-relevant passage can answer the query already, without any document-level context;

(2) **Coreference resolution:** key coreference information within the gold-relevant passage needs to be resolved by certain document-level context;

(3) **Background provision:** the gold-relevant passage can only answer the query by knowing the background topic (usually the title) of the document;

(4) **Multi-hop reasoning:** the inference path which connects the entities in the query and the gold-relevant passage includes other nodes in the document-level context;

(5) **Relative position:** the gold-relevant passage can only answer the query by knowing its relative position in the document, e.g. the end of a movie plot.

We carry out the analysis on Natural Questions, since it exhibits high document-context contribution (cf. subsection 2.3). The analysis results are shown in Table 5. We find half of the errors correspond to cases where it is necessary to utilize the document-level context to retrieve the gold-relevant passages. Among these cases, background provision (20.0%), coreference resolution (14.5%), and multi-hop reasoning (8.2%) account for the largest share. The cases of relative position are the fewest (1.8%).

\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
**Method** & **Fusion** & **Q2P** & **Q2D** \\
\hline
\multirow{4}{*}{BM25 Doc./Passage + Neural FirstP} & Convex combination & 28.1 & 43.5 \\
 & RF (BM25\(\rightarrow\)Neural) & 16.9 & 40.7 \\
 & RF (Neural\(\rightarrow\)BM25) & – & 38.5 \\
 & RRF & 29.1 & 44.3 \\
\hline
\multirow{4}{*}{BM25 Passage + Neural MaxP} & Convex combination & 43.5 & 55.5 \\
 & RF (BM25\(\rightarrow\)Neural) & 41.9 & 54.0 \\
 & RF (Neural\(\rightarrow\)BM25) & 29.5 & 43.5 \\
 & RRF & 39.4 & 52.7 \\
\hline
\multirow{4}{*}{BM25 Doc. + Neural MaxP} & Convex combination & 42.8 & 56.7 \\
 & RF (BM25\(\rightarrow\)Neural) & – & 53.8 \\
 & RF (Neural\(\rightarrow\)BM25) & 13.1 & 40.2 \\
 & RRF & 34.1 & 51.9 \\
\hline
\end{tabular}
\end{table} Table 4: Performance in nDCG@10 of different fusion methods with ColBERTv2. The results are averaged over the 5 datasets. BM25\(\rightarrow\)Neural means reranking the BM25 results with a neural retriever, and vice versa for Neural\(\rightarrow\)BM25. The numbers for the convex-combination fusion come from Table 2.

Figure 3: Best fusion weight on different datasets with ColBERTv2. The x-axis is the fusion weight on the neural-retriever side. The small triangles indicate the peak points.
Table 5: Examples of the five document-context categories (columns: Category, %, Query, Gold-relevant passage, Retrieved (top-1)).

## 5 Related Work

**Neural passage retrieval** mainly maps queries and passages into vector representations, modeling query-passage relevance as the distance between the corresponding vectors. Single-vector dense retrieval simply embeds the input text into a single fixed-sized vector and computes cosine similarity or dot-product between the query and the passage vectors (Karpukhin et al., 2020; Xiao et al., 2022). Dense retrieval is limited in expressivity, as its vector representation is usually low-dimensional (e.g. 768D) due to efficiency concerns. Sparse retrieval improves expressivity by mapping the input text into a long sparse vector (usually vocabulary-sized) and computes the dot-product as query-passage relevance (Mallia et al., 2021; Formal et al., 2021). Alternatively, other work like poly-encoder (Humeau et al., 2020) and late interaction (Santhanam et al., 2022) represents the input text with multiple vectors and aggregates the vector distances between these multiple-vector representations. All of these neural retrievers can only accept short texts, e.g. 512 tokens, limiting their application scenarios. In this work, we solve this by introducing the document-level context to these passage retrievers.

**Long-document retrieval** One simple strategy to extend the passage retriever is taking the maximum of the passage relevance within its belonging document as the document relevance (named MaxP), or encoding only the first passage of the document (named FirstP) (Xiong et al., 2021). This strategy is sub-optimal because it still ignores document-level context. Xiong et al. (2022) compare different long-range attention modules on a document-retrieval task for Transformer-based neural networks (Vaswani et al., 2017). Chen et al. (2022) propose a hierarchical neural network and show it can outperform the MaxP-based approaches. None of these previous works studies how to retrieve passages while considering their document-level context, as in DAPR.

**Hybrid retrieval** fuses candidate rankings returned from different retrieval systems (usually BM25 and a neural retriever) into one single ranking (Bruch et al., 2023). The fusion methods can be convex combination (Wang et al., 2021) or fusing reciprocal ranks (Cormack et al., 2009). In this work, we comprehensively study the contribution of BM25 document retrieval to our DAPR tasks by applying the hybrid retrieval methods.
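For concreteness, the MaxP strategy and the broadcasting step discussed in this paper can be sketched as follows; `passage_scores`, `passage_ranking` and `doc_of` are hypothetical names, and this is a schematic illustration rather than a reference implementation.

```python
# Sketch of (i) MaxP document scoring and (ii) broadcasting a passage ranking
# to a document ranking. doc_of maps each passage id to its document id.
def doc_scores_maxp(passage_scores, doc_of):
    """MaxP: a document scores as its best-scoring passage."""
    docs = {}
    for p, s in passage_scores.items():
        d = doc_of[p]
        docs[d] = max(docs.get(d, float("-inf")), s)
    return docs

def broadcast(passage_ranking, doc_of):
    """Broadcast: keep the first (highest-ranked) occurrence of each document."""
    seen, docs = set(), []
    for p in passage_ranking:
        d = doc_of[p]
        if d not in seen:
            seen.add(d)
            docs.append(d)
    return docs
```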
## 6 Conclusion

We present DAPR, a benchmark for document-aware passage retrieval. In DAPR, the retrievers need to rely on the document-level context for retrieving the relevant passages. Our empirical results show that simple yet intuitive approaches like prepending document summaries and cascade ranking perform poorly in many cases; the best-performing approaches, based on hybrid retrieval, struggle to improve on the DAPR tasks while achieving significant improvements on the document-retrieval tasks. This leaves substantial room for future studies on solving the DAPR tasks.

## 7 Limitations

In this work, we focus on extending neural passage retrievers with document-level context. However, the potential of other model architectures that support long-range text encoding for our new task is not studied. We leave this to future work.

## 8 Ethical Concerns

All the datasets in DAPR are publicly available. We provide the processing scripts to convert the original data into our format.
2302.06401
Nonsingular black holes from conformal symmetries
We derive the form of the metric for static, nonsingular black holes with a de Sitter core, representing a deformation of the Schwarzschild solution, by assuming that the gravitational sources describe a flow between two conformal points, at small and great distances. The resulting black-hole metric turns out to be a particular case of the Fan $\&$ Wang metric, whose parameters have been recently constrained by using the data of the S$2$ star orbits around the galactic centre SgrA$^\ast$.
Mariano Cadoni, Andrea Pierfrancesco Sanna
2023-02-13T14:40:11Z
http://arxiv.org/abs/2302.06401v2
# Nonsingular black holes from conformal symmetries ###### Abstract We derive the form of the metric for static, nonsingular black holes with a de Sitter core, representing a deformation of the Schwarzschild solution, by assuming that the gravitational sources describe a flow between two conformal points, at small and great distances. The resulting black-hole metric turns out to be a particular case of the Fan & Wang metric, whose parameters have been recently constrained by using the data of the S2 star orbits around the galactic centre SgrA\({}^{*}\). ## I Introduction In recent times there has been renewed interest in asymptotically flat (AF), nonsingular black-hole solutions, which deform the Schwarzschild solution at subleading order [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Among them, the most interesting class of solutions is represented by nonsingular black holes with a de Sitter (dS) core [8]. These black-hole solutions are of interest for several reasons. Firstly, they allow one to circumvent Penrose's theorem [12] by removing the classical singularity at \(r=0\). Secondly, they are solutions of Einstein's field equations sourced by an anisotropic fluid, effectively encoding the deviations responsible for the smearing of the singularity. These deviations are described by an external length scale \(\ell\), which represents an additional "hair" of the black hole. An intriguing possibility is that it could also be of superplanckian origin [8]. Thirdly, they can play the role of black-hole "mimickers", i.e., they are indistinguishable from the Schwarzschild solution at great distances, but could nonetheless lead to observable deviations from the latter, for instance in the orbits of massive particles and photons and in the gravitational-wave spectrum (see Refs. [8; 10] and references therein). Last but not least, they could be very useful in solving the information puzzle arising during black-hole evaporation [13]. On the other hand, such models suffer from a strong limitation, which is purely theoretical. We can obtain them using general relativity (GR) with anisotropic fluids as sources, but the underlying microscopic physics is mostly unknown. This difficulty becomes particularly severe in those cases in which the deformations from the usual Schwarzschild solution have superplanckian origin [8]. The consequence is that we have a huge degeneracy, giving rise to a broad class of metric solutions, which all describe nonsingular black holes with a dS core. The coarse-grained description in terms of the anisotropic fluid is not stringent. The equation of state (EOS) relating the radial pressure with the energy density, \(p_{\parallel}=p_{\parallel}(\rho)\), and the density profile \(\rho(r)\), interpolating between small and large \(r\), are very weakly constrained. These difficulties probably hint at the fact that the microscopic explanation of this kind of solution cannot be found by merely looking at GR, which only allows for an effective description of the sources in terms of anisotropic fluids. What is needed, then, is a general guiding principle to select the physically relevant solutions. In this paper, we will use conformal symmetries, including Lorentz boosts, as the guiding principle to remove the above-mentioned degeneracy of solutions. There is striking evidence that conformal symmetry could be a crucial feature of any quantum theory of gravity.
It is the pillar of the AdS/CFT correspondence [14; 15] and is also crucial for most microscopic derivations of the Bekenstein-Hawking black-hole entropy [16; 17; 18; 19; 20]. We will select the EOS for the anisotropic fluid using invariance under rotations and radial Lorentz boosts. In order to fix the density profile \(\rho(r)\), we will use conformal symmetries, motivated by the role played by the latter in black-hole physics. In particular, we will assume that \(\rho(r)\) describes the flow of matter fields between two conformal points, near \(r=0\) and \(r\rightarrow\infty\). We will show that these requirements select a specific spacetime metric, i.e., a particular case of the Fan & Wang metric [4], which represents the one having the strongest subleading deviations from Schwarzschild at infinity and which was recently constrained by S2 observational data [10]. The present paper is organized as follows. In Section II, we will briefly review some basic features of nonsingular black holes with a dS core and we will fix the EOS using Lorentz symmetries. In Section III, we discuss the conformal symmetries we use to constrain the density profile and derive the form of the metric. In Section IV, we give a simple example for the source in terms of nonlinear electrodynamics. Finally, in Section V we summarize our results. ## II Nonsingular black holes with a de Sitter core Due to Birkhoff's theorem, any nonstandard GR black-hole solution has to be obtained from Einstein's equations sourced by a nonzero stress-energy tensor. The most general one is that of an anisotropic fluid, which has been widely adopted to effectively parametrize several different effects and deviations from GR phenomenology, both at small and cosmological scales (for an incomplete list, see, e.g., Refs. [5; 6; 8; 11; 21; 22; 23; 24; 25; 26; 27] and references therein). This fluid is described by the stress-energy tensor1 Footnote 1: Throughout the entire paper, we will use natural units in which \(\hbar=c=1\), and we will use \(G\) and \(\ell_{\rm P}^{2}\) interchangeably. \[T_{\mu\nu}=(\rho+p_{\perp})\,u_{\mu}u_{\nu}+p_{\perp}g_{\mu\nu}+\left(p_{\parallel}-p_{\perp}\right)w_{\mu}w_{\nu}\,, \tag{1}\] where \(\rho\), \(p_{\parallel}\) and \(p_{\perp}\) are the energy density and the radial and perpendicular components of the pressure, respectively, while \(u_{\mu}\) and \(w_{\mu}\) are time-like and space-like 4-vectors, respectively, satisfying the relations \(u^{\mu}u_{\mu}=-w^{\mu}w_{\mu}=-1\). A particular choice of the EOS \(p_{\parallel}=p_{\parallel}(\rho)\) determines and characterizes the solutions, whereas \(p_{\perp}\) is determined by the covariant conservation of the stress-energy tensor (see, e.g., Refs. [1; 2; 3; 4; 5; 8; 11; 28; 29; 30; 31] and references therein). Requiring symmetry properties of the fluid constrains the free functions \(\rho\) and \(p_{\parallel}\) in Eq. (1). In the following, we focus on fluids whose dynamic equations are covariant under rotations, i.e., we require spherical symmetry and invariance under radial Lorentz boosts. The physical reason behind this choice is that a stress-energy tensor satisfying these properties has an infinite set of comoving reference frames and it is, therefore, identified as describing a well-defined spherically symmetric and Lorentz invariant vacuum [28; 32]. Its structure reads \[T_{\theta}^{\theta}=T_{\phi}^{\phi}\,; \tag{2a}\] \[T_{t}^{t}=T_{r}^{r}\,. \tag{2b}\] Equation (2b), in particular, fixes the EOS to be \[p_{\parallel}=-\rho\,. \tag{3}\]
Notice that, apart from being dictated by symmetry arguments, this EOS is quite natural, as well as simple, in an emergent gravity framework (see, e.g., Refs. [8; 33]). Additionally, it appears in several physical contexts, such as the simplest form of dark energy (the cosmological constant, in the isotropic case), exotic compact objects [30; 34] or solutions of GR coupled with nonlinear electrodynamics [35; 36]. \(p_{\perp}\), instead, is entirely determined by the covariant conservation of the stress-energy tensor \[p_{\perp}=-\rho-\frac{r}{2}\rho^{\prime}\,. \tag{4}\] With Eq. (3), the general solution of Einstein's equations, sourced by Eq. (1) and written in Schwarzschild coordinates \((t,r,\theta,\varphi)\), reads \[\mathrm{d}s^{2}=-f(r)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{f(r)}+r^{2}\mathrm{d}\Omega^{2}\,; \tag{5a}\] \[f(r)=1-\frac{2Gm(r)}{r}\,,\quad m(r)=4\pi\int_{0}^{r}\mathrm{d}\tilde{r}\,\tilde{r}^{2}\,\rho(\tilde{r})\,, \tag{5b}\] where \(m(r)\) is the Misner-Sharp (MS) mass of the system. Equation (3) fixes \(p_{\parallel}\) and \(p_{\perp}\), but leaves the density profile and, hence, the form of \(m(r)\), completely unconstrained. On the other hand, the behavior of \(\rho\) at \(r=0\) and \(r\to\infty\) can be determined by stringent physical considerations. In light of the particular form of the EOS (3) and requiring the absence of spacetime singularities, we expect that, whenever the matter contribution is negligible (at \(r\sim 0\) and \(r\to\infty\)), the source of gravity is given by an approximately isotropic fluid, which gives \(\rho\sim\) constant using Eqs. (3) and (4). Assuming the validity of the weak energy condition, we have \(\rho\geq 0\). From Eq. (3), it follows that, in the core, at \(r\sim 0\), the spacetime behaves as a dS spacetime 2. The density reads Footnote 2: See, however, Ref. [5] for a model with an asymptotic Minkowski core. \[\rho\sim\frac{1}{4\pi\ell_{\rm P}^{2}\,L^{2}}\,, \tag{6}\] where \(L\) represents the dS length in the core. This behavior at the center breaks the strong energy condition, allowing one to circumvent Penrose's theorem and to replace the classical singularity region with a completely regular spacetime [1; 2; 3; 4; 8; 28]. Equation (6) constrains the MS mass (5b) to behave extensively, as \(m(r)\sim r^{3}/\ell_{\rm P}^{2}L^{2}\), near \(r=0\). If one considers the cosmological regime, dominated by a cosmological constant at large \(r\), one can still have \(\rho\sim\text{constant}\neq 0\), and hence a dS behavior. Since we are considering isolated bodies, we discard dS asymptotics, assuming that the density profile decays sufficiently rapidly to zero at \(r\to\infty\), so that we have AF solutions. Here, \(m(r)\) reduces to a constant \(M\). We also note that, at infinity, \(p_{\parallel}=p_{\perp}\to 0\) according to Eq. (4). Moreover, \(M\) appears as an integration constant in Eq. (5b), so that imposing a Schwarzschild behavior for the solution at \(r\to\infty\) implies \[\rho\sim r^{c}\,,\quad\text{with}\quad c<-3\,. \tag{7}\] Thus, our physically motivated "boundary conditions" at \(r=0\) and \(r\to\infty\) imply that the function \(\rho\) interpolates between the constant value (6) near \(r=0\) and the power-law behavior (7) at \(r\to\infty\). A major drawback of this construction, however, is that, as a consequence of the freedom in choosing both the exponent \(c\) in Eq. (7) and the interpolating density profile \(\rho(r)\), the model is not unique: there exists an infinite class of models which realize the same flow [8]. Particularly relevant examples, discussed in the literature, are the Fan & Wang [4], Bardeen [1] and Hayward [2] nonsingular black holes, and black holes with Gaussian cores [37] (see also Ref. [8] and references therein).
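As a concrete illustration of the construction (5b) and of the boundary conditions (6) and (7), the following sympy sketch integrates one member of this class, a Gaussian-core profile of the type just mentioned; the specific normalisation is an illustrative choice and is not part of the analysis below.

```python
# Sketch: from a density profile to the Misner-Sharp mass, Eq. (5b).
# The Gaussian-core profile and its normalisation are illustrative.
import sympy as sp

r, s, ell, M = sp.symbols("r s ell M", positive=True)
rho = 3 * M * sp.exp(-(s / ell) ** 3) / (4 * sp.pi * ell**3)

m = sp.simplify(4 * sp.pi * sp.integrate(rho * s**2, (s, 0, r)))  # Eq. (5b)
print(m)                        # M*(1 - exp(-r**3/ell**3))
print(sp.series(m, r, 0, 4))    # m ~ M r^3 / ell^3: extensive dS core, cf. Eq. (6)
print(sp.limit(m, r, sp.oo))    # -> M: finite mass, asymptotically flat, cf. Eq. (7)
```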
In the following, we will see that requiring some conformal symmetries and scaling properties for the density profile, and for the field generating this energy density, will select a particular metric belonging to this general class. ## III Conformal symmetries and scalar field description Although GR is not a conformal field theory (see, however, Ref. [38]), it is known that these symmetries could play an important role for particular spacetime backgrounds, like, e.g., the anti-de Sitter (or also the dS) spacetime, for which they take the form of holographic correspondences [14; 15; 39]. Moreover, there is some evidence that conformal symmetry could regularize the short-distance behavior of gravity, by generating a UV fixed point, which is at the basis of the asymptotic safety scenario [40]. Conformal symmetry also plays an important role for black holes, in particular in the description of their near-horizon physics. It has been widely used to give a microscopic derivation of the Bekenstein-Hawking entropy [16; 17; 18; 20]. Moreover, extremal black-hole background geometries (e.g., BPS states) typically describe the flow between different conformal points, or between a conformal point and a flat spacetime [41; 42]. Finally, conformal symmetries are also very important for nonsingular black holes with a dS core. In fact, the 4D dS spacetime is endowed with a scale invariance [38] and, in particular, invariance under transformations induced by the conformal group \(\mathrm{SO}(2,4)\)[43]3. For the nonsingular black holes under consideration here, this scale symmetry holds only in the dS core and it is broken at greater distances, when localized matter begins to dominate [8]. On the other hand, the presence of the dS core implies that, for some values of the hair \(\ell\), the black hole necessarily has two horizons, which, for a critical value of \(\ell\), merge into a single one. This produces an extremal configuration, whose near-horizon geometry has an AdS\({}_{2}\) factor, with an associated dual, near-horizon conformal symmetry [8]. Footnote 3: This becomes evident when embedding dS spacetime in \(\mathbb{R}^{(1,4)}\) and writing it in the flat slicing. These considerations strongly suggest that the density \(\rho(r)\) sourcing our black hole could generate a flow between a conformal point near \(r=0\), described by the dS spacetime, and some conformal field theory in the \(r\to\infty\) region. The scale invariance is broken during the flow by the nucleation of a local mass \(M\), with a related generation of an intermediate scale \(\ell\)[8]. The latter represents an additional "hair" of these models, and allows one to realize the interpolation between the small, \(r\sim 0\), and the large, \(r\to\infty\), scales. Lacking a fundamental microscopic description of our nonsingular black holes, we are unable to exactly identify the field content of the conformal matter sourcing the black hole in the \(r\to\infty\) region. However, scale symmetry strongly constrains the form of \(\rho\) in this regime. It must transform with definite weight \(\Delta\) under dilatations \(r\to\omega r\): \(\rho(\omega r)=\omega^{\Delta}\rho(r)\).
For conformal field theories, the scaling dimension \(\Delta\) must be equal to the engineering dimension, \(\Delta=-4\), in such a way that the theory does not contain dimensional constants. This fixes the exponent \(c\) in the asymptotic behavior (7), so that we have \[\rho(r)=\frac{\alpha}{4\pi}\frac{1}{r^{4}}\,, \tag{8}\] where \(\alpha\) is a dimensionless constant. This scaling is typical of the energy density of conformal matter fields in four dimensions [44]. Naively, this density characterizes a system of \(N\) quanta inside a sphere of radius \(r\). Each mode has a typical Compton energy \(E\sim r^{-1}\), so that the total energy density is \(\rho\sim N/(r\cdot r^{3})=N/r^{4}\). The density profile (8) diverges at \(r=0\). This is due to modes with arbitrarily short wavelength contributing to the spectrum. This singularity is, however, not physical, because the density \(\rho\) must interpolate between the constant value (6) at \(r=0\) and Eq. (8) at great distances. The simplest way to regularize this divergent behavior is through a translation of the radial coordinate, \(r\to r+\ell\), which moves the singularity to nonphysical negative values of the radial coordinate \(r\): \[\rho(r)=\frac{\alpha}{4\pi}\frac{1}{(r+\ell)^{4}}\,. \tag{9}\] This introduces a length scale related to the local mass \(M\) and to the dS length \(L\), which is the physical source of the breaking of the scale symmetry. Evaluating Eq. (9) at \(r=0\), comparing it with Eq. (6) and considering the Schwarzschild limit \(m(r)\to M\) as \(r\to\infty\), we can easily identify the dimensionless constant \(\alpha\) and write the hair \(\ell\) in terms of the Schwarzschild radius \(R_{\mathrm{S}}=2\ell_{\mathrm{P}}^{2}M\) and of \(L\): \[\alpha=\frac{\ell^{4}}{\ell_{\mathrm{P}}^{2}\,L^{2}}\,,\quad\ell\sim R_{\mathrm{S}}^{1/3}\,L^{2/3}\,. \tag{10}\] The second equation, in particular, gives a universal scaling for every geometry interpolating between the dS spacetime at \(r=0\) and the Schwarzschild spacetime at \(r\to\infty\) (see Ref. [8]). Specifically, Eq. (10) represents a universal relation between \(\ell\) and the black-hole mass \(M\). One can now easily find the mass function, using Eqs. (5b) and (9), \[m(r)=\frac{Mr^{3}}{(r+\ell)^{3}}\,, \tag{11}\] which gives a particular case of the Fan & Wang model, investigated in detail in Ref. [4]. This model is characterized by strong, order \(1/r^{2}\) corrections to the Schwarzschild solution at infinity. Additionally, the parameter \(\ell\) in this model was recently constrained by S2 observational data [10].
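The horizon structure anticipated above (two horizons merging at a critical value of the hair \(\ell\)) can be read off numerically from the metric function associated with (11); the following is a small numerical sketch in units \(G=M=1\), where \(f=f^{\prime}=0\) gives the critical value \(\ell=8/27\). It is an illustration, not part of the derivation.

```python
# Sketch: horizon structure of f(r) = 1 - 2 G M r^2 / (r + ell)^3, G = M = 1.
import numpy as np

def f(r, ell):
    return 1.0 - 2.0 * r**2 / (r + ell) ** 3

r = np.linspace(1e-6, 6.0, 200_000)
for ell in (0.10, 8.0 / 27.0, 0.50):   # sub-critical, critical, super-critical hair
    vals = f(r, ell)
    zeros = r[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]  # sign changes of f
    print(f"ell = {ell:.3f}: min f = {vals.min():+.4f}, zeros near r =",
          np.round(zeros, 3))
# min f < 0: two horizons; min f ~ 0: extremal (merged) horizon; min f > 0: none.
```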
In the next subsection, we will show that the result (9), which is dictated by an Occam's razor argument, can be derived using, again, conformal symmetry arguments. ### Scalar field description The regularization proposed above is obviously not unique. A different choice corresponds to different flows between the \(r=0\) and \(r\rightarrow\infty\) points, and to different patterns of the symmetry breaking. The most general solution compatible with the boundary conditions (6) and (8), and with an analytic behavior at \(r\to\infty\), is \(\rho\propto(P_{n})^{\gamma}/(P_{m})^{\delta}\), with \(P_{n}\), \(P_{m}\) polynomials of degrees \(n\) and \(m\), respectively, and \(n\gamma-m\delta=-4\) (to guarantee the scaling (8) at great distances). A remarkable particular case of this formula is \[\rho(r)=\frac{\alpha}{4\pi}\frac{1}{\left(r^{\beta}+\ell^{\beta}\right)^{4/\beta}}\,, \tag{12}\] which gives, once used in Eqs. (5a) and (5b), for \(\beta=1,\,2,\,3\), the Fan & Wang, Bardeen and Hayward black holes, respectively. Let us assume, for simplicity, that there is a weak-field regime in which the flow can be described by a scalar field \(\Phi\). This will be sourced by the density \(\rho\) and, in a static and spherically symmetric background, it will satisfy the Poisson equation \[\nabla^{2}\Phi=4\pi\rho\,. \tag{13}\] One can now easily find, using Eqs. (6) and (8), the asymptotic solutions of Eq. (13) near \(r=0\) and \(r\to\infty\), \[\Phi(r)\sim\,\begin{cases}&r^{2}/\ell^{2}\,,\quad\text{for}\quad r\sim 0\\ &r^{-2}\,,\qquad\text{for}\quad r\to\infty\end{cases}\,, \tag{14}\] where we have neglected the constant and \(1/r\) terms in the \(r\to\infty\) behavior, which are related to the presence of the mass \(M\). As expected in the flow from \(r\to\infty\) to \(r=0\), the scaling dimension of \(\Phi\) changes from its engineering one, \(\Delta=-2\), to \(\Delta=2\), which is associated with a constant \(\rho\). An important feature of the two conformal points, which is immediately evident in Eq. (14), is that they are mapped one into the other by the inversion \[r\to\frac{\ell^{2}}{r}\,. \tag{15}\] Discrete symmetries, exchanging small and large radii, are common in string theory, where they are called \(T\)-dualities. They have already been used in the past to investigate nonsingular black holes [45; 46]. The inversion can be used in combination with translations to produce special conformal transformations, which, together with dilatations and translations, generate the conformal group (isomorphic to the \(\text{SL}(2,\mathbb{R})\) group), realized here in one dimension as \[r\to\omega\,r\,,\quad r\to\frac{r}{1-\nu r}\,,\quad r\to r+\sigma\,, \tag{16}\] where \(\omega\), \(\nu\), \(\sigma\) are the group parameters. A generic flow, for instance the one described by Eq. (12), will preserve neither the scaling behavior for \(\Phi\) nor the symmetry under inversion. However, we can select a particularly symmetric profile for \(\rho\), such that the solution for \(\Phi\) preserves at least part of the conformal symmetries, in particular the scaling with \(\Delta=2\) attained at the \(r=0\) conformal point. One can show that this happens if we choose the simple profile for \(\rho\) given by Eq. (9). Integrating the Poisson equation (13), we get \[\Phi(r)=\frac{\alpha}{6\,\ell^{2}}\frac{r^{2}}{(r+\ell)^{2}}\,. \tag{17}\] One can now check that the field \(\Phi\) (17) transforms as \[\Phi\to\omega^{2}\,\Phi \tag{18}\] under a conformal transformation of the form \[r\to\omega\frac{r}{1-\nu r}\,, \tag{19}\] with \(\omega\equiv 1+\nu\ell\), which represents the composition of a dilatation and a special conformal transformation. It is important to notice that Eq. (17) does not arise as the Newtonian limit of the full GR solution with the mass function (11). The EOS (3), indeed, prevents the weak-field limit from being performed together with the usual nonrelativistic limit, and a Newtonian fluid, with \(\rho\gg p_{\parallel}\), \(p_{\perp}\), from being considered. We can still perform a weak-field limit, which gives the Poisson equation, sourced however by the _active mass_ \(\rho+p_{\parallel}+2p_{\perp}\). Using Eqs. (3) and (4), together with the profile (12) (with \(\beta=1\)), yields the potential \(\tilde{\Phi}=-GMr^{2}/(r+\ell)^{3}\).
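Both statements, that (17) solves (13) with the profile (9) and that it transforms with weight two under (19), can be verified symbolically; the following sympy check is a sketch for the reader and both printed expressions simplify to zero.

```python
# Verify Eq. (17): it solves the Poisson equation (13) with the density (9)
# and scales with weight 2 under the transformation (19), omega = 1 + nu*ell.
import sympy as sp

r, ell, alpha, nu = sp.symbols("r ell alpha nu", positive=True)
rho = alpha / (4 * sp.pi * (r + ell) ** 4)              # Eq. (9)
Phi = alpha * r**2 / (6 * ell**2 * (r + ell) ** 2)      # Eq. (17)

laplacian = sp.diff(r**2 * sp.diff(Phi, r), r) / r**2   # radial Laplacian
print(sp.simplify(laplacian - 4 * sp.pi * rho))         # -> 0, i.e. Eq. (13) holds

omega = 1 + nu * ell
Phi_mapped = Phi.subs(r, omega * r / (1 - nu * r))      # Eq. (19)
print(sp.simplify(Phi_mapped - omega**2 * Phi))         # -> 0, i.e. Eq. (18)
```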
## IV Nonlinear electrodynamics It is interesting to note that the large-scale behavior \(r^{-4}\) of Eq. (12), and the related scale invariance, could be explained in terms of the embedding of these regular models as solutions of GR coupled with nonlinear electrodynamics [35]. The action for such a theory is \[\mathcal{S}=\int\mathrm{d}^{4}x\,\sqrt{-g}\,\left[\frac{R}{16\pi G}-\mathscr{L}(\mathcal{F})\right]\,, \tag{20}\] where \(R\) is the Ricci scalar, while \(\mathcal{F}=\frac{1}{4}F^{\mu\nu}F_{\mu\nu}\) is the standard invariant of the electromagnetic field strength. \(\mathscr{L}\) is, in general, a nonlinear function of \(\mathcal{F}\). Maxwell's theory is of course recovered in the linear case \(\mathscr{L}\propto\mathcal{F}\). If we compute the stress-energy tensor related to \(\mathscr{L}\), we see that it naturally satisfies the EOS (3) and that \(\rho(r)=\mathscr{L}(\mathcal{F})\). We can now combine this with Eq. (12) and the magnetic monopole solution of Maxwell's equations, \[\mathcal{F}=\frac{q_{\rm m}^{2}}{2r^{4}}\,, \tag{21}\] which gives the Lagrangian \[\mathscr{L}\left(\mathcal{F}\right)=\frac{\alpha}{4\pi}\frac{\mathcal{F}}{\left[\ell^{\beta}\mathcal{F}^{\beta/4}+2^{-\beta/4}q_{\rm m}^{\beta/2}\right]^{4/\beta}}\,. \tag{22}\] We see now that the particular large-scale conformal scaling \(r^{-4}\) can be explained by the fact that the subclass of models described by Eq. (22) reduces to the standard Maxwell theory in the weak-field limit \(\mathcal{F}\to 0\), which is also conformally invariant. ## V Summary and outlook One of the most unsatisfactory aspects of nonsingular black-hole solutions is that, although we can generate them using anisotropic fluids as sources, their physical origin in terms of elementary fields is mostly unknown. This is particularly true if one considers nonsingular black holes in which the deformations from the usual Schwarzschild solution have superplanckian origin [8; 10]. An unpleasant consequence of this lack of knowledge is the existence of a large number of solutions. Although it is possible that, in the near future, astrophysical and gravitational-wave data may be used to select/exclude models [8; 10], some theoretical guiding principle is more than welcome. It is likely that these difficulties indicate that the microscopic origin of this kind of solution cannot be found in a GR framework or its extensions (see Ref. [47]), which allow only for a coarse-grained description in terms of anisotropic fluids. For this reason, it is important to look at general guiding principles, like symmetries, which are expected to underpin the classical GR description. In this paper, we have adopted this philosophy to constrain the broad class of nonsingular black-hole models with a dS core. We have used conformal symmetries, which are believed to be a crucial ingredient of any quantum theory of gravity, as a selecting principle to single out the physically relevant nonsingular black-hole solution. We have found that the conformal symmetry selects a particular case of the Fan & Wang metric, which has been recently investigated and constrained using data of the orbits of the S2 star around the SgrA\({}^{*}\) black hole. Obviously, the use of conformal symmetry to select solutions is only a first step. Understanding the microphysics from which these symmetries originate is the next important task.
2304.05286
Unveiling the non-Abelian statistics of $D(S_3)$ anyons via photonic simulation
Simulators can realise novel phenomena by separating them from the complexities of a full physical implementation. Here we put forward a scheme that can simulate the exotic statistics of $D(S_3)$ non-Abelian anyons with minimal resources. The qudit lattice representation of this planar code supports local encoding of $D(S_3)$ anyons. As a proof-of-principle demonstration we employ a photonic simulator to encode a single qutrit and manipulate it to perform the fusion and braiding properties of non-Abelian $D(S_3)$ anyons. The photonic technology allows us to perform the required non-unitary operations with much higher fidelity than what can be achieved with current quantum computers. Our approach can be directly generalised to larger systems or to different anyonic models, thus enabling advances in the exploration of quantum error correction and fundamental physics alike.
Suraj Goel, Matthew Reynolds, Matthew Girling, Will McCutcheon, Saroch Leedumrongwatthanakun, Vatshal Srivastav, David Jennings, Mehul Malik, Jiannis K. Pachos
2023-04-11T15:36:27Z
http://arxiv.org/abs/2304.05286v1
# Unveiling the non-Abelian statistics of \(D(S_{3})\) anyons via photonic simulation ###### Abstract Simulators can realise novel phenomena by separating them from the complexities of a full physical implementation. Here we put forward a scheme that can simulate the exotic statistics of \(D(S_{3})\) non-Abelian anyons with minimal resources. The qudit lattice representation of this planar code supports local encoding of \(D(S_{3})\) anyons. As a proof-of-principle demonstration we employ a photonic simulator to encode a single qutrit and manipulate it to perform the fusion and braiding properties of non-Abelian \(D(S_{3})\) anyons. The photonic technology allows us to perform the required non-unitary operations with much higher fidelity than what can be achieved with current quantum computers. Our approach can be directly generalised to larger systems or to different anyonic models, thus enabling advances in the exploration of quantum error correction and fundamental physics alike. Footnote †: These authors contributed equally _Introduction:-_ The exotic statistics of non-Abelian anyons make them of interest in fundamental physics [1; 2; 3; 4; 5]. In addition, their resilience to local perturbations has given rise to several schemes for topological quantum computing and other applications [6; 7; 8; 9; 10]. This behavior is key for fault-tolerant quantum computing, making non-Abelian anyons a potential solution to the error problems that limit the scaling of quantum computers [11; 10; 12]. In the last decade we have witnessed an intense effort to identify signatures of non-Abelian anyons in various physical platforms, such as FQH liquids at \(\nu=5/2\)[13], \(p+ip\) topological superconductors [14; 15] or quantum wires [8]. Unfortunately, the complexity of these systems allows for alternative interpretations of the observed signatures [16]. The conclusive characteristic of non-Abelian anyons is their exchange statistics, which is currently too complex to realise in the laboratory. At the same time, several investigations have focused on simulating non-Abelian anyons [17; 18; 19; 20; 21; 22]. These efforts aim to establish the necessary conditions for observing non-Abelian statistics and to address technical challenges in scaling and accuracy. Often, such simulations suffer from key loopholes. For example, the simulation of Majorana fermions utilizes a non-local encoding of fermion-like anyonic states in many qubits, with the help of the Jordan-Wigner transformation. However, this non-local encoding lacks the desired topological stability against local errors inherent in anyonic systems. Here we propose and implement a photonic simulation that demonstrates the core features of non-Abelian anyon statistics corresponding to the \(D(S_{3})\) planar code [11]. Planar codes are both quantum error-correcting codes and condensed matter systems that host anyonic excitations. Although they require many-body interactions, their local encoding on spins makes them attractive for quantum simulations. The simplest version of the planar code is the toric code, which supports Abelian anyons. The toric code has already been simulated in the laboratory with Josephson junctions [23] and photonic systems [24; 25]. We show that a single qutrit is sufficient to encode the core manipulations of \(D(S_{3})\) non-Abelian anyons and demonstrate their non-Abelian fusion and braiding properties.
The operations required to generate and manipulate anyons are in general non-unitary matrices that have a unitary action on the anyonic Hilbert space [26; 27; 28]. The implementation of non-unitary operations is typically experimentally challenging with current quantum computing architectures. To overcome this problem we adopt photonic technologies that can perform non-unitary operations accurately and with high fidelity. This simulation can be expanded in two directions with advancements in technology. First, it can be scaled to larger lattice systems, allowing for a broader range of anyonic operations. Second, Hamiltonian interactions can be added to provide active fault tolerance in the topologically encoded quantum information. _The non-Abelian \(D(S_{3})\) anyonic model:-_ The \(D(S_{3})\) model is based on the group transformations of a triangle, \(S_{3}=\{e,c,c^{2},t,tc,tc^{2}\}\), where \(e\) is the identity element, \(c\) the generator of \(2\pi/3\) rotations and \(t\) the generator of reflections. The \(D(S_{3})\) planar code consists of a square lattice where qudits of dimension \(d=6\), parameterised by the group elements of \(S_{3}\), are positioned at its links, as shown in Fig. 1. The Hamiltonian of the model has mutually commuting plaquette and vertex operators [11]. The ground state of the model is identified as the vacuum and its anyons are manifested as localised excitations at the vertices and/or the plaquettes. This \(D(S_{3})\) planar code supports eight different anyonic excitations labelled by \(\{A,B,C,D,E,F,G,H\}\)[29; 30; 31]. Particle \(A\) corresponds to the vacuum that fuses trivially with the rest of the anyons. Here we restrict ourselves to the \(\{A,B,G\}\) subgroup that is closed under fusion, \(B\times B=A\), \(G\times B=G\) and \(G\times G=A+B+G\). Moreover, \(G\) has non-trivial braiding statistics. Its corresponding fusion, \(F^{G}_{GGG}\), and braiding, \(R^{GG}\), unitary matrices are given by \[F^{G}_{GGG}=\frac{1}{2}\begin{pmatrix}1&1&\sqrt{2}\\ 1&1&-\sqrt{2}\\ \sqrt{2}&-\sqrt{2}&0\end{pmatrix},\ R^{GG}=\begin{pmatrix}\bar{\omega}&0&0\\ 0&\bar{\omega}&0\\ 0&0&\omega\end{pmatrix}, \tag{1}\] where \(\omega=e^{2\pi i/3}\), which give a non-trivial braiding matrix \(B^{GG}=FR^{2}F^{\dagger}\). The non-Abelian character of the \(G\) anyons is manifested in the non-trivial commutation relation between \(F^{G}_{GGG}\) and \((R^{GG})^{2}\). The \(\{A,B,G\}\) subgroup should be contrasted with \(\{A,B,C\}\), which has similar fusion rules but trivial braiding statistics, \(B^{CC}=\mathbb{1}\)[30, 32]. It is possible to verify the fusion and braiding properties of the planar-code anyons by generating and manipulating the corresponding anyonic excitations. Such excitations are created from the vacuum state by applying operations on the links of the lattice. In general, these rotations are given in terms of ribbon operators \(F^{X}_{\rho}\), where \(\rho\) is the path along which the rotations are applied, giving rise to two \(X\) anyons at its endpoints. These ribbon operators, together with their action on the ground state, encode the anyonic fusion and braiding properties.
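As a quick numerical illustration of the algebraic data in Eq. (1) (a sketch, not part of the experimental pipeline), one can check that \(F^{G}_{GGG}\) is unitary while failing to commute with \((R^{GG})^{2}\), which is the non-Abelian hallmark just mentioned:

```python
# Numerical check of Eq. (1): F is unitary, [F, R^2] != 0, B = F R^2 F^dagger.
import numpy as np

w = np.exp(2j * np.pi / 3)
F = 0.5 * np.array([[1, 1, np.sqrt(2)],
                    [1, 1, -np.sqrt(2)],
                    [np.sqrt(2), -np.sqrt(2), 0]], dtype=complex)
R = np.diag([np.conj(w), np.conj(w), w])
R2 = R @ R

print(np.allclose(F @ F.conj().T, np.eye(3)))   # True: F is unitary
print(np.round(F @ R2 @ F.conj().T, 3))         # the braiding matrix B^GG
print(np.linalg.norm(F @ R2 - R2 @ F))          # > 0: non-Abelian statistics
```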
To define the ribbon operators we employ the oriented lattice representation of \(D(S_{3})\) shown in Fig. 1. A dual triangle \(\tau\) has support on a link \(e_{\tau}\) and connects two plaquettes \(p_{1}\) and \(p_{2}\) adjacent to the link \(e_{\tau}\), as shown in Fig. 1(a). A direct triangle \(\tau^{\prime}\) has support on a link \(e_{\tau^{\prime}}\) and connects two vertices \(v_{1}\) and \(v_{2}\) adjacent to the link \(e_{\tau^{\prime}}\), as shown in Fig. 1(b). We now assign a six-dimensional Hilbert space, \(\{|h\rangle,h\in S_{3}\}\), to each link \(e_{\tau}\). To every triangle \(\tau\) we define an operator \(L^{h}_{\tau_{\text{dual}}}=\sum_{g\in S_{3}}|hg\rangle\!\langle g|\), with \(h\in S_{3}\), acting on \(e_{\tau}\), if \(e_{\tau}\) points towards \(v\); otherwise \(L^{h}_{\tau_{\text{dual}}}=\sum_{g\in S_{3}}|gh^{-1}\rangle\!\langle g|\). Similarly, we define \(P^{g}_{\tau_{\text{dir}}}=|g\rangle\!\langle g|\), with \(g\in S_{3}\), if \(e_{\tau}\) is clockwise w.r.t. \(p\); otherwise \(P^{g}_{\tau_{\text{dir}}}=|g^{-1}\rangle\!\langle g^{-1}|\). We next define the composite operators \(F^{h,g}_{\rho}=L^{h}_{\tau_{\text{dual}}}P^{g}_{\tau_{\text{dir}}}\), where \(\rho=\tau_{\text{dir}}\tau_{\text{dual}}\), with \(\tau_{\text{dir}}\) a direct and \(\tau_{\text{dual}}\) a dual triangle. Ribbon operators, \(F^{X}_{\rho}\), that give rise to \(X\) anyons are built out of such matrix elements, \(F^{h,g}_{\rho}\), acting on qudits [30]. The \(A\) and \(B\) anyons are constructed from strings of operators corresponding to direct \(\tau\)'s. They give rise to anyons positioned at vertices, i.e. they have \(h=e\) with \(L^{e}=\mathbb{1}\). Similar to the toric code's \(e\) and \(m\) anyons, the \(A\) and \(B\) anyons can be created and moved around by applying the single-qudit unitary operators \(F^{A}_{\tau}\) and \(F^{B}_{\tau}\) to a string of qudits [11], where \[F^{A}_{\tau}=|e\rangle\!\langle e|+|c\rangle\!\langle c|+|c^{2}\rangle\!\langle c^{2}|+|t\rangle\!\langle t|+|tc\rangle\!\langle tc|+|tc^{2}\rangle\!\langle tc^{2}|\,,\] \[F^{B}_{\tau}=|e\rangle\!\langle e|+|c\rangle\!\langle c|+|c^{2}\rangle\!\langle c^{2}|-|t\rangle\!\langle t|-|tc\rangle\!\langle tc|-|tc^{2}\rangle\!\langle tc^{2}|\,. \tag{2}\]
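A one-line consistency check on Eq. (2) is given below; the basis ordering \((|e\rangle,|c\rangle,|c^{2}\rangle,|t\rangle,|tc\rangle,|tc^{2}\rangle)\) is an illustrative choice.

```python
# F^A is the identity and F^B squares to F^A, mirroring the fusion B x B = A.
import numpy as np

F_A = np.eye(6)
F_B = np.diag([1, 1, 1, -1, -1, -1])
assert np.array_equal(F_B @ F_B, F_A)
```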
The minimal ribbon operator \(F^{G}_{\rho_{0}}\), given in (4), acts only on three states. With the minimal string and ribbon operators \(F^{A}\), \(F^{B}\) and \(F^{G}\) we can explicitly verify the fusion properties of the \(\{A,B,G\}\) subgroup of \(D(S_{3})\) by acting on a single qudit with six levels. By direct multiplication of the operators given in (2) and (4) we can verify their non-trivial fusion rules. In particular, when two ribbon operators, \(F^{G}_{\rho_{0}}\), act on top of each other, the \(G\) anyons at their endpoints are fused, resulting in the ribbon operator of their fusion outcomes, i.e. \[F^{G}_{\rho_{0}}F^{G}_{\rho_{0}}=F^{A}_{\rho_{0}}+F^{B}_{\rho_{0}}+F^{G}_{\rho_{0}}. \tag{5}\] This fusion process can be realised with a three-level system, as only the states \(|e\rangle\), \(|c\rangle\) and \(|c^{2}\rangle\) are involved. We next consider the braiding properties of the \(G\) anyons. In the case of the toric code, the anyonic statistics of the \(e\) and \(m\) anyons is given in terms of the commutation relations between their ribbon operators, \(F^{e}_{\rho_{1}}F^{m}_{\rho_{2}}=R^{em}F^{m}_{\rho_{2}}F^{e}_{\rho_{1}}\), where \(\rho_{1}\) and \(\rho_{2}\) are two crossing paths of the \(e\) and \(m\) anyons, respectively [11]. Due to topological invariance with respect to the exact shape of the path, the braiding relation can be realised by isolating the site, \(\rho_{0}\), where the paths \(\rho_{1}\) and \(\rho_{2}\) cross each other. As a result, we can take the full system to be the site \(\rho_{0}\), with the anyons positioned outside the system's boundary. Then we have \(F^{e}_{\rho_{1}\rightarrow\rho_{0}}=Z_{\rho_{0}}\) and \(F^{m}_{\rho_{2}\rightarrow\rho_{0}}=X_{\rho_{0}}\) acting on the same qubit at \(\rho_{0}\), thus obtaining the exchange statistics \(R^{em}=-1\)[24; 25]. For the \(D(S_{3})\) model the exchange of two \(G\) ribbon operators takes the form [11] \[F^{G}_{\rho_{1}}F^{G}_{\rho_{2}}=R^{GG}F^{G}_{\rho_{2}}F^{G}_{\rho_{1}}, \tag{6}\] where \(R^{GG}\) is given by (1). Hence, to determine \(R^{GG}\) we need to implement \(F^{G}_{\rho_{1}}F^{G}_{\rho_{2}}\) and \(F^{G}_{\rho_{2}}F^{G}_{\rho_{1}}\) and compare them. Similarly to the toric code case, we employ a single site, \(\rho_{0}\), and the minimal ribbon operator \(F^{G}_{\rho_{0}}\), given in (4), acting on it. As the operators we want to exchange are identical when acting on the single-site system, we adopt the following prescription. We first identify \(F^{G}_{\rho_{1}}F^{G}_{\rho_{2}}\) with the product of two \(F^{G}_{\rho_{0}}\) operators, as given in (5).
Next, to determine \(F^{G}_{\rho_{2}}F^{G}_{\rho_{1}}\), we employ the exchange of their building blocks \(F^{h,g}_{\rho}\), \[F^{h,g}_{\rho_{2}}F^{k,l}_{\rho_{1}}=F^{k,lg\bar{l}hg}_{\rho_{1}}F^{h,g}_{\rho_{2}}, \tag{7}\] valid for ribbons \(\rho_{1}\) and \(\rho_{2}\) with one common end. We employ these relations to compute \(F^{G}_{\rho_{2}}F^{G}_{\rho_{1}}\) and then identify \(\rho_{1}\rightarrow\rho_{0}\) and \(\rho_{2}\rightarrow\rho_{0}\) to obtain (see Supplementary Material) \[F^{G}_{\rho_{2}}F^{G}_{\rho_{1}}=\bar{\omega}(F^{A}_{\rho_{0}}+F^{B}_{\rho_{0}})+\omega F^{G}_{\rho_{0}}. \tag{8}\] Direct comparison of (5) and (8) yields the desired braiding matrix \(R^{GG}=\text{diag}(\bar{\omega},\bar{\omega},\omega)\) in the \(\{A,B,G\}\) basis. Having a minimal system facilitates the simulation of braiding statistics with current technology. A natural first candidate is to employ current quantum computers to implement the ribbon operator \(F^{G}_{\rho_{0}}\). As \(F^{G}_{\rho_{0}}\) is non-unitary, it cannot be straightforwardly implemented with unitary quantum logic gates [33]. However, as for any matrix, a unitary block encoding [34; 35; 36; 37; 38] can be constructed where \(F^{G}_{\rho_{0}}\) is embedded within a larger unitary, \(U_{F}\). Through the preparation and measurement of a subset of qubits, a quantum circuit describing \(U_{F}\) allows for \(F^{G}_{\rho_{0}}\) to be applied to the labelled qutrit state. Based on the singular value decomposition of \(F^{G}_{\rho_{0}}\)[39], \(U_{F}\) must act on a minimum of 3 qubits (see Supplementary Material). Up to a rescaling of \(F^{G}_{\rho_{0}}\), the success probability \(p\) of implementing the transformation on pure states is bounded as \(1/4\leq p\leq 1\). Compiling an explicit \(U_{F}\) using the native Qiskit transpiler into the typical device gate set (e.g. single-qubit rotations + CNOT) gives a circuit depth of 74 operations with 20 CNOTs [40]. Using a simplified model of current device noise, we simulated the circuits applying this \(U_{F}\) to states that maximise and minimise the success probability of applying \(F^{G}_{\rho_{0}}\) to the labelled qutrit state (see Supplementary Material). Unfortunately, current error rates are too high, and we observe low fidelities between the idealized circuits and the noisy implementations. As an alternative, we resort to a photonic platform that can encode non-unitary operations in a straightforward way.
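To illustrate the general idea of embedding a non-unitary operator inside a larger unitary, the sketch below uses the standard two-block unitary dilation. This is a textbook construction given for illustration only: it is neither the minimal three-qubit block encoding discussed above nor the single-ancilla scheme used below, and the toy operator `T` is hypothetical.

```python
# Generic unitary dilation: embed a non-unitary T (with ||T|| <= 1) as the
# top-left block of a unitary U. Post-selecting the first block of modes
# applies T, with success probability ||T|psi>||^2 for input |psi>.
import numpy as np
from scipy.linalg import sqrtm

def dilate(T):
    T = np.asarray(T, dtype=complex)
    d = T.shape[0]
    D1 = sqrtm(np.eye(d) - T.conj().T @ T)   # defect operators
    D2 = sqrtm(np.eye(d) - T @ T.conj().T)
    return np.block([[T, D2], [D1, -T.conj().T]])

T = np.diag([1.0, -0.5, -0.5])               # a toy non-unitary Hermitian map
U = dilate(T)
print(np.allclose(U.conj().T @ U, np.eye(6)))  # True: U is unitary
```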
Next, the qutrit state is evolved through the \(G\) ribbon operator (4) and its compositions (5) and (8), following which it is mapped onto spatially separated outcomes that can be measured on a camera. These operations are by definition non-unitary. Therefore, the task of performing an operation and sorting the outcomes spatially can be mapped to the problem of state discrimination between non-orthogonal states. There exist multiple schemes that perform this task by compromising either efficiency or accuracy of the discrimination [44]. As we are interested in simulating the anyonic properties, we choose to enhance the accuracy of the operations at the expense of efficiency through optical losses. Herein we use the formalism of unambiguous state discrimination [45; 46; 47; 48], where one employs auxiliary modes to embed a low-dimensional non-unitary operation within a higher-dimensional unitary. The outcomes corresponding to the auxiliary modes can be ignored since they provide no information about the input state, and thus correspond to loss. Interestingly, the three operations we aim to implement, \(\mathbf{T}\in\{F_{\rho_{0}}^{G},F_{\rho_{1}}^{G}F_{\rho_{2}}^{G},F_{\rho_{2}}^{G}F _{\rho_{1}}^{G}\}\), are Hermitian and have symmetric overlaps between different columns, i.e. \(|T_{rj}\cdot T_{r_{i}\neq j^{*}}|=\alpha,\ \forall\ r\) where \(T_{rc}\) corresponds to the \(r^{th}\) row and \(c^{th}\) column of the given \(\mathbf{T}\) matrix. This allows us to use a single auxiliary mode to perform these operations [49] as was recently shown to be experimentally viable with optical circuits [50]. Due to the non-unitarity, these operations cannot be performed with unit success probability. Theoretically, the maximum average success probabilities are \(50\%\), \(37.5\%\) and \(75\%\) for \(F_{\rho_{0}}^{G}\), \(F_{\rho_{1}}^{G}F_{\rho_{2}}^{G}\) and \(F_{\rho_{2}}^{G}F_{\rho_{1}}^{G}\), respectively. Using this approach, we encode our 3-dimensional Hermitian operators into 4-dimensional unitaries to proceed with this task. These unitary operations are implemented using the recently demonstrated "top-down" approach, where an arbitrary optical circuit is embedded within a higher-dimensional mode-mixer sandwiched between two programmable phase planes [41]. Our circuit uses a commercial multi-mode fibre (MMF) as a mode-mixer that is placed between two programmable spatial light modulators (SLMs) as shown in Fig. 2(a). We use an inverse design technique known as the wavefront-matching (WFM) algorithm to program the SLMs. The WFM algorithm calculates the phase plane solutions by iteratively maximising the overlap between a set of input fields with the desired output ones. After updating the SLMs with the phase solutions given by the WFM algorithm, we couple a coherent light source with a wavelength of \(810\) nm to characterize the implemented operation. The statistics of a single-photon qutrit state propagating through the system are identical to those obtained for a coherent state, allowing us to simplify the experiment and use a camera for detection [51]. We perform quantum process tomography to quantify the fidelity of the implemented operations \(\widetilde{\mathbf{T}}\) in relation to the ideal operations \(\mathbf{T}\). Note that in addition to both SLMs being used for implementing the target operation \(\mathbf{T}\), SLM\({}_{1}\) is used for generating the complete set of input modes (macropixel MUBs M0-M3, Fig. 
For each input mode, we measure the intensity at each of the three designated output modes at the camera, ignoring the auxiliary output. Next, SLM\({}_{2}\) is used to sequentially project the output onto all mutually unbiased bases (MUBs) of the desired output modes. This is done in a manner similar to how projective measurements are performed with an SLM and a single-mode fibre (SMF) [52], with the center region of the CCD camera used in place of the SMF. Using these measurements, we construct a coupling matrix between the complete set of input and output modes. This coupling matrix is then used to recover the implemented process via QPT, which we represent via its Choi state, \(\rho_{\widetilde{\mathcal{T}}}=\widetilde{\mathcal{T}}\otimes\mathbb{1}(\rho^{+})\), where \(\rho^{+}\) is the maximally entangled state. The Choi state captures complete information about the process and can be used to evaluate the purity and fidelity to the target operations (see Supplementary Material) [41].

\begin{table}
\begin{tabular}{|c||c||c|}
\hline
Operation & Fidelity & Purity \\
(\(\mathbf{T}\)) & \((\mathcal{F}(\rho_{\mathcal{T}},\rho_{\widetilde{\mathcal{T}}}))\) & \((\mathcal{P}(\rho_{\widetilde{\mathcal{T}}}))\) \\
\hline
\(F_{\rho_{0}}^{G}\) & \(95.23\pm 0.93\%\) & \(96.04\pm 0.03\%\) \\
\(F_{\rho_{1}}^{G}F_{\rho_{2}}^{G}\) & \(94.44\pm 0.85\%\) & \(97.65\pm 0.05\%\) \\
\(F_{\rho_{2}}^{G}F_{\rho_{1}}^{G}\) & \(97.59\pm 0.59\%\) & \(94.43\pm 0.06\%\) \\
\hline
\end{tabular}
\end{table} Table 1: Experimental results for the best-case fidelity and purity of the processes corresponding to the \(F_{\rho_{0}}^{G}\), \(F_{\rho_{1}}^{G}F_{\rho_{2}}^{G}\) and \(F_{\rho_{2}}^{G}F_{\rho_{1}}^{G}\) operations. The error values are reported up to 3 standard deviations and correspond to systematic misalignment error (see Supplementary Material).

Figure 2: (a) Experimental setup. A coherent light source (\(810\) nm) is incident on a phase-only spatial light modulator (SLM\({}_{1}\)) and then coupled into a 2 m-long graded-index (GRIN) multi-mode fibre (MMF) with core diameter \(50\,\mu m\). The output of the MMF is incident on SLM\({}_{2}\), followed by a CCD camera. The combination of a high-dimensional mode-mixer (MMF) sandwiched between two phase planes (SLM\({}_{1}\) and SLM\({}_{2}\)) serves as a programmable optical circuit that can encode any non-unitary operation, as shown in [41]. Additionally, SLM\({}_{2}\) is used for performing the projective measurements required for quantum process tomography (QPT) to check the fidelity of the implemented transformations \(\mathbf{T}\in\{F_{\rho_{0}}^{G},F_{\rho_{1}}^{G}F_{\rho_{2}}^{G},F_{\rho_{2}}^{G}F_{\rho_{1}}^{G}\}\). (b) Qutrit encoding. Images showing three-dimensional photonic transverse-spatial modes in the macro-pixel basis (M0) generated by SLM\({}_{1}\). Modes from all mutually unbiased bases (M1, M2, M3) of the three-dimensional macro-pixel basis are also shown, which are used for performing QPT.
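The Choi-state bookkeeping can be sketched as follows for a map with a single Kraus operator; the toy operator and the function names are illustrative, and the trace normalisation is an assumption made here for trace-non-preserving maps.

```python
# Choi state of a single-Kraus map, with purity and Uhlmann fidelity.
import numpy as np
from scipy.linalg import sqrtm

def choi(K):
    d = K.shape[0]
    phi = np.eye(d).reshape(d * d) / np.sqrt(d)   # |phi+> = sum_i |ii>/sqrt(d)
    v = np.kron(K, np.eye(d)) @ phi               # (K (x) 1)|phi+>
    rho = np.outer(v, v.conj())
    return rho / np.trace(rho)                    # normalise (non-trace-preserving)

def purity(rho):
    return np.real(np.trace(rho @ rho))

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

rho_ideal = choi(np.diag([1.0, -0.5, 0.5]))       # a toy Hermitian target map
print(purity(rho_ideal), fidelity(rho_ideal, rho_ideal))  # 1.0, ~1.0
```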
Since the positions of the measurement outcomes on the camera affect the performance, we vary these positions in each realisation in order to realize the best possible implementation of these operators. Out of all the implementations, the best-case fidelities, \(\mathcal{F}\), and purities, \(\mathcal{P}\), of the process for each operation \(F^{G}_{\rho_{0}}\), \(F^{G}_{\rho_{1}}F^{G}_{\rho_{2}}\) and \(F^{G}_{\rho_{2}}F^{G}_{\rho_{1}}\) are shown in Table 1. To visualise the quality of these operations, it is convenient to use the Kraus representation, which is an alternative way to represent these processes (see Supplementary Material). The ideal target operations have only one non-zero Kraus operator, which we can compare to the leading Kraus operators of the implemented processes owing to their high purity. Fig. 3 (top row) depicts the leading Kraus operator of the implemented processes, showing that it agrees well with the target operations (insets). The full Choi state representation of these high-purity processes is shown in Fig. 3 (bottom row), with the ideal state shown for comparison. _Conclusions:-_ Non-Abelian anyons present a fascinating and promising avenue for fault-tolerant quantum computation. Emulating their complex braiding statistics has so far evaded experimental realisation. In this letter, we have demonstrated a photonic simulation of the ribbon operators corresponding to \(D(S_{3})\) non-Abelian anyons with fidelities and purities above 94%. Our simulation has certified the minimal requirements and operations necessary to identify the statistics of these anyons, which can guide future efforts towards experimentally realising these exotic systems. Moreover, the extension of our experimental scheme to multi-qudit scalable quantum systems [53, 54, 55] can potentially unlock the applications of non-Abelian anyons in quantum information processing. For example, two qutrits encoded in a nine-dimensional photonic system or in a scalable architecture with Josephson junctions or ion traps [56] could be employed to encode distinguishable \(F^{G}_{\rho}\) ribbon operators where both the \(F^{G}_{GGG}\) and \(R^{GG}\) matrices can be realised. This work represents a significant step forward in the experimental study of non-Abelian anyons and opens the door to further exploration of their properties and potential applications. ###### Acknowledgements. We are grateful to Gavin Brennen, Sofyan Iblisdir and James Wootton for helpful discussions. This work was in part supported by EPSRC Grant No. EP/R020612/1.
2301.03001
Fluid Tunnel Research for Challenges of Urban Climate
Experimental investigations using wind and water tunnels have long been a staple of fluid mechanics research for a large number of applications. These experiments often single out a specific physical process to be investigated, while studies involving multiscale and multi-physics processes are rare due to the difficulty and complexity in the experimental setup. In the era of climate change, there is an increasing interest in innovative experimental studies in which fluid (wind and water) tunnels are employed for modelling multiscale, multi-physics phenomena of the urban climate. High-quality fluid tunnel measurements of urban-physics related phenomena are also much needed to facilitate the development and validation of advanced multi-physics numerical models. As a repository of knowledge in modelling these urban processes, we cover fundamentals, recommendations and guidelines for experimental design, recent advances and outlook on eight selected research areas, including (i) thermal buoyancy effects of urban airflows, (ii) aerodynamic and thermal effects of vegetation, (iii) radiative and convective heat fluxes over urban materials, (iv) influence of thermal stratification on land-atmosphere interactions, (v) pollutant dispersion, (vi) indoor and outdoor natural ventilation, (vii) wind thermal comfort, and (viii) urban winds over complex urban sites. Further, three main challenges, i.e., modelling of multi-physics, modelling of anthropogenic processes, and combined use of fluid tunnels, scaled outdoor and field measurements for urban climate studies, are discussed.
Yongling Zhao, Lup Wai Chew, Yifan Fan, Christof Gromke, Jian Hang, Yichen Yu, Alessio Ricci, Yan Zhang, Yunpeng Xue, Sofia Fellini, Parham A. Mirzaei, Naiping Gao, Matteo Carpentieri, Pietro Salizzoni, Jianlei Niu, Jan Carmeliet
2023-01-08T08:59:09Z
http://arxiv.org/abs/2301.03001v1
# Fluid Tunnel Research for Challenges of Urban Climate

###### Abstract

Experimental investigations using wind and water tunnels have long been a staple of fluid mechanics research for a large number of applications. These experiments often single out a specific physical process to be investigated, while studies involving multiscale and multi-physics processes are rare due to the difficulty and complexity in the experimental setup. In the era of climate change, there is an increasing interest in innovative experimental studies in which fluid (wind and water) tunnels are employed for modelling multiscale, multi-physics phenomena of the urban climate. High-quality fluid tunnel measurements of urban-physics related phenomena are also much needed to facilitate the development and validation of advanced multi-physics numerical models. As a repository of knowledge in modelling these urban processes, we cover fundamentals, recommendations and guidelines for experimental design, recent advances and outlook on eight selected research areas, including (i) thermal buoyancy effects of urban airflows, (ii) aerodynamic and thermal effects of vegetation, (iii) radiative and convective heat fluxes over urban materials, (iv) influence of thermal stratification on land-atmosphere interactions, (v) pollutant dispersion, (vi) indoor and outdoor natural ventilation, (vii) wind thermal comfort, and (viii) urban winds over complex urban sites. Further, three main challenges, i.e., modelling of multi-physics, modelling of anthropogenic processes, and combined use of fluid tunnels, scaled outdoor and field measurements for urban climate studies, are discussed.

_Keywords_: fluid tunnel measurements, multi-physics urban climate processes, scaled outdoor measurements, field measurements

## 1 Introduction

Wind tunnels have been an indispensable tool in advancing mankind's technology, from the Wright brothers' Flyer to Neil Armstrong's statement "One small step for man, one giant leap for mankind" during the Apollo 11 moon mission. Even with today's advanced computational power and the popularity of computational fluid dynamics (CFD), wind tunnel experiments are still widely used owing to their capability and feasibility, and are still considered irreplaceable in fluid mechanics research, especially when dealing with complex flow mechanisms. While the contribution of wind tunnels to rocket science (literally) is well known, this experimental approach is essential in other areas of research, including wind engineering, urban physics, sports engineering, and many others. Water tunnels serve the same purpose in experimental fluid mechanics and offer some advantages for studies of urban climate, for instance, the possibility of measuring non-isothermal flow fields at high spatial resolution and of reducing model sizes because of the higher viscosity of water compared to air. Both air and water are fluids, and the physics is governed by the same equations, namely the Navier-Stokes equations. Therefore, we use the term "fluid tunnel" to cover both wind and water tunnels in this paper. Fluid tunnel experiments for urban climate research adopt scaled-down models due to the limitation of the dimensions of the fluid tunnel test section. One commonly asked question is the size of a fluid tunnel required for the modelling of a realistic, full-scale urban climate problem (Meroney, 2016). Are we able to capture the same physics as at full scale using building models with a scale-down ratio of 1:10?
What about scales of 1:100, 1:1000 or beyond? Similarity criteria or dimensional analysis in fluid mechanics can provide the key dimensionless parameters important for a specific application, but often these dimensionless parameters cannot be matched in fluid tunnels. Therefore, care should be taken when applying fluid tunnel experiment results to full-scale applications. CFD can complement this weakness of scaling mismatch in fluid tunnel experiments by means of full-scale simulations (Blocken, 2014). This raises another commonly asked question: Why do we still use fluid tunnels despite the great advance in CFD and computational resources (Meroney, 2016)? The answer could be that the mutual and complementary use of these techniques guarantees an enhanced performance of both by leading to a better understanding and interpretation of the underlying physics under study (Murakami, 1990, Stathopoulos, 1997, Li et al., 2006, Blocken, 2014). It is worth noting that parameterisations of thermal and vapour fluxes at boundaries of complex geometries remain a challenge for CFD. There are many high-quality reviews and guidelines for the use of CFD in urban physics and more specifically in urban climate (e.g. Franke et al., 2007, Tominaga et al., 2008, Britter and Schatzmann, 2010, Blocken and Gualtieri, 2012, Garcia-Sanchez et al., 2018), but such reviews and guidelines are lacking for the applications of fluid tunnels in urban climate analysis. Therefore, we believe this paper is timely to address the capability and challenges of fluid tunnel applications in urban climate. The working principle of fluid tunnels is rather simple, where fluid is driven in bulk by fans or pumps to generate a (usually steady or quasi-steady) flow across the test section. These two ways of generating flows have the advantage of easy and accurate control of the flow rate (and hence velocity), and the use of wire mesh, screens, and honeycombs can reduce undesirable turbulence of the generated flow (e.g. Groth and Johansson, 1988, Kulkarni et al., 2011). In addition to a basic setup of a fluid tunnel experiment, three major requirements for urban climate modelling in fluid tunnels are: (i) generation and development of a desired atmospheric boundary layer (ABL), (ii) generation of heat fluxes on/from objects of interest, and (iii) generation of scalar transport. First, to achieve a fully developed approach neutral ABL flow following a log-law or power law (Stull, 1988), a roughness fetch and vortex generators can be arranged upstream of the test section (e.g. Castro and Robins, 1977, Chew et al., 2017, Zhang et al., 2017, Catarelli et al., 2020). Second, to simulate heated surfaces, for example, streets and exterior building walls heated up by solar radiation, heating elements need to be integrated into the fluid tunnel without obstructing the flows (e.g. Allegrini et al., 2013, Cui et al., 2016, Zhao et al., 2022b). Third, modelling of scalar transport, for example pollutant dispersion, requires the release (and purging) of tracer gas or particles in the fluid tunnels (e.g. Meroney et al., 1996; Liu et al., 2010; Gromke and Ruck, 2012; Fellini et al., 2022). When these requirements are fulfilled, fluid tunnels have the capability to realistically reproduce and model multi-physics processes of the urban climate.

Figure 1: Fluid tunnel modelling capabilities for physical processes of the urban climate.

As depicted in Fig.
1, the eight key multi-physics research areas commonly studied in fluid tunnels are covered in this paper, though some other applications of fluid tunnels, e.g., modelling of extreme events (such as tornadoes and downburst winds), are not covered. The eight research topics are summarized below and the requirements for their fluid tunnel modelling are reported in Table A1 (Appendix A).

* Modelling of thermal buoyancy effects (Section 2.1)
* Modelling of vegetation (Section 2.2)
* Modelling of solar radiation (Section 2.3)
* Modelling of thermal stratification (Section 2.4)
* Modelling of pollutant dispersion (Section 2.5)
* Modelling of indoor and outdoor natural ventilation (Section 2.6)
* Modelling of outdoor wind thermal comfort (Section 2.7)
* Modelling of urban flow over complex urban sites (Section 2.8).

Solar radiation is the main source controlling the urban surface energy budget and driving many processes in built environments. The presence of thermal buoyancy effects in built environments is in fact largely due to solar radiation absorption/release and anthropogenic heat generation. Thermal stratification may further develop when the buoyancy effects develop differently along the height direction, which affects wind and heat transfer in cities and also the thermal comfort of residents. To mitigate excessive urban heat, urban vegetation as a means to provide shade, transpirative cooling, and modification of mean flow (wind) has become popular in many cities. The presence of vegetation and buoyancy effects in turn leads to more complex pollutant dispersion in cities. As a result, indoor and outdoor ventilation for venues in realistic urban sites has to be understood from a thermal comfort and air quality point of view. The use of fluid tunnel experiments to model the urban climate in realistic urban sites links the research to applications and implementation in the real world, including urban planning and policy making. Each of these selected research areas will be discussed in Sections 2.1 - 2.8. In each section, the fundamental considerations will first be provided, followed by recommendations for the design of experiments, and recent advances and outlook. The remainder of the paper is organized as follows: Section 2 describes the main advances in fluid tunnel modelling of multi-physics of the urban climate; Section 3 focuses on three main challenges for fluid tunnel modelling of urban climate; Section 4 closes the paper with conclusions and remarks.

## 2 Advances in modelling of multi-physics of urban climate

### 2.1 Modelling of thermal buoyancy effects

#### 2.1.1 Fundamental considerations

Urban climate is dominated by many physical processes involving thermal buoyancy. For example, urban wind flowing over asphalt pavement, building facades or roofs at high surface temperature due to solar radiation absorption is heated up in summer daytime and thus gains buoyancy through convective heat transfer. The thermal buoyancy of urban airflow may play a vital role in pollutant dispersion, heat removal, thermal comfort of residents, etc. (e.g. Dallman et al., 2014; Mei and Yuan, 2022; Mouzourides et al., 2022).
To characterise the urban airflow involving convective heat transfer from urban surfaces (e.g., ground, facades, etc.), the overall process can be regarded as an approximation of a Poiseuille-Rayleigh-Benard-type flow, where the approaching wind can be characterised as a Poiseuille flow and the heat release from urban surfaces (grounds, buildings, roofs, etc.) can be approximated as a Rayleigh-Benard-type flow. The ratio between buoyancy and shear forcing of the incoming wind can be quantified by the bulk Richardson number (_Ri_), which can be defined in Eq. (2.1-1) (e.g. Chew et al., 2018, Zhao et al., 2022b):

\[Ri=\frac{Gr}{Re^{2}}=\frac{g\beta H^{3}\Delta T/\nu^{2}}{(UH/\nu)^{2}}=\frac{g\beta H\Delta T}{U^{2}}\] (2.1-1)

where \(Gr\) is the Grashof number characterising the buoyancy effect, \(Re\) is the Reynolds number reflecting the shear effect, \(H\) is the length scale of the heat source that releases heat to the ambient fluid (e.g. air), \(\Delta T\) is the temperature difference between the heat source and the ambient, \(\beta\) is the thermal expansion coefficient of the fluid (e.g. air), \(g\) is the acceleration due to gravity, \(\nu\) is the fluid kinematic viscosity, and \(U\) is the freestream fluid velocity. When the hydrostatic pressure variation of urban airflow is considerable, the potential temperature is commonly used in the calculation of \(Ri\) instead of the absolute temperature.
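To make the scaling concrete, Eq. (2.1-1) can be evaluated directly. The short sketch below (Python; the input values are illustrative rather than measured, and \(\beta\approx 1/T\) is assumed for air) estimates \(Ri\) for a heated reduced-scale model; values of order one indicate that buoyancy and shear forcing are comparable.

```python
def bulk_richardson(H, dT, U, T_ref=293.15, nu=1.5e-5, g=9.81):
    """Bulk Richardson number of Eq. (2.1-1): Ri = Gr / Re^2 = g*beta*H*dT / U^2,
    with beta approximated as 1/T_ref (ideal gas). H: heat-source length
    scale [m]; dT: source-ambient temperature difference [K]; U: freestream
    velocity [m/s]; nu: kinematic viscosity of air [m^2/s]."""
    beta = 1.0 / T_ref
    Gr = g * beta * H**3 * dT / nu**2
    Re = U * H / nu
    return Gr / Re**2, Gr, Re

# Illustrative reduced-scale case: 0.1 m heated model, 40 K excess, 1 m/s wind.
Ri, Gr, Re = bulk_richardson(H=0.1, dT=40.0, U=1.0)
print(f"Ri = {Ri:.3f}, Gr = {Gr:.2e}, Re = {Re:.0f}")
```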
#### 2.1.2 Recommendations for the design of fluid tunnel experiments

Fluid tunnels and heated building models have been used extensively to generate and study buoyancy effects in urban airflows at reduced scales (e.g. Allegrini et al., 2014, Tsalicoglou et al., 2020). An example experimental setup is shown in Fig. 2a. In the design of experiments, particular attention needs to be paid to the control of heating of the models, in addition to considering well-established requirements for the blockage ratio of the measurement section (Jeong et al., 2018) and proper generation of the boundary layer flow profile (Catarelli et al., 2020). Heating of model surfaces can be designed in two ways, that is, constant surface temperature (Zhao et al., 2021) or constant heat flux (Gaheen et al., 2021). The implementation of constant surface temperature can be achieved using electronic heating pads (Mouzourides et al., 2022) or by circulating heated water inside the models from a water bath (Shah et al., 2018). The implementation of constant heat flux can be achieved by using a heating power control. As the convective heat transfer coefficient of a building model surface could vary significantly under different wind conditions, the heating capacity of the electronic heater or water bath has to be chosen according to the desired surface temperature and the maximum convective heat transfer coefficient. Uniformity of surface temperatures needs to be ensured as much as possible prior to measurements, though small spatial variations could still exist due to limited control precision of heating for a large heat transfer surface. For temperature measurements, multiple thermocouples can be placed in a rack on the velocity measurement plane to allow quasi-temperature field measurements at low spatial resolution. While this approach is straightforward to implement, it does not allow non-intrusive and simultaneous velocity and temperature field measurements, and therefore the temperature measurement needs to be performed separately. The design of experiments also needs to facilitate the measurements. For velocity field measurements, motorized multi-dimensional stages may be used to allow efficient multiple field-of-view (FOV) measurements where lasers and the camera have to be moved in a synchronized way (Li et al., 2021). Depending on the laser intensity and optical access to the measurement plane, a mirror might be needed to enhance local laser intensity (Mouzourides et al., 2022).

#### 2.1.3 Recent advances and outlook

Despite the complexity mentioned above, fluid tunnel experiments remain a good option for studying non-isothermal flows or thermal buoyancy effects in urban climate processes, given their benefits in providing an approaching flow of desired profile, flexibility in setting up models, established optical setup and measurements, etc. Off-the-shelf equipment for quasi-field temperature measurements needs to be engineered and offered, as an auxiliary to well-established particle image velocimetry (PIV) equipment. Water tunnel measurements, as a promising approach, provide the opportunity to perform simultaneous velocity and temperature field measurements using PIV and laser-induced fluorescence (LIF) (Zhao et al., 2022b). An example setup is shown in Fig. 2b-e.

Figure 2: Wind and water tunnel measurements of buoyancy effects of urban flows. (a) Wind tunnel setup showing electrically heated models (HM1-3); (b-e) water tunnel setup showing (b) conductive models on heating plates, (c) individual controllers of heating plates, (d) fluorescence chlorophyll / Uranine, and (e) optical setup.

For LIF, the fluorescence intensity measured at every pixel of the image is expected to depend linearly on the local laser intensity and on temperature. The main challenges are the non-uniform spatial distribution of the laser sheet intensity, fluctuations of the laser intensity, and errors due to dye concentration and dye absorption (Vanderwel and Tavoularis, 2014, Zhao et al., 2022b). Uncertainties of the thermocouples used to calibrate the LIF results also limit the accuracy of the determined temperature field. Conductive models in water tunnels can be placed onto heating power-controlled heating plates (Fig. 2b-c) to mimic heat sources in urban climate. Non-toxic fluorescent dyes, such as chlorophyll and Uranine (Fig. 2d), can be used for LIF measurements. LIF measurements using 1-color/1-dye or 2-color/2-dye can be adopted, depending on the desired temperature resolution (Yen et al., 2016, Zhao et al., 2022b).
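The per-pixel linear calibration underlying such LIF measurements can be sketched as follows (Python; the multi-point least-squares scheme and all variable names are illustrative assumptions rather than the cited authors' processing code). Calibrating every pixel separately absorbs the static spatial non-uniformity of the laser sheet, although laser intensity fluctuations between calibration and measurement remain a source of error, as noted above.

```python
import numpy as np

def lif_calibrate(images, temps):
    """Per-pixel linear LIF calibration. `images` is a stack (n, H, W) of
    fluorescence images recorded at known, uniform water temperatures
    `temps` (length n). Fits I(T) = a + b*T independently at every pixel."""
    n = len(temps)
    A = np.stack([np.ones(n), np.asarray(temps, dtype=float)], axis=1)
    coeff, *_ = np.linalg.lstsq(A, images.reshape(n, -1), rcond=None)
    a, b = coeff[0], coeff[1]
    return a.reshape(images.shape[1:]), b.reshape(images.shape[1:])

def lif_temperature(image, a, b):
    """Invert the per-pixel calibration to obtain a temperature field."""
    return (image - a) / b
```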
### 2.2 Modelling of vegetation

#### 2.2.1 Fundamental considerations

Flow inside and past vegetation is complex. Vegetation elements (leaves or needles, twigs, branches, trunks) generate local boundary layers and wakes which interact with each other, thereby forming shear layers and other intricate flow structures. The fundamental physical phenomena occurring in the flow through vegetation are (i) extraction of momentum due to aerodynamic resistance, (ii) conversion of mean into turbulence kinetic energy, and (iii) break-up of larger-scale turbulent motions into smaller-scale ones, in this way short-circuiting the eddy cascade (Shaw, 1985). In reduced-scale fluid tunnel studies of the built and natural environment, typically the aerodynamic load on and the modification of the flow and dispersion field in the surrounding of vegetation are of interest (Fig. 3). Hence, scaling and similarity considerations should be directed on aerodynamic resistance (drag) and permeability. While scaling of the aerodynamic resistance ensures similarity in the extraction of momentum from the flow, it does not ensure similarity for the kinematics of the flow. The latter requires scaling of the permeability, which implies the partitioning of flow going through and around the vegetation. This is pivotal for the major vegetation-induced turbulent flow structures, including the characteristics of the recirculation in the lee. The break-up of turbulent motions and the short-circuiting of the eddy cascade are inherently complied with, however, only in a qualitative manner. A rigorous representation of the short-circuiting of the eddy cascade and all involved scales of turbulent motions eludes scaling. This is because typical \(Re\) numbers in reduced-scale experiments are two orders of magnitude smaller compared to the real scale and because of complex non-linear interactions in the vortex decay mechanism.

#### 2.2.2 Recommendations for the design of fluid tunnel experiments

For the modelling of vegetation in reduced-scale wind tunnel studies, it is recommended to utilize prefabricated plastic-based open porous foams. Such foams are commercially available from several manufacturers with various porosities denoted by PPI-x, where PPI stands for 'pores per inch' and x is their count. The foams are available with \(7<x<100\) and can be processed, e.g. with a knife or scissors, to reproduce the contour of the vegetation under consideration (Fig. 3). Alternatively, open porous objects can be self-made as clusters of interwoven filament- or stripe-like components, as successfully applied in previous works (Gromke and Ruck, 2008, Gromke and Ruck, 2009, Gromke, 2011). In order to characterise the permeability for airflow of a porous medium in an aerodynamic manner, the pressure loss coefficient can be employed. The pressure loss coefficient \(\lambda\) of a porous sample in forced-flow is defined as (Gromke, 2011):

\[\lambda=\frac{\Delta p_{\mathrm{st}}}{p_{\mathrm{dyn}}\,d}=\frac{p_{\mathrm{ww}}-p_{\mathrm{lw}}}{0.5\,\rho\,U_{\mathrm{f}}^{2}\,d}\] (2.2-1)

with \(\Delta p_{\mathrm{st}}\) the difference in static pressure between windward (subscript ww) and leeward (subscript lw) of the porous sample, \(p_{\mathrm{dyn}}\) the dynamic pressure, \(\rho\) the fluid density, \(U_{\mathrm{f}}\) the bulk flow speed, and \(d\) the porous sample thickness in streamwise direction. The \(\lambda\)-values of foams with identical PPI-value from various manufacturers and also among production batches may vary to some extent. As an indication, \(\lambda_{\mathrm{PPI-7}}=200\) m\({}^{-1}\), \(\lambda_{\mathrm{PPI-10}}=250\) m\({}^{-1}\), \(\lambda_{\mathrm{PPI-20}}=500\) m\({}^{-1}\), and \(\lambda_{\mathrm{PPI-30}}=1000\) m\({}^{-1}\), which cover most of the required permeability of model vegetation in fluid tunnel studies, can be adopted (Gromke et al., 2016, Klausmann and Ruck, 2017).

Figure 3: Application examples of open porous foam processed to model (a) avenue-trees in an urban street canyon (Gromke and Ruck, 2007), (b) hedge-rows in an urban street canyon (Gromke et al., 2016), (c) conifer trees of a forest stand (Gromke, 2018, Gromke and Ruck, 2018).
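In practice, Eq. (2.2-1) is evaluated from windward/leeward pressure measurements on a candidate sample, and the resulting \(\lambda\) is compared against the available foam grades. A minimal sketch (Python; the measured values are illustrative, and the foam table repeats the indicative \(\lambda\)-values quoted above):

```python
def pressure_loss_coefficient(p_ww, p_lw, rho, U_f, d):
    """Pressure loss coefficient of Eq. (2.2-1), in 1/m:
    lambda = (p_ww - p_lw) / (0.5 * rho * U_f^2 * d),
    with d the sample thickness in the streamwise direction [m]."""
    return (p_ww - p_lw) / (0.5 * rho * U_f**2 * d)

# Indicative foam values quoted above (1/m).
FOAMS = {"PPI-7": 200.0, "PPI-10": 250.0, "PPI-20": 500.0, "PPI-30": 1000.0}

def closest_foam(lam):
    """Commercial foam grade whose lambda is closest to the target value."""
    return min(FOAMS.items(), key=lambda kv: abs(kv[1] - lam))

# Illustrative measurement: 112.5 Pa drop across a 0.03 m sample at 5 m/s in air.
lam = pressure_loss_coefficient(p_ww=200.0, p_lw=87.5, rho=1.2, U_f=5.0, d=0.03)
print(lam, closest_foam(lam))   # 250.0 -> PPI-10
```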
The similarity in regard to aerodynamic resistance is ensured if the ratio of momentum extraction \(F_{\mathrm{d}}\) by the (model) vegetation to the momentum of the undisturbed approach flow \(F_{\mathrm{f}}\) is equal in model-scale and full-scale, i.e. \([F_{\mathrm{d}}/F_{\mathrm{f}}]_{\mathrm{ms}}=[F_{\mathrm{d}}/F_{\mathrm{f}}]_{\mathrm{fs}}\), where subscripts ms and fs stand for model-scale and full-scale, respectively. As is shown in Gromke (2011) and Gromke (2018), by employing Eq. (2.2-1), the following relationship can be derived

\[\frac{\lambda_{\mathrm{fs}}}{\lambda_{\mathrm{ms}}}=\frac{d_{\mathrm{ms}}}{d_{\mathrm{fs}}}=M\] (2.2-2)

with \(M\) a geometric scale factor. Eq. (2.2-2) is the scaling relation which links the aerodynamic resistance, expressed by the pressure loss coefficient, of model and real vegetation. It states that the pressure loss coefficient of the model vegetation is that of the real vegetation divided by the geometric scale factor. For pressure loss coefficients of real vegetation, the reader is referred to the work of Grunert et al. (1984). Therein, pressure loss coefficients for various tree and shrub species are given as 1.8 m\({}^{-1}\) \(<\lambda_{\mathrm{fs}}<\) 6.9 m\({}^{-1}\) at a wind speed of 4 m s\({}^{-1}\) and as 0.8 m\({}^{-1}\) \(<\lambda_{\mathrm{fs}}<\) 3.5 m\({}^{-1}\) at a wind speed of 11 m s\({}^{-1}\). The proposed scaling and similarity concept complies with the requirements towards drag and permeability as outlined in the previous section on fundamental considerations. The modelling approach, including the scaling and similarity concept, is, next to its application in reduced-scale wind tunnel studies, also applicable in investigations with scaled vegetation in water channels.

#### 2.2.3 Recent advances and outlook

Studies with model vegetation in fluid tunnels have contributed to our understanding in the areas of flow and turbulence in and above forest canopies (e.g. Meroney, 1968, Sadeh et al., 1971, Chen et al., 1995, Novak et al., 2000, Marshall et al., 2002, Morse et al., 2002, Ruck et al., 2010, Tischmacher and Ruck, 2013, Conan et al., 2015), wind loads on trees and storm stability of forest stands (e.g. Stacey et al., 1994, Gardiner et al., 1997, Marshall et al., 1999, Marshall et al., 2002, Gardiner et al., 2005, Tischmacher and Ruck, 2013), exchange and deposition of scalar species and pollutants in forest canopies (e.g. Meroney, 1970, Ruck and Adams, 1991, Aubrun and Leitl, 2004, Aubrun et al., 2005, Wuyts et al., 2008, Conan et al., 2015, Coudour et al., 2016), wind energy-related subjects at forest sites (e.g. Sanz Rodrigo et al., 2007, Desmond et al., 2014, Desmond et al., 2017), and windbreak by vegetation shelterbelts (Guan et al., 2003, Bitog et al., 2011). The vegetation modelling approach described in this contribution was successfully applied in studies of the effect of avenue-trees and hedge-rows on flow and pollutant dispersion in urban street canyons (Fig. 3a, b) (Gromke, 2011, Gromke and Ruck, 2012, Gromke et al., 2016) as well as flow above forest canopies and wind loads on trees in forest stands (Fig. 3c) (Gromke and Ruck, 2018). Next to their contribution to fundamental knowledge, the data of these studies widely serve for validation of numerical flow simulations by computational fluid dynamics (CFD). In particular, the modelling concept described herein is, due to its parametrization, straightforwardly applicable or implementable in CFD (e.g.
Balczo and Ruck, 2009, Buccolieri et al., 2009, Salim et al., 2011, Moonen et al., 2013, Gromke and Blocken, 2015, Jeanjean et al., 2015, Vranckx et al., 2015, Morakinyo and Lam, 2016, Merlier et al., 2018, Moayedi and Hassanzadeh, 2022, Zhu et al., 2022). Future advancement in modelling of vegetation in fluid tunnels may envisage fluid-structure interactions typically occurring at moderate and higher flow speeds (Stacey et al., 1994, Hao et al., 2020). Moreover, most of the vegetation models utilized in past and current investigations do not, or only partly, reproduce reconfiguration and streamlining. The associated aerodynamic effects, such as changes in permeability and reduction of the drag coefficient with increasing flow speed, are in general not sufficiently represented (Manickathan et al., 2018). Future fluid tunnel studies in the nexus of vegetation and urban climate may address the effects of, e.g., facade or roof greening and parks or green spaces on urban flows, with their implications for air quality, natural ventilation, and cooling (Li et al., 2022a, Manickathan et al., 2022, Zhao et al., 2023).

### 2.3 Modelling of solar radiation

#### 2.3.1 Fundamental considerations

Short- and long-wave radiation are major contributors to the energy balance at urban and building surfaces. As illustrated in Fig. 4a, short-wave radiation mainly acts as a heat source on the impacted surfaces. On the other hand, long-wave radiation in an urban context concerns radiative exchanges between a specific surface and other urban surfaces, the ground, and the sky dome (Energy, 2018). Radiative fluxes in combination with convective fluxes over the external surfaces result in non-isothermal conditions in street canyons. In fluid tunnel studies, this phenomenon is mainly represented by heat applied from heat sources (Cui et al., 2016, Gong et al., 2022). To model both radiative and convective fluxes more realistically and avoid the difficulties in implementing heated surfaces with artificial heaters, one can argue that using a solar simulator can technically represent the radiative fluxes in fluid tunnels. Solar simulators have been used in many industrial applications, ranging from vehicle producers to photovoltaic manufacturers (Gallo et al., 2017, Li et al., 2022b). Nonetheless, studies that simulate radiation in atmospheric fluid tunnels are rare in the literature. This is again due to the difficulty of providing a consistent and uniform radiation intensity on the impacted surfaces, in addition to the blockage that a solar simulator might create against the working fluid (Mirzaei et al., 2014).

#### 2.3.2 Recommendations for the design of fluid tunnel experiments

Simulations of solar radiation within fluid tunnels face a wide range of challenges and barriers, which hinders a widespread adoption of such studies (Mirzaei and Carmeliet, 2015). Nevertheless, the conducted studies pave the way for future research to benefit from a combined observation of radiative and convective fluxes in fluid tunnels. When experiments are designed to integrate solar simulators into fluid tunnels, a range of considerations should be taken into account. In terms of safety and hazard prevention of the fluid tunnel experiment, materials placed against solar simulators to absorb the radiation intensity should be cautiously selected to avoid melting and fire problems, as a constant radiative flux can cause a sudden increase in the surface temperature of the exposed surfaces.
While the excess surface temperature may not occur, and therefore not be noticed, at higher airflow velocities, this can be the case when the operating velocity in a fluid tunnel is considerably decreased. Hence, a safe range of operating temperatures for the exposed surfaces can initially be established when the wind tunnel is turned off and the operating velocity is zero. The wiring and installation of lamps also should follow the existing safety protocols to prevent any electrocution risks and fire hazards. In terms of technical challenges, the choice of the hot-wire anemometer, the placement of the probe and the way it is protected against the radiation are of paramount importance. Probe surfaces can be covered with aluminium foils, and this can be an effective strategy for other building model surfaces manufactured with materials of low melting points. Moreover, solar simulators generate the radiative flux with one or multiple lamps, which should be selected based on the needs of an experiment (Tawfik et al., 2018). Placing one or multiple lamps as a solar simulator may cause a nonuniform heat flux at the target surface. This is due to the shape of the lamp units, even though a more effective design can reduce this discrepancy. In general, the radiation intensity can be expected to be uniform at the middle of the target surface, but less uniform on the edges. Thus, it is essential to monitor the uniformity of radiation at different points of a target surface with related sensors such as thermopiles (Renne, 2016) before starting experiments, to ensure that the variation range is not more than a few percent.

#### 2.3.3 Recent advances and outlook

A solar simulator installed in an atmospheric fluid tunnel can be combined with advanced techniques such as PIV to observe the flow above and within a cavity behind a building integrated photovoltaic (BIPV) panel (e.g., see Fig. 4b, c). In such studies, the laser beam should penetrate through a pathway unblocked by objects to enable visualizing the flow in the cavities. It should be noted that the PIV technique has a relatively small laser beam area, which might not be large enough to observe the whole domain of the flow. In such cases, focusing the PIV on multiple smaller planes and combining them can be a potential solution, especially when the use of hotwires in small areas could disturb the flow (Mirzaei et al., 2014). While working with a laser beam demands its own health and safety considerations and training, designing a pathway for the laser sheet to reach cavities and enclosed spaces, using suitable transparent materials such as glass or Plexiglas, is an essential part of the experiment. Note that a high-speed camera also needs to be installed in a way to have a clear focus over the monitored cavity. The experiment should also ensure that the flow reaching the cavities contains enough seeds to be illuminated by the laser beam to visualise the flow field. As another advanced technology, thermography techniques are used in different studies to observe the surface temperatures in large and atmospheric fluid tunnels (Le Sant et al., 2002, Mirzaei and Carmeliet, 2015). In atmospheric fluid tunnels, it is crucial to ensure that the calibration of infrared cameras follows the standard operating procedures (Martin et al., 2022).
In such experiments, infrared cameras should preferably be mounted in the upstream direction within a fluid tunnel to avoid disturbance to the flow, while not being optically impacted by the fluid tunnel's reflecting surfaces, e.g., plexiglass.

Figure 4: (a) Short-wave and long-wave radiation over building surfaces; (b) and (c) employment of a solar simulator in a wind tunnel (Mirzaei and Carmeliet, 2015).

### 2.4 Modelling of thermal stratification

#### 2.4.1 Fundamental considerations

Stable stratification is a common phenomenon in the ABL, which affects the land-atmosphere interactions in terms of heat, mass and momentum transfer processes (Lee, 1979). The stability of the ABL is quantified with the buoyancy frequency (also called the Brunt-Vaisala frequency) (Stull, 1988), \(N\), which is defined in Eq. (2.4-1) as follows:

\[N=(g\beta\,\partial\theta/\partial z)^{1/2}\] (2.4-1)

where \(g\) is the gravitational acceleration, \(\beta\) is the fluid thermal expansion rate, \(\theta\) is the potential temperature in the vertical boundary layer, \(z\) is the vertical coordinate, and \(\partial\theta/\partial z\) is the potential temperature gradient. In the ABL, the typical value of \(N\) is around 0.015 s\({}^{-1}\) (Hunt et al., 1988, Reuten, 2006). Three physical processes can cause stable stratification (Largeron and Staquet, 2016, Czarnecka and Nidzgorska-Lencewicz, 2017, Ning et al., 2018, Niedzwiedz et al., 2021). The first one is a warm front over a cold front, which causes a strong temperature gradient at the interface of the two fronts and is also named elevated inversion. The second one is due to hot air subsidence at a certain location caused by the large-scale (regional-scale or global-scale) atmospheric circulation. The third one is radiative cooling of land surfaces at night, which causes surface-based inversion and is the most common type in diurnal cycles. The inversions in the ABL can be caused by the joint effect of the above three physical processes. The accumulation of cold air in basins or valleys can form cold pools (Clements et al., 2003, Princevac and Fernando, 2008, Vosper and Brown, 2008, Lareau et al., 2013, Yu et al., 2017) and creates extremely strong inversions (\(N\) as high as 0.1 s\({}^{-1}\)). To simulate the flow phenomena in a stable ABL, a temperature gradient or density gradient also needs to be created in fluid tunnels. The non-dimensional parameters to guarantee similarity between the prototypes and reduced-scale models are \(Fr\) (Lu et al., 1997b, Cenedese and Monti, 2003, Fan et al., 2018, Fan et al., 2020, Yin et al., 2020) or \(Ri\) (Ogawa et al., 1985, Uehara et al., 2000, Zhao et al., 2022a).
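Eq. (2.4-1) can be applied directly to a discrete potential-temperature profile. A minimal sketch (Python; the profile is illustrative, and \(\beta\approx 1/T\) is assumed for air) that reproduces the typical ABL value of \(N\approx 0.015\) s\({}^{-1}\):

```python
import numpy as np

def buoyancy_frequency(theta, z, T_ref=293.15, g=9.81):
    """Buoyancy (Brunt-Vaisala) frequency of Eq. (2.4-1),
    N = sqrt(g * beta * dtheta/dz), with beta ~ 1/T_ref for air.
    Returns N [1/s] on the layer midpoints; NaN marks unstable layers."""
    dthdz = np.diff(theta) / np.diff(z)
    N2 = g * dthdz / T_ref
    return np.sqrt(np.where(N2 > 0, N2, np.nan))

# Illustrative stable profile: +6.7 K over 1000 m gives N ~ 0.015 1/s.
z = np.linspace(0.0, 1000.0, 11)
theta = 290.0 + 6.7e-3 * z
print(buoyancy_frequency(theta, z))
```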
#### 2.4.2 Recommendations for the design of fluid tunnel experiments

There are two main methods for generating stable gradients: one is using a temperature gradient and the other is using a density gradient (such as salt water). In wind tunnels, the temperature gradient can be built up by cooling the bottom and heating the top (Ogawa et al., 1981, Guo et al., 2021), which is also applicable in water tank experiments (Lu et al., 1997a, Lu et al., 1997b). Stable stratification created with salt water can be achieved in water tank experiments with the two-bucket filling technique (Fan et al., 2018, Fan et al., 2019), as shown in Fig. 5a. There are several advantages of using salt water. First, the density gradient can be much larger than that with a temperature gradient method. Second, the density gradient will last a longer time than the temperature gradient and does not require thermal insulation of the water tank walls and bottom. Third, heating at the top can block the laser or light access when applying PIV to measure the velocity field, which is not a problem in setups with salt water stratifications. Fourth, density gradient profiles can be flexible by adjusting the filling speed of salt water and pure water, for example, a neutral layer covered by a stable layer or a stable layer covered by a neutral layer (Fan et al., 2021). If the heating method is adopted, only a linear gradient can be achieved, as it utilizes heat conduction to build up the temperature gradient.

Figure 5: (a) Illustration of the procedure of creating the stable stratification with salt water. Modified from Fan et al. (2018). (b) Temperature field, presenting a dome shape, over a reduced-scale city model in a water tank under stable stratification condition, visualized by a thermochromic liquid crystal sheet (Fan et al., 2017).

#### 2.4.3 Recent advances and outlook

As the frequency of heatwaves and calm weather conditions increases, buoyancy-driven flows, such as the urban heat dome in Fig. 5b, become more and more important due to their impact on pollutant and heat dispersion at building and city scales (Zhao et al., 2020, Fan et al., 2021). When the buoyancy-driven flow is dominant, or at least comparable to the approaching flow, similarity is easier to achieve in water channels than in wind tunnels. Moreover, as city size increases, the Coriolis force, which is quantified by the Rossby number (_Ro_) (Warn et al., 1995, Embid and Majda, 2006, van der Laan et al., 2020), starts to play a role in modulating the buoyancy-driven flow over urban areas. In this case, a rotating water tank would be a useful tool for simulating large-scale flow structures at the city scale under the Coriolis force.

### 2.5 Modelling of pollutant dispersion

#### 2.5.1 Fundamental considerations

Pollutant dispersion in urban areas is driven by local meteorological conditions and the urban morphology. Furthermore, the multiplicity of pollutants and sources broadens the range of parameters that affect the phenomenon. The ability to control the different variables individually and to isolate their effects is the main advantage of dispersion studies in fluid tunnel experiments. Similarity in laboratory models for pollutant dispersion in urban areas is ensured by means of geometric similarity, and the matching of the main dimensionless numbers for the flow field: _Re_, _Pr_, densimetric _Fr_ and _Sc_ numbers (Snyder, 1981, Meroney, 2004, Tominaga and Stathopoulos, 2016, Mei et al., 2023). Moreover, specific similarity criteria concerning the pollutant emission should be matched, including the geometric similarity of the source, _Re_ and _Fr_ at the emission location, and the density and speed ratios at the emission location (Pournazeri et al., 2012, Marro et al., 2014). These parameters are fundamental for simulating the release of non-neutrally buoyant plumes from elevated sources within the city. To reproduce vehicle exhausts, the buoyancy effects and the emission speed are generally neglected, and the similarity is attained using a tracer with density similar to that of the fluid. In this case, however, traffic-induced turbulence has non-negligible effects on pollutant dispersion and the similarity criterion by Plate (1982) can be applied (Gromke and Ruck, 2007).
#### 2.5.2 Recommendations for the design of fluid tunnel experiments

Pollutant dispersion in urban areas has been studied in both wind tunnels and water tunnels. Besides the urban geometry, a key component of the experimental setup is the emitting source, which is generally an elevated or ground-level point source reproduced via metallic tubes, or a line source to simulate exhausts from vehicles. The latter is designed to minimize the vertical momentum and maximize lateral homogeneity (Meroney et al., 1996). Above the line source, traffic-induced turbulence can be mimicked by plates mounted on moving belts (Kastner-Klein et al., 2001), as shown in Fig. 6a. To simulate neutrally buoyant emissions in wind tunnel experiments, a mixture of hydrocarbon with air is generally injected (Yee et al., 2006; Salizzoni et al., 2009; Perry et al., 2016). Point concentration measurements are achieved with a Flame Ionization Detector (FID). Alternatively, sulfur hexafluoride is used as a tracer gas (Yassin et al., 2005; Gromke and Ruck, 2007; Chavez et al., 2011), which can be collected and sent to the detector via a capillary tube or taken from measurement taps at building walls. Water vapor produced by a H\({}_{2}\)O atomizer and measured by humidity sensors has also been used as a tracer for dispersion over urban areas (Mo and Liu, 2018). Buoyant plume emissions (Fig. 6c) in wind tunnels are reproduced by means of light or heavy gases (He, CO\({}_{2}\)) or heated air (Robins et al., 2001; Snyder, 2001; Kanda et al., 2006). In the first case, a tiny quantity of a gas tracer detectable by a FID is generally added to the buoyant gas (Vidali et al., 2022). In the second case, measurements are performed with standard thermocouples (Marro et al., 2014). Besides gaseous sources, Rodriguez et al. (2020) measured the concentration of ultrafine particles in the wake of vehicles. In water channels, fluorescent dyes are released as passive tracers and the LIF technique allows for simultaneous multipoint concentration measurements (Yee et al., 2006; Wangsawijaya et al., 2022) (Fig. 6b). Buoyancy effects can be obtained by mixing tracer dyes with alcohol and water or releasing salt water (Pournazeri et al., 2012). To evaluate chronic pollution in urban areas, it is often sufficient to estimate the average pollutant concentration over time. To ensure a reliable prediction, the acquisition and averaging time has to be longer than the typical time scale of the vortical structures within the domain (Pavageau and Schatzmann, 1999; Garbero et al., 2010). Conversely, the assessment of hazards due to toxic or explosive pollutants, or the impact of odours, requires the analysis of concentration fluctuations (Cassiani et al., 2020), from which higher-order concentration statistics and probability density functions can be determined (Gailis and Hill, 2006; Yee et al., 2006; Klein et al., 2011). Velocity and concentration must be simultaneously measured to estimate turbulent mass fluxes, which are key for understanding pollutant exchange in complex geometries (Carpentieri et al., 2012; Marro et al., 2020).

Figure 6: Fluid tunnel testing of pollutant dispersion in cities: (a) Moving belts along a street canyon to simulate two-lane traffic (Kastner-Klein et al., 2001). (b) PIV-PLIF setup in a water flume (Lim et al., 2022). (c) Laser tomographic visualisation of a heavy gas plume (Vidali, 2021). (d) Point source (Marucci and Carpentieri, 2020) and (e) line source (Nosek et al., 2016) at street level to simulate emissions in an urban array. (f) Street-level linear source to mimic pollutant dispersion in a vegetated canyon (Fellini et al., 2022).

#### 2.5.3 Recent advances and outlook

In the last decades, physical models in fluid tunnels have brought great advances in understanding the mechanisms of dispersion in urban areas at different scales (Britter and Hanna, 2003; Xia et al., 2014; Zhang et al., 2020).
For a group of sparse obstacles in the wake regime (Fig. 6d), the planar and frontal area densities of buildings are found to affect the dispersion process, and the concentration profiles within the building array are in good agreement with Gaussian plume models (Davidson et al., 1996; Macdonald et al., 1998). This behaviour differs in realistic and dense urban geometries (e.g. Garbero et al., 2010), where the decoupling between the layer above the roofs and the region within the streets leads to channelling effects, deflection of the plume centreline and complex horizontal spreading. As regards the effect of non-neutral approaching ABL winds on a regular array, the average concentrations in the canopy can be up to two times higher in stable stratification than in the neutral case and three times lower in convective conditions (Marucci and Carpentieri, 2020). Much effort has been devoted to the understanding of dispersion in a single street canyon and at street intersections (Ahmad et al., 2005; Yazid et al., 2014). Recent research has shown how the warming of the walls can lead to the deterioration or improvement of air quality depending on the canyon aspect ratio (Marucci and Carpentieri, 2019; Fellini et al., 2020). Different studies (Hajra and Stathopoulos, 2012; Nosek et al., 2016; Llaguno-Munitxa et al., 2017) have shown that the shape of buildings and roofs (Fig. 6e) has a non-negligible effect on the concentration in the streets. Recently, a growing interest has been devoted to the effect of tree planting (Fig. 6f) (Gromke and Ruck, 2007, Gromke and Ruck, 2009, Gromke and Ruck, 2012) on pollutant dispersion in a single canyon. To face the challenges related to climate change and urbanization, a greater number of experimental studies on the effect of vegetation and solar heating on air pollution is crucial. Further experiments on the dispersion of ultrafine particles in the wake of vehicles would help considerably to characterise particle exposure at pedestrian level, though care must be given to the similarity criteria for particles. This review also reveals the lack of experimental data on the concentration of reactive plumes in urban geometries, which would be fundamental to validate the large number of numerical models covering the topic.
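For reference, the Gaussian plume model against which such array measurements are compared can be written compactly. The sketch below (Python) is the textbook formulation with full ground reflection; the dispersion parameters \(\sigma_{y}\) and \(\sigma_{z}\) must be supplied externally, e.g. from fits to the data or from stability classes, and are not predicted here.

```python
import numpy as np

def gaussian_plume(y, z, Q, U, sigma_y, sigma_z, H_s):
    """Time-averaged concentration downwind of a continuous point source of
    strength Q [kg/s] at height H_s [m], advected at mean wind speed U [m/s],
    with total reflection at the ground (image source at -H_s)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H_s)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H_s)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * U * sigma_y * sigma_z) * lateral * vertical
```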
### 2.6 Modelling of indoor and outdoor natural ventilation

#### 2.6.1 Fundamental considerations

Natural ventilation occurs due to pressure differences arising naturally between openings in a building, which drive the exchange of air between indoor and outdoor spaces. The pressure differences can be caused by two main mechanisms: wind and buoyancy effects (or a combination of both). In both cases, the flow regime that controls the exchange of air between indoor and outdoor is largely dependent on the location and geometry of the openings, pressure and temperature boundary conditions at the building envelope and the presence of local buoyancy sources (Linden, 1999). Pressure distributions due to wind are generally assessed through wind tunnel experiments. The nature of the urban environment and the building openings, with all their sharp edges, makes wind speed largely irrelevant due to flow separation. This aspect simplifies experimental investigation as it makes the problem essentially _Re_-independent (Linden, 1999). On the other hand, buoyancy-driven flows due to temperature differences may represent a challenge for fluid tunnel modelling, due to the lower _Re_ leading to an increased importance of viscous effects (Linden, 1999). The buoyancy force can be described in terms of the reduced gravity, \(g^{\prime}\):

\[g^{\prime}=g\,\frac{\Delta\rho}{\rho}=g\,\frac{\Delta T}{T}\] (2.6-1)

where \(g\) is the acceleration of gravity, \(\rho\) the density and \(T\) the temperature. The dimensionless numbers of concern are the Reynolds number (\(Re\)) and the Peclet number (\(Pe\)), which can be written in terms of the reduced gravity as:

\[Re=\frac{\sqrt{g^{\prime}H}\ H}{\nu};\ \ \ \ \ Pe=\frac{\sqrt{g^{\prime}H}\ H}{\kappa}\] (2.6-2)

where \(\nu\) is the kinematic viscosity, \(\kappa\) is the coefficient of molecular diffusivity, and \(H\) is the relevant vertical scale. In order to reduce the mismatch in \(Re\) and \(Pe\), small-scale experiments in water (e.g., using salinity to simulate buoyancy) are generally used (Linden et al., 1990, Davies Wykes et al., 2020). Fluid tunnel experiments can still be of value in buoyancy-driven flow, as specialized facilities might help in characterising temperature and heat exchange around buildings in urban areas, e.g. Marucci and Carpentieri (2019).

#### 2.6.2 Recommendations for the design of fluid tunnel experiments

Measurements of ventilation rates in a wind tunnel are generally done using tracer concentration techniques (Etheridge, 2011). This requires the injection of a tracer gas in the building of interest and measuring the concentration decay due to ventilation. A fast-response instrument is then needed to capture the transient, and typically this is done through a Fast-response Flame Ionisation Detector (FFID); e.g., Marucci and Carpentieri (2020a) used hydrocarbons as tracer gases. FFID can also be used for air quality studies when assessing interactions between the interior and the exterior of the building for pollution dispersion purposes. Creating the correct wind environment is relatively easy in large boundary-layer fluid tunnels, where neutral conditions can be easily achieved in urban wind flows. Specialized facilities, on the other hand, are required if non-neutral (stable or convective) boundary layers are to be generated (Marucci and Carpentieri, 2020b). As mentioned in the previous section, correct scaling of combined wind- and buoyancy-driven flows can be particularly challenging, as temperature differences between indoor and outdoor environments might have to be greater than 75\({}^{\circ}\)C (Etheridge, 2011). Conventional techniques (e.g. laser Doppler anemometry, LDA, or PIV) can be used to characterise boundary conditions around buildings. Direct pressure measurements are less frequent, as characterising pressure distributions with a high resolution on building surfaces can be challenging, especially when pressure values are small (Nathan et al., 2021).
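The \(Re\)-\(Pe\) mismatch discussed above can be quantified directly from Eqs. (2.6-1) and (2.6-2). A minimal sketch (Python; the fluid properties and the 1:20 salt-water example are illustrative assumptions):

```python
def re_pe(g_prime, H, nu, kappa):
    """Re and Pe of Eq. (2.6-2), based on the buoyancy velocity sqrt(g'*H),
    with g' the reduced gravity of Eq. (2.6-1)."""
    w = (g_prime * H) ** 0.5
    return w * H / nu, w * H / kappa

# Full scale: air with dT = 10 K at T = 293 K over an opening height H = 3 m.
g_prime_air = 9.81 * 10 / 293            # Eq. (2.6-1), about 0.33 m/s^2
print(re_pe(g_prime_air, H=3.0, nu=1.5e-5, kappa=2.1e-5))
# 1:20 salt-water model with a 1% density difference (g' = g * drho/rho);
# the tiny salt diffusivity (~1.5e-9 m^2/s) drives Pe far above the air value.
print(re_pe(9.81 * 0.01, H=0.15, nu=1.0e-6, kappa=1.5e-9))
```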
#### 2.6.3 Recent advances and outlook

Figure 7: (a) Study of combined wind- and buoyancy-driven ventilation and (b) study of the effects of local buoyancy sources on flow and dispersion in street canyons, from Marucci and Carpentieri (2019); both studies from the EnFlo wind tunnel, University of Surrey, UK.

Simulating natural ventilation in a fluid tunnel has great potential, but also several limitations, as explained above. Indoor and outdoor studies have traditionally been carried out independently, usually for different purposes, and only recently have some research projects started connecting the two environments; see, for example, the MAGIC project by Song et al. (2018) in Fig. 7a. Thermal characterisation of the building boundary conditions cannot be achieved easily in non-specialised facilities, but some specialised ones are starting to bridge the gap, with studies on non-neutral urban boundary layer (UBL) flows (Marucci and Carpentieri, 2020a, Marucci and Carpentieri, 2020b) and local heating sources (Marucci and Carpentieri, 2019; see Fig. 7b). Recent advances include the development of post-processing techniques for measuring pressure and pressure fluctuations in low-speed wind tunnels, where the pressure variations are very small (Nathan et al., 2021).

### 2.7 Modelling of outdoor wind thermal comfort

#### 2.7.1 Fundamental considerations

Thermal comfort is a subjective evaluation of the thermal environment. Over the last century, researchers have created several thermal comfort models to predict human thermal responses to different combinations of environmental parameters (including air temperature, humidity, radiation, and wind velocity) and personal variables (e.g., clothing and activity levels) (Gagge et al., 1972, Tanabe et al., 2002, Fiala et al., 2012, Chew and Norford, 2018). These models mathematically describe the thermoregulation processes, in which the convective heat transfer coefficient (CHTC) is required to estimate the heat exchange between the human body and its surrounding environment. To measure the anatomically specific CHTCs for individual body segments, thermal manikin experiments are conducted in fluid tunnels, which feature the ability to generate a wide range of wind conditions in a limited time and to maintain the same wind condition for a manikin to reach a steady state. The dimensionless numbers to describe forced convection are \(Nu\), \(Re\), and \(Pr\) (De Dear et al., 1997). Although a full-scale thermal manikin is used in most of the studies, reduced-scale models may be the focus of future modelling, since they can significantly reduce the sampling time under each test condition.

#### 2.7.2 Recommendations for the design of fluid tunnel experiments

A thermal manikin has been proven an effective instrument for measuring sensible heat transfer between the human body surface and the surrounding environment (Tanabe et al., 1994, De Dear et al., 1997). It features an anatomically realistic human morphology, with a precision heating element and temperature sensor system embedded within the "skin". For the approaching flow, in addition to the mean wind velocity, the turbulence intensity and power spectral density distribution also play a key role in determining thermal perception and, therefore, the pedestrian-level turbulence characteristics need to be properly simulated for thermal comfort studies in fluid tunnel modelling (Zhou et al., 2006).
Turbulence generators, such as oscillating aerofoils and passive grids (Fig. 8b), are implemented upstream to simulate the full-scale natural wind. The approaching wind profile or the wind profile close to the body segments needs to be measured using fast-response anemometers. The manikin is often placed at the centre of the fluid tunnel test section, in a sitting, standing, or walking posture (Fig. 8a-c), and tested under target wind conditions. The CHTC is calculated based on the thermal state (skin temperature and heat loss) automatically logged by the manikin itself, and the air temperature and wind tunnel inner surface temperature can be measured by additional temperature sensors (Fig. 8a). The relationship between the CHTC and the approaching wind condition is generally established through empirical regressions. To further investigate pedestrians' physiological and perceptual responses to wind, human subjects are invited to the wind tunnel to experience different wind conditions. Their thermal physiological parameters (e.g., local skin temperature) and perceptual responses can be collected and compared with thermal comfort model output (Yu et al., 2021).

Figure 8: Wind tunnel testing on the convective heat transfer coefficient: (a) a schematic diagram of the wind tunnel setup and a thermal manikin in sitting posture, modified from Li and Ito (2014); (b) a thermal manikin in standing posture downwind of a passive turbulence generator (Yu et al., 2020); (c) a thermal manikin in a walking posture, modified from Oliveira et al. (2014).

#### 2.7.3 Recent advances and outlook

Thermal manikin and human subject experiments in fluid tunnels enable us to quantify the impact of wind on thermal perception. The CHTCs for individual body segments and the whole body have been generated for typical outdoor wind conditions, including wind speeds from still air to around 13 m s\({}^{-1}\) (Li and Ito, 2014), turbulence intensities from 0 to around 30% (Yu et al., 2020), evenly spaced horizontal directions, and different body postures, including sitting, standing, and walking. The prevailing thermal comfort models should adjust the CHTC formula according to the target environments and activity conditions to improve the prediction accuracy. While the effects of wind on sensible heat loss have been thoroughly investigated in the literature, few studies have focused on evaporative heat loss, which plays a role in determining thermal comfort when subjects' sweat accumulation increases (Bakkevig and Nielsen, 1995). To better understand the impact of body motion on CHTCs, the flow field around the manikin or human subjects during activities such as walking and cycling is worthy of further investigation (Luo et al., 2014). Also, the impact of relative humidity on thermal perception should be studied in a conditioned wind tunnel, where relative humidity can be controlled to simulate transpiration and sweating. Based on field measurements in urban areas, pedestrian-level turbulence intensities ranging between 10 and 60% are not uncommon (Murakami and Deguchi, 1981, Tse et al., 2017, Zou et al., 2021), and about half of the energy of the turbulence concentrates at frequencies less than 0.1 Hz (Hunt et al., 1976). More work needs to be done on simulating, in the fluid tunnel, a full-scale wind profile with high turbulence intensity and low-frequency random gustiness. Future research efforts could also be applied to the interaction effect between wind and radiation by introducing solar simulators into the fluid tunnel.
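The empirical regressions mentioned above commonly take a power-law form, CHTC \(=a\,U^{b}\). The sketch below (Python) fits such a law by least squares in log-log space; the sample data are made-up placeholders, not values from the cited studies.

```python
import numpy as np

def fit_chtc_power_law(U, h_c):
    """Fit CHTC = a * U**b by linear least squares in log-log space.
    U: wind speeds [m/s]; h_c: measured coefficients [W/(m^2 K)]."""
    b, log_a = np.polyfit(np.log(U), np.log(h_c), 1)
    return np.exp(log_a), b

# Hypothetical manikin data spanning the still-air to 13 m/s range.
U = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 13.0])
h_c = np.array([7.1, 10.4, 15.9, 24.0, 36.5, 48.7])
a, b = fit_chtc_power_law(U, h_c)
print(f"CHTC ~ {a:.1f} * U^{b:.2f}")
```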
Harris in 1934 on the Empire State Building (New York, USA) at the wind tunnel facility of the Bureau of Standards (Harris, 1933).

Figure 8: Wind tunnel testing on the convective heat transfer coefficient: (a) a schematic diagram of the wind tunnel setup and a thermal manikin in sitting posture, modified from Li and Ito (2014); (b) a thermal manikin in standing posture downwind of a passive turbulence generator (Yu et al., 2020); (c) a thermal manikin in a walking posture, modified from Oliveira et al. (2014).

The tests of Harris followed up a series of tests previously performed by H.L. Dryden and G.C. Hill, who investigated the wind pressure on the Empire State Building model without including the surrounding buildings (Dryden and Hill, 1933). Since then, an increasing number of studies have been carried out not only on isolated buildings/structures but also on realistic urban models, leading to a significant improvement in wind tunnel facilities, measurement techniques and the overall scientific background (Fig. 9). In 2022, despite the progress made on numerical methods (e.g. CFD simulations) and field measurements (e.g. anemometers and LiDAR profilers) over the past decades, the use of fluid tunnels for testing realistic, complex urban sites is still remarkably high. This is justified by the fact that academicians as well as practitioners typically obtain from such tests practical and reliable solutions for a large variety of topics in the urban physics and wind engineering fields, such as pollutant dispersion, pedestrian-level wind discomfort, indoor/outdoor thermal comfort, wind energy, and wind loading (Davenport, 2002, Baker, 2007, Moonen et al., 2012, Blocken, 2014, Stathopoulos et al., 2018, Weerasuriya et al., 2018, Simiu and Yeo, 2019, Solari, 2019, Kareem, 2020, Gao et al., 2022). As mentioned earlier in this paper, a proper downscaling of atmospheric winds and of realistic urban models is generally quite challenging. Publications released in the last 50 years have not only further enhanced the scientific background previously developed, but also provided useful practical guidelines to accurately reproduce scaled neutral, stable and unstable atmospheric boundary layer (ABL) winds (Stull, 1988) in fluid tunnel tests (ASCE 49-12, 2012). Important aspects related to the similarity criteria (geometric, kinematic and dynamic) to be satisfied between full and reduced scale, both for the ABL flow and for the urban model, have been extensively investigated. On these grounds, a large variety of setups of roughness fetch and vortex generators (i.e. spires) to accurately reproduce the turbulent structures of the approach ABL flow upstream of the model have been systematically calibrated and tested in aeronautical, climatic and ABL fluid tunnels (e.g. Jensen, 1958, Castro and Robins, 1977, Cermak, 1981, Irwin, 1981, Saathoff and Melbourne, 1987, Davenport, 1992, Farell and Iyengar, 1999, Plate, 1999, Cermak, 2003, Holmes, 2004). In particular, a set of key parameters has been extensively monitored and found to be crucial for the proper development of a neutral ABL wind along the wind tunnel test section: mean velocity profiles, stream-wise turbulence intensity, lateral turbulence intensity, vertical turbulence intensity, power spectral density distributions and integral length scales of turbulence (Cermak, 1975).
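These key approach-flow parameters are typically extracted from single-point velocity records. As a brief illustration, the sketch below uses a synthetic (not measured) velocity signal to show standard estimators for the mean speed, the stream-wise turbulence intensity, and an integral length scale obtained from the autocorrelation of the fluctuations via Taylor's frozen-turbulence hypothesis.

```python
import numpy as np

def abl_statistics(u, dt):
    """Single-point estimators for a stream-wise velocity record u(t): mean
    speed, turbulence intensity I_u = sigma_u / U, and an integral length
    scale L_ux from the autocorrelation of the fluctuations combined with
    Taylor's frozen-turbulence hypothesis (L_ux ~ U * T_int)."""
    u = np.asarray(u, dtype=float)
    u_mean = u.mean()
    fluct = u - u_mean
    iu = fluct.std() / u_mean

    # Autocorrelation of the fluctuations, normalised so that rho(0) = 1.
    acf = np.correlate(fluct, fluct, mode="full")[u.size - 1:]
    acf /= acf[0]

    # Integrate rho(tau) up to its first zero crossing -> integral time scale.
    crossings = np.flatnonzero(acf <= 0.0)
    cutoff = crossings[0] if crossings.size else acf.size
    t_int = acf[:cutoff].sum() * dt
    return u_mean, iu, u_mean * t_int

# Illustrative synthetic record: 4 m/s mean flow with low-frequency gustiness
# and broadband noise, sampled at 500 Hz for 20 s (not measured data).
rng = np.random.default_rng(0)
dt = 2e-3
t = np.arange(0.0, 20.0, dt)
u = 4.0 + 0.4 * np.sin(2.0 * np.pi * 0.5 * t) + 0.2 * rng.standard_normal(t.size)

u_mean, iu, l_ux = abl_statistics(u, dt)
print(f"U = {u_mean:.2f} m/s, I_u = {100.0 * iu:.1f}%, L_ux = {l_ux:.2f} m")
```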
However, the agreement between reduced-scale experimental data and meteorological data from codes and standards (VDI-3783, 2000) still remains an important topic to be addressed to guarantee the validity of the scaled wind characteristics.

#### 2.8.2 Recommendations for the design of fluid tunnel experiments

Matching all of the most relevant dimensionless parameters between reduced scale and full scale can almost never be realised. However, the exact matching of some of these parameters is not that relevant for urban flow studies. As mentioned also in Section 2.6.2, in most cases, buildings of realistic urban models are mainly characterised by sharp edges with fixed flow separation points, which often makes the problem _Re_-independent. In addition, although in urban studies the _Re_ threshold (for turbulent flows) is typically exceeded, specific tests to investigate the _Re_ independence are always highly recommended. The _Ro_ and _Fr_, which account respectively for the effect induced by the rotation of the earth on the wind (e.g. Ekman spiral) and the gravity effect on the flow pattern, are typically irrelevant in urban studies, considering also that the former is almost unrealisable in ordinary fluid tunnels. In contrast, the _Ri_, _Gr_ and _Sc_ may play a key role in properly scaling thermal effects and dispersion phenomena (see also Sections 2.1, 2.4, 2.5, 2.7). On top of that, there are some other important aspects that may be crucial for the choice of the scale: (i) the fluid tunnel test-section length, necessary to properly develop the approach ABL wind (Cermak, 1975); (ii) the blockage ratio, i.e. the ratio between the frontal area of the model and the wind tunnel cross-section, which should not exceed 5% (see ASCE 49-12, 2012); (iii) the need to manufacture small-scale architectural features to avoid oversimplifications that might threaten the reliability of the experimental results (Carpentieri and Robins, 2015, Ricci et al., 2017b, Paden et al., 2022); (iv) the encompassment of a sufficient portion of the environment surrounding the area of interest (Ricci et al., 2022). Hence, it is clear that defining the most appropriate scale of a realistic urban model is ultimately a compromise of multiple factors, which gives rise to a wide range of commonly adopted scales, from approximately 1:200 (for small districts) to 1:1000 (for large portions of cities). With such small scales, the choice of the materials and the manufacturing technique can also help to improve the resolution of geometries, significantly reducing the gap between reality and models. Geometrical simplifications adopted for buildings, or a portion of these, might have an influence on the UBL and urban canopy layer (UCL) development but also on the local wind flow pattern (e.g. inside street canyons) (Ricci et al., 2017a, Ricci et al., 2017b). The choice of materials is also strictly related to the type of tests to be performed (e.g. wind speed, wind pressure, temperature, pollutant dispersion measurements) and consequently to the instrumentation that should be used (e.g. pressure taps, Irwin probes, cobra probes, hot-wire anemometry, laser-Doppler anemometry, particle image velocimetry). In this perspective, due to the small dimensions of buildings and streets, the use of a non-intrusive and accurate measurement technique is highly recommended for urban models.
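As a small aside on the scale-selection compromise discussed above, the blockage-ratio and Reynolds-number constraints reduce to simple arithmetic once a candidate geometric scale is chosen. The sketch below uses illustrative numbers for a hypothetical district model and test section, and an indicative (not normative) _Re_ threshold; none of the values are recommendations.

```python
# Minimal scale-selection check for a hypothetical urban model; all numbers
# are illustrative, not recommendations.
NU_AIR = 1.5e-5          # kinematic viscosity of air, m^2/s

def check_scale(scale, frontal_area_full, building_height_full,
                tunnel_area, u_tunnel, re_min=1.1e4, blockage_max=0.05):
    """Return (blockage_ratio, Re_H, constraints_met) for a candidate scale.

    scale                -- geometric scale factor, e.g. 1/500
    frontal_area_full    -- full-scale frontal area of the district, m^2
    building_height_full -- full-scale reference building height, m
    tunnel_area          -- wind tunnel cross-section area, m^2
    u_tunnel             -- reference wind speed in the tunnel, m/s
    re_min               -- indicative building Reynolds number threshold
    """
    blockage = frontal_area_full * scale**2 / tunnel_area
    h_model = building_height_full * scale
    re_h = u_tunnel * h_model / NU_AIR
    return blockage, re_h, (blockage <= blockage_max and re_h >= re_min)

# Sweep candidate scales for a district with ~6000 m^2 frontal area and
# 40 m tall buildings, in a 3.5 m x 1.5 m test section at 10 m/s.
for inv_scale in (200, 300, 500, 1000):
    b, re_h, ok = check_scale(1 / inv_scale, 6000.0, 40.0, 3.5 * 1.5, 10.0)
    print(f"1:{inv_scale:<4d} blockage = {100*b:5.2f}%  Re_H = {re_h:9.0f}  "
          f"{'ok' if ok else 'violates a constraint'}")
```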
Advantages and disadvantages of measurement techniques can differ when related to specific topics; however, giving an exhaustive overview is beyond the scope of this paragraph. Finally, since these models often extend beyond the fluid tunnel turntable, it is equally important to accurately define the monitored area of the models (where measurements will be performed), preferably far away from the lateral boundaries of the facility, avoiding any possible undesirable interference effects (Stathopoulos and Surry, 1983).

#### 2.8.3 Recent advances and outlook

Advanced techniques like 3D printing and injection moulding can facilitate the manufacture of such complex urban models, improve the geometry resolution, and allow sensor installation (e.g., pressure taps) to be pre-arranged during the design stage of the model, definitively reducing the discrepancies between reality and models. In that regard, the use of numerical techniques (e.g. CFD), as a complementary tool to support the fluid tunnel tests of ABL winds on realistic urban models, might help to gain a better understanding of (i) the size of the surrounding environment to be realised for fluid tunnel tests, (ii) the level of geometrical simplification to be adopted for buildings and other urban features, and (iii) the UBL and UCL development through/over the investigated area. Nevertheless, climate change and the increasing number of extreme events, different from most ordinary ABL winds, are boosting efforts in detecting, testing, simulating and modelling thunderstorm outflows and tornadoes worldwide (Hangan et al., 2017, Solari et al., 2020). The use of new fluid tunnel typologies and/or dedicated tornado/downburst simulators able to host extensive urban models has become a reality. However, due to downscaling and similarity issues, most of the studies still focus on empty chambers or isolated structures/buildings. This trend is bound to change, and most likely more experimental tests on realistic urban models will be carried out in the near future, in order to make buildings and cities more resilient and less costly against storm wind damage. In conclusion, if on the one hand the main findings of realistic urban studies are difficult to generalise and to use for basic analytical formulations (as for idealised case studies), on the other hand it has to be acknowledged that this is probably one of the best "trade-offs" between academic and practical needs, providing a reliable understanding of the wind flow field around buildings amidst well-settled urban layouts and feasible solutions to actual problems.

## 3 Three challenges for fluid tunnel modelling of the urban climate

### Modelling of multi-physics

Multi-physics processes take place in our living cities, such as solar radiation absorption and heat emission from urban materials, convective heat transport by wind, evapotranspiration of plants, evaporation of water bodies, release of anthropogenic heat, emission and dispersion of pollutants, etc. Physical modelling of these processes in fluid tunnels is extremely challenging, if not impossible. The challenges lie in (i) deriving scaling among the multi-physics processes, (ii) complex setups for generating these phenomena simultaneously, and (iii) limited capabilities for large-scale flow, temperature and humidity measurements. Scaling has been well established for isothermal airflow studies using fluid tunnels, where Reynolds-number independence is often assumed (e.g. Chew et al., 2018; Shu et al., 2020).
For buoyancy-involved or non-isothermal airflow, the Richardson number has been accepted as the proper characteristic dimensionless parameter (Aliabadi et al., 2019; Zhao et al., 2022). However, for other physical processes, there are few well-established scaling criteria. As an example, scaling for the urban surface heat budget, which is dominated by shortwave and longwave radiation, convective heat transfer and heat storage by urban materials, has not been established for scaled-down experimental studies. Present studies of these physical processes usually prescribe a constant surface temperature or heating capacity to mimic urban heat to some extent. Shading and transpirative cooling by vegetation and plants in urban areas, as another example, could be studied using fluid tunnel experiments based on thoughtful scaling analysis (Manickathan et al., 2022).

In addition, complex experimental setups are needed to reproduce multi-physics processes in a fluid tunnel. Experimental setups in the field of urban climate are often designed to study a particular physical process, rather than to study coupled multiple processes. For studies of urban isothermal wind, fluid tunnel measurements based on scaled-down realistic neighbourhood models have been the norm. This could be the basis on which other physical processes, such as a heterogeneous urban surface heat budget, could be realised by introducing artificial solar radiation and selecting building models' materials thoughtfully to mimic heat absorption and emission.

Figure 9: Wind-tunnel testing of parts of realistic urban cities: (a) Empire State Building and its immediate surroundings, USA, modified from (Harris, 1933); (b) Twin Towers of the New York Trade Center, USA, modified from (Plate, 1999); (c) district of Quartiere La Venezia, Livorno city, Italy (with permission of Dr. A. Ricci).

In a further step, the cooling effects of living vegetation in complex urban streets may be physically modelled using small-sized pot plants. Last but not least, substantial development of measurement techniques and advanced post-processing abilities is still much needed for fluid tunnel studies of urban climate. For isothermal airflow, planar- and stereo-PIV have been developed to such an advanced level that accurate velocity fields can be obtained. The recent development of the water tunnel (flume) PIV-LIF measurement technique provides a way to obtain velocity and temperature fields simultaneously and thus to better understand turbulent and convective heat transport processes (Zhao et al., 2022). However, for wind tunnel modelling, temperature field measurements still mainly rely on individual thermocouple measurements at certain locations. Instruments that allow air temperature and humidity measurements over a large field of view (FOV) need to be developed for studies modelling coupled heat and moisture transport in complex urban sites. The development of these instruments should take mobility and flexibility into account to allow efficient measurements with multiple FOVs.

### Modelling of anthropogenic processes

Urban climate involves a complex interplay of many anthropogenic processes and the synoptic climate (Kubilay et al., 2020, Masson et al., 2020). Typical processes include, but are not limited to, the emission of anthropogenic heat through air-conditioners, transportation, industrial plants, etc. (Mei and Yuan, 2021).
The emission of pollutants in the form of aerosols may particularly affect the radiation in the lower atmosphere and even the formation of clouds and precipitation (Masson, 2006, Nazarian et al., 2018). To reproduce urban climate phenomena in fluid tunnels, stochastic and anthropogenic processes in different forms need to be taken into account. The challenges in reproducing the anthropogenic processes of urban climate in fluid tunnels primarily lie in mimicking spatially and temporally inhomogeneous heat and pollutant sources. As an example, on the one hand, to simulate the impacts of the spatially inhomogeneous release of anthropogenic heat in different residential areas, accurately designed heating elements in a fluid tunnel are required, which should be capable of maintaining the desired spatially varied surface temperatures. On the other hand, those controllable heating elements should be able to reproduce the temporal variation of the release of anthropogenic heat, such as the varying operational capacity of Heating, Ventilation, and Air Conditioning (HVAC) systems due to different cooling demands over a day. As another example, to study inhomogeneous air quality in cities or in a neighbourhood, spatially and temporally dependent anthropogenic pollutant emission processes should be modelled in fluid tunnels.

How to couple these anthropogenic, stochastic processes to the prevailing wind, heat, and moisture transport processes at realistic spatial and temporal scales remains another key challenge for fluid tunnel modelling. Modelling meteorological conditions (e.g. wind) from the statistical point of view of their prevailing characteristics is well established. However, when it comes to highly time-dependent or fast-evolving anthropogenic processes, multiple time scales have to be considered and realised in fluid tunnel modelling to characterise both the prevailing (background) urban climate and the timewise (superimposed) characteristics. Given the complexity in matching various spatiotemporal scales, not to mention the development of case-specific experimental setups, the best use of fluid tunnel modelling to understand stochastic anthropogenic processes may lie in providing benchmark experimental data for the validation and calibration of CFD models, which ultimately facilitates numerical studies of the full-scale urban climate. High-quality experimental data from fluid tunnels on anthropogenic heat or pollutant emission processes, involving varying intensities of wind flow turbulence and buoyancy, are much needed for CFD model development, in particular for turbulence modelling and the correct treatment of the buoyancy term.

### Combined fluid tunnel, scaled outdoor and field measurements

While full-scale field experiments (\(H\sim 10\) - 100 m) provide a reliable way to understand urban climate (Eliasson et al., 2006, Offerle et al., 2007), diurnal cycles of the urban thermal environment, including solar radiation, thermal storage by urban materials, vegetation transpiration and others, are hard to model in fluid tunnels. Furthermore, urban geometric layouts and building surface materials in real cities are highly heterogeneous, and the thermal boundaries are complicated and usually difficult to quantify. Scaled outdoor measurements (\(H\sim 1\) m) have been verified as a good option to obtain high-quality parametric experimental data under realistic meteorological conditions (e.g., Yee and Biltoft, 2004, Kawai and Kanda, 2010, Chen et al., 2020).
One of the paramount advantages of scaled outdoor measurements is the representation of physical processes that rely on natural, realistic conditions, such as solar radiation. Also, scaled outdoor measurements usually allow models one order of magnitude larger than fluid tunnel models, which facilitates the matching of the characteristic dimensionless numbers. As an example, compared to fluid tunnel studies, in which scaled-down models are in the range of \(H\sim 0.1\) m, scaled outdoor models are in the range of \(H\sim 1\) m. Building models of complex geometries or living vegetation of different species (e.g., trees and shrubs) may be realised in situ at the measurement site, without the need for additional setups. As it is difficult or even impossible to satisfy all similarity requirements for scaled experimental studies in either laboratory or outdoor settings, a promising and viable approach for investigating urban airflow and the thermal environment may be to rely on the combination of numerical simulations and experiments at various scales, i.e., full-scale field measurements, scaled outdoor measurements and fluid tunnel experiments. Conducting measurements covering this wide range of scales also allows us to gain an understanding of the effects of scaling and to identify physical processes that are particularly sensitive to scaling.

## 4 Concluding Remarks

In this paper, the capabilities of fluid (wind and water) tunnels for the modelling of urban climate have been reviewed for eight important physical processes (i.e., transport of heat in airflow, evapotranspiration and aerodynamic effects of vegetation, solar radiation, thermal stratification of airflow, pollutant dispersion, indoor and outdoor ventilation, outdoor wind thermal comfort, and wind dynamics in complex urban settings). Fundamental considerations, recommendations for the design of experimental modelling, recent advances and outlooks have been provided for each topic, which serve as a repository of the state of the art of physical modelling using fluid tunnels. While substantial advances have been made in the modelling of decoupled, individual physical processes, grand challenges ahead lie in (i) physical modelling of coupled physical processes, (ii) mimicking of stochastic anthropogenic heat and pollution processes, and (iii) scaling of the different multi-physics processes. Fluid tunnel modelling of the multi-physics processes dominating urban climate, such as the joint effects of evapotranspiration of plants and wind flow in complex and realistic urban morphology, is much needed for understanding the physics and also for the validation of advanced numerical models that solve multi-physics problems of urban climate. In addition to the challenges in realising those physical processes in fluid tunnels, establishing proper scaling between scaled-down multi-physics processes and full-scale scenarios is even more challenging, and deserves future research efforts, particularly theoretical analyses. To tackle these grand challenges, a research consortium that comprises experienced researchers working on different urban climate processes is imperative. The rich expertise of such a research consortium would facilitate the design of advanced experimental setups for generating and studying a combination, or the full set, of those important physical processes. The realisation and outcome of fluid tunnel modelling of multi-physics urban processes would in parallel contribute to the validation and enhancement of advanced numerical models for urban climate studies.
## Acknowledgements

We acknowledge the funding support from the Swiss National Science Foundation (Grant 200021 169323) and the Fundamental Research Funds for the Central Universities (K20220163).

## CRediT authorship contribution statement

**Y. Zhao**: Conceptualization.

**Y. Zhao, L.W. Chew, Y. Fan, C. Gromke, J. Hang, Y. Yu, A. Ricci, Y. Zhang, Y. Xue, S. Fellini, P.A. Mirzaei, N. Gao, M. Carpentieri**: Writing - original draft, review & editing.

**P. Salizzoni, J. Niu, J. Carmeliet**: Writing - review & editing.

## Appendix A

Table A1. The eight research areas covered in this paper and their requirements in fluid tunnel modelling. Y for yes, N for no, O for optional (depending on applications).
2310.15122
A Reactive Molecular Dynamics Model for Uranium/Hydrogen Containing Systems
Uranium-based materials are valuable assets in the energy, medical, and military industries. However, understanding their sensitivity to hydrogen embrittlement is particularly challenging due to the toxicity of uranium and the computationally expensive nature of the quantum-based methods generally required to study such processes. In this regard, we have developed a Chebyshev Interaction Model for Efficient Simulation (ChIMES) model that can be employed to compute energies and forces of U and UH3 bulk structures with vacancies and hydrogen interstitials with similar accuracy to Density Functional Theory (DFT) while yielding linear scaling and orders of magnitude improvement in computational efficiency. We show that the bulk structural parameters, uranium and hydrogen vacancy formation energies, and diffusion barriers predicted by the ChIMES potential are in strong agreement with the reference DFT data. We then use ChIMES to conduct molecular dynamics simulations of the temperature-dependent diffusion of a hydrogen interstitial and determine the corresponding diffusion activation energy. Our model has particular significance in studies of actinides and other high-Z materials, where there is a strong need for computationally efficient methods to bridge length and time scales between experiments and quantum theory.
A. Soshnikov, R. K. Lindsey, A. Kulkarni, N. Goldman
2023-10-23T17:26:41Z
http://arxiv.org/abs/2310.15122v1
# A Reactive Molecular Dynamics Model for Uranium/Hydrogen Containing Systems

###### Abstract

Uranium-based materials are valuable assets in the energy, medical, and military industries. However, understanding their sensitivity to hydrogen embrittlement is particularly challenging due to the toxicity of uranium and the computationally expensive nature of the quantum-based methods generally required to study such processes. In this regard, we have developed a Chebyshev Interaction Model for Efficient Simulation (ChIMES) model that can be employed to compute energies and forces of U and UH\({}_{3}\) bulk structures with vacancies and hydrogen interstitials with similar accuracy to Density Functional Theory (DFT) while yielding linear scaling and orders of magnitude improvement in computational efficiency. We show that the bulk structural parameters, uranium and hydrogen vacancy formation energies, and diffusion barriers predicted by the ChIMES potential are in strong agreement with the reference DFT data. We then use ChIMES to conduct molecular dynamics simulations of the temperature-dependent diffusion of a hydrogen interstitial and determine the corresponding diffusion activation energy. Our model has particular significance in studies of actinides and other high-Z materials, where there is a strong need for computationally efficient methods to bridge length and time scales between experiments and quantum theory.

## Introduction

Uranium (U) is a unique material with a number of practical uses, such as nuclear fuel for electricity generation, radio-isotope sources for diagnosis and research in the medical industry, and as a power source for submarines and weaponry by the military.[2] The pure metal occurs in three solid polymorphs: \(\alpha\) (orthorhombic), \(\beta\) (tetragonal) and \(\gamma\) (body-centered cubic). The most prominent metal phase in nature is \(\alpha\)-U, shown in Figure 1a, which transforms to \(\beta\)-U at approximately 935 K and subsequently to \(\gamma\)-U at approximately 1045 K.[3] However, uranium is also a highly reactive metal and under ambient conditions will react spontaneously with hydrogen gas to form a brittle hydride (UH\({}_{3}\)), causing disintegration of the underlying metal. UH\({}_{3}\) itself is pyrophoric, leading to operational hazards and making surface hydrogenation experiments highly challenging.[4] Uranium hydride can exist in two different phases: \(\alpha\)-UH\({}_{3}\) and \(\beta\)-UH\({}_{3}\). Both phases possess a cubic lattice in which each U atom is surrounded by 12 H atoms, as shown in Figure 1(b and c). The \(\alpha\)-UH\({}_{3}\) phase, which is metastable at low temperature, exhibits a face-centered cubic (fcc) symmetry, with one UH\({}_{3}\) formula unit per primitive cell or four formula units in the cubic unit cell.
In contrast, the ground-state \(\beta\)-UH\({}_{3}\) phase exhibits lower symmetry, with eight formula units per unit cell, though with cubic symmetry overall. The more compact \(\alpha\)-phase completely converts to the \(\beta\)-phase at approximately 375 K, generally below the operating conditions of many nuclear reactors.[5]

Relatively little is known about the hydriding mechanism in pure uranium, in part due to the material's reactive nature. Future studies would thus greatly benefit from atomic-level simulations of the hydriding process, which can provide microscopic details about the hydrogen-uranium reaction and help guide future experimentation. In order to probe the formation of new material phases and the formation of covalent bonds, atomistic simulations frequently require the use of Kohn-Sham Density Functional Theory (DFT), which has shown immense predictive capability for material phases over a wide range of thermodynamic conditions [6, 7, 8, 9]. However, DFT calculations require significant computational resources per simulation step, and therefore are generally limited to time scales on the order of picoseconds and system sizes of a few hundred atoms. Small-scale U\(+\)H calculations are relatively tractable with DFT and can be used in dataset preparation and validation. However, DFT calculations are too computationally intensive to model polycrystalline regions, grain boundaries, and realistic defect concentrations that likely play a significant role in the hydriding process. In fact, for this study, approximately 70,000 CPU-hours were required to run a molecular dynamics (MD) trajectory of a small \(\alpha\)-UH\({}_{3}\) system (54 U + 162 H) for only 0.5 ps. In contrast, converged studies of hydride initiation, nucleation, and growth could require simulation cells of tens of thousands of atoms or more, run for nanosecond timescales or longer [10]. Therefore, uranium hydriding atomistic simulations require an alternative fast, accurate, and computationally inexpensive approach that can capture large-scale effects that are cost prohibitive to determine using DFT alone.

A practical solution to ameliorate these system size and time scale limitations is the development of computationally efficient MD force fields that can approximate the underlying potential energy surface with accuracy comparable to DFT. In this respect, classical force field approaches [11, 12, 13] have traditionally shown outstanding computational efficiency in modeling materials. These empirical approaches, though, generally do not allow for reactive conditions where bond breaking and forming occurs. Reactive force field methods, such as ReaxFF [14] and COMB [15], incorporate both reactive and non-reactive terms with physically motivated bond-order forms, and allow for bond breaking and forming under realistic conditions. However, these methods frequently involve rigid functional forms that can require potentially complex optimizations of non-linear parameters. More recently, machine learning (ML) approaches for MD simulations have been developed that utilize many-body kernels in more abstract, highly flexible functional forms. Examples include the Gaussian Approximation Potential (GAP) [16], which leverages Gaussian Process Regression, and DeepMD [17, 18], which leverages deep neural networks.

Figure 1: Crystal structure of (a) \(\alpha\)-U, (b) \(\alpha\)-UH\({}_{3}\), and (c) \(\beta\)-UH\({}_{3}\), drawn using Vesta (version 3.0) [1].
These ML approaches have shown a high degree of accuracy and transferability for a number of systems [19, 20]. ML approaches generally require large training and validation datasets as well as significant training times due to their inherent non-linear optimization requirements. These issues can pose a particular challenge for actinide-containing systems, where existing DFT data can be limited and training data can be difficult to generate due to the extreme computational effort associated with quantum calculations of high-Z materials.

Machine-learned methods that rely on linear parameterization, such as the Chebyshev Interaction Model for Efficient Simulation [21, 22] (ChIMES), hold promise as potentially easier-to-optimize models for accelerated MD simulations with a high degree of accuracy. ChIMES is a many-body reactive force field for MD simulation based on linear combinations of many-body Chebyshev polynomials. The use of linear parameterization allows for optimal fitting coefficients to be solved for directly in most cases, as well as for powerful regularization approaches which are not necessarily available to non-linear optimization problems. ChIMES is based on an N-body expansion of DFT energies and forces and thus allows for a physically motivated and highly flexible functional form. In addition, the use of Chebyshev polynomials imparts several advantages, including: (1) Chebyshev polynomials of the first kind are orthogonal and can be generated recursively, forming a complete basis set; (2) the derivatives of Chebyshev polynomials of the first kind are related to Chebyshev polynomials of the second kind, which are also orthogonal and generated recursively; (3) higher-order polynomials tend to have decreasing expansion coefficient values due to their monic form; and (4) Chebyshev polynomials of the first kind are nearly optimal, which means that the error due to interpolation closely resembles that of a minimax polynomial. ChIMES model optimization can be performed relatively quickly (e.g., within minutes for each optimization step of our study). In addition, ChIMES models have been shown in some cases to have significantly smaller data requirements and numbers of parameters than some neural network approaches [23], making them ideal for the application space studied here. Numerous ChIMES models have been designed for complex systems, such as molten liquid carbon [21], water [24, 25], high-pressure C/O systems [26, 27], hydrazoic acid (HN3) [28], titanium hydride (TiH2) [29], and silicon [30].

In this work, we detail our efforts to create a ChIMES model for use in uranium hydriding studies. We start with a brief discussion of our DFT calculations as well as the ChIMES methodology. We then investigate different options for optimal values of the ChIMES hyperparameters, including the polynomial orders for the different bodied interactions, the minimum and maximum interatomic distance cutoffs, and the regularization parameters. We validate our model against computational and experimental results, including the lattice constants and bulk moduli of different U-H phases, as well as defect energies for single and multiple defects of uranium vacancies, hydrogen interstitials, and hydrogen vacancies in uranium hydride. Finally, we present results from simulations with our optimal model, including the kinetic properties of hydrogen diffusion through bulk \(\alpha\)-U and molecular dynamics simulations of diffusion coefficients as a function of temperature.
In all cases, we find that ChIMES yields a high degree of accuracy relative to DFT calculations on smaller system sizes.

## Computational Methods

### **DFT**

All of our DFT calculations were performed using the Vienna ab initio simulation package (VASP) [31, 32, 33] with projector augmented wave (PAW) pseudopotentials [34, 35] for U and H and the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation exchange-correlation functional [36]. In terms of model optimization, we choose to focus on the high-symmetry \(\alpha\)-UH\({}_{3}\) phase and leave results from \(\beta\)-UH\({}_{3}\) for validation. The energy cutoff for the planewave basis set was set to 500 eV based on convergence tests. Structural relaxations were performed until the forces on all atoms were less than 0.01 eV/Å. A 4x4x4 k-point mesh generated by the Monkhorst-Pack [37] method was used for integration over the Brillouin zone when generating the training sets for both \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\), discussed below.

Our full set of reference data contains 792 snapshots from the following DFT calculations: (1) short (0.5-1 ps) molecular dynamics simulations of \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\) metallic systems at temperatures of 400 K and 1000 K and various hydrogen concentrations, (2) single-point calculations of isotropically distorted \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\) lattices, and (3) supercell structures with uranium vacancies and images from optimized minimum energy path calculations of hydrogen diffusion in \(\alpha\)-U. Pure uranium optimization calculations (atomic coordinates and lattice parameters) and DFT-MD simulations of interstitial-containing systems were performed on a 4x2x3 supercell with initial lengths of 11.209 x 11.687 x 14.711 Å. The interstitial simulations contained 94-96 uranium atoms and various concentrations of hydrogen (1-10 H atoms). We used a cubic supercell for defect-free \(\alpha\)-UH\({}_{3}\), with a lattice vector length of 12.363 Å and containing 54 U and 162 H atoms. All DFT-MD simulations were performed in the canonical ensemble (NVT) with a timestep of 4.0 fs for pure uranium systems and 0.20 fs for systems containing H, using Nose-Hoover thermostat chains [38, 39, 40] and periodic boundary conditions. For our training, uniformly spaced frames from the MD calculations were extracted every 150-200 fs in order to ensure that configurations were as statistically uncorrelated as possible.

### **ChIMES**

A detailed explanation of the ChIMES interaction model has been given elsewhere [29, 30, 41] and is briefly summarized here. The design philosophy behind ChIMES comprises mapping the DFT total energy onto linear combinations of many-body Chebyshev polynomials of the first kind.
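Since the basis functions at the heart of the method are Chebyshev polynomials of a scaled pair distance, they can be generated with the standard three-term recurrence. The sketch below is an illustration, not the ChIMES implementation, and uses a simple linear map from [r_min, r_max] to [-1, 1] for the coordinate transform (the transform actually used in ChIMES is described in Refs. [21, 22]).

```python
import numpy as np

def chebyshev_T(s, order):
    """Chebyshev polynomials of the first kind, T_0..T_order, evaluated at s
    via the recurrence T_m(s) = 2 s T_{m-1}(s) - T_{m-2}(s)."""
    T = np.empty(order + 1)
    T[0] = 1.0
    if order >= 1:
        T[1] = s
    for m in range(2, order + 1):
        T[m] = 2.0 * s * T[m - 1] - T[m - 2]
    return T

def scaled_coordinate(r, r_min, r_max):
    """Map a pair distance on [r_min, r_max] into the Chebyshev domain [-1, 1].
    A linear map is used here for simplicity only."""
    return 2.0 * (r - r_min) / (r_max - r_min) - 1.0

# Example: the first few polynomials for a U-H pair at r = 2.1 A, using the
# cutoffs quoted later in this work (r_min = 0.68 A, r_max = 5.5 A).
s = scaled_coordinate(2.1, 0.68, 5.5)
print(np.round(chebyshev_T(s, 5), 4))
```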
The ChIMES total energy is expressed as follows:

\[E_{\text{ChIMES}}=\sum_{i_{1}}^{n_{\text{a}}}{}^{1}E_{i_{1}}+\sum_{i_{1}>i_{2}}^{n_{\text{a}}}{}^{2}E_{i_{1}i_{2}}+\sum_{i_{1}>i_{2}>i_{3}}^{n_{\text{a}}}{}^{3}E_{i_{1}i_{2}i_{3}}+\sum_{i_{1}>i_{2}>i_{3}>i_{4}}^{n_{\text{a}}}{}^{4}E_{i_{1}i_{2}i_{3}i_{4}}+\text{higher-order terms},\tag{1}\]

where \(n_{\text{a}}\) is
Higher bodied orthogonal polynomials for clusters greater than a dimer can be constructed by taking the tensorial product of the sum of the constituent \(\binom{m}{2}\) unique pairwise polynomials of a that cluster. For example, a triplet or three-body cluster with the set of indices of \(\{i_{1},i_{2},i_{3}\}\) contains \(\binom{3}{2}=3\) unique distances, namely \(r_{i_{1}i_{2}},r_{i_{1}i_{3}},r_{i_{2}i_{3}}\). Thus, a three-body polynomial of total order \(m+n+q\) is constructed by first applying the transforms \(r_{i_{1}i_{2}}\to s_{i_{1}i_{2}}^{e_{i_{1}}e_{i_{2}}},\ r_{i_{1}i_{3}} \to s_{i_{1}i_{3}}^{e_{i_{1}}e_{i_{3}}},\ r_{i_{2}i_{3}} \to s_{i_{2}i_{3}}^{e_{i_{2}}e_{i_{3}}}\) and then taking the product \(T_{m}\left(s_{i_{1}i_{2}}^{e_{i_{1}}e_{i_{2}}}\right)T_{p}\left(s_{i_{1}i_{3}} ^{e_{i_{1}}e_{i_{3}}}\right)T_{q}\left(s_{i_{2}i_{3}}^{e_{i_{2}}e_{i_{3}}}\right)\). The three-body energy \(E_{ijk}\) can then be computed as the following linear combination: \[\ {}^{3}E_{i_{1}i_{2}i_{3}}=f_{\mathrm{c}}^{e_{i_{1}}e_{i_{2}}}\big{(}r_{i_{1} i_{2}}\big{)}f_{\mathrm{c}}^{e_{i_{1}}e_{i_{3}}}\big{(}r_{i_{1}i_{3}}\big{)}f_{ \mathrm{c}}^{e_{i_{2}}e_{i_{3}}}\big{(}r_{i_{2}i_{3}}\big{)}\sum_{m=0}^{\sigma_ {3}}\sum_{p=0}^{\sigma_{3}}\sum_{q=0}^{\sigma_{3}}c_{mpq}^{e_{i_{1}}e_{i_{2}}e _{i_{3}}}T_{m}\left(s_{i_{1}i_{2}}^{e_{i_{1}}e_{i_{2}}}\right)T_{p}\left(s_{i_{ 1}i_{3}}^{e_{i_{1}}e_{i_{3}}}\right)T_{q}\left(s_{i_{2}i_{3}}^{e_{i_{2}}e_{i_{3 }}}\right)\ . \tag{3}\] In this case, the set of \(\left\{c_{mpq}^{e_{i_{1}}e_{i_{2}}e_{i_{3}}}\right\}\) correspond to the three-body fitting coefficients that are permutationally invariant to atom types in the set \(\big{\{}e_{i_{1}},e_{i_{2}},e_{i_{3}}\big{\}}\) as well as polynomial order. We also apply smoothly varying cutoff functions to the three-body interactions, though penalty functions are omitted in this case and only included in the two-body energies. Finally, the '\(\ast\)' in the sum in Equation (3) corresponds to the fact that we only include distinct triplets in the sum where \(r_{i_{1}i_{2}}\), \(r_{i_{1}i_{3}}\), and \(r_{i_{2}i_{3}}\) are all less than \(r_{max}^{e_{i_{1}}e_{i_{2}}}\). Greater than three-body terms in the ChIMES energy expression are included in an equivalent manner. In practice a maximum of four-body terms are used due to prevent creating a combinatorically large polynomial space and potential parameter explosion.[23, 29, 30] ChIMES bears some resemblance to other polynomial expansion methods such as the Atomic Cluster Expansion [42, 43] (ACE) and spectral neighbor analysis potential [44] (SNAP) methods. We note that the polynomial basis sets in these methods are atom-centered and are functionally different than the cluster-centered Chebyshev approach we employ here. Similar to other atomic interaction potentials, [45, 46, 47, 48], ChIMES models are trained through matching forces, energies, and stress tensor components. In general, training and validation data are generated through DFT optimized structures and MD simulations, though the possibility exists to include data from higher levels of theory. In addition, use of weights is frequently required due to the differing physical units of the forces, stresses, and energies and the number of data points per configuration. 
We can thus define an objective function for our optimization as follows: \[F_{obj} = \frac{1}{N_{d}}\sum_{\tau=1}^{M}\left(\sum_{i=1}^{N_{\tau}}\sum_ {\alpha=1}^{3}\left(w_{\text{F}}\Delta\text{F}_{\tau_{\alpha_{1}}}\right)^{2} +\sum_{\alpha=1}^{3}\sum_{\beta\leq\alpha}\left(w_{\sigma}\Delta\sigma_{\tau_{ \alpha\beta}}\right)^{2}+(w_{\text{E}}\Delta E_{\tau})^{2}\right).\] The index \(\tau\) corresponds to a specific training set configuration from the total set of \(N_{\tau}\) configurations, \(i\) is the atomic index, and \(\alpha\) and \(\beta\) are the cartesian directions. We use the index \(M\) to denote the total number of configurations in the training set, with \(N_{d}\) corresponding to the total number of data points (e.g., forces, stress tensor components, and energies). In addition, \(\Delta\text{F}_{\tau_{\alpha_{1}}}=\text{F}_{\tau_{\alpha_{1}}}^{\text{ChIMES} }-\text{F}_{\tau_{\alpha_{1}}}^{\text{DFT}}\), \(\Delta\sigma_{\tau_{\alpha\beta}}=\sigma_{\tau_{\alpha\beta}}^{\text{ChIMES} }-\sigma_{\tau_{\alpha\beta}}^{\text{DFT}}\), \(\Delta\text{E}_{\tau}=\text{E}_{\tau}^{\text{ChIMES}}-\text{E}_{\tau}^{\text{ DFT}}\). The value \(w_{\text{F}}\) is the weight for forces, \(w_{\sigma}\) for the stress tensor components, and \(w_{\text{E}}\) for the energies. **c) Optimization and Regularization** Regularization is an important concept that is utilized in order to avoid overfitting of the trained data, and we refer to previous publications for further details. [30] In this work, we use the Least Absolute Shrinkage and Selection Operator method (LASSO), an L1 regularization technique which adds a penalty proportional to the absolute value of the magnitude of the fitting coefficients. This promotes smaller valued coefficients to become zero and subsequently be eliminated from the model. In this case, the objective function \(F_{obj}\) is minimized with the following additional penalty on the sum of the absolute values of the fitting coefficients \(C_{i}\): \[F^{\prime}_{obj}=N_{d}F_{obj}+2\alpha^{\prime}\sum_{i=1}^{N_{p}}|C_{i}|\] where \(\alpha^{\prime}\) is the parameter that regularizes the magnitude of the fitting coefficients \(C_{i}\), and \(N_{p}\) is the total number of unique fitting parameters. In our work, we use LASSO as implemented within the Least-Angle Regression (LARS) optimization method, [49] which is discussed in more detail in Refs. [29, 30] ## 4 Finding optimal parameters ChIMES model development requires the definition of a number of _hyperparameters_, i.e., user-chosen model parameters. These include the two, three, and four-body polynomial orders, minimum and maximum atomic interaction distance cutoffs (r\({}_{\text{min}}\) and r\({}_{\text{max}}\)), and the regularization method and degree of regularization. Figure 2 shows a workflow diagram for ChIMES model optimization, which comprises exploring different combinations of the fitting parameters. Validation was determined through calculation of the root-mean-square (RMS) error of different physical properties that were not included in our fits, such as the bulk lattice constants, ground-state volumes for \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\) systems, single uranium vacancy formation energy in pure U, and single hydrogen absorption interstitials in an \(\alpha\)-U supercell. 
**1.1 Optimal interatomic interaction distances.** The \(r_{min}\) value for each atomic pair (H-H, U-H, and U-U) was determined by scanning the data and finding the minimum interatomic interaction distance in our dataset (0.50 Å for H-H, 0.68 Å for U-H, and 1.07 Å for U-U pairs). The \(r_{max}\) values for all atomic pairs were uniformly set to the maximum allowed distance sampled in our training set (5.5 Å) in order to satisfy the minimum image convention (i.e., one half of the shortest box length).

**1.2 Sweep of polynomial order.** Given our choice of \(r_{min}\) and \(r_{max}\), we have created multiple models with our training set by sampling a range of polynomial orders for the two-body (2B), three-body (3B), and four-body (4B) interactions. Here, we have looped over our workflow by sampling the 2B order over a range of \(2\leq O_{\text{2B}}\leq 18\) and the 3B order over a range of \(2\leq O_{\text{3B}}\leq 14\), each with a step size of two. We have also created a subset of potentials with the 4B order varied over a range of \(1\leq O_{\text{4B}}\leq 4\), in steps of one. Our preliminary studies showed that models with 2B-only interactions produced substantially diminished results for all validation tests. Therefore, in this work we discuss results from models with combinations of non-zero 2B/3B order values only. The optimizations were performed with a LASSO regularization parameter of \(\alpha=10^{-3}\), similar to previous work [29, 30]. In doing so, we are able to validate a large number of independent models and eventually down select to our optimal choice. In this section, we first present results for models with 2B/3B polynomials, followed by studies with incorporation of the 4B terms.

Figure 2: Flowchart for the creation of a ChIMES model.

The computed root-mean-square (RMS) errors in our training set for the atomic forces on hydrogen and uranium ions are shown in Figure 3. A more detailed summary of the RMS error results, which includes the diagonal of the stress tensor and the energies, is shown in the Supplementary Materials section. Unsurprisingly, the RMS error for the forces of each fit decreases systematically with higher polynomial orders and higher-bodied models. For example, a model with a set of {\(O_{\text{2B}}\)=8, \(O_{\text{3B}}\)=6} yields fitting errors of 0.489 and 1.971 eV/Å for the forces acting on H and U, respectively. On the other hand, a model with a set of {\(O_{\text{2B}}\)=18, \(O_{\text{3B}}\)=12} produces substantially lower fitting errors of 0.197 and 0.342 eV/Å. We notice similar trends for the RMS errors of the diagonal of the stress tensor (e.g., 1.962 GPa for the {\(O_{\text{2B}}\)=8, \(O_{\text{3B}}\)=6} model compared to 0.696 GPa for the {\(O_{\text{2B}}\)=18, \(O_{\text{3B}}\)=12} model). On the other hand, the opposite trend is observed for the total energy, as increasing polynomial order yields larger RMS values (e.g., 0.283 eV/U atom for the {\(O_{\text{2B}}\)=8, \(O_{\text{3B}}\)=6} and 0.421 eV/U atom for the {\(O_{\text{2B}}\)=18, \(O_{\text{3B}}\)=12} sets). We also note that models with higher orders of polynomials require more computational resources for the optimization process (e.g., about ten times the computational effort for the {\(O_{\text{2B}}\)=18, \(O_{\text{3B}}\)=12} model discussed here), though the optimization remains relatively rapid (approximately 45 minutes on a single Intel Xeon E5-2695 processor for the above example).

Figure 3: Results for training RMS errors using \(O_{\text{2B}}\) and \(O_{\text{3B}}\) Chebyshev polynomials. RMS error in the forces in units of eV/Å for (a) U atoms and (b) H atoms.
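The monotonic decrease of training error with polynomial order can be reproduced in miniature with a plain one-dimensional Chebyshev least-squares fit; the sketch below sweeps the fit order over the same kind of grid on a synthetic noisy "energy curve" (a toy analogue of the (O2B, O3B) sweep, not actual U-H data).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1D "energy curve" with noise, standing in for DFT training data.
s = np.linspace(-1.0, 1.0, 200)
y = np.exp(-3.0 * (s + 0.4) ** 2) - 0.5 * np.exp(-8.0 * (s - 0.3) ** 2)
y += 0.01 * rng.standard_normal(s.size)

# Sweep the Chebyshev fit order; training RMS falls until it reaches the
# noise floor, mirroring the trend reported in Figure 3.
for order in range(2, 20, 2):
    coeffs = np.polynomial.chebyshev.chebfit(s, y, order)  # least-squares fit
    resid = y - np.polynomial.chebyshev.chebval(s, coeffs)
    print(f"order {order:2d}: training RMS = {np.sqrt(np.mean(resid**2)):.4f}")
```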
Incorporation of 4B interactions further improves the training accuracy; here we have included terms up to \(O_{\text{4B}}\)=4, similar to previous work [23, 29]. As shown in Figure 4 and Table S1 (Supplementary Materials), we observe a monotonic decrease in the RMS training error with inclusion of higher four-body orders in our model. We see RMS force errors of 0.580 eV/Å in U and 0.266 eV/Å in H for {\(O_{\text{2B}}\)=16, \(O_{\text{3B}}\)=8, \(O_{\text{4B}}\)=0}, 0.580 eV/Å and 0.266 eV/Å for {\(O_{\text{2B}}\)=16, \(O_{\text{3B}}\)=8, \(O_{\text{4B}}\)=1}, and 0.560 eV/Å and 0.263 eV/Å for {\(O_{\text{2B}}\)=16, \(O_{\text{3B}}\)=8, \(O_{\text{4B}}\)=2}. In this case, the RMS errors for the total energy diminish with increasing 4B polynomial order (e.g., 0.243 eV/U atom for the \(O_{\text{4B}}\)=0 and 0.238 eV/U atom for the \(O_{\text{4B}}\)=3 models), though these improvements are small. We notice that the ChIMES model with \(O_{\text{4B}}\)=0 yields virtually identical results to those from \(O_{\text{4B}}\)=1, which occurs due to the LASSO regularization setting these four-body parameters to zero. We observe some measurable reduction in the RMS errors for models with \(O_{\text{4B}}\)=3 or higher. We calculate RMS force errors of 0.532 eV/Å in U and 0.254 eV/Å in H and a total energy error of 0.216 eV/atom for {\(O_{\text{2B}}\)=16, \(O_{\text{3B}}\)=8, \(O_{\text{4B}}\)=3}, and 0.512 eV/Å, 0.239 eV/Å, and 0.191 eV/U atom for {\(O_{\text{2B}}\)=16, \(O_{\text{3B}}\)=8, \(O_{\text{4B}}\)=4}. The opposite trend is observed in the RMS errors of the diagonal of the stress tensor with increasing four-body order (e.g., 0.771, 0.771, 0.778, 0.789, and 0.877 GPa).

Next, we perform a model down select through validation against a series of DFT-computed physical properties. These include the lattice constants, the volume of the optimized unit cell of \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\), the vacancy formation energy in \(\alpha\)-U, and the hydrogen interstitial formation energy in \(\alpha\)-U. We note that \(\beta\)-UH\({}_{3}\) was not part of this initial validation test. As shown in Table S1, increasing the complexity of the model in general reduces the absolute errors relative to the DFT reference values. However, some validation tests are more sensitive than others. In addition, the lattice constant and equilibrium volume tests converge to sufficient accuracy without utilizing the four-body interactions. For example, a model with a set of {\(O_{\text{2B}}\)=6, \(O_{\text{3B}}\)=6} yields absolute errors for the lattice constants of \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\) below 5.0%. Increasing the polynomial order above these values leads to a reduction of the relative errors; e.g., a model set of {\(O_{\text{2B}}\)=18, \(O_{\text{3B}}\)=8} produces errors of 1.4% and 4.15% for the \(\alpha\)-U and 1.83% and 5.4% for the \(\alpha\)-UH\({}_{3}\) lattice and volume parameters, respectively. A similar trend, to a certain extent, is observed for the formation energy of the uranium vacancy in the 4x2x3 \(\alpha\)-U supercell.

Figure 4: Results for the training RMS error test with the inclusion of \(O_{\text{4B}}\) Chebyshev polynomials.

Here, the uranium vacancy formation energy is defined as

\[E_{\nu}=E_{(n-1)\text{U}}-\left[\frac{n-1}{n}\right]E_{n\text{U}},\]

where \(E_{(n-1)\text{U}}\) and \(E_{n\text{U}}\) are the supercell energies for the defective and perfect systems, respectively, and \(n\) is the number of uranium atoms in the perfect supercell.
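Both defect-energy definitions used in this validation suite (the vacancy expression above and the hydrogen-interstitial expression given below) amount to a few lines of arithmetic. In the sketch, the supercell and H2 energies are hypothetical placeholders arranged to reproduce the DFT reference values quoted in the text (1.78 eV and 0.27 eV); they are not actual VASP outputs.

```python
def vacancy_formation_energy(e_defect, e_perfect, n_u):
    """E_v = E_(n-1)U - [(n-1)/n] * E_nU (vacancy expression above)."""
    return e_defect - (n_u - 1) / n_u * e_perfect

def interstitial_formation_energy(e_defect, e_perfect, e_h2, n_h):
    """E_f = E_(U+nH) - E_U - (n/2) * E_H2 (interstitial expression below)."""
    return e_defect - e_perfect - 0.5 * n_h * e_h2

# Placeholder energies (eV); n = 96 U atoms for the 4x2x3 alpha-U supercell.
n_u = 96
e_perfect = -1036.80                               # hypothetical E_nU
e_vac = (n_u - 1) / n_u * e_perfect + 1.78         # arranged to give 1.78 eV
e_h2 = -6.77                                       # hypothetical isolated H2
e_int = e_perfect + 0.5 * e_h2 + 0.27              # arranged to give 0.27 eV

print(f"E_v = {vacancy_formation_energy(e_vac, e_perfect, n_u):.2f} eV")
print(f"E_f = {interstitial_formation_energy(e_int, e_perfect, e_h2, 1):.2f} eV")
```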
Models with polynomial orders above 10 for the 2B and 8 for the 3B interactions estimate the vacancy energy to within ±0.4 eV (25% relative error) or better of the DFT-computed value of 1.78 eV, with the best model, {\(O_{\rm 2B}\)=12, \(O_{\rm 3B}\)=8}, yielding 1.80 eV (0.9% relative error). We find accurate prediction of the hydrogen interstitial energy in \(\alpha\)-U to be one of the most challenging properties in our initial validation set. The interstitial formation energy is defined as

\[E_{f}=E_{U+nH}-E_{U}-\frac{n}{2}E_{H_{2}},\]

where \(E_{U+nH}\) and \(E_{U}\) are the supercell energies for the defective and perfect systems, respectively, \(E_{H_{2}}\) is the energy of an isolated hydrogen molecule, and \(n\) is the number of hydrogen atoms in the defective supercell. Here, we have calculated the hydrogen interstitial energy for the most stable site (square-pyramidal). Similar to the other validation tests, increasing the polynomial order of the 2B and 3B interactions improves the results. For example, a model with a set of {\(O_{\rm 2B}\)=6, \(O_{\rm 3B}\)=6} yields absolute errors of over 600%, while a model with {\(O_{\rm 2B}\)=12, \(O_{\rm 3B}\)=8} estimates an interstitial energy of 0.22 eV (17% relative error). Fairly large polynomial orders (above 12 for 2B and 8 for 3B) are required to achieve errors below 10% relative to the DFT-computed value of 0.27 eV, with the best model, {\(O_{\rm 2B}\)=18, \(O_{\rm 3B}\)=8}, yielding 0.26 eV (1.5% relative error). Lastly, we looked at the effect of including higher-body interactions on our validation tests. As shown in Table S1, incorporation of the four-body polynomials into the model set improves the lattice constants for both \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\). For example, adding the 4B terms to the model with a set of {\(O_{\rm 2B}\)=16, \(O_{\rm 3B}\)=8} somewhat improves the relative errors, where we observe values of 2.32% (\(\alpha\)-U) and 2.13% (\(\alpha\)-UH\({}_{3}\)) for {\(O_{\rm 4B}\)=0} and {\(O_{\rm 4B}\)=1}, 2.29% (\(\alpha\)-U) and 1.49% (\(\alpha\)-UH\({}_{3}\)) for {\(O_{\rm 4B}\)=2}, 2.23% (\(\alpha\)-U) and 1.15% (\(\alpha\)-UH\({}_{3}\)) for {\(O_{\rm 4B}\)=3}, and 1.8% (\(\alpha\)-U) and 1.43% (\(\alpha\)-UH\({}_{3}\)) for {\(O_{\rm 4B}\)=4}. However, the additional 4B complexity yields higher errors for other validation tests. Errors in the uranium vacancy formation energy systematically increase from \(\sim\)0.02 eV (0.9%) to \(\sim\)0.4 eV (20.8%). Similar trends are observed for the hydrogen interstitial, with errors of \(\sim\)0.05 eV (16.8%) for {\(O_{\rm 4B}\)=0} and {\(O_{\rm 4B}\)=1}, \(\sim\)0.07 eV (24.3%) for {\(O_{\rm 4B}\)=2}, \(\sim\)0.18 eV (67.2%) for {\(O_{\rm 4B}\)=3}, and \(\sim\)5.7 eV (\(>\)2000%) for {\(O_{\rm 4B}\)=4}. Overall, we find comparable accuracy between the {\(O_{\rm 2B}\)=16, \(O_{\rm 3B}\)=8, \(O_{\rm 4B}\)=0} model and those with {\(O_{\rm 2B}\)=16, \(O_{\rm 3B}\)=8, \(O_{\rm 4B}\)\(\leq\) 2} for the solid-phase lattice constants and point defect energies in this validation suite. We observe some loss of accuracy for ChIMES models with values of \(O_{\rm 4B}\) = 3 or 4 for the defect formation energies. As a result, we choose to proceed with ChIMES models with 2B and 3B interactions only, though we note that the inclusion of non-zero 4B interactions has proven essential for simulations of reactive materials over a broad range of thermodynamic conditions.[28] We now down-select to the ChIMES model {\(O_{\rm 2B}\) = 16, \(O_{\rm 3B}\) = 8, \(O_{\rm 4B}\) = 0}.
We find that this model achieves the correct balance of minimizing the RMS errors on our training set while yielding accurate bulk parameters for \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\). In particular, we find that this model is able to achieve accurate results for point defect properties, including hydrogen interstitial and uranium vacancy formation. Hence, we choose to proceed with this model for the remainder of our study. **1.3 Test of regularization methods.** Given our choice of the \(r_{min}^{H-H}=0.50\), \(r_{min}^{U-H}=0.68\), \(r_{min}^{U-U}=1.07\) and \(r_{max}^{H-H}=5.5\), \(r_{max}^{U-H}=5.5\), and \(r_{max}^{U-U}=5.5\) Å atomic interaction distance cutoffs and the ChIMES polynomial orders {\(O_{2\text{B}}\)=16, \(O_{3\text{B}}\)=8, \(O_{4\text{B}}\)=0}, we also explored different options for determining the optimal LASSO \(\alpha\) parameter based on the validation errors (Figure 5). In this study, we varied \(\alpha\) over the range of \(10^{-5}\leq\alpha\leq 10^{-2}\). A more detailed summary of the results is shown in Table S2 in the Supplementary Materials section. In short, values of \(\alpha=10^{-2}\) and higher yield relatively high errors in all of our validation tests: RMS force errors of 0.70 eV/Å in U and 0.33 eV/Å in H, and errors in the lattice constants (-1.97% for \(\alpha\)-U and -2.65% for \(\alpha\)-UH\({}_{3}\)), volume (-5.77% for \(\alpha\)-U and -7.74% for \(\alpha\)-UH\({}_{3}\)), single uranium vacancy formation energy (-8.43%), and single hydrogen interstitial formation energy (-100.37%). In contrast, the under-regularized value of \(\alpha=10^{-5}\) shows considerable improvement, with RMS force errors of 0.52 eV/Å in U and 0.24 eV/Å in H, and improved estimation of the lattice parameters (-1.19% for \(\alpha\)-U and -1.63% for \(\alpha\)-UH\({}_{3}\)), volume (-3.51% for \(\alpha\)-U and -4.83% for \(\alpha\)-UH\({}_{3}\)), single uranium vacancy (20.17%), and single hydrogen interstitial (1.87%). However, as shown in Figure 5, the optimal balance of regularization and minimized training and validation errors occurs at \(\alpha=10^{-3}\). Here, compared to the \(\alpha=10^{-5}\) model, we see slightly higher RMS force errors (0.52 vs 0.58 eV/Å in U and 0.24 vs 0.27 eV/Å in H) and lattice (-1.19% vs -1.32% for \(\alpha\)-U and -1.63% vs -1.9% for \(\alpha\)-UH\({}_{3}\)) and volume (-3.51% vs -3.90% for \(\alpha\)-U and -4.83% vs -5.57% for \(\alpha\)-UH\({}_{3}\)) errors, but significantly lower uranium vacancy formation energy errors (20.17% vs 12.53%) and similar hydrogen interstitial energy errors (1.87% vs 2.61%). Therefore, we chose to proceed with LASSO/LARS optimization with \(\alpha=10^{-3}\) as the best choice for our U-H model.

Figure 5: Results of validation tests using different values of the LASSO/LARS parameter \(\alpha\): (a) root-mean-square force errors (in eV/Å) on U and H atoms, (b) percent deviation in volume of the ChIMES-optimized unit cell from the DFT value in \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\), (c) deviation of the defect formation energy (in eV) of a U vacancy in a 4x2x3 \(\alpha\)-U supercell, and (d) deviation of the defect formation energy (in eV) of one H interstitial (square-pyramidal) in a 4x2x3 \(\alpha\)-U supercell.

**1.4 Final hyperparameters.** Our final set of hyperparameter values includes \(\{r_{min}^{H-H}=0.50,r_{min}^{U-H}=0.68,r_{min}^{U-U}=1.07,r_{max}^{H-H}=5.5,r_{max}^{U-H}=5.5,r_{max}^{U-U}=5.5\}\) (in Å) and \(\{O_{2B}=16,O_{3B}=8,O_{4B}=0\}\), optimized with LASSO/LARS with a regularization of \(\alpha=10^{-3}\).
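Collected for reference, the down-selected model corresponds to the following small configuration block; the field names are illustrative only and do not reflect an actual ChIMES input format.

```python
FINAL_MODEL = {
    "r_min": {"H-H": 0.50, "U-H": 0.68, "U-U": 1.07},  # Å
    "r_max": {"H-H": 5.5, "U-H": 5.5, "U-U": 5.5},     # Å
    "orders": {"2B": 16, "3B": 8, "4B": 0},            # Chebyshev polynomial orders
    "regularization": {"method": "LASSO/LARS", "alpha": 1e-3},
}
```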
This model yields RMS force errors of 0.27 eV/Å on hydrogen and 0.58 eV/Å on uranium, an RMS total energy error of 0.24 eV/U atom, and an RMS error in the diagonal of the stress tensor of 0.77 GPa. For the remainder of our discussion, we will present results using this ChIMES model.

## Appendix B Bulk structural properties

Using the optimal parameters, we performed an analysis of pure \(\alpha\)-U, \(\alpha\)-UH\({}_{3}\), and \(\beta\)-UH\({}_{3}\). The ChIMES-predicted lattice parameters of these reference structures, listed in Table 1, agree quite well with DFT results[50] and experimental data[51, 52]. The results for the \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\) bulk properties indicate that ChIMES yields lattice constants with errors of only \(\sim\)1.19% and \(\sim\)1.7% from the DFT and the experimentally determined values, respectively. The volume (per U atom) of the optimized unit cell and the bulk modulus are slightly lower than the DFT-determined results, with errors of 3.9% and 12.8% for \(\alpha\)-U and 5.57% and 21.4% for \(\alpha\)-UH\({}_{3}\). In each case, the bulk modulus was estimated from the energy vs. volume curve computed over a pressure range of -13 to 24 GPa, followed by regression to a Birch-Murnaghan[53] model (a code sketch of this regression is given after Table 1). We note that \(\beta\)-UH\({}_{3}\) data was not included in the training set in order to benchmark the transferability of the ChIMES potential. Our ChIMES potential is in excellent agreement with the DFT-calculated and experimental values for \(\beta\)-UH\({}_{3}\), with relative errors of 2.1% and 3.0% for the lattice constant, and 6.1% and 8.7% for the unit cell volume, respectively. We compute an error of 0.9% for the bulk modulus relative to DFT. In addition, ChIMES yields the correct energetic ordering of the uranium hydride phases, with \(\beta\)-UH\({}_{3}\) predicted to be 0.03 eV/U atom lower in energy than \(\alpha\)-UH\({}_{3}\), compared to the DFT-computed result of 0.02 eV/U atom.

## Appendix C Additional point defect formation energies in \(\alpha\)-U

**1.1 Uranium vacancy energies as a function of system size.** In order to further validate our optimal ChIMES parameterization, we choose to compute the formation energies of vacancies in \(\alpha\)-U as a function of supercell size, thus estimating the effect of uranium vacancy concentration.

\begin{table}
\begin{tabular}{c|c|c c c c c}
\hline
**Structure** & **Method** & **a (Å)** & **b (Å)** & **c (Å)** & **V (Å\({}^{3}\))** & **B (GPa)** \\
\hline
\multirow{3}{*}{\(\alpha\)-U (**ortho**)} & PBE & 2.81 & 5.87 & 4.92 & 20.29 & 148 \\
\cline{2-7} & ChIMES & 2.78 & 5.79 & 4.86 & 19.53 & 156 \\
\cline{2-7} & Exp & 2.84 & 5.87 & 4.94 & 20.59 & 125 \\
\hline
\multirow{3}{*}{\(\alpha\)-UH\({}_{3}\) (**cubic**)} & PBE & 4.12 & 4.12 & 4.12 & 34.97 & 106 \\
\cline{2-7} & ChIMES & 4.05 & 4.05 & 4.05 & 33.22 & 135 \\
\cline{2-7} & Exp & 4.16 & 4.16 & 4.16 & 36 & — \\
\hline
\multirow{3}{*}{\(\beta\)-UH\({}_{3}\) (**cubic**)} & PBE & 6.59 & 6.59 & 6.59 & 35.77 & 106 \\
\cline{2-7} & ChIMES & 6.45 & 6.45 & 6.45 & 33.54 & 115 \\
\cline{2-7} & Exp & 6.65 & 6.65 & 6.65 & 36.76 & — \\
\hline
\end{tabular}
\end{table}
Table 1: Lattice parameters (a, b, c), unit cell volumes (V) per U atom, and bulk moduli (B) for \(\alpha\)-U, \(\alpha\)-UH\({}_{3}\), and \(\beta\)-UH\({}_{3}\) predicted by the ChIMES potential. Present results are compared with experimental and DFT calculations.
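As referenced above, the following is a minimal sketch of the Birch-Murnaghan regression used to extract the bulk moduli in Table 1; the energy-volume data here are synthetic placeholders, and scipy's `curve_fit` performs the nonlinear fit (1 eV/Å\({}^{3}\) = 160.2177 GPa).

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan energy-volume equation of state."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Synthetic energy-volume scan (Å^3/atom, eV/atom) standing in for computed data.
volumes = np.linspace(17.5, 22.5, 11)
energies = birch_murnaghan(volumes, -8.0, 19.5, 0.95, 4.0)

p0 = (energies.min(), volumes[np.argmin(energies)], 1.0, 4.0)  # initial guess
(E0, V0, B0, Bp), _ = curve_fit(birch_murnaghan, volumes, energies, p0=p0)
print(f"B0 = {B0 * 160.2177:.0f} GPa at V0 = {V0:.2f} Å^3/atom")  # eV/Å^3 -> GPa
```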
As shown in Figure 6, the vacancy formation energy shows some dependence on the uranium vacancy concentration in the bulk structure. The DFT-estimated vacancy values were found to be 2.03 eV for (2x1x1), 1.93 eV for (2x2x1), 1.92 eV for (2x2x2), 1.90 eV for (3x2x2), 1.81 eV for (4x2x2), and 1.78 eV for (4x2x3) supercells. The 0.25 eV decrease in the vacancy formation energy with the increase of the system size from (2x1x1) to (4x2x3) indicates that there are strain-induced interactions around the defect at higher concentrations. Our ChIMES-calculated value in the dilute limit of 1.56 eV is 0.22 eV lower (\(\sim\)12% error) than our computed DFT value and about 0.13-0.39 eV lower than other published DFT results of 1.95 eV (Taylor[54]), 1.69 eV (Wirth[55]), and 1.86 eV (Beeler[56]). We found that underestimation of the U vacancy energy was typical for all ChIMES models created in our study. We also observe a weaker dependence of the vacancy formation energy on defect concentration, where the curve from ChIMES is relatively flat compared to DFT. This yields somewhat larger errors between the DFT values and those predicted by the ChIMES potential at higher vacancy concentrations, which could be attributed to the lack of training data in our ChIMES model for multiple vacancies. We note that the experimentally determined vacancy formation energy from positron annihilation[57] of 1.20\(\pm\)0.25 eV is somewhat lower than either the DFT or ChIMES values. In addition, DFT calculations could vary depending on the choice of functional and dispersion interaction model.

Figure 6: Uranium vacancy formation energy for \(\alpha\)-U at various concentrations using the ChIMES potential.

**1.2 Multiple hydrogen interstitial formation energies in \(\alpha\)-U.** In this study, as shown in Figure 7, we have calculated the interstitial energies for (a) a low-energy interstitial hydrogen at the square-pyramidal (sq) position, (b and c) a pair of hydrogen atoms at nearby sq sites in two different directions, and (d) a hydrogen pair located \(\sim\)5 Å apart. The square-pyramidal interstitial site occurs where the H atom is coordinated by five U atoms from the lattice (Figure 7a and Table 2). ChIMES predicts a formation energy for this site of 0.28 eV, which agrees within 0.01 eV of our result from DFT and is also in good agreement with the previously published result of 0.319 eV using a 4x2x2 supercell (64 U atoms).[58] In addition, our ChIMES model predicts hydrogen interstitial formation energies that compare well to DFT for higher-energy sites (not shown here), including the tetrahedral site with a value of 0.32 eV (\(\sim\)0.02 eV or 9.6% error) and the octahedral site with a value of 0.50 eV (\(\sim\)0.05 eV or 10.2% error). The tetrahedral and octahedral sites are relatively low in energy and are likely thermally accessible under ambient conditions. We now examine our ChIMES model in terms of different double hydrogen interstitials in \(\alpha\)-U with a 96-atom 4x2x3 supercell (Table 2 and Figure 7). The short-range double interstitial system (two H interstitials in nearest-neighbor sites, labeled sq_sq_1) contains hydrogen atoms about 1.5 Å apart from each other. The ChIMES formation energy of 0.50 eV has an error of 0.14 eV relative to DFT. On the other hand, the medium-range double interstitial system (H interstitials in next-nearest-neighbor sites, labeled sq_sq_2) has an inter-hydrogen separation of 2.1 Å. Here, ChIMES yields a formation energy of 0.58 eV, with an error of 0.07 eV relative to DFT.
Finally, the ChIMES formation energy for the longer-range system (H interstitials several lattice spacings apart, labeled sq_sq_3) of 0.65 eV is also in reasonable agreement with the DFT value of 0.52 eV (an error of 0.13 eV). Overall, these results indicate that ChIMES can yield accurate physical quantities related to bulk hydrogen absorption within \(\alpha\)-U lattices.

\begin{table}
\begin{tabular}{|c|c|c|}
\hline
**System** & **DFT (eV)** & **ChIMES (eV)** \\
\hline
sq (Figure 7a) & 0.27 & 0.28 (3.7\%) \\
\hline
sq\_sq\_1 (Figure 7b) & 0.64 & 0.50 (-21.8\%) \\
\hline
sq\_sq\_2 (Figure 7c) & 0.51 & 0.58 (13.7\%) \\
\hline
sq\_sq\_3 (Figure 7d) & 0.52 & 0.65 (25.0\%) \\
\hline
\end{tabular}
\end{table}
Table 2: Hydrogen interstitial formation energies (in eV) in an \(\alpha\)-U (4x2x3) supercell. The labeled systems are pictorially shown in Figure 7. The percent deviations relative to the DFT values are shown in parentheses.

Figure 7: Hydrogen interstitial systems: (a) square-pyramidal site, (b) sq_sq_1, (c) sq_sq_2, and (d) sq_sq_3 systems.

### Hydrogen vacancy in \(\alpha\)-UH\({}_{3}\) as a function of concentration

In addition to the uranium vacancy and hydrogen interstitials in \(\alpha\)-U, we have also computed the hydrogen vacancy formation energy \(E_{vac}\) in \(\alpha\)-UH\({}_{3}\), defined as

\[E_{vac}=E_{def}+\frac{1}{2}E_{H_{2}}-E_{perf},\]

where \(E_{def}\) and \(E_{perf}\) are the supercell energies for the defective and perfect systems, respectively, and \(E_{H_{2}}\) is the energy of the isolated hydrogen molecule. The hydrogen vacancy formation energies were not part of our validation tests and have been evaluated at various concentrations. Here, we begin with an \(\alpha\)-UH\({}_{3}\) supercell of size 3x3x3 (54 U + 162 H atoms), sequentially remove random hydrogen(s), and optimize the atomic positions. As shown in Figure 8, the defect formation energy is relatively constant over the concentration range probed in our analysis. The calculated DFT \(E_{vac}\) value for the bulk \(\alpha\)-UH\({}_{3}\) phase is 0.90 eV, with a ChIMES-predicted value of 0.85 eV. Both the ChIMES and DFT results remain relatively flat as a function of hydrogen vacancy concentration. All of the results presented in this subsection indicate that ChIMES exhibits a high degree of accuracy for different bulk uranium and UH\({}_{3}\) properties. In particular, our model yields accurate results for a number of validation tests that were not included in our training set. This includes the relative energetic ordering of the two UH\({}_{3}\) phases, uranium vacancy and hydrogen interstitial formation energies in \(\alpha\)-U under varying system sizes and concentrations, and hydrogen vacancy formation over a range of concentrations in \(\alpha\)-UH\({}_{3}\). These all indicate that ChIMES can yield close to DFT accuracy for the energetics of U/H-containing systems under a variety of conditions, allowing us to use our model for kinetic and molecular dynamics calculations relevant to the hydriding process.

Figure 8: Hydrogen vacancy formation energy for \(\alpha\)-UH\({}_{3}\) at various concentrations.

## Appendix D Hydrogen hopping barriers in \(\alpha\)-U bulk

We now compute kinetic parameters for hydrogen diffusion between square-pyramidal sites within bulk \(\alpha\)-U. The atomic hydrogen diffusion minimum energy pathways (MEP) were calculated via the climbing-image nudged elastic band (NEB) method [59]. Here, a chain of 3-5 linearly interpolated images along an initial pathway between the initial and final sq absorption sites was relaxed to determine the MEP and its corresponding saddle point.
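The seeding of the NEB chain just described can be sketched as a simple linear interpolation between endpoint geometries; the positions below are hypothetical \(N\times 3\) NumPy arrays, and the actual relaxation of the chain to the MEP is not shown.

```python
import numpy as np

def interpolate_images(pos_initial, pos_final, n_images=5):
    """Linearly interpolate intermediate geometries between two endpoints."""
    return [pos_initial + t * (pos_final - pos_initial)
            for t in np.linspace(0.0, 1.0, n_images + 2)]  # includes both endpoints
```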
Calculations were performed until the maximum residual force on each atom was converged to less than 0.01 eV/Å. The transition state was confirmed by the presence of one imaginary vibrational frequency. Two main sq-sq pathways were identified: (1) along the \(<\)011\(>\) lattice direction, and (2) along the \(<\)001\(>\) direction, as illustrated in Figure 9. Results of this study (Figure 10) show that the diffusion barrier and the jump distance depend on the pathway direction. For pathway 1, the hopping distance between sites for hydrogen migration is 1.5 Å and the DFT barrier height is 0.14 eV. For pathway 2, the hopping distance is 2.1 Å and the DFT-estimated barrier is 0.38 eV. For comparison, ChIMES yields a calculated barrier of 0.09 eV (0.05 eV error) for pathway 1 and 0.39 eV (0.01 eV error) for pathway 2, indicative of a high degree of accuracy. These barrier values are in close proximity to the experimental result for bulk hydrogen diffusion of 0.280 eV[60], which likely represents an average quantity for an imperfect crystalline system.

Figure 9: Pictorial representation of two potential pathways: (a) pathway along the \(<\)011\(>\) direction, (b) pathway along the \(<\)001\(>\) direction.

Figure 10: NEB-predicted barriers for hydrogen diffusion from one sq interstitial site to another nearby sq interstitial site along the (a) \(<\)011\(>\) and (b) \(<\)001\(>\) directions.

## Appendix E Molecular dynamics validation

We now evaluate the reliability of our ChIMES model for molecular dynamics (MD) simulations by comparing energies with DFT along a computed trajectory. This was performed by first computing a short NVT MD simulation with ChIMES of a 5x2x3 \(\alpha\)-U supercell (120 U atoms) at 400 K. The trajectory was computed for \(\sim\)20 picoseconds with a time step of 4.0 femtoseconds. We then extracted images after every 100th MD step and calculated single-point energies using DFT in order to determine errors in the resulting energies. Figure 11 shows the ChIMES-predicted energies of these structures along the MD trajectory in comparison to their DFT reference values. We observe accurate system energies from ChIMES, with slight overestimations of the DFT values and errors of up to 0.05 eV/atom. Similar MD studies were performed for the \(\alpha\)-UH\({}_{3}\) (54 U + 162 H) and \(\beta\)-UH\({}_{3}\) (64 U + 192 H) systems, shown in Figure 12. In each of these systems, we computed trajectories for \(\sim\)5 ps with a time step of 0.2 fs, collecting images every 1000 steps. For both of these systems, we observe errors in the system energies below 0.10 eV/U atom in \(\alpha\)-UH\({}_{3}\) and below 0.15 eV/U atom in \(\beta\)-UH\({}_{3}\).

Figure 11: Comparison of the energies predicted by ChIMES and DFT along the same MD trajectory in \(\alpha\)-U.

Figure 12: Comparison of the energies predicted by ChIMES and DFT along the same MD trajectory in (a) \(\alpha\)-UH\({}_{3}\) and (b) \(\beta\)-UH\({}_{3}\).

We have also used MD simulations with our ChIMES potential to investigate the site-occupancy frequency of hydrogen interstitial hopping in bulk \(\alpha\)-U. For this study, we have performed ten 10 ps NVT MD simulations over a range of temperatures (100-1000 K) of one atomic hydrogen atom in a large \(\alpha\)-U supercell (840 U + 1 H atoms) with a time step of 0.2 fs. The hydrogen location was monitored based on the number of U atoms surrounding the interstitial H atom within a given radius. In this case, a value of 2.85 Å centered at each site was chosen based on the largest possible U-H distance, which is the out-of-plane distance along the stretched axis in the octahedral site (\(\sim\)2.65 Å).
An additional 0.2 Å was added in order to compensate for thermal fluctuations of the lattice sites during the MD simulation. Hydrogen locations were determined at each step, and the residency time was monitored during the molecular dynamics simulation. As shown in Figure 13, hydrogen predominantly occupies the square-pyramidal interstitial site for all temperatures in our study, with a residence fraction of over 50%. The square-pyramidal is the most stable interstitial site in pure uranium bulk, which is confirmed by both our ChIMES potential and DFT studies, and it has the shortest average residency time of 1.7 fs. This short residency time is commensurate with the relatively low hopping barriers computed for both the \(<\)011\(>\) and \(<\)001\(>\) diffusion pathways. As temperature increases, the fraction of the square-pyramidal site decreases from 0.67 (100 K) to 0.46 (1000 K), as the other sites, tetrahedral (+0.05 eV greater absorption energy) and octahedral (+0.22 eV greater than square-pyramidal), become more energetically accessible. The second most stable interstitial site, tetrahedral, shows an occupancy fraction of \(\sim\)0.30, which remains relatively constant over the entire range of temperatures, with an average residency time of 4.44 fs. On the other hand, we observe the first occurrence of the octahedral site at a temperature of 300 K. As the system temperature increases, the frequency of occurrence of the octahedral site also increases, apparently at the expense of the square-pyramidal site. The octahedral site was found to have the longest residency time of 8.39 fs. Overall, all residency times computed here are exceedingly short at ambient conditions. This indicates that hydrogen is highly mobile in the pure metal lattice, which likely has ramifications for hydriding initiation.

Figure 13: Interstitial site type analysis.

### Hydrogen diffusivity as a function of temperature

Finally, we have computed hydrogen diffusion coefficients in pristine \(\alpha\)-U as a function of temperature from MD simulations. During each constant-temperature MD simulation, the mean square displacement (MSD) of the hydrogen atoms was calculated, and the diffusion coefficient was determined from the standard Einstein-Smoluchowski relation. Each MD simulation (840 U + 20 H atoms with supercell dimensions of 19.19 x 29.23 x 29.24 Å, similar to \(\alpha\)-U at ambient density) was run for 10 ps with a time step of 0.2 fs. The MSD was averaged over a time interval of at least 5 ps for each calculation. The diffusivities were estimated by linear regression of the MSDs over a temperature range of 300 K to 1000 K and then fit to the Arrhenius equation:

\[D(T)=D_{0}\exp{\left(-\frac{E_{a}}{k_{B}T}\right)}.\]

Here, \(D_{0}\) is a prefactor, \(k_{B}\) is the Boltzmann constant, and \(E_{a}\) is the total hydrogen diffusion activation energy. The Arrhenius plot of hydrogen diffusivity as a function of temperature is shown in Figure 14. Our regression analysis yields a \(D_{0}\) value of \(2.73\times 10^{-3}\) cm\({}^{2}\)/s, which is in very good agreement with the experimentally determined result of \(1.43\times 10^{-3}\) cm\({}^{2}\)/s.[60] However, ChIMES yields an overall diffusion barrier of 0.13 eV, which is somewhat lower than the experimental value of 0.28 eV (Mallett [60]).

Figure 14: Arrhenius plot of hydrogen diffusivity at temperatures ranging from 300 to 1000 K.
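A minimal sketch of this analysis is shown below: the diffusion coefficient is extracted from the MSD slope via the three-dimensional Einstein-Smoluchowski relation MSD\((t)=6Dt\), and \(\ln D\) vs \(1/T\) is regressed to obtain \(D_{0}\) and \(E_{a}\). The `msd_by_T` input, mapping temperature (K) to (time, MSD) arrays, is a hypothetical placeholder for the per-temperature MD output.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def diffusivity(times_ps, msd_A2):
    """Diffusion coefficient (cm^2/s) from MSD(t) = 6 D t in 3D."""
    slope, _intercept = np.polyfit(times_ps, msd_A2, 1)  # slope in Å^2/ps
    return slope / 6.0 * 1e-4                            # Å^2/ps -> cm^2/s

def arrhenius_fit(msd_by_T):
    """Return (D_0 in cm^2/s, E_a in eV) from ln D = ln D_0 - E_a/(k_B T)."""
    T = np.array(sorted(msd_by_T))
    D = np.array([diffusivity(*msd_by_T[t]) for t in T])
    slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
    return float(np.exp(intercept)), float(-slope * K_B)
```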
The lower computed barrier could be attributed to the environmental conditions surrounding the metal during experimentation, such as the presence of surface cracks, grain boundaries, and a passive layer of oxides, oxycarbides, and water, whereas our computational simulations currently probe a defect-free crystal.

## Conclusion

In this work, we have utilized an optimization workflow to create a ChIMES reactive force field for a variety of uranium-hydrogen containing systems. Optimal ChIMES parameters were determined extremely rapidly in a semi-automated approach by varying the minimum and maximum pairwise interaction radii, the polynomial orders, and the strength of the regularization. Overall, we find that our ChIMES model yields comparable accuracy to DFT for the U-H containing systems studied here. This includes thermodynamic data, such as the bulk structural parameters of \(\alpha\)-U, \(\alpha\)-UH\({}_{3}\), and \(\beta\)-UH\({}_{3}\), the relative energetic ordering of the UH\({}_{3}\) phases, and the bulk moduli of these three materials. Our model also yields accurate defect formation energies in both \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\) over a range of defect concentrations. Finally, ChIMES yields accurate kinetic data for hydrogen interstitial hops in an \(\alpha\)-U lattice as well as bulk diffusivity over a broad temperature range. The linear scaling and computational efficiency of ChIMES relative to standard DFT allow for its use in large-scale MD simulations, which could include amorphous grain boundaries and hydride phase nucleation and growth. Future efforts will involve simulation of these hydriding phenomena in uranium, including formation of the hydride metal within the bulk around defect sites. Our efforts will allow us to make more direct contact with experiments, where atomic-level simulations can be valuable tools for the elucidation and interpretation of results.

## Supplementary Materials

Table S1: Summary of training RMS errors and validation test results (lattice parameters and unit-cell volumes of \(\alpha\)-U and \(\alpha\)-UH\({}_{3}\), uranium vacancy formation energy, and hydrogen interstitial formation energy) for ChIMES models with varying 2B, 3B, and 4B polynomial orders.

Table S2: Summary of validation tests using different values of the LASSO/LARS parameter \(\alpha\): (a) training RMS errors, (b) validation tests.

a)

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
**\(\alpha\) value** & **rms H (eV/Å)** & **rms U (eV/Å)** & **rms Energy (eV)** & **rms P (GPa)** \\
\hline
1.00E-05 & 0.238 & 0.518 & 0.249 & 0.674 \\
\hline
1.00E-04 & 0.243 & 0.532 & 0.250 & 0.684 \\
\hline
1.00E-03 & 0.266 & 0.58 & 0.243 & 0.771 \\
\hline
1.00E-02 & 0.326 & 0.702 & 0.253 & 1.234 \\
\hline
\end{tabular}
\end{table}

b)

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{**\(\alpha\) value**} & \multicolumn{6}{c|}{**\(\alpha\)-U**} & \multicolumn{2}{c|}{**\(\alpha\)-UH\({}_{3}\)**} \\
\cline{2-9}
 & **lat\_a (Å)** & **lat\_b (Å)** & **lat\_c (Å)** & **Vol. (Å\({}^{3}\))** & **Vac 1U (eV)** & **Int 1H (eV)** & **lat (Å)** & **Vol. (Å\({}^{3}\))** \\
\hline
1.00E-05 & 2.78 (-1.19\%) & 5.798 (-1.18\%) & 4.865 (-1.19\%) & 19.61 (-3.51\%) & 1.421 (20.17\%) & 0.263 (1.87\%) & 4.053 (-1.63\%) & 33.28 (-4.83\%) \\
\hline
1.00E-04 & 2.781 (-1.19\%) & 5.798 (-1.17\%) & 4.866 (-1.18\%) & 19.61 (-3.51\%) & 1.409 (20.84\%) & 0.223 (16.79\%) & 4.051 (-1.67\%) & 33.24 (-4.94\%) \\
\hline
1.00E-03 & 2.777 (-1.32\%) & 5.79 (-1.3\%) & 4.859 (-1.32\%) & 19.53 (-3.9\%) & 1.557 (12.53\%) & 0.275 (2.61\%) & 4.042 (-1.9\%) & 33.02 (-5.57\%) \\
\hline
1.00E-02 & 2.759 (-1.97\%) & 5.752 (-1.95\%) & 4.827 (-1.97\%) & 19.15 (-5.77\%) & 1.93 (8.43\%) & -0.001 (100.3\%) & 4.011 (-2.65\%) & 32.26 (-7.74\%) \\
\hline
\end{tabular}
\end{table}

## Author Declarations

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**Artem Soshnikov**: Data curation (lead); Investigation (supporting); Methodology (supporting); Software (equal); Supervision (equal); Writing - original draft (lead); Writing - review & editing (equal). **Rebecca K. Lindsey**: Software (equal); Writing - review & editing (supporting). **Ambarish Kulkarni**: Conceptualization (supporting); Methodology (supporting). **Nir Goldman**: Conceptualization (lead); Data curation (supporting); Investigation (lead); Methodology (lead); Software (equal); Funding acquisition (lead); Writing - review & editing (equal).

### Data Availability

The data that support the findings of this study are available within the article or from the corresponding author upon reasonable request.
2306.01869
Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem, where the inputs are a matrix $\mathbf{A}\in\{0,1\}^{n\times d}$, a rank parameter $k>0$, as well as an accuracy parameter $\varepsilon>0$, and the goal is to approximate $\mathbf{A}$ as a product of low-rank factors $\mathbf{U}\in\{0,1\}^{n\times k}$ and $\mathbf{V}\in\{0,1\}^{k\times d}$. Equivalently, we want to find $\mathbf{U}$ and $\mathbf{V}$ that minimize the Frobenius loss $\|\mathbf{U}\mathbf{V} - \mathbf{A}\|_F^2$. Before this work, the state-of-the-art for this problem was the approximation algorithm of Kumar et. al. [ICML 2019], which achieves a $C$-approximation for some constant $C\ge 576$. We give the first $(1+\varepsilon)$-approximation algorithm using running time singly exponential in $k$, where $k$ is typically a small integer. Our techniques generalize to other common variants of the BMF problem, admitting bicriteria $(1+\varepsilon)$-approximation algorithms for $L_p$ loss functions and the setting where matrix operations are performed in $\mathbb{F}_2$. Our approach can be implemented in standard big data models, such as the streaming or distributed models.
Ameya Velingker, Maximilian Vötsch, David P. Woodruff, Samson Zhou
2023-06-02T18:55:27Z
http://arxiv.org/abs/2306.01869v1
# Fast \((1+\varepsilon)\)-Approximation Algorithms for Binary Matrix Factorization

###### Abstract

We introduce efficient \((1+\varepsilon)\)-approximation algorithms for the binary matrix factorization (BMF) problem, where the inputs are a matrix \(\mathbf{A}\in\{0,1\}^{n\times d}\), a rank parameter \(k>0\), as well as an accuracy parameter \(\varepsilon>0\), and the goal is to approximate \(\mathbf{A}\) as a product of low-rank factors \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\). Equivalently, we want to find \(\mathbf{U}\) and \(\mathbf{V}\) that minimize the Frobenius loss \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\). Before this work, the state-of-the-art for this problem was the approximation algorithm of Kumar _et al._[1], which achieves a \(C\)-approximation for some constant \(C\geq 576\). We give the first \((1+\varepsilon)\)-approximation algorithm using running time singly exponential in \(k\), where \(k\) is typically a small integer. Our techniques generalize to other common variants of the BMF problem, admitting bicriteria \((1+\varepsilon)\)-approximation algorithms for \(L_{p}\) loss functions and the setting where matrix operations are performed in \(\mathbb{F}_{2}\). Our approach can be implemented in standard big data models, such as the streaming or distributed models.

## 1 Introduction

Low-rank approximation is a fundamental tool for factor analysis. The goal is to decompose several observed variables stored in the matrix \(\mathbf{A}\in\mathbb{R}^{n\times d}\) into a combination of \(k\) unobserved and uncorrelated variables called factors, represented by the matrices \(\mathbf{U}\in\mathbb{R}^{n\times k}\) and \(\mathbf{V}\in\mathbb{R}^{k\times d}\). In particular, we want to solve the problem \[\min_{\mathbf{U}\in\mathbb{R}^{n\times k},\mathbf{V}\in\mathbb{R}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|,\] for some predetermined norm \(\|\cdot\|\). Identifying the factors can often decrease the number of relevant features in an observation and thus significantly improve interpretability. Another benefit is that low-rank matrices allow us to approximate the matrix \(\mathbf{A}\) with its factors \(\mathbf{U}\) and \(\mathbf{V}\) using only \((n+d)k\) parameters rather than the \(nd\) parameters needed to represent \(\mathbf{A}\). Moreover, for a vector \(\mathbf{x}\in\mathbb{R}^{d}\), we can approximate the matrix-vector multiplication \(\mathbf{A}\mathbf{x}\approx\mathbf{U}\mathbf{V}\mathbf{x}\) in time \((n+d)k\), while computing \(\mathbf{A}\mathbf{x}\) requires \(nd\) time. These benefits make low-rank approximation one of the most widely used tools in machine learning, recommender systems, data science, statistics, computer vision, and natural language processing. In many of these applications, discrete or categorical datasets are typical. In this case, it often makes sense to restrict the underlying factors to a discrete domain for interpretability. For example, [13] observed that nearly half of the data sets in the UCI repository [1] are categorical and thus can be represented as binary matrices, possibly using multiple binary variables to represent each category. In the binary matrix factorization (BMF) problem, the input matrix \(\mathbf{A}\in\{0,1\}^{n\times d}\) is binary. Additionally, we are given an integer rank parameter \(k\), with \(0<k\ll n,d\).
The goal is to approximate \(\mathbf{A}\) by the factors \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\) such that \(\mathbf{A}\approx\mathbf{U}\mathbf{V}\). The BMF problem restricts the general low-rank approximation problem to a discrete space, making finding good factors more challenging (see Section 1.3).

### Our Contributions

We present \((1+\varepsilon)\)-approximation algorithms for the binary low-rank matrix factorization problem for several standard loss functions used in the general low-rank approximation problem. Table 1 summarizes our results.

Binary matrix factorization. We first consider the minimization of the Frobenius norm, defined by \(\|\mathbf{A}-\mathbf{U}\mathbf{V}\|_{F}^{2}=\sum_{i\in[n]}\sum_{j\in[d]}|\mathbf{A}_{i,j}-(\mathbf{U}\mathbf{V})_{i,j}|^{2}\), where \([n]:=\{1,\ldots,n\}\) and \(\mathbf{A}_{i,j}\) denotes the entry in the \(i\)-th row and the \(j\)-th column of \(\mathbf{A}\). Intuitively, we can view this as finding a least-squares approximation of \(\mathbf{A}\). We introduce the first \((1+\varepsilon)\)-approximation algorithm for the BMF problem that runs in singly exponential time. That is, we present an algorithm that, for any \(\varepsilon>0\), returns \(\mathbf{U}^{\prime}\in\{0,1\}^{n\times k},\mathbf{V}^{\prime}\in\{0,1\}^{k\times d}\) with \[\|\mathbf{A}-\mathbf{U}^{\prime}\mathbf{V}^{\prime}\|_{F}^{2}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{A}-\mathbf{U}\mathbf{V}\|_{F}^{2}.\]

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline Reference & Approximation & Runtime & Other \\
\hline [13] & \(C\geq 576\) & \(2^{\mathcal{O}(k^{2})}\operatorname{poly}(n,d)\) & Frobenius loss \\
\hline [13] & \(1+\varepsilon\) & \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log^{2}\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\) & Frobenius loss \\
\hline Our work & \(1+\varepsilon\) & \(2^{\mathcal{O}\left(k^{2}/\varepsilon^{4}\right)}\operatorname{poly}(n,d)\) & Frobenius loss \\
\hline \hline [13] & \(C\geq 122^{2p-2}+2^{p-1}\) & \(2^{\operatorname{poly}(k)}\operatorname{poly}(n,d)\) & \(L_{p}\) loss, \(p\geq 1\) \\
\hline Our work & \(1+\varepsilon\) & \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) & \(L_{p}\) loss, \(p\geq 1\), bicriteria \\
\hline \hline [13] & \(1+\varepsilon\) & \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log^{2}\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\) & Binary field \\
\hline [14] & \(1+\varepsilon\) & \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\) & Binary field \\
\hline [13] & \(C\geq 40001\) & \(2^{\operatorname{poly}(k)}\operatorname{poly}(n,d)\) & Binary field, bicriteria \\
\hline Our work & \(1+\varepsilon\) & \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) & Binary field, bicriteria \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of related work on binary matrix factorization.

For \(\varepsilon\in(0,1)\), our algorithm uses \(2^{\tilde{\mathcal{O}}\left(k^{2}/\varepsilon^{4}\right)}\operatorname{poly}(n,d)\) runtime and for \(\varepsilon\geq 1\), our algorithm uses \(2^{\tilde{\mathcal{O}}\left(k^{2}\right)}\operatorname{poly}(n,d)\) runtime, where \(\operatorname{poly}(n,d)\) denotes a polynomial in \(n\) and \(d\). By comparison, [13] gave a \(C\)-approximation algorithm for the BMF problem also using runtime \(2^{\tilde{\mathcal{O}}\left(k^{2}\right)}\operatorname{poly}(n,d)\), but for some constant \(C\geq 576\).
Though they did not attempt to optimize for \(C\), their proofs employ multiple triangle inequalities that present a constant lower bound of at least \(2\) on \(C\). See Section 1.2 for a more thorough discussion of the limitations of their approach. [12] introduced a \((1+\varepsilon)\)-approximation algorithm for the BMF problem with rank-\(k\) factors. However, their algorithm uses time doubly exponential in \(k\), specifically \(2^{\frac{2^{\tilde{\mathcal{O}}\left(k\right)}}{\varepsilon^{2}}\log^{2}\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\), which [1] later improved to doubly exponential runtime \(2^{\frac{2^{\tilde{\mathcal{O}}\left(k\right)}}{\varepsilon^{2}}\log\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\), while also showing that time \(2^{k^{\Omega(1)}}\) is necessary even for constant-factor approximation, under the Small Set Expansion Hypothesis and the Exponential Time Hypothesis.

BMF with \(L_{p}\) loss. We also consider the more general problem of minimizing the \(L_{p}\) loss for a given \(p\), defined as the optimization problem of minimizing \(\|\mathbf{A}-\mathbf{U}\mathbf{V}\|_{p}^{p}=\sum_{i\in[n]}\sum_{j\in[d]}|\mathbf{A}_{i,j}-(\mathbf{U}\mathbf{V})_{i,j}|^{p}\). Of particular interest is the case \(p=1\), which corresponds to robust principal component analysis and has been proposed as an alternative to Frobenius norm low-rank approximation that is more robust to outliers, i.e., values that are far away from the majority of the data points [11, 12, 13, 14, 15, 16, 17, 18, 19]. On the other hand, for \(p>2\), low-rank approximation with \(L_{p}\) error increasingly places higher priority on outliers, i.e., the larger entries of \(\mathbf{U}\mathbf{V}\). We present the first \((1+\varepsilon)\)-approximation algorithm for the BMF problem that runs in singly exponential time, albeit at the cost of incurring logarithmic increases in the rank \(k\), making it a bicriteria algorithm. Specifically, for any \(\varepsilon>0\), our algorithm returns \(\mathbf{U}^{\prime}\in\{0,1\}^{n\times k^{\prime}},\mathbf{V}^{\prime}\in\{0,1\}^{k^{\prime}\times d}\) with \[\|\mathbf{A}-\mathbf{U}^{\prime}\mathbf{V}^{\prime}\|_{p}^{p}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{A}-\mathbf{U}\mathbf{V}\|_{p}^{p},\] where \(k^{\prime}=\mathcal{O}\left(\frac{k\log^{2}n}{\varepsilon^{2}}\right)\). For \(\varepsilon\in(0,1)\), our algorithm uses \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) runtime and for \(\varepsilon\geq 1\), our algorithm uses \(2^{\operatorname{poly}(k)}\operatorname{poly}(n,d)\) runtime. Previous work [13] gave a \(C\)-approximation algorithm for this problem, using singly exponential runtime \(2^{\operatorname{poly}(k)}\operatorname{poly}(n,d)\), without incurring a bicriteria loss in the rank \(k\). However, their constant \(C\geq 122^{2p-2}+2^{p-1}\) is large and depends on \(p\). Again, their use of multiple triangle inequalities in their argument bars this approach from achieving a \((1+\varepsilon)\)-approximation. To our knowledge, no prior works achieved \((1+\varepsilon)\)-approximation to BMF with \(L_{p}\) loss in singly exponential time.

BMF on binary fields. Finally, we consider the case where all arithmetic operations are performed modulo two, i.e., in the finite field \(\mathbb{F}_{2}\).
Specifically, the \((i,j)\)-th entry of \(\mathbf{U}\mathbf{V}\) is the inner product \(\langle\mathbf{U}_{i},\mathbf{V}^{(j)}\rangle\) of the \(i\)-th row of \(\mathbf{U}\) and the \(j\)-th column of \(\mathbf{V}\), taken over \(\mathbb{F}_{2}\). This model has been frequently used for dimensionality reduction for high-dimensional data with binary attributes [11, 12, 13] and independent component analysis, especially in the context of signal processing [10, 12, 13, 14]. This problem is also known as bipartite clique cover, the discrete basis problem, or minimal noise role mining and has been well-studied in applications to association rule mining, database tiling, and topic modeling [14, 15, 16, 17, 18, 19, 20]. We introduce the first bicriteria \((1+\varepsilon)\)-approximation algorithm for the BMF problem on binary fields that runs in singly exponential time. Specifically, for any \(\varepsilon>0\), our algorithm returns \(\mathbf{U}^{\prime}\in\{0,1\}^{n\times k^{\prime}},\mathbf{V}^{\prime}\in\{0,1\}^{k^{\prime}\times d}\) with \[\|\mathbf{A}-\mathbf{U}^{\prime}\mathbf{V}^{\prime}\|_{p}^{p}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{A}-\mathbf{U}\mathbf{V}\|_{p}^{p},\] where \(k^{\prime}=\mathcal{O}\left(\frac{k\log n}{\varepsilon}\right)\) and all arithmetic operations are performed in \(\mathbb{F}_{2}\). For \(\varepsilon\in(0,1)\), our algorithm has running time \(2^{\mathrm{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) and for \(\varepsilon\geq 1\), our algorithm has running time \(2^{\mathrm{poly}(k)}\operatorname{poly}(n,d)\). By comparison, [13] gave a bicriteria \(C\)-approximation algorithm for the BMF problem on binary fields with running time \(2^{\mathrm{poly}(k)}\operatorname{poly}(n,d)\), for some constant \(C\geq 40001\). Even though their algorithm also gives a bicriteria guarantee, their approach, once again, inherently cannot achieve a \((1+\varepsilon)\)-approximation. On the other hand, [13] achieved a \((1+\varepsilon)\)-approximation without a bicriteria guarantee, but their algorithm uses doubly exponential running time \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log^{2}\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\), which [1] later improved to doubly exponential running time \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\), while also showing that running time doubly exponential in \(k\) is necessary for \((1+\varepsilon)\)-approximation on \(\mathbb{F}_{2}\).

Applications to big data models. We remark that our algorithms are conducive to big data models. Specifically, our algorithmic ideas facilitate a two-pass algorithm in the streaming model, where either the rows or the columns of the input matrix arrive sequentially, and the goal is to perform binary low-rank approximation while using space sublinear in the size of the input matrix. Similarly, our approach can be used to achieve a two-round protocol in the distributed model, where either the rows or the columns of the input matrix are partitioned among several players, and the goal is to perform binary low-rank approximation while using total communication sublinear in the size of the input matrix. See Section 5 for a formal description of the problem settings and additional details.

### Overview of Our Techniques

This section briefly overviews our approaches to achieving a \((1+\varepsilon)\)-approximation to the BMF problem.
Alongside our techniques, we discuss why prior approaches for BMF fail to achieve a \((1+\varepsilon)\)-approximation. The BMF problem under the Frobenius norm is stated as follows: Let \(\mathbf{U}^{*}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}^{*}\in\{0,1\}^{k\times d}\) be optimal low-rank factors, so that \[\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}=\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}. \tag{1}\] Our approach relies on the sketch-and-solve paradigm, and we ask of our sketch matrix \(\mathbf{S}\) that it be an _affine embedding_, that is, given \(\mathbf{U}^{*}\) and \(\mathbf{A}\), for all \(\mathbf{V}\in\{0,1\}^{k\times d}\), \[(1-\varepsilon)\|\mathbf{U}^{*}\mathbf{V}-\mathbf{A}\|_{F}^{2}\leq\|\mathbf{SU}^{*}\mathbf{V}-\mathbf{S}\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\|\mathbf{U}^{*}\mathbf{V}-\mathbf{A}\|_{F}^{2}.\] Observe that if \(\mathbf{S}\) is an affine embedding, then we obtain a \((1+\varepsilon)\)-approximation by solving for the minimizer \(\mathbf{V}^{*}\) in the sketched space. That is, given \(\mathbf{S}\) and \(\mathbf{U}^{*}\), instead of solving Equation 1 for \(\mathbf{V}^{*}\), it suffices to solve \[\operatorname*{argmin}_{\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{SU}^{*}\mathbf{V}-\mathbf{SA}\|_{F}^{2}.\]

Guessing the sketch matrix \(\mathbf{S}\). A general approach taken by [11, 12, 13] for various low-rank approximation problems is to first choose \(\mathbf{S}\) in a way so that there are not too many possibilities for the matrices \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\) and then find the minimizer \(\mathbf{V}^{*}\) for all guesses of \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\). Note that this approach is delicate because it depends on the choice of the sketch matrix \(\mathbf{S}\). For example, if we chose \(\mathbf{S}\) to be a dense matrix with random Gaussian entries, then since there are \(2^{nk}\) possibilities for the matrix \(\mathbf{U}^{*}\in\{0,1\}^{n\times k}\), we cannot enumerate the possible matrices \(\mathbf{SU}^{*}\). Prior work [11, 12, 13] made the key observation that if \(\mathbf{A}\) (and thus \(\mathbf{U}^{*}\)) has a small number of unique rows, then a matrix \(\mathbf{S}\) that samples a small number of rows of \(\mathbf{A}\) has only a small number of possibilities for \(\mathbf{SA}\). To ensure that \(\mathbf{A}\) has a small number of unique rows for the BMF problem, [12] first find a \(2^{k}\)-means clustering solution \(\widetilde{\mathbf{A}}\) for the rows of \(\mathbf{A}\). Instead of solving the problem on \(\mathbf{A}\), they then solve BMF on the matrix \(\widetilde{\mathbf{A}}\), where each row is replaced by the center to which the point is assigned, yielding at most \(2^{k}\) unique rows. Finally, they note that \(\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}\) is at least the \(2^{k}\)-means cost, as \(\mathbf{U}^{*}\mathbf{V}^{*}\) has at most \(2^{k}\) unique rows. Now that \(\widetilde{\mathbf{A}}\) has \(2^{k}\) unique rows, they can make all possible guesses for both \(\mathbf{SU}^{*}\) and \(\mathbf{S}\widetilde{\mathbf{A}}\) in time \(2^{\tilde{\mathcal{O}}\left(k^{2}\right)}\). By using an algorithm of [10] that achieves roughly a \(9\)-approximation to \(k\)-means clustering, [12] ultimately obtain a \(C\)-approximation to the BMF problem, for some \(C\geq 576\).
Shortcomings of previous work for \((1+\varepsilon)\)-approximation. While [12] do not optimize for \(C\), their approach fundamentally cannot achieve a \((1+\varepsilon)\)-approximation for BMF for the following reasons. First, they use a \(k\)-means clustering subroutine [10] (achieving roughly a \(9\)-approximation), which, due to hardness-of-approximation results [13, 11], can never achieve a \((1+\varepsilon)\)-approximation, as there cannot exist a \(1.07\)-approximation algorithm for \(k\)-means clustering unless P=NP. Moreover, even if a \((1+\varepsilon)\)-approximate \(k\)-means clustering could be found, there is no guarantee that the cluster centers obtained by this algorithm are binary. That is, while \(\mathbf{UV}\) has a specific form induced by the requirement that each factor must be binary, a solution to \(k\)-means clustering offers no such guarantee and may return Steiner points. Finally, [12] achieve a matrix \(\mathbf{S}\) that roughly preserves \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\). By generalizations of the triangle inequality, one can show that \(\|\mathbf{SU}^{*}\mathbf{V}^{*}-\mathbf{SA}\|_{F}^{2}\) preserves a constant-factor approximation to \(\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}\), but not necessarily a \((1+\varepsilon)\)-approximation. Another related work, [13], reduces instances of BMF to constrained \(k\)-means clustering instances, where the constraints demand that the selected centers are linear combinations of binary vectors. The core part of their work is to design a sampling-based algorithm for solving binary-constrained clustering instances, and the result on BMF is a corollary. Constrained clustering is a harder problem than BMF with Frobenius loss, so it is unclear how one might improve the doubly exponential running time using this approach.

Our approach: computing a strong coreset. We first reduce the number of unique rows in \(\mathbf{A}\) by computing a strong coreset \(\widetilde{\mathbf{A}}\) for \(\mathbf{A}\). The strong coreset has the property that for any choices of \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\), there exists \(\mathbf{X}\in\{0,1\}^{n\times k}\) such that \[(1-\varepsilon)\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\leq\|\mathbf{X}\mathbf{V}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq(1+\varepsilon)\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}.\] Therefore, we instead solve the low-rank approximation problem on \(\widetilde{\mathbf{A}}\) first. Crucially, we choose \(\widetilde{\mathbf{A}}\) to have \(2^{\mathrm{poly}(k/\varepsilon)}\) unique rows, so that for a matrix \(\mathbf{S}\) that samples \(\mathrm{poly}(k/\varepsilon)\) rows, there are \(2^{\mathrm{poly}(k/\varepsilon)}\) possibilities for \(\mathbf{S}\widetilde{\mathbf{A}}\), and we can make all possible guesses for both \(\mathbf{SU}^{*}\) and \(\mathbf{S}\widetilde{\mathbf{A}}\). Unfortunately, we still have the problem that \(\|\mathbf{SU}^{*}\mathbf{V}^{*}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}\) does not even necessarily give a \((1+\varepsilon)\)-approximation to \(\|\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}\).

Binary matrix factorization. To that end, we show that when \(\mathbf{S}\) is a leverage score sampling matrix, then \(\mathbf{S}\) also satisfies an approximate matrix multiplication property. Therefore \(\mathbf{S}\) can effectively be used for an affine embedding.
That is, the minimizer to \(\|\mathbf{SU}^{*}\mathbf{V}^{*}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}\) produces a \((1+\varepsilon)\)-approximation to the cost of the optimal factors \(\|\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}\). Thus, we can then solve \[\mathbf{V}^{\prime} =\operatorname*{argmin}_{\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{SU}^{*}\mathbf{V}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}\] \[\mathbf{U}^{\prime} =\operatorname*{argmin}_{\mathbf{U}\in\{0,1\}^{n\times k}}\|\mathbf{U}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2},\] where the latter optimization problem can be solved by iteratively optimizing over each row, so that the total computation time is \(\mathcal{O}\left(2^{k}n\right)\) rather than \(2^{kn}\) (a code sketch of this row-wise enumeration is given at the end of Section 1.4).

BMF on binary fields. We again form the matrix \(\widetilde{\mathbf{A}}\) by taking a strong coreset of \(\mathbf{A}\), constructed using an algorithm that assigns an integer weight \(w_{i}\) to each point, and then duplicating rows accordingly. That is, if the \(i\)-th row \(\mathbf{A}_{i}\) of \(\mathbf{A}\) is sampled with weight \(w_{i}\) in the coreset, then \(\widetilde{\mathbf{A}}\) will contain \(w_{i}\) repetitions of the row \(\mathbf{A}_{i}\). We want to use the same approach for binary fields to make guesses for \(\mathbf{SU}^{*}\) and \(\mathbf{S}\widetilde{\mathbf{A}}\). However, it is no longer true that \(\mathbf{S}\) will provide an affine embedding over \(\mathbb{F}_{2}\), in part because the subspace embedding property of \(\mathbf{S}\) computes leverage scores of each row of \(\mathbf{U}^{*}\) and \(\mathbf{A}\) with respect to general integers. Hence, we require a different approach for matrix operations over \(\mathbb{F}_{2}\). Instead, we group the rows of \(\widetilde{\mathbf{A}}\) by their number of repetitions, so that group \(\mathbf{G}_{j}\) consists of the rows of \(\widetilde{\mathbf{A}}\) that are repeated \([(1+\varepsilon)^{j},(1+\varepsilon)^{j+1})\) times. That is, if \(\mathbf{A}_{i}\) appears \(w_{i}\) times in \(\widetilde{\mathbf{A}}\), then it appears a single time in group \(\mathbf{G}_{j}\) for \(j=\lfloor\log_{1+\varepsilon}w_{i}\rfloor\). We then perform entrywise \(L_{0}\) low-rank approximation over \(\mathbb{F}_{2}\) for each of the groups \(\mathbf{G}_{j}\), which gives low-rank factors \(\mathbf{U}^{(j)}\) and \(\mathbf{V}^{(j)}\). We then compute \(\widetilde{\mathbf{U}^{(j)}}\) by duplicating rows appropriately, so that if \(\mathbf{A}_{i}\) is in \(\mathbf{G}_{j}\), then we place the row of \(\mathbf{U}^{(j)}\) corresponding to \(\mathbf{A}_{i}\) into the \(i\)-th row of \(\widetilde{\mathbf{U}^{(j)}}\), for all \(i\in[n]\). Otherwise, if \(\mathbf{A}_{i}\) is not in \(\mathbf{G}_{j}\), then we set the \(i\)-th row of \(\widetilde{\mathbf{U}^{(j)}}\) to be the all-zeros row. We compute \(\widetilde{\mathbf{V}^{(j)}}\) by padding accordingly and then collect \[\widetilde{\mathbf{U}}=\left[\widetilde{\mathbf{U}^{(0)}}|\ldots|\widetilde{\mathbf{U}^{(\ell)}}\right],\qquad\widetilde{\mathbf{V}}=\widetilde{\mathbf{V}^{(0)}}\circ\ldots\circ\widetilde{\mathbf{V}^{(\ell)}},\] where \(\left[\widetilde{\mathbf{U}^{(0)}}|\ldots|\widetilde{\mathbf{U}^{(\ell)}}\right]\) denotes horizontal concatenation of matrices and \(\widetilde{\mathbf{V}^{(0)}}\circ\ldots\circ\widetilde{\mathbf{V}^{(\ell)}}\) denotes vertical concatenation (stacking) of matrices, to achieve bicriteria low-rank approximations \(\widetilde{\mathbf{U}}\) and \(\widetilde{\mathbf{V}}\) to \(\widetilde{\mathbf{A}}\).
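A minimal sketch of this padding-and-concatenation step is given below; the per-group factors are assumed to come from an entrywise-\(L_{0}\) solver over \(\mathbb{F}_{2}\) (not shown), and over \(\mathbb{F}_{2}\) the product \(\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}\) is computed as an integer matrix product reduced mod 2.

```python
import numpy as np

def assemble_bicriteria_factors(n, groups):
    """Pad per-group factors into bicriteria factors for the full matrix.

    `groups` maps a group index j to (row_indices, U_j, V_j), where
    row_indices lists the rows of A belonging to group G_j.
    """
    U_blocks, V_blocks = [], []
    for row_indices, U_j, V_j in groups.values():
        padded = np.zeros((n, U_j.shape[1]), dtype=int)  # all-zero rows elsewhere
        padded[row_indices] = U_j                        # rows of U_j placed at G_j's rows
        U_blocks.append(padded)
        V_blocks.append(V_j)
    # Horizontal concatenation of the U blocks, vertical stacking of the V blocks;
    # over F_2, (U @ V) % 2 then reproduces each group's product on its own rows.
    return np.hstack(U_blocks), np.vstack(V_blocks)
```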
Finally, to achieve bicriteria factors \(\mathbf{U}^{\prime}\) and \(\mathbf{V}^{\prime}\) for \(\mathbf{A}\), we ensure that \(\mathbf{U}^{\prime}\) achieves the same block structure as \(\widetilde{\mathbf{U}}\).

BMF with \(L_{p}\) loss. We would again like to use the same approach as our \((1+\varepsilon)\)-approximation algorithm for BMF with Frobenius loss. To that end, we observe that a coreset construction for clustering under \(L_{p}\) metrics rather than Euclidean distance is known, which we can use to construct \(\widetilde{\mathbf{A}}\). However, the challenge is that no known sampling matrix \(\mathbf{S}\) guarantees an affine embedding. One might hope that recent results on active \(L_{p}\) regression [14, 15, 16, 17, 18] can provide such a tool. Unfortunately, adapting these techniques would still require taking a union bound over a number of columns, which would result in the sampling matrix having too many rows for our desired runtime. Instead, we invoke the coreset construction on the rows and the columns so that \(\widetilde{\mathbf{A}}\) has a small number of distinct rows and columns. We again partition the rows of \(\widetilde{\mathbf{A}}\) into groups based on their frequency, but now we further partition the groups based on the frequency of the columns. Thus, it remains to solve BMF with \(L_{p}\) loss on the partition, each part of which has a small number of rows and columns. Since the contribution of each row toward the overall loss is small (because there is a small number of columns), we show that there exists a matrix that samples \(\operatorname{poly}(k/\varepsilon)\) rows of each partition that finally achieves the desired affine embedding. Therefore, we can solve the problem on each partition, pad the factors accordingly, and build the bicriteria factors as in the binary field case.

### Motivation and Related Work

Low-rank approximation is one of the fundamental problems of machine learning and data science. Therefore, it has received extensive attention, e.g., see the surveys [13, 17, 18]. When the underlying loss function is the Frobenius norm, the low-rank approximation problem can be optimally solved via the singular value decomposition (SVD). However, when we restrict both the observed input \(\mathbf{A}\) and the factors \(\mathbf{U},\mathbf{V}\) to binary matrices, the SVD no longer guarantees optimal factors. In fact, many restricted variants of low-rank approximation are NP-hard [11, 12, 13, 14, 15, 16, 17].

Motivation and background for BMF. The BMF problem has applications to graph partitioning [10], low-density parity-check codes [15], and optimizing passive organic LED (OLED) displays [14]. Observe that we can use \(\mathbf{A}\) to encode the incidence matrix of the bipartite graph with \(n\) vertices on the left side of the bipartition and \(d\) vertices on the right side, so that \(\mathbf{A}_{i,j}=1\) if and only if there exists an edge connecting the \(i\)-th vertex on the left side with the \(j\)-th vertex on the right side. Then \(\mathbf{UV}\) can be written as the sum of \(k\) rank-1 matrices, each encoding a different bipartite clique of the graph, i.e., a subset of vertices on the left and a subset of vertices on the right such that there exists an edge between every vertex on the left and every vertex on the right. It then follows that the BMF problem solves the bipartite clique partition problem [16, 17, 18], in which the goal is to find the smallest integer \(k\) such that the graph can be represented as a union of \(k\) bipartite cliques.
[14] also present the following motivation for the BMF problem to improve the performance of passive OLED displays, which rapidly and sequentially illuminate rows of lights to render an image in a manner so that the human eye integrates this sequence of lights into a complete image. However, [14] observed that passive OLED displays could illuminate many rows simultaneously, provided the image being shown is a rank-1 matrix, and that the apparent brightness of an image is inversely proportional to the rank of the decomposition. Thus [14] notes that BMF can be used not only to find a low-rank decomposition that illuminates pixels in a way that seems brighter to the viewer, but also to satisfy binary restrictions on the decomposition in order to use simple and inexpensive voltage drivers on the rows and columns, rather than a more expensive bank of video-rate digital-to-analog converters.

**BMF with Frobenius loss.** [13] first gave a constant factor approximation algorithm for the BMF problem using runtime \(2^{\tilde{\mathcal{O}}(k^{2})}\operatorname{poly}(n,d)\), i.e., singly exponential time. [12] introduced a \((1+\varepsilon)\)-approximation to the BMF problem with rank-\(k\) factors, but their algorithm uses doubly exponential time, specifically runtime \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log^{2}\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\), which was later improved to doubly exponential runtime \(2^{\frac{2^{\tilde{\mathcal{O}}(k)}}{\varepsilon^{2}}\log\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\) by [1], who also showed that \(2^{k^{\Omega(1)}}\) runtime is necessary even for constant-factor approximation, under the Small Set Expansion Hypothesis and the Exponential Time Hypothesis. By introducing sparsity constraints on the rows of \(\mathbf{U}\) and \(\mathbf{V}\), [10] provide an alternate parametrization of the runtime, though at the cost of runtime quasipolynomial in \(n\) and \(d\).

**BMF on binary fields.** Binary matrix factorization is particularly suited for datasets involving binary data. Thus, the problem is well-motivated over binary fields when performing dimensionality reduction on high-dimensional datasets [14]. To this end, many heuristics have been developed for this problem [14, 15, 16], due to its NP-hardness [13, 15]. For the special case of \(k=1\), [15] first gave a \(2\)-approximation algorithm that uses polynomial time through a relaxation of integer linear programming. Subsequently, [12] produced a simpler approach, and [1] introduced a sublinear time algorithm. For general \(k\), [13] gave a constant factor approximation algorithm using runtime \(2^{\operatorname{poly}(k)}\operatorname{poly}(n,d)\), i.e., singly exponential time, at the expense of a bicriteria solution, i.e., factors with rank \(k^{\prime}=\mathcal{O}\left(k\log n\right)\). [12] introduced a \((1+\varepsilon)\)-approximation to the BMF problem with rank-\(k\) factors, but their algorithm uses doubly exponential time, specifically runtime \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log^{2}\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\), which was later improved to doubly exponential runtime \(2^{\frac{2^{\mathcal{O}(k)}}{\varepsilon^{2}}\log\frac{1}{\varepsilon}}\operatorname{poly}(n,d)\) by [1], who also showed that doubly exponential runtime is necessary for \((1+\varepsilon)\)-approximation without bicriteria relaxation under the Exponential Time Hypothesis.
**BMF with \(L_{p}\) loss.** Using more general \(L_{p}\) loss functions can result in drastically different behaviors of the optimal low-rank factors for the BMF problem. For example, the low-rank factors for \(p>2\) are penalized more when the corresponding entries of \(\mathbf{UV}\) are large, and thus may prioritize a larger number of small entries that do not match \(\mathbf{A}\) rather than a single large entry. On the other hand, \(p=1\) corresponds to robust principal component analysis, which yields factors that are more robust to outliers in the data [17, 18, 19, 2, 1, 2, 10, 11, 12]. The first approximation algorithm with provable guarantees for \(L_{1}\) low-rank approximation over the reals was given by [12]. They achieved \(\operatorname{poly}(k)\cdot\log d\)-approximation in roughly \(\mathcal{O}\left(nd\right)\) time. For constant \(k\), [12] further achieved constant-factor approximation in polynomial time.

When we restrict the inputs and factors to be binary, [13] observed that \(p=1\) corresponds to minimizing the number of edges in the symmetric difference between an unweighted bipartite graph \(G\) and its approximation \(H\), which is the multiset union of \(k\) bicliques. Here we represent the graph \(G\), with \(n\) and \(d\) vertices on the left- and right-hand sides of the bipartition, respectively, through its edge incidence matrix \(\mathbf{A}\). Similarly, we have \(\mathbf{U}_{i,j}=1\) if and only if the \(i\)-th vertex on the left bipartition is in the \(j\)-th biclique, and \(\mathbf{V}_{i,j}=1\) if and only if the \(j\)-th vertex on the right bipartition is in the \(i\)-th biclique. Then we have \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{1}=|E(G)\triangle E(H)|\). [10] showed how to solve the exact version of the problem, i.e., to recover \(\mathbf{U},\mathbf{V}\) under the promise that \(\mathbf{A}=\mathbf{U}\mathbf{V}\), using \(2^{\mathcal{O}(k^{2})}\operatorname{poly}(n,d)\) time. [11] recently gave the first constant-factor approximation algorithm for this problem, achieving a \(C\)-approximation using \(2^{\operatorname{poly}(k)}\operatorname{poly}(n,d)\) time, for some constant \(C\geq 122^{2p-2}+2^{p-1}\).

### Preliminaries

For an integer \(n>0\), we use \([n]\) to denote the set \(\{1,2,\ldots,n\}\). We use \(\operatorname{poly}(n)\) to represent a fixed polynomial in \(n\) and, more generally, \(\operatorname{poly}(n_{1},\ldots,n_{k})\) to represent a fixed multivariate polynomial in \(n_{1},\ldots,n_{k}\). For a function \(f(n_{1},\ldots,n_{k})\), we use \(\tilde{\mathcal{O}}\left(f(n_{1},\ldots,n_{k})\right)\) to denote \(f(n_{1},\ldots,n_{k})\cdot\operatorname{poly}(\log f(n_{1},\ldots,n_{k}))\). We generally use bold-font variables to denote matrices. For a matrix \(\mathbf{A}\in\mathbb{R}^{n\times d}\), we use \(\mathbf{A}_{i}\) to denote the \(i\)-th row of \(\mathbf{A}\) and \(\mathbf{A}^{(j)}\) to denote the \(j\)-th column of \(\mathbf{A}\). We use \(A_{i,j}\) to denote the entry in the \(i\)-th row and \(j\)-th column of \(\mathbf{A}\).
For \(p\geq 1\), we write the entrywise \(L_{p}\) norm of \(\mathbf{A}\) as
\[\|\mathbf{A}\|_{p}=\left(\sum_{i\in[n]}\sum_{j\in[d]}|A_{i,j}|^{p}\right)^{1/p}.\]
The Frobenius norm of \(\mathbf{A}\), denoted \(\|\mathbf{A}\|_{F}\), is simply the entrywise \(L_{2}\) norm of \(\mathbf{A}\):
\[\|\mathbf{A}\|_{F}=\left(\sum_{i\in[n]}\sum_{j\in[d]}A_{i,j}^{2}\right)^{1/2}.\]
The entrywise \(L_{0}\) norm of \(\mathbf{A}\) is
\[\|\mathbf{A}\|_{0}=\left|\{(i,j)\,\mid\,i\in[n],j\in[d]:A_{i,j}\neq 0\}\right|.\]
We use \(\circ\) to denote vertical stacking of matrices, so that
\[\mathbf{A}^{(1)}\circ\ldots\circ\mathbf{A}^{(m)}=\begin{bmatrix}\mathbf{A}^{(1)}\\ \vdots\\ \mathbf{A}^{(m)}\end{bmatrix}.\]

For a set \(X\) of \(n\) points in \(\mathbb{R}^{d}\) weighted by a function \(w\), the \(k\)-means clustering cost of \(X\) with respect to a set \(S\) of \(k\) centers is defined as
\[\mathsf{Cost}(X,S,w):=\sum_{x\in X}w(x)\cdot\min_{s\in S}\|x-s\|_{2}^{2}.\]
When the weights \(w\) are uniformly unit across all points in \(X\), we simply write \(\mathsf{Cost}(X,S)=\mathsf{Cost}(X,S,w)\). One of the core ingredients for avoiding the triangle inequality and achieving \((1+\varepsilon)\)-approximation is our use of coresets for \(k\)-means clustering:

**Definition 1.1** (Strong coreset). _Given an accuracy parameter \(\varepsilon>0\) and a set \(X\) of \(n\) points in \(\mathbb{R}^{d}\), we say that a subset \(C\) of \(X\) with weights \(w\) is a strong \(\varepsilon\)-coreset of \(X\) for the \(k\)-means clustering problem if for any set \(S\) of \(k\) points in \(\mathbb{R}^{d}\), we have_
\[(1-\varepsilon)\mathsf{Cost}(X,S)\leq\mathsf{Cost}(C,S,w)\leq(1+\varepsilon)\mathsf{Cost}(X,S).\]

Many coreset constructions exist in the literature, and the goal is to minimize \(|C|\), the size of the coreset, while preserving \((1\pm\varepsilon)\)-approximate cost for all sets of \(k\) centers. If the points lie in \(\mathbb{R}^{d}\), we can find coresets of size \(\tilde{\mathcal{O}}\left(\mathrm{poly}(k,d,\varepsilon^{-1})\right)\), i.e., of size independent of \(n\).

**Leverage scores.** Finally, we recall the notion of a leverage score sampling matrix. For a matrix \(\mathbf{A}\in\mathbb{R}^{n\times d}\), the leverage score of row \(\mathbf{a}_{i}\) with \(i\in[n]\) is defined as \(\mathbf{a}_{i}(\mathbf{A}^{\top}\mathbf{A})^{-1}\mathbf{a}_{i}^{\top}\). We can use the leverage scores to generate a random leverage score sampling matrix as follows:

**Theorem 1.2** (Leverage score sampling matrix, [1, 1, 14]). _Let \(C>1\) be a universal constant and \(\alpha>1\) be a parameter. Given a matrix \(\mathbf{A}\in\mathbb{R}^{n\times d}\), let \(\ell_{i}\) be the leverage score of the \(i\)-th row of \(\mathbf{A}\). Suppose \(p_{i}\in\left[\min\left(1,\frac{C\ell_{i}\log k}{\varepsilon^{2}}\right),\min\left(1,\frac{C\alpha\ell_{i}\log k}{\varepsilon^{2}}\right)\right]\) for all \(i\in[n]\). For \(m:=\mathcal{O}\left(\frac{\alpha}{\varepsilon^{2}}\,d\log d\right)\), let \(\mathbf{S}\in\mathbb{R}^{m\times n}\) be generated so that each row of \(\mathbf{S}\) randomly selects row \(j\in[n]\) with probability proportional to \(p_{j}\) and rescales the row by \(\frac{1}{\sqrt{mp_{j}}}\).
Then with probability at least \(0.99\), we have that simultaneously for all vectors \(\mathbf{x}\in\mathbb{R}^{d}\),_
\[(1-\varepsilon)\|\mathbf{A}\mathbf{x}\|_{2}\leq\|\mathbf{SA}\mathbf{x}\|_{2}\leq(1+\varepsilon)\|\mathbf{A}\mathbf{x}\|_{2}.\]

The main point of Theorem 1.2 is that given constant-factor approximations \(p_{i}\) to the leverage scores \(\ell_{i}\), it suffices to sample \(\mathcal{O}\left(d\log d\right)\) rows of \(\mathbf{A}\) to achieve a constant-factor subspace embedding of \(\mathbf{A}\), and similar bounds can be achieved for \((1+\varepsilon)\)-approximate subspace embeddings. Finally, we remark that \(\mathbf{S}\) can be decomposed as the product of matrices \(\mathbf{DT}\), where \(\mathbf{T}\in\mathbb{R}^{m\times n}\) is a sparse matrix with a single one per row, denoting the selection of a row for the purposes of leverage score sampling, and \(\mathbf{D}\) is the diagonal matrix with the corresponding scaling factor, i.e., the \(i\)-th diagonal entry of \(\mathbf{D}\) is set to \(\frac{1}{\sqrt{mp_{j}}}\) if the \(j\)-th row of \(\mathbf{A}\) is selected for the \(i\)-th sample.

## 2 Binary Low-Rank Approximation

In this section, we present a \((1+\varepsilon)\)-approximation algorithm for binary low-rank approximation with Frobenius norm loss, where the goal is to find matrices \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\) to minimize \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\). Suppose the optimal low-rank factors are \(\mathbf{U}^{*}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}^{*}\in\{0,1\}^{k\times d}\), so that
\[\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}=\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}.\]
Observe that if we knew matrices \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\) so that for all \(\mathbf{V}\in\{0,1\}^{k\times d}\),
\[(1-\varepsilon)\|\mathbf{U}^{*}\mathbf{V}-\mathbf{A}\|_{F}^{2}\leq\|\mathbf{SU}^{*}\mathbf{V}-\mathbf{SA}\|_{F}^{2}\leq(1+\varepsilon)\|\mathbf{U}^{*}\mathbf{V}-\mathbf{A}\|_{F}^{2},\]
then we could find a \((1+\varepsilon)\)-approximate solution for \(\mathbf{V}^{*}\) by solving the problem
\[\operatorname*{argmin}_{\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{SU}^{*}\mathbf{V}-\mathbf{SA}\|_{F}^{2}\]
instead. We would like to make guesses for the matrices \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\), but first we must ensure there are not too many possibilities for these matrices. For example, if we chose \(\mathbf{S}\) to be a dense matrix with random Gaussian entries, then \(\mathbf{SU}^{*}\) could have too many possibilities, because without additional information, there are \(2^{nk}\) possibilities for the matrix \(\mathbf{U}^{*}\in\{0,1\}^{n\times k}\). We can instead choose \(\mathbf{S}\) to be a leverage score sampling matrix, which samples rows from \(\mathbf{U}^{*}\) and \(\mathbf{A}\). Since each row of \(\mathbf{U}^{*}\) has dimension \(k\), there are at most \(2^{k}\) distinct possibilities for each of the rows of \(\mathbf{U}^{*}\). On the other hand, \(\mathbf{A}\in\{0,1\}^{n\times d}\), so there may be \(2^{d}\) distinct possibilities for the rows of \(\mathbf{A}\), which is too many to guess. Thus we first reduce the number of unique rows in \(\mathbf{A}\) by computing a strong coreset \(\widetilde{\mathbf{A}}\) for \(\mathbf{A}\).
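Before turning to the coreset step, here is a minimal Python sketch of the leverage-score sampling primitive of Theorem 1.2 (our own illustrative rendering, not code from the paper; it draws rows i.i.d. proportionally to the scores, one standard way to realize such a sampler, and assumes the input has full column rank):

```python
import numpy as np

def leverage_score_sampler(A, m, seed=None):
    """Return an m x n sampling/rescaling matrix S = DT in the spirit of Theorem 1.2.

    Over the reals, row i of A has leverage score ||Q_i||_2^2, where Q is an
    orthonormal basis for the column span of A.
    """
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    Q, _ = np.linalg.qr(A.astype(float))  # thin QR; assumes A has full column rank
    ell = np.sum(Q * Q, axis=1)           # leverage scores; they sum to rank(A)
    p = ell / ell.sum()                   # sampling probabilities proportional to scores
    idx = rng.choice(n, size=m, p=p)      # m i.i.d. row indices
    S = np.zeros((m, n))
    S[np.arange(m), idx] = 1.0 / np.sqrt(m * p[idx])  # selection (T) plus rescaling (D)
    return S
```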
The strong coreset has the property that for any choices of \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\), there exists \(\mathbf{X}\in\{0,1\}^{n\times k}\) such that
\[(1-\varepsilon)\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\leq\|\mathbf{X}\mathbf{V}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq(1+\varepsilon)\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}.\]
Therefore, we instead first solve the low-rank approximation problem on \(\widetilde{\mathbf{A}}\). Crucially, \(\widetilde{\mathbf{A}}\) has \(2^{\text{poly}(k/\varepsilon)}\) unique rows, so for a matrix \(\mathbf{S}\) that samples \(\text{poly}(k/\varepsilon)\) rows, there are \(\binom{2^{\text{poly}(k/\varepsilon)}}{\text{poly}(k/\varepsilon)}=2^{\text{poly}(k/\varepsilon)}\) possible choices of \(\mathbf{S}\widetilde{\mathbf{A}}\), so we can enumerate all candidates for both \(\mathbf{SU}^{*}\) and \(\mathbf{S}\widetilde{\mathbf{A}}\). We can then solve
\[\mathbf{V}^{\prime}=\operatorname*{argmin}_{\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{SU}^{*}\mathbf{V}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}\]
and
\[\mathbf{U}^{\prime}=\operatorname*{argmin}_{\mathbf{U}\in\{0,1\}^{n\times k}}\|\mathbf{U}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2},\]
where the latter optimization problem can be solved by iteratively optimizing over each row, so that the total computation time is \(\mathcal{O}\left(2^{k}n\right)\) rather than \(2^{kn}\). We give the full algorithm in Algorithm 4 and the subroutine for optimizing with respect to \(\widetilde{\mathbf{A}}\) in Algorithm 3. We give the subroutines for solving for \(\mathbf{V}^{\prime}\) and \(\mathbf{U}^{\prime}\) in Algorithm 1 and Algorithm 2, respectively.

```
Input:  Ã ∈ {0,1}^(N×d), U ∈ {0,1}^(N×k)
Output: V′ = argmin over V ∈ {0,1}^(k×d) of ‖UV − Ã‖_F
1: for i = 1 to d do                    ▷ Optimize for each column individually
2:   Set V′^(i) = argmin over V^(i) ∈ {0,1}^(k×1) of ‖U V^(i) − Ã^(i)‖_2
                                        ▷ Enumerate over all 2^k possible binary vectors
3: return V′ = [V′^(1) | … | V′^(d)]
```
**Algorithm 1** Algorithm for computing optimal \(\mathbf{V}\) given \(\mathbf{U}\)

First, we recall that leverage score sampling matrices preserve approximate matrix multiplication.

**Lemma 2.1** (Lemma 32 in [11]). _Let \(\mathbf{U}\in\mathbb{R}^{N\times k}\) have orthonormal columns, \(\widetilde{\mathbf{A}}\in\{0,1\}^{N\times d}\), and \(\mathbf{S}\in\mathbb{R}^{m\times N}\) be a leverage score sampling matrix for \(\mathbf{U}\) with \(m=\mathcal{O}\left(\frac{1}{\varepsilon^{2}}\right)\) rows. Then,_
\[\mathbf{Pr}\left[\|\mathbf{U}^{\top}\mathbf{S}^{\top}\mathbf{S}\widetilde{\mathbf{A}}-\mathbf{U}^{\top}\widetilde{\mathbf{A}}\|_{F}^{2}<\varepsilon^{2}\|\mathbf{U}\|_{F}^{2}\|\widetilde{\mathbf{A}}\|_{F}^{2}\right]\geq 0.99.\]

Next, we recall that leverage score sampling matrices give subspace embeddings.

**Theorem 2.2** (Theorem 42 in [14]). _For \(\mathbf{U}\in\mathbb{R}^{N\times k}\), let \(\mathbf{S}\in\mathbb{R}^{m\times N}\) be a leverage score sampling matrix for \(\mathbf{U}\) with \(m=\mathcal{O}\left(\frac{k\log k}{\varepsilon^{2}}\right)\) rows.
Then with probability at least \(0.99\), we have for all \(\mathbf{V}\in\mathbb{R}^{k\times d}\),_
\[(1-\varepsilon)\|\mathbf{U}\mathbf{V}\|_{F}^{2}\leq\|\mathbf{S}\mathbf{U}\mathbf{V}\|_{F}^{2}\leq(1+\varepsilon)\|\mathbf{U}\mathbf{V}\|_{F}^{2}.\]

Finally, we recall that approximate matrix multiplication and leverage score sampling suffice to achieve an affine embedding.

**Theorem 2.3** (Theorem 39 in [14]). _Let \(\mathbf{U}\in\mathbb{R}^{N\times k}\) have orthonormal columns. Let \(\mathbf{S}\) be a sampling matrix that satisfies Lemma 2.1 with error parameter \(\frac{\varepsilon}{\sqrt{k}}\) and also let \(\mathbf{S}\) be a subspace embedding for \(\mathbf{U}\) with error parameter \(\varepsilon\). Let \(\mathbf{V}^{*}=\operatorname*{argmin}_{\mathbf{V}}\|\mathbf{U}\mathbf{V}-\widetilde{\mathbf{A}}\|_{F}\) and \(\mathbf{X}=\mathbf{U}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\). Then for all \(\mathbf{V}\in\mathbb{R}^{k\times d}\),_
\[(1-2\varepsilon)\|\mathbf{U}\mathbf{V}-\widetilde{\mathbf{A}}\|_{F}^{2}-\|\mathbf{X}\|_{F}^{2}\leq\|\mathbf{S}\mathbf{U}\mathbf{V}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}-\|\mathbf{S}\mathbf{X}\|_{F}^{2}\leq(1+2\varepsilon)\|\mathbf{U}\mathbf{V}-\widetilde{\mathbf{A}}\|_{F}^{2}-\|\mathbf{X}\|_{F}^{2}.\]

We first show that Algorithm 3 achieves a good approximation to the optimal low-rank factors for the coreset \(\widetilde{\mathbf{A}}\).

**Lemma 2.4**. _Suppose \(\varepsilon<\frac{1}{10}\). Then with probability at least \(0.97\), the output of Algorithm 3 satisfies_
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq(1+6\varepsilon)\|\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}.\]

Proof. Let \(\mathbf{V}^{\prime\prime}=\operatorname*{argmin}_{\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{S}\mathbf{U}^{*}\mathbf{V}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}\) and let \(\mathbf{U}^{\prime\prime}=\operatorname*{argmin}_{\mathbf{U}\in\{0,1\}^{N\times k}}\|\mathbf{U}\mathbf{V}^{\prime\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}\). Since the algorithm chooses \(\mathbf{U}^{\prime}\) and \(\mathbf{V}^{\prime}\) over \(\mathbf{U}^{\prime\prime}\) and \(\mathbf{V}^{\prime\prime}\), then
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq\|\mathbf{U}^{\prime\prime}\mathbf{V}^{\prime\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}.\]
Due to the optimality of \(\mathbf{U}^{\prime\prime}\),
\[\|\mathbf{U}^{\prime\prime}\mathbf{V}^{\prime\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq\|\mathbf{U}^{*}\mathbf{V}^{\prime\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}.\]
Let \(\mathbf{X}=\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\). Note that since \(\mathbf{U}^{*}\) has orthonormal columns, then by Lemma 2.1, the leverage score sampling matrix \(\mathbf{S}\) achieves approximate matrix multiplication with probability at least \(0.99\). By Theorem 2.2, the matrix \(\mathbf{S}\) is also a subspace embedding for \(\mathbf{U}^{*}\). Thus, \(\mathbf{S}\) meets the criteria for applying Theorem 2.3.
Then for the correct guess \(\mathbf{DT}\) of the matrix \(\mathbf{S}\) corresponding to \(\mathbf{U}^{*}\), and conditioning on the correctness of \(\mathbf{S}\) in Theorem 2.3,
\[\|\mathbf{U}^{*}\mathbf{V}^{\prime\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq\frac{1}{1-2\varepsilon}[\|\mathbf{S}\mathbf{U}^{*}\mathbf{V}^{\prime\prime}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}-\|\mathbf{S}\mathbf{X}\|_{F}^{2}+\|\mathbf{X}\|_{F}^{2}].\]
Due to the optimality of \(\mathbf{V}^{\prime\prime}\),
\[\frac{1}{1-2\varepsilon}[\|\mathbf{S}\mathbf{U}^{*}\mathbf{V}^{\prime\prime}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}-\|\mathbf{S}\mathbf{X}\|_{F}^{2}+\|\mathbf{X}\|_{F}^{2}]\leq\frac{1}{1-2\varepsilon}[\|\mathbf{S}\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}-\|\mathbf{S}\mathbf{X}\|_{F}^{2}+\|\mathbf{X}\|_{F}^{2}].\]
Then again conditioning on the correctness of \(\mathbf{S}\),
\[\frac{1}{1-2\varepsilon}[\|\mathbf{S}\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}-\|\mathbf{S}\mathbf{X}\|_{F}^{2}+\|\mathbf{X}\|_{F}^{2}]\]
\[\leq\frac{1}{1-2\varepsilon}[(1+2\varepsilon)\|\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}+\|\mathbf{S}\mathbf{X}\|_{F}^{2}-\|\mathbf{X}\|_{F}^{2}-\|\mathbf{S}\mathbf{X}\|_{F}^{2}+\|\mathbf{X}\|_{F}^{2}]\]
\[\leq(1+6\varepsilon)\|\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2},\]
for sufficiently small \(\varepsilon\), e.g., \(\varepsilon<\frac{1}{10}\). Thus, putting things together, we have that conditioned on the correctness of \(\mathbf{S}\) in Theorem 2.3,
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq(1+6\varepsilon)\|\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}.\]
Since the approximate matrix multiplication property of Lemma 2.1, the subspace embedding property of Theorem 2.2, and the affine embedding property of Theorem 2.3 each fail with probability at most \(0.01\), a union bound shows that \(\mathbf{S}\) succeeds with probability at least \(0.97\).

We now analyze the runtime of the subroutine Algorithm 3.

**Lemma 2.5**. _Algorithm 3 uses \(2^{\mathcal{O}\left(m^{2}+m\log t\right)}\operatorname*{poly}(N,d)\) runtime for \(m=\mathcal{O}\left(\frac{k\log k}{\varepsilon^{2}}\right)\)._

Proof. We analyze the number of possible guesses \(\mathbf{D}\) and \(\mathbf{T}\) corresponding to guesses of \(\mathbf{S}\widetilde{\mathbf{A}}\) (see the remark after Theorem 1.2). There are at most \(\binom{t}{m}=2^{\mathcal{O}\left(m\log t\right)}\) distinct subsets of \(m=\mathcal{O}\left(\frac{k\log k}{\varepsilon^{2}}\right)\) rows of \(\widetilde{\mathbf{A}}\). Thus there are \(2^{\mathcal{O}\left(m\log t\right)}\) possible matrices \(\mathbf{T}\) that select \(m\) rows of \(\widetilde{\mathbf{A}}\) for the purposes of leverage score sampling. Assuming the leverage score sampling matrix does not sample any rows with leverage score less than \(\frac{1}{\operatorname*{poly}(N)}\), there are \(\mathcal{O}\left(\log N\right)^{m}=2^{\mathcal{O}\left(m\log\log N\right)}\) total guesses for the matrix \(\mathbf{D}\). Note that \(\log N\leq 2^{m}\) implies that \(2^{\mathcal{O}(m\log\log N)}\leq 2^{\mathcal{O}(m^{2})}\), while \(\log N>2^{m}\) implies that \(2^{\mathcal{O}(m\log\log N)}\leq 2^{\mathcal{O}(\log^{2}\log N)}\leq N\). Therefore, there are at most \(2^{\mathcal{O}\left(m^{2}+m\log t\right)}N\) total guesses for all combinations of \(\mathbf{T}\) and \(\mathbf{D}\), corresponding to all guesses of \(\mathbf{S}\widetilde{\mathbf{A}}\).
For each guess of \(\mathbf{S}\) and \(\mathbf{S}\widetilde{\mathbf{A}}\), we also need to guess \(\mathbf{SU}^{*}\). Since \(\mathbf{U}^{*}\in\{0,1\}^{N\times k}\) is binary and \(\mathbf{T}\) samples \(m\) rows before weighting each row with one of \(\mathcal{O}\left(\log N\right)\) possible weights, the number of total guesses for \(\mathbf{SU}^{*}\) is \((2\cdot\mathcal{O}\left(\log N\right))^{mk}\). Given guesses for \(\mathbf{S}\widetilde{\mathbf{A}}\) and \(\mathbf{SU}^{*}\), we can then compute \(\operatorname*{argmin}_{\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{SU}^{*}\mathbf{V}-\mathbf{S}\widetilde{\mathbf{A}}\|_{F}^{2}\) using \(\mathcal{O}\left(2^{k}d\right)\) time through the subroutine Algorithm 1, which enumerates through all possible \(2^{k}\) binary vectors for each column. For a fixed \(\mathbf{V}\), we can then compute \(\mathbf{U}_{\mathbf{V}}=\operatorname*{argmin}_{\mathbf{U}\in\{0,1\}^{N\times k}}\|\mathbf{UV}-\widetilde{\mathbf{A}}\|_{F}^{2}\) using \(\mathcal{O}\left(2^{k}N\right)\) time through the subroutine Algorithm 2, which enumerates through all possible \(2^{k}\) binary vectors for each row of \(\mathbf{U}_{\mathbf{V}}\). Therefore, the total runtime of Algorithm 3 is \(2^{\mathcal{O}\left(m^{2}+m\log t\right)}\operatorname*{poly}(N,d)\).

We recall the following construction for a strong \(\varepsilon\)-coreset for \(k\)-means clustering.

**Theorem 2.6** (Theorem 36 in [13]). _Let \(X\subset\mathbb{R}^{d}\) be a subset of \(n\) points, \(\varepsilon\in(0,1)\) be an accuracy parameter, and let \(t=\mathcal{O}\left(\frac{k^{3}\log^{2}k}{\varepsilon^{4}}\right)\). There exists an algorithm that uses \(\mathcal{O}\left(nd^{2}+n^{2}d+\frac{nkd}{\varepsilon^{2}}+\frac{nk^{2}}{\varepsilon^{2}}\right)\) time and outputs a set of \(t\) weighted points that is a strong \(\varepsilon\)-coreset for \(k\)-means clustering with probability at least \(0.99\). Moreover, each point has an integer weight that is at most \(\operatorname*{poly}(n)\)._

```
Input:  A ∈ {0,1}^(n×d), rank parameter k, accuracy parameter ε > 0
Output: U′ ∈ {0,1}^(n×k), V′ ∈ {0,1}^(k×d) satisfying
        ‖U′V′ − A‖_F² ≤ (1+ε) · min over U ∈ {0,1}^(n×k), V ∈ {0,1}^(k×d) of ‖UV − A‖_F²
1: t ← O(2^(3k) k² / ε⁴)                ▷ Theorem 2.6 for 2^k-means clustering
2: Compute a strong coreset C for 2^k-means clustering of A, with size t and total weight N = poly(n)
3: Let Ã ∈ {0,1}^(N×d) be the matrix representation of C, where weighted points are duplicated appropriately
4: Let (Ũ, Ṽ) be the output of Algorithm 3 on input Ã
5: U′ ← argmin over U ∈ {0,1}^(n×k) of ‖U Ṽ − A‖_F², V′ ← Ṽ      ▷ Algorithm 2
6: return (U′, V′)
```
**Algorithm 4** Low-rank approximation for matrix \(\mathbf{A}\)

We now justify the correctness of Algorithm 4.
**Lemma 2.7**. _With probability at least \(0.95\), Algorithm 4 returns \(\mathbf{U}^{\prime},\mathbf{V}^{\prime}\) such that_
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{UV}-\mathbf{A}\|_{F}^{2}.\]

Proof. Let \(\widetilde{\mathbf{M}}\) be the indicator matrix that selects a row of \(\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}=\widetilde{\mathbf{U}}\mathbf{V}^{\prime}\) to match to each row of \(\mathbf{A}\), so that by the optimality of \(\mathbf{U}^{\prime}\),
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\leq\|\widetilde{\mathbf{M}}\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\mathbf{A}\|_{F}^{2}.\]
Note that any \(\mathbf{V}\) is a set of \(k\) points in \(\{0,1\}^{d}\), and so each row \(\mathbf{U}_{i}\) of \(\mathbf{U}\) induces one of at most \(2^{k}\) possible points \(\mathbf{U}_{i}\mathbf{V}\in\{0,1\}^{d}\). Hence \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\) is the objective value of a constrained \(2^{k}\)-means clustering problem. Thus by the choice of \(t\) in Theorem 2.6, we have that \(\widetilde{\mathbf{A}}\) is a strong coreset, so that
\[\|\widetilde{\mathbf{M}}\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\|\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\widetilde{\mathbf{A}}\|_{F}^{2}.\]
Let \(\mathbf{U}^{*}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}^{*}\in\{0,1\}^{k\times d}\) be such that
\[\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}=\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}.\]
Let \(\mathbf{M}^{*}\) be the indicator matrix that selects a row of \(\mathbf{U}^{*}\mathbf{V}^{*}\) to match to each row of \(\widetilde{\mathbf{A}}\), so that by Lemma 2.4,
\[(1+\varepsilon)\|\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq(1+\varepsilon)^{2}\|\mathbf{M}^{*}\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}.\]
Then by the choice of \(t\) in Theorem 2.6, we have that
\[(1+\varepsilon)^{2}\|\mathbf{M}^{*}\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq(1+\varepsilon)^{3}\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}.\]
The desired claim then follows from rescaling \(\varepsilon\).

We now analyze the runtime of Algorithm 4.

**Lemma 2.8**. _Algorithm 4 uses \(2^{\tilde{\mathcal{O}}\left(k^{2}/\varepsilon^{4}\right)}\operatorname{poly}(n,d)\) runtime._

Proof. By Theorem 2.6, it follows that Algorithm 4 uses \(\mathcal{O}\left(nd^{2}+n^{2}d+\frac{nkd}{\varepsilon^{2}}+\frac{nk^{2}}{\varepsilon^{2}}\right)\) time to compute \(\widetilde{\mathbf{A}}\in\{0,1\}^{N\times d}\) with \(N=\operatorname{poly}(n)\). By Lemma 2.5, it follows that Algorithm 3 on input \(\widetilde{\mathbf{A}}\) thus uses runtime \(2^{\mathcal{O}\left(m^{2}+m\log t\right)}\operatorname{poly}(N,d)\) for \(m=\mathcal{O}\left(\frac{k\log k}{\varepsilon^{2}}\right)\) and \(t=\mathcal{O}\left(\frac{2^{3k}k^{2}}{\varepsilon^{4}}\right)\). Finally, computing \(\mathbf{U}^{\prime}\) via Algorithm 2 takes \(\mathcal{O}\left(2^{k}n\right)\) time after enumerating through all possible \(2^{k}\) binary vectors for each row of \(\mathbf{U}^{\prime}\). Therefore, the total runtime of Algorithm 4 is \(2^{\tilde{\mathcal{O}}\left(\frac{k^{2}\log^{2}k}{\varepsilon^{4}}\right)}\operatorname{poly}(n,d)=2^{\tilde{\mathcal{O}}\left(k^{2}/\varepsilon^{4}\right)}\operatorname{poly}(n,d)\).
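To make the enumeration subroutines concrete, here is a minimal Python sketch of Algorithms 1 and 2 (our own rendering, unoptimized); each spends \(\mathcal{O}\left(2^{k}\right)\) candidate evaluations per column or row, matching the accounting in Lemma 2.8:

```python
import itertools
import numpy as np

def best_V_given_U(U, A):
    """Algorithm 1: optimal binary V for min ||UV - A||_F, one column at a time."""
    k, d = U.shape[1], A.shape[1]
    cands = np.array(list(itertools.product([0, 1], repeat=k)))  # all 2^k binary vectors
    UC = U @ cands.T                                  # every possible column U v
    V = np.empty((k, d), dtype=int)
    for j in range(d):
        errs = np.sum((UC - A[:, [j]]) ** 2, axis=0)  # squared error of each candidate
        V[:, j] = cands[np.argmin(errs)]
    return V

def best_U_given_V(V, A):
    """Algorithm 2: optimal binary U for min ||UV - A||_F, one row at a time."""
    k, n = V.shape[0], A.shape[0]
    cands = np.array(list(itertools.product([0, 1], repeat=k)))
    CV = cands @ V                                    # every possible row u V
    U = np.empty((n, k), dtype=int)
    for i in range(n):
        errs = np.sum((CV - A[i]) ** 2, axis=1)
        U[i] = cands[np.argmin(errs)]
    return U
```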
Combining Lemma 2.7 and Lemma 2.8, we have:

**Theorem 2.9**. _There exists an algorithm that uses \(2^{\tilde{\mathcal{O}}\left(k^{2}/\varepsilon^{4}\right)}\operatorname{poly}(n,d)\) runtime and with probability at least \(\frac{2}{3}\), outputs \(\mathbf{U}^{\prime}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}^{\prime}\in\{0,1\}^{k\times d}\) such that_
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}.\]

## 3 \(\mathbb{F}_{2}\) Low-Rank Approximation

In this section, we present a \((1+\varepsilon)\)-approximation algorithm for binary low-rank approximation on \(\mathbb{F}_{2}\), where the goal is to find matrices \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\) to minimize the Frobenius norm loss \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\), but now all operations are performed in \(\mathbb{F}_{2}\).

We would like to use the same approach as in Section 2, i.e., to make guesses for the matrices \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\) while ensuring there are not too many possibilities for these matrices. To do so for matrix operations over general integers, we chose \(\mathbf{S}\) to be a leverage score sampling matrix that samples rows from \(\mathbf{U}^{*}\) and \(\mathbf{A}\). We then used the approximate matrix multiplication property in Lemma 2.1 and the subspace embedding property in Theorem 2.2 to show that \(\mathbf{S}\) provides an affine embedding in Theorem 2.3 over general integers. However, it is no longer necessarily true that \(\mathbf{S}\) provides an affine embedding over \(\mathbb{F}_{2}\), in part because the subspace embedding property of \(\mathbf{S}\) relies on leverage scores of the rows of \(\mathbf{U}^{*}\) and \(\mathbf{A}\) computed with respect to general integers. Thus we require an alternate approach for matrix operations over \(\mathbb{F}_{2}\).

Instead, we form the matrix \(\widetilde{\mathbf{A}}\) by taking a strong coreset of \(\mathbf{A}\) and then duplicating the rows according to their weight \(w_{i}\) to form \(\widetilde{\mathbf{A}}\). That is, if the \(i\)-th row \(\mathbf{A}_{i}\) of \(\mathbf{A}\) is sampled with weight \(w_{i}\) in the coreset, then \(\widetilde{\mathbf{A}}\) will contain \(w_{i}\) repetitions of the row \(\mathbf{A}_{i}\), where we note that \(w_{i}\) is an integer. We then group the rows of \(\widetilde{\mathbf{A}}\) by their number of repetitions, so that group \(\mathbf{G}_{j}\) consists of the rows of \(\widetilde{\mathbf{A}}\) that are repeated \([(1+\varepsilon)^{j},(1+\varepsilon)^{j+1})\) times. Thus if \(\mathbf{A}_{i}\) appears \(w_{i}\) times in \(\widetilde{\mathbf{A}}\), then it appears a single time in group \(\mathbf{G}_{j}\) for \(j=\lfloor\log_{1+\varepsilon}w_{i}\rfloor\). We perform entrywise \(L_{0}\) low-rank approximation over \(\mathbb{F}_{2}\) for each of the groups \(\mathbf{G}_{j}\), which gives low-rank factors \(\mathbf{U}^{(j)}\) and \(\mathbf{V}^{(j)}\). We then compute \(\widetilde{\mathbf{U}^{(j)}}\in\{0,1\}^{n\times k}\) from \(\mathbf{U}^{(j)}\) by the following procedure. If \(\mathbf{A}_{i}\) is in \(\mathbf{G}_{j}\), then we place the row of \(\mathbf{U}^{(j)}\) corresponding to \(\mathbf{A}_{i}\) into the \(i\)-th row of \(\widetilde{\mathbf{U}^{(j)}}\), for all \(i\in[n]\).
Note that the row of \(\mathbf{U}^{(j)}\) corresponding to \(\mathbf{A}_{i}\) may not be the \(i\)-th row of \(\mathbf{U}^{(j)}\), e.g., since \(\mathbf{A}_{i}\) will appear only once in \(\mathbf{G}_{j}\) even though it appears \(w_{i}\in[(1+\varepsilon)^{j},(1+\varepsilon)^{j+1})\) times in \(\widetilde{\mathbf{A}}\). Otherwise, if \(\mathbf{A}_{i}\) is not in \(\mathbf{G}_{j}\), then we set the \(i\)-th row of \(\widetilde{\mathbf{U}^{(j)}}\) to be the all zeros row. We then obtain \(\widetilde{\mathbf{V}^{(j)}}\) by padding accordingly. Finally, we collect
\[\widetilde{\mathbf{U}}=\left[\widetilde{\mathbf{U}^{(0)}}|\ldots|\widetilde{\mathbf{U}^{(\ell)}}\right],\qquad\widetilde{\mathbf{V}}\leftarrow\widetilde{\mathbf{V}^{(0)}}\circ\ldots\circ\widetilde{\mathbf{V}^{(\ell)}}\]
to achieve bicriteria low-rank approximations \(\widetilde{\mathbf{U}}\) and \(\widetilde{\mathbf{V}}\) to \(\widetilde{\mathbf{A}}\). Then, to achieve bicriteria low-rank approximations \(\mathbf{U}^{\prime}\) and \(\mathbf{V}^{\prime}\) to \(\mathbf{A}\), we require that \(\mathbf{U}^{\prime}\) achieves the same block structure as \(\widetilde{\mathbf{U}}\). We describe this subroutine in Algorithm 5, and we give the full low-rank approximation bicriteria algorithm in Algorithm 6.

We first recall the following subroutine to achieve entrywise \(L_{0}\) low-rank approximation over \(\mathbb{F}_{2}\). Note that for matrix operations over \(\mathbb{F}_{2}\), the entrywise \(L_{0}\) norm is the same as the entrywise \(L_{p}\) norm for all \(p\).

**Lemma 3.1** (Theorem 3 in [1]). _For \(\varepsilon\in(0,1)\), there exists a \((1+\varepsilon)\)-approximation algorithm to entrywise \(L_{0}\) rank-\(k\) approximation over \(\mathbb{F}_{2}\) running in \(d\cdot n^{\mathrm{poly}(k/\varepsilon)}\) time._

```
Input:  Ã ∈ {0,1}^(N×d), V^(1), …, V^(ℓ) ∈ {0,1}^(k×d)
Output: U′ = argmin over U ∈ {0,1}^(N×ℓk) of ‖U (V^(1) ∘ … ∘ V^(ℓ)) − Ã‖_F, where each row of U is restricted to one nonzero block of k coordinates
1: for i = 1 to N do
2:   Set (U′_i, j′) = argmin over U_i ∈ {0,1}^(1×k), j ∈ [ℓ] of ‖U_i V^(j) − Ã_i‖_2
                                        ▷ Enumerate over all 2^k possible binary vectors and all ℓ indices
3:   Pad U′_i to length ℓk, placed as the j′-th block of k coordinates
4: return U′ = U′_1 ∘ … ∘ U′_N
```
**Algorithm 5** Algorithm for computing optimal \(\mathbf{U}\) given \(\mathbf{V}^{(1)},\ldots,\mathbf{V}^{(\ell)}\)

We first justify the correctness of Algorithm 6.

**Lemma 3.2**. _With probability at least \(0.95\), Algorithm 6 returns \(\mathbf{U}^{\prime},\mathbf{V}^{\prime}\) such that_
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2},\]
_where all matrix operations are performed in \(\mathbb{F}_{2}\)._

Proof. Let \(\widetilde{\mathbf{U}}\leftarrow\left[\widetilde{\mathbf{U}^{(0)}}|\ldots|\widetilde{\mathbf{U}^{(\ell)}}\right]\) in Algorithm 6.
Let \(\widetilde{\mathbf{M}}\) be the indicator matrix that selects a row of \(\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}=\widetilde{\mathbf{U}}\mathbf{V}^{\prime}\) to match to each row of \(\mathbf{A}\), so that by the optimality of \(\mathbf{U}^{\prime}\),
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\leq\|\widetilde{\mathbf{M}}\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\mathbf{A}\|_{F}^{2}.\]
Since \(\mathbf{V}\) is a set of \(k\) points in \(\{0,1\}^{d}\) and each row \(\mathbf{U}_{i}\) of \(\mathbf{U}\) induces one of at most \(2^{k}\) possible points \(\mathbf{U}_{i}\mathbf{V}\in\{0,1\}^{d}\), the loss \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\) is the objective value of a constrained \(2^{k}\)-means clustering problem, even when all operations are performed over \(\mathbb{F}_{2}\). Similarly, \(\mathbf{V}^{(j)}\) is a set of \(k\) points in \(\{0,1\}^{d}\) for each \(j\in[\ell]\). Each row \(\mathbf{U}_{i}\) of \(\mathbf{U}\) induces one of at most \(2^{k}\) possible points \(\mathbf{U}_{i}\mathbf{V}^{(j)}\in\{0,1\}^{d}\) for a fixed \(j\in[\ell]\), so that \(\|\mathbf{U}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\) is the objective value of a constrained \(2^{k}\ell\)-means clustering problem, even when all operations are performed over \(\mathbb{F}_{2}\). Hence by the choice of \(t\) in Theorem 2.6, it follows that \(\widetilde{\mathbf{A}}\) is a strong coreset, and so
\[\|\widetilde{\mathbf{M}}\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\|\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\widetilde{\mathbf{A}}\|_{F}^{2}.\]
We decompose the rows of \(\widetilde{\mathbf{A}}\) into \(\mathbf{G}^{(0)},\ldots,\mathbf{G}^{(\ell)}\) for \(\ell=\mathcal{O}\left(\frac{\log n}{\varepsilon}\right)\). Let \(G_{i}\) be the corresponding indices in \([n]\), so that \(j\in G_{i}\) if and only if the row \(\mathbf{A}_{j}\) appears in \(\mathbf{G}^{(i)}\). Then we have
\[\|\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\widetilde{\mathbf{A}}\|_{F}^{2}=\sum_{i\in[\ell]}\sum_{j\in G_{i}}\|\mathbf{U}_{j}^{\prime}\mathbf{V}^{\prime}-\widetilde{\mathbf{A}_{j}}\|_{F}^{2}.\]
Since each row in \(G_{i}\) is repeated a number of times in \([(1+\varepsilon)^{i},(1+\varepsilon)^{i+1})\),
Therefore, \[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2} \leq(1+\varepsilon)^{3}\sum_{i\in[d]}\min_{\mathbf{U}^{(i)}\in \{0,1\}^{n\times k},\mathbf{V}^{(i)}\in\{0,1\}^{k\times d}}\|\mathbf{U}^{(i) }\mathbf{V}^{(i)}-\mathbf{G}^{(i)}\|_{F}^{2}\] \[\leq(1+\varepsilon)^{3}\min_{\mathbf{U}\in\{0,1\}^{n\times k}, \mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\widetilde{\mathbf{ A}}\|_{F}^{2}.\] Let \(\mathbf{U}^{*}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}^{*}\in\{0,1\}^{k\times d}\) such that \[\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}=\min_{\mathbf{U}\in\{0,1 \}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}- \mathbf{A}\|_{F}^{2},\] where all operations are performed in \(\mathbb{F}_{2}\). Let \(\mathbf{M}^{*}\) be the indicator matrix that selects a row of \(\mathbf{U}^{*}\mathbf{V}^{*}\) to match to each row of \(\widetilde{\mathbf{A}}\), so that by Lemma2.4, \[\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\| \mathbf{U}\mathbf{V}-\widetilde{\mathbf{A}}\|_{F}^{2}\leq(1+\varepsilon)\| \mathbf{M}^{*}\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2}.\] Then by the choice of \(t\) in Theorem2.6 so that \(\widetilde{\mathbf{A}}\) is a strong coreset of \(\mathbf{A}\), \[\|\mathbf{M}^{*}\mathbf{U}^{*}\mathbf{V}^{*}-\widetilde{\mathbf{A}}\|_{F}^{2} \leq(1+\varepsilon)\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}.\] Therefore, we have \[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\leq(1+ \varepsilon)^{5}\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{F}^{2}\] and the desired claim then follows from rescaling \(\varepsilon\). It remains to analyze the runtime of Algorithm6. **Lemma 3.3**.: Algorithm6 _uses \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) runtime._ Proof.: By Theorem2.6, we have that Algorithm6 uses \(\mathcal{O}\left(nd^{2}+n^{2}d+\frac{nkd}{\varepsilon^{2}}+\frac{nk^{2}}{ \varepsilon^{2}}\right)\) time to compute \(\widetilde{\mathbf{A}}\in\{0,1\}^{N\times d}\) with \(N=\operatorname{poly}(n)\). By Lemma3.1, it takes \(d\cdot(2^{k})^{\operatorname{poly}(k/eps)}\) time to compute \(\widetilde{\mathbf{U}^{(i)}},\widetilde{\mathbf{V}^{(i)}}\) for each \(i\in[\ell]\) for \(\ell=\mathcal{O}\left(\frac{\log n}{\varepsilon}\right)\). Hence, it takes \(2^{\operatorname{poly}(k/eps)}\operatorname{poly}(n,d)\) runtime to compute \(\widetilde{\mathbf{U}}\) and \(\widetilde{\mathbf{V}}\). Finally, computing \(\mathbf{U}^{\prime}\) via Algorithm5 takes \(\mathcal{O}\left(2^{k^{\prime}}n\right)\) time after enumerating through all possible \(2^{k}\ell\) binary vectors for each row of \(\mathbf{U}^{\prime}\). Therefore, the total runtime of Algorithm4 is \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\). 
By Lemma 3.2 and Lemma 3.3, we thus have:

**Theorem 3.4**. _There exists an algorithm that uses \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) runtime and with probability at least \(\frac{2}{3}\), outputs \(\mathbf{U}^{\prime}\in\{0,1\}^{n\times k^{\prime}}\) and \(\mathbf{V}^{\prime}\in\{0,1\}^{k^{\prime}\times d}\) such that_
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2},\]
_where \(k^{\prime}=\mathcal{O}\left(\frac{k\log k}{\varepsilon}\right)\)._

## 4 \(L_{p}\) Low-Rank Approximation

In this section, we present a \((1+\varepsilon)\)-approximation algorithm for binary low-rank approximation with \(L_{p}\) loss, where the goal is to find matrices \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\) to minimize \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{p}^{p}\). We would like to use the same approach as in Section 2, where we first compute a weighted matrix \(\widetilde{\mathbf{A}}\) from a strong coreset for \(\mathbf{A}\), and then we make guesses for the matrices \(\mathbf{S}\mathbf{U}^{*}\) and \(\mathbf{S}\mathbf{A}\) and solve for \(\min_{\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{S}\mathbf{U}^{*}\mathbf{V}-\mathbf{S}\mathbf{A}\|_{p}^{p}\), while ensuring there are not too many possibilities for the matrices \(\mathbf{S}\mathbf{U}^{*}\) and \(\mathbf{S}\mathbf{A}\). Thus, to adapt this approach to \(L_{p}\) loss, we first require the following strong coreset construction for discrete metrics:

**Theorem 4.1** (Theorem 1 in [13]). _Let \(X\subset\mathbb{R}^{d}\) be a subset of \(n\) points, \(\varepsilon\in(0,1)\) be an accuracy parameter, \(p\geq 1\) be a constant, and let_
\[t=\mathcal{O}\left(\min(\varepsilon^{-2}+\varepsilon^{-p},k\varepsilon^{-2})\cdot k\log n\right).\]
_There exists an algorithm that uses \(\operatorname{poly}(n,d,k)\) runtime and outputs a set of \(t\) weighted points that is a strong \(\varepsilon\)-coreset for \(k\)-clustering on discrete \(L_{p}\) metrics with probability at least \(0.99\). Moreover, each point has an integer weight that is at most \(\operatorname{poly}(n)\)._

For Frobenius error, we crucially required the affine embedding property that
\[(1-\varepsilon)\|\mathbf{U}^{*}\mathbf{V}-\mathbf{A}\|_{F}^{2}\leq\|\mathbf{S}\mathbf{U}^{*}\mathbf{V}-\mathbf{S}\mathbf{A}\|_{F}^{2}\leq(1+\varepsilon)\|\mathbf{U}^{*}\mathbf{V}-\mathbf{A}\|_{F}^{2}\]
for all \(\mathbf{V}\in\{0,1\}^{k\times d}\). Unfortunately, it is not known whether there exists an efficient sampling-based affine embedding for \(L_{p}\) loss. We instead invoke the coreset construction of Theorem 4.1 on the rows and the columns, so that \(\widetilde{\mathbf{A}}\) has a small number of distinct rows and columns. We again use the idea from Section 3 to partition the rows of \(\widetilde{\mathbf{A}}\) into groups based on their frequency, but now we further partition the groups based on the frequency of the columns. It then remains to solve BMF with \(L_{p}\) loss on the partition, each part of which has a small number of rows and columns. Since the contribution of each row toward the overall loss is small (because each part has a small number of columns), it turns out that there exists a matrix that samples \(\operatorname{poly}(k/\varepsilon)\) rows of each partition that finally achieves the desired affine embedding.
Thus, we can solve the problem on each partition, pad the factors accordingly, and build the bicriteria factors as in the binary field case. The algorithm appears in full in Algorithm 9, with subroutines appearing in Algorithm 7 and Algorithm 8.

```
Input:  Ã ∈ {0,1}^(N×d), V^(1), …, V^(ℓ) ∈ {0,1}^(k×d)
Output: U′ = argmin over U ∈ {0,1}^(N×ℓk) of ‖U (V^(1) ∘ … ∘ V^(ℓ)) − Ã‖_p^p, where each row of U is restricted to one nonzero block of k coordinates
1: for i = 1 to N do
2:   Set (U′_i, j′) = argmin over U_i ∈ {0,1}^(1×k), j ∈ [ℓ] of ‖U_i V^(j) − Ã_i‖_p^p
                                        ▷ Enumerate over all 2^k possible binary vectors and all ℓ indices
3:   Pad U′_i to length ℓk, placed as the j′-th block of k coordinates
4: return U′ = U′_1 ∘ … ∘ U′_N
```
**Algorithm 7** Algorithm for computing optimal \(\mathbf{U}\) given \(\mathbf{V}^{(1)},\ldots,\mathbf{V}^{(\ell)}\)

We first justify the correctness of Algorithm 8 by showing the existence of an \(L_{0}\) sampling matrix \(\mathbf{S}\) that achieves a subspace embedding for binary inputs.

```
Input:  Ã ∈ {0,1}^(N×D) with at most t distinct rows and r distinct columns
Output: U′, V′ with ‖U′V′ − Ã‖_p ≤ (1+ε) · min over U ∈ {0,1}^(N×k), V ∈ {0,1}^(k×D) of ‖UV − Ã‖_p
1: 𝒱 ← ∅
2: for each guess of SU* and SÃ, where S is an L_0 sampling matrix with m = O((k^(p+1)/ε²) log r) rows whose weights are powers of two up to poly(N) do
3:   𝒱 ← 𝒱 ∪ argmin over V ∈ {0,1}^(k×D) of ‖SU*V − SÃ‖_p^p      ▷ Algorithm 1 with L_p loss
4: for each V ∈ 𝒱 do
5:   Let U_V = argmin over U ∈ {0,1}^(N×k) of ‖UV − Ã‖_p^p       ▷ Algorithm 2 with L_p loss
6: V′ ← argmin over V ∈ 𝒱 of ‖S U_V V − SÃ‖_p^p
7: U′ ← U_(V′)
8: return (U′, V′)
```
**Algorithm 8** Low-rank approximation for matrix \(\widetilde{\mathbf{A}}\) with \(t\) distinct rows and \(r\) distinct columns

```
Input:  A ∈ {0,1}^(n×d), rank parameter k, accuracy parameter ε > 0
Output: U′ ∈ {0,1}^(n×k′), V′ ∈ {0,1}^(k′×d) satisfying
        ‖U′V′ − A‖_p^p ≤ (1+ε) · min over U ∈ {0,1}^(n×k), V ∈ {0,1}^(k×d) of ‖UV − A‖_p^p
1: t ← O(min(ε^(−2) + ε^(−p), k·ε^(−2)) · k log n)               ▷ Theorem 4.1
2: ℓ ← O(log n / ε), k′ ← ℓk
3: Compute a strong coreset C for 2^k-means clustering of A, with t rows and total weight N = poly(n)
4: Compute a strong coreset C′ of the columns of C via 2^k-means clustering, so that C′ has t distinct rows and columns, with total weights N, D = poly(n)
5: Let Ã ∈ {0,1}^(N×D) be the matrix representation of C′, where weighted points are duplicated appropriately
6: For i ∈ [ℓ], let G^(i) be the group of rows (removing multiplicity) of Ã with frequency in [(1+ε)^i, (1+ε)^(i+1))
7: For i, j ∈ [ℓ], let G^(i,j) be the group of columns (removing multiplicity) of G^(i) with frequency in [(1+ε)^j, (1+ε)^(j+1))
8: Compute the low-rank minimizers (Ũ^(i,j), Ṽ^(i,j)) on input G^(i,j) using Algorithm 8, padded to dimensions n×k and k×D, respectively
9: Ũ ← [Ũ^(0,0) | Ũ^(1,0) | … | Ũ^(ℓ,ℓ)], Ṽ ← Ṽ^(0,0) ∘ Ṽ^(1,0) ∘ … ∘ Ṽ^(ℓ,ℓ)
10: Use Algorithm 7 with Ũ^(0,0), Ũ^(1,0), …, Ũ^(ℓ,ℓ) and C to find V′
11: Use V′ and A to find U′, i.e., Algorithm 2 with dimension k′ and L_p loss
12: return (U′, V′)
```
**Algorithm 9** Bicriteria low-rank approximation with \(L_{p}\) loss for matrix \(\mathbf{A}\)

**Lemma 4.2**. _Given matrices \(\mathbf{A}\in\{0,1\}^{n\times k}\) and \(\mathbf{B}\in\{0,1\}^{n\times r}\), there exists a matrix \(\mathbf{S}\in\mathbb{R}^{m\times n}\) with \(m=\mathcal{O}\left(\frac{k^{p+1}}{\varepsilon^{2}}\log r\right)\) such that with probability at least \(0.99\), we have that simultaneously for all \(\mathbf{X}\in\{0,1\}^{k\times r}\),_
\[(1-\varepsilon)\|\mathbf{A}\mathbf{X}-\mathbf{B}\|_{p}^{p}\leq\|\mathbf{SA}\mathbf{X}-\mathbf{S}\mathbf{B}\|_{p}^{p}\leq(1+\varepsilon)\|\mathbf{A}\mathbf{X}-\mathbf{B}\|_{p}^{p}.\]

Proof. Let \(\mathbf{M}\in\{0,1,\ldots,k\}^{n\times 1}\) be an arbitrary matrix and let \(S\) be a set that contains the nonzero rows of \(\mathbf{M}\), padded with zero rows so that its cardinality is a power of two. That is, \(|S|=2^{i}\) for some integer \(i\geq 0\). Let \(\mathbf{z}\) be a uniformly random element of \(S\), so that we have
\[\mathbb{E}\left[2^{i}\cdot\|\mathbf{z}\|_{p}^{p}\right]=\|\mathbf{M}\|_{p}^{p}.\]
Similarly, since each nonzero row contributes at least \(1\) and at most \(k^{p}\) to \(\|\mathbf{M}\|_{p}^{p}\), so that in particular \(2^{i}\leq 2\|\mathbf{M}\|_{p}^{p}\), we have
\[\operatorname{Var}(2^{i}\cdot\|\mathbf{z}\|_{p}^{p})\leq 2^{i}k^{p}\|\mathbf{M}\|_{p}^{p}\leq 2k^{p}\left(\|\mathbf{M}\|_{p}^{p}\right)^{2}.\]
Hence if we take the mean of \(\mathcal{O}\left(\frac{k^{p}}{\varepsilon^{2}}\right)\) independent estimators, we have that with probability at least \(0.99\),
\[(1-\varepsilon)\|\mathbf{M}\|_{p}^{p}\leq\|\mathbf{S}\mathbf{M}\|_{p}^{p}\leq(1+\varepsilon)\|\mathbf{M}\|_{p}^{p}.\]
We can further improve the probability of success to \(1-\delta\) for \(\delta\in(0,1)\) by repeating \(\mathcal{O}\left(\log\frac{1}{\delta}\right)\) times.
By setting \(\mathbf{M}=\mathbf{A}\mathbf{x}-\mathbf{B}^{(i)}\) for fixed \(\mathbf{A}\in\{0,1\}^{n\times k}\), \(\mathbf{x}\in\{0,1\}^{k}\), and \(\mathbf{B}\in\{0,1\}^{n\times r}\) with \(i\in[r]\), we have that the sampling matrix gives a \((1+\varepsilon)\)-approximation to \(\|\mathbf{A}\mathbf{x}-\mathbf{B}^{(i)}\|_{p}^{p}\). The result then follows from setting \(\delta=\frac{1}{2^{k}r}\), taking a union bound over all \(\mathbf{x}\in\{0,1\}^{k}\), and then a union bound over all \(i\in[r]\).

We then justify the correctness of Algorithm 9.

**Lemma 4.3**. _With probability at least \(0.95\), Algorithm 9 returns \(\mathbf{U}^{\prime},\mathbf{V}^{\prime}\) such that_
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{p}^{p}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{p}^{p}.\]

Proof. Let \(\mathbf{M}_{1}\) and \(\mathbf{M}_{2}\) be the sampling and rescaling matrices used to acquire \(\widetilde{\mathbf{A}}\in\mathbb{R}^{N\times D}\), so that by the optimality of \(\mathbf{U}^{\prime}\),
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{p}^{p}\leq\|\mathbf{M}_{1}\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}\mathbf{M}_{2}-\mathbf{A}\|_{p}^{p}.\]
Observe that \(\mathbf{V}\) is a set of \(k\) points in \(\{0,1\}^{d}\). Thus, each row \(\mathbf{U}_{i}\) of \(\mathbf{U}\) induces one of at most \(2^{k}\) possible points \(\mathbf{U}_{i}\mathbf{V}\in\{0,1\}^{d}\). Hence, \(\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{p}^{p}\) is the objective value of a constrained \(2^{k}\)-clustering problem under the \(L_{p}\) metric. Similarly, since \(\mathbf{V}^{(j)}\) is a set of \(k\) points in \(\{0,1\}^{d}\) for each \(j\in[\ell]\), then each row \(\mathbf{U}_{i}\) of \(\mathbf{U}\) induces one of at most \(2^{k}\) possible points \(\mathbf{U}_{i}\mathbf{V}^{(j)}\in\{0,1\}^{d}\) for a fixed \(j\in[\ell]\). Therefore, \(\|\mathbf{U}\mathbf{V}^{\prime}-\mathbf{A}\|_{p}^{p}\) is the objective value of a constrained \(2^{k}\ell\)-clustering problem under the \(L_{p}\) metric. By the choice of \(t\) in Theorem 4.1, \(\widetilde{\mathbf{A}}\) is a strong coreset, and so
\[\|\mathbf{M}_{1}\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}\mathbf{M}_{2}-\mathbf{A}\|_{p}^{p}\leq(1+\varepsilon)\|\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\widetilde{\mathbf{A}}\|_{p}^{p}.\]
We decompose the rows of \(\widetilde{\mathbf{A}}\) into groups \(\mathbf{G}^{(0)},\ldots,\mathbf{G}^{(\ell)}\) for \(\ell=\mathcal{O}\left(\frac{\log n}{\varepsilon}\right)\). For each group \(\mathbf{G}^{(i)}\), we decompose the columns of \(\mathbf{G}^{(i)}\) into groups \(\mathbf{G}^{(i,0)},\ldots,\mathbf{G}^{(i,\ell)}\) for \(\ell=\mathcal{O}\left(\frac{\log n}{\varepsilon}\right)\). Let \(G_{i}\) be the indices in \([n]\) corresponding to the rows in \(\mathbf{G}^{(i)}\) and let \(G_{i,j}\) be the indices in \([d]\) corresponding to the columns in \(\mathbf{G}^{(i,j)}\).
Then
\[\|\widetilde{\mathbf{U}}\widetilde{\mathbf{V}}-\widetilde{\mathbf{A}}\|_{p}^{p}=\sum_{i\in[\ell]}\sum_{a\in G_{i}}\sum_{j\in[\ell]}\sum_{b\in G_{i,j}}\left|(\widetilde{\mathbf{U}}\widetilde{\mathbf{V}})_{a,b}-\widetilde{\mathbf{A}}_{a,b}\right|^{p}.\]
Since each row in \(G_{i}\) is repeated a number of times in \([(1+\varepsilon)^{i},(1+\varepsilon)^{i+1})\) and each column in \(G_{i,j}\) is repeated a number of times in \([(1+\varepsilon)^{j},(1+\varepsilon)^{j+1})\), then
\[\sum_{a\in G_{i}}\sum_{b\in G_{i,j}}\left|(\widetilde{\mathbf{U}}\widetilde{\mathbf{V}})_{a,b}-\widetilde{\mathbf{A}}_{a,b}\right|^{p}\leq(1+\varepsilon)^{3}\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\sum_{a\in G_{i}}\sum_{b\in G_{i,j}}\left|(\mathbf{U}\mathbf{V})_{a,b}-\widetilde{\mathbf{A}}_{a,b}\right|^{p},\]
where the first factor of \((1+\varepsilon)\) is from the \((1+\varepsilon)\)-approximation guarantee of the factors computed on \(\mathbf{G}^{(i,j)}\), and the second and third factors of \((1+\varepsilon)\) are from the multiplicities of each row and each column in \(\mathbf{G}^{(i,j)}\) varying by at most a \((1+\varepsilon)\) factor. Therefore,
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{p}^{p}\leq(1+\varepsilon)\sum_{i\in[\ell]}\sum_{a\in G_{i}}\sum_{j\in[\ell]}\sum_{b\in G_{i,j}}\left|(\widetilde{\mathbf{U}}\widetilde{\mathbf{V}})_{a,b}-\widetilde{\mathbf{A}}_{a,b}\right|^{p}\]
\[\leq(1+\varepsilon)^{4}\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\widetilde{\mathbf{A}}\|_{p}^{p}.\]
Let \(\mathbf{U}^{*}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}^{*}\in\{0,1\}^{k\times d}\) be minimizers to the binary \(L_{p}\) low-rank approximation problem, so that
\[\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{p}^{p}=\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{p}^{p}.\]
Let \(\mathbf{M}_{3}\) and \(\mathbf{M}_{4}\) be the indicator matrices that select rows and columns of \(\mathbf{U}^{*}\mathbf{V}^{*}\) to match to each row and column of \(\widetilde{\mathbf{A}}\), so that by the argument of Lemma 2.4,
\[\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\widetilde{\mathbf{A}}\|_{p}^{p}\leq(1+\varepsilon)\|\mathbf{M}_{3}\mathbf{U}^{*}\mathbf{V}^{*}\mathbf{M}_{4}-\widetilde{\mathbf{A}}\|_{p}^{p}.\]
Then by the choice of \(t\) in Theorem 4.1, so that \(\widetilde{\mathbf{A}}\) is a strong coreset of \(\mathbf{A}\),
\[\|\mathbf{M}_{3}\mathbf{U}^{*}\mathbf{V}^{*}\mathbf{M}_{4}-\widetilde{\mathbf{A}}\|_{p}^{p}\leq(1+\varepsilon)\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{p}^{p}.\]
Therefore,
\[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{p}^{p}\leq(1+\varepsilon)^{6}\|\mathbf{U}^{*}\mathbf{V}^{*}-\mathbf{A}\|_{p}^{p},\]
and the desired claim then follows from rescaling \(\varepsilon\).

We now analyze the runtime of Algorithm 9.

**Lemma 4.4**. _For any constant \(p\geq 1\), Algorithm 9 uses \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) runtime._

Proof. By Theorem 4.1, we have that Algorithm 9 uses \(2^{\mathcal{O}(k)}\cdot\operatorname{poly}(n,d)\) time to compute \(\widetilde{\mathbf{A}}\in\{0,1\}^{N\times D}\) with \(N,D=\operatorname{poly}(n)\). We now consider the time to compute \(\widetilde{\mathbf{U}^{(i,j)}},\widetilde{\mathbf{V}^{(i,j)}}\) for each \(i,j\in[\ell]\) for \(\ell=\mathcal{O}\left(\frac{\log n}{\varepsilon}\right)\).
For each \(i,j\), we make guesses for \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\) in Algorithm 8. Since \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\) have \(m=\mathcal{O}\left(\frac{k^{p+1}\log r}{\varepsilon^{2}}\right)\) rows, there are \(\binom{t}{m}\) possible choices for \(\mathbf{SU}^{*}\) and \(\binom{t}{m}\) choices for \(\mathbf{SA}\), where \(t=\frac{2^{k}\log n}{\varepsilon^{p}}\). Hence, there are \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) possible guesses for \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\). For each guess of \(\mathbf{SU}^{*}\) and \(\mathbf{SA}\), Algorithm 8 iterates through the columns of \(\widetilde{\mathbf{V}^{(i,j)}}\), which uses \(2^{\mathcal{O}(k)}\cdot\operatorname{poly}(n,d)\) time. Similarly, the computation of \(\widetilde{\mathbf{U}^{(i,j)}}\), \(\mathbf{U}^{\prime}\), and \(\mathbf{V}^{\prime}\) all take \(2^{\mathcal{O}(k)}\cdot\operatorname{poly}(n,d)\) time. Therefore, the total runtime of Algorithm 9 is \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\). By Lemma 4.3 and Lemma 4.4, we thus have: **Theorem 4.5**.: _For any constant \(p\geq 1\), there exists an algorithm that uses \(2^{\operatorname{poly}(k/\varepsilon)}\operatorname{poly}(n,d)\) runtime and with probability at least \(\frac{2}{3}\), outputs \(\mathbf{U}^{\prime}\in\{0,1\}^{n\times k^{\prime}}\) and \(\mathbf{V}^{\prime}\in\{0,1\}^{k^{\prime}\times d}\) such that_ \[\|\mathbf{U}^{\prime}\mathbf{V}^{\prime}-\mathbf{A}\|_{p}^{p}\leq(1+\varepsilon)\min_{\mathbf{U}\in\{0,1\}^{n\times k},\mathbf{V}\in\{0,1\}^{k\times d}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{p}^{p},\] _where \(k^{\prime}=\mathcal{O}\left(\frac{k\log^{2}k}{\varepsilon^{2}}\right)\)._ We note here that the \(\operatorname{poly}(k/\varepsilon)\) term in the exponent hides a \(k^{p}\) factor, as we assume \(p\) to be a (small) constant.

## 5 Applications to Big Data Models

This section describes how we can generalize our techniques to big data models such as the streaming and distributed models. Algorithmic modularization. To adapt our algorithm to the streaming model or the distributed model, we first present a high-level modularization of our algorithm across all applications, i.e., Frobenius binary low-rank approximation, binary low-rank approximation over \(\mathbb{F}_{2}\), and binary low-rank approximation with \(L_{p}\) loss. We are given the input matrix \(\mathbf{A}\in\{0,1\}^{n\times d}\) in each of these settings. We first construct a weighted coreset \(\widetilde{\mathbf{A}}\) for \(\mathbf{A}\). We then perform a number of operations on \(\widetilde{\mathbf{A}}\) to obtain low-rank factors \(\widetilde{\mathbf{U}}\) and \(\widetilde{\mathbf{V}}\) for \(\widetilde{\mathbf{A}}\). Setting \(\mathbf{V}^{\prime}=\widetilde{\mathbf{V}}\), our algorithms finally use \(\mathbf{A}\) and \(\mathbf{V}^{\prime}\) to construct the optimal factor \(\mathbf{U}^{\prime}\) to match \(\mathbf{V}^{\prime}\).

### Streaming Model

We can adapt our approach to the streaming model, where either the rows or columns of the input matrix arrive sequentially. For brevity, we shall only discuss the setting where the rows of the input matrix arrive sequentially; the setting where the columns of the input matrix arrive sequentially is symmetric. Formal streaming model definition. We consider the two-pass row-arrival variant of the streaming model. In this setting, the rank parameter \(k\) and the accuracy parameter \(\varepsilon>0\) are given to the algorithm before the data stream.
The input matrix \(\mathbf{A}\in\{0,1\}^{n\times d}\) is then defined through the sequence of row-arrivals, \(\mathbf{A}_{1},\ldots,\mathbf{A}_{n}\in\{0,1\}^{d}\), so that the \(i\)-th row that arrives in the data stream is \(\mathbf{A}_{i}\). The algorithm passes over the data twice so that in the first pass, it can store some sketch \(S\) that uses space sublinear in the input size, i.e., using \(o(nd)\) space. After the first pass, the algorithm can perform some post-processing on \(S\) and then must output factors \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\) after being given another pass over the data, i.e., the rows \(\mathbf{A}_{1},\ldots,\mathbf{A}_{n}\in\{0,1\}^{d}\). Two-pass streaming algorithm. To adapt our algorithm to the two-pass streaming model, recall the high-level modularization of our algorithm described at the beginning of Section 5. The first step is constructing a coreset \(\widetilde{\mathbf{A}}\) of \(\mathbf{A}\). Whereas our previous coreset constructions were offline, we now require a streaming algorithm to produce the coreset \(\widetilde{\mathbf{A}}\). To that end, we use the following well-known merge-and-reduce paradigm for converting an offline coreset construction to a coreset construction in the streaming model. **Theorem 5.1**.: _Suppose there exists an algorithm that, with probability \(1-\frac{1}{\mathrm{poly}(n)}\), produces an offline coreset construction that uses \(f(n,\varepsilon)\) space, suppressing dependencies on other input parameters, such as \(k\) and \(p\). Then there exists a one-pass streaming algorithm that, with probability \(1-\frac{1}{\mathrm{poly}(n)}\), produces a coreset that uses \(f(n,\varepsilon^{\prime})\cdot\mathcal{O}\left(\log n\right)\) space, where \(\varepsilon^{\prime}=\frac{\varepsilon}{\log n}\)._ In the first pass of the stream, we can use Theorem 5.1 to construct a strong coreset \(C\) of \(\mathbf{A}\) with accuracy \(\mathcal{O}\left(\varepsilon\right)\). However, \(C\) will have \(2^{\mathrm{poly}(k)}\cdot\mathrm{poly}\left(\frac{1}{\varepsilon},\log n\right)\) rows, and thus, we cannot immediately duplicate the rows of \(C\) to form \(\widetilde{\mathbf{A}}\) because we cannot have \(\log n\) dependencies in the number of rows of \(\widetilde{\mathbf{A}}\). After the first pass of the stream, we further apply the respective offline coreset construction, i.e., Theorem 2.6 or Theorem 4.1, to \(C\) to obtain a coreset \(C^{\prime}\) with accuracy \(\varepsilon\) and a number of rows independent of \(\log n\). We then use \(C^{\prime}\) to form \(\widetilde{\mathbf{A}}\) and perform a number of operations on \(\widetilde{\mathbf{A}}\) to obtain low-rank factors \(\widetilde{\mathbf{U}}\) and \(\widetilde{\mathbf{V}}\) for \(\widetilde{\mathbf{A}}\). Setting \(\mathbf{V}^{\prime}=\widetilde{\mathbf{V}}\), we can finally use the second pass of the data stream over \(\mathbf{A}\), along with \(\mathbf{V}^{\prime}\), to construct the optimal factor \(\mathbf{U}^{\prime}\) to match \(\mathbf{V}^{\prime}\). Thus the two-pass streaming algorithm uses \(2^{\mathrm{poly}(k)}\cdot d\cdot\mathrm{poly}\left(\frac{1}{\varepsilon},\log n\right)\) total space in the row-arrival model. For the column-arrival model, the two-pass streaming algorithm uses \(2^{\mathrm{poly}(k)}\cdot n\cdot\mathrm{poly}\left(\frac{1}{\varepsilon},\log d\right)\) total space.
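For intuition, the merge-and-reduce paradigm of Theorem 5.1 is simple enough to sketch. The following minimal Python sketch treats the offline construction as a black box; the names (`stream_coreset`, `offline_coreset`, `block_size`) are illustrative, and the per-level accuracy rescaling \(\varepsilon^{\prime}=\varepsilon/\log n\) is left implicit:

```python
import numpy as np

def stream_coreset(row_stream, offline_coreset, block_size=1024):
    """One-pass merge-and-reduce over a row stream (cf. Theorem 5.1).

    offline_coreset(points, weights) -> (points', weights') is an assumed
    black-box offline weighted-coreset construction.
    """
    buckets = {}  # tree level -> at most one weighted coreset per level

    def push(level, pts, wts):
        # Two coresets of the same level are merged and reduced into one
        # coreset at the next level, so only O(log n) buckets ever survive.
        while level in buckets:
            p2, w2 = buckets.pop(level)
            pts, wts = offline_coreset(np.vstack([pts, p2]),
                                       np.concatenate([wts, w2]))
            level += 1
        buckets[level] = (pts, wts)

    block = []
    for row in row_stream:
        block.append(row)
        if len(block) == block_size:
            pts = np.asarray(block, dtype=float)
            push(0, *offline_coreset(pts, np.ones(len(pts))))
            block = []
    if block:  # flush the final partial block
        pts = np.asarray(block, dtype=float)
        push(0, *offline_coreset(pts, np.ones(len(pts))))

    # The union of the surviving buckets is the streaming coreset.
    pts = np.vstack([p for p, _ in buckets.values()])
    wts = np.concatenate([w for _, w in buckets.values()])
    return pts, wts
```

Since two buckets of the same level are always merged immediately, at most one bucket per level survives, which is where the \(\mathcal{O}\left(\log n\right)\) space blow-up in Theorem 5.1 comes from.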
### Distributed Model

Our approach can also be adapted to the distributed model, where the rows or columns of the input matrix are partitioned across multiple users. For brevity, we again discuss the setting where the rows of the input matrix are partitioned; the setting where the columns of the input matrix are partitioned is symmetric. Formal distributed model definition. We consider the two-round distributed model, where the rank parameter \(k\) and the accuracy parameter \(\varepsilon>0\) are known in advance to all users. The input matrix \(\mathbf{A}\in\{0,1\}^{n\times d}\) is then defined arbitrarily through the union of rows, \(\mathbf{A}_{1},\ldots,\mathbf{A}_{n}\in\{0,1\}^{d}\), where each row \(\mathbf{A}_{i}\) may be given to any of \(\gamma\) users. An additional central coordinator sends and receives messages from the users. The protocol is then permitted to use two rounds of communication so that in the first round, the protocol can send \(o(nd)\) bits of communication. The coordinator can process the communication to form some sketch \(S\), perform some post-processing on \(S\), and then request additional information from each user, possibly using \(o(nd)\) communication to specify the information demanded from each user. After the users again use \(o(nd)\) bits of communication in the second round of the protocol, the central coordinator must output factors \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\). Two-round distributed algorithm. To adapt our algorithm to the two-round distributed model, again recall the high-level modularization of our algorithm described at the beginning of Section 5. The first step is constructing a coreset \(\widetilde{\mathbf{A}}\) of \(\mathbf{A}\). Whereas our previous coreset constructions were offline, we now require a distributed algorithm to produce the coreset \(\widetilde{\mathbf{A}}\). To that end, we request that each of the \(\gamma\) users send a coreset with accuracy \(\mathcal{O}\left(\varepsilon\right)\) of their respective rows. Note that each user can construct the coreset locally without requiring any communication since the coreset is only a summary of the rows held by the user. Thus the total communication in the first round is just the offline coreset size times the number of players, i.e., \(\gamma\cdot 2^{\mathrm{poly}(k)}\cdot\mathrm{poly}\left(\frac{1}{\varepsilon},\log n\right)\) rows. Given the union \(C\) of the coresets sent by all users, the central coordinator then constructs a coreset \(C^{\prime}\) of \(\mathbf{A}\) with accuracy \(\varepsilon\), again using an offline coreset construction. The coordinator then uses \(C^{\prime}\) to form \(\widetilde{\mathbf{A}}\) and performs the required operations on \(\widetilde{\mathbf{A}}\) to obtain low-rank factors \(\widetilde{\mathbf{U}}\) and \(\widetilde{\mathbf{V}}\) for \(\widetilde{\mathbf{A}}\). Setting \(\mathbf{V}^{\prime}=\widetilde{\mathbf{V}}\), the coordinator can then send \(\mathbf{V}^{\prime}\) to all players, who use \(\mathbf{V}^{\prime}\) and their local rows of \(\mathbf{A}\) to collectively construct \(\mathbf{U}^{\prime}\). The users then send the rows of \(\mathbf{U}^{\prime}\) corresponding to the rows of \(\mathbf{A}\) local to the user back to the central coordinator, who can then construct \(\mathbf{U}^{\prime}\). Thus the second round of the protocol uses \(\tilde{\mathcal{O}}\left(nk+kd\right)\cdot\mathrm{poly}\left(\frac{1}{\varepsilon}\right)\) bits of communication. Hence, the total communication of the protocol is \(d\gamma\cdot 2^{\mathrm{poly}(k)}\cdot\mathrm{poly}\left(\frac{1}{\varepsilon},\log n\right)+\tilde{\mathcal{O}}\left(nk+kd\right)\cdot\mathrm{poly}\left(\frac{1}{\varepsilon}\right)\) in the two-round row-partitioned distributed model. For the two-round column-partitioned distributed model, the total communication of the protocol is \(n\gamma\cdot 2^{\mathrm{poly}(k)}\cdot\mathrm{poly}\left(\frac{1}{\varepsilon},\log d\right)+\tilde{\mathcal{O}}\left(nk+kd\right)\cdot\mathrm{poly}\left(\frac{1}{\varepsilon}\right)\).
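The protocol itself is short enough to simulate directly. In the sketch below, `offline_coreset` and `solve_bmf_on_coreset` are assumed black boxes standing in for the coreset construction and the coreset-side factorization; only the values passed between the users and the coordinator correspond to actual communication:

```python
import numpy as np

def two_round_bmf(user_rows, offline_coreset, solve_bmf_on_coreset, k):
    """Simulation of the two-round row-partitioned protocol.

    user_rows is a list of per-user row blocks of A; the black boxes
    offline_coreset and solve_bmf_on_coreset are illustrative assumptions.
    """
    # Round 1: every user sends a weighted coreset of its local rows.
    messages = [offline_coreset(np.asarray(rows, dtype=float),
                                np.ones(len(rows)))
                for rows in user_rows]

    # Coordinator: merge the users' coresets, compress once more to C',
    # and solve the factorization on C' to obtain the binary factor V'.
    pts = np.vstack([p for p, _ in messages])
    wts = np.concatenate([w for _, w in messages])
    pts, wts = offline_coreset(pts, wts)
    V = solve_bmf_on_coreset(pts, wts, k)      # shape (k, d), entries in {0, 1}

    # Round 2: broadcast V'; each user solves for its own rows of U' by
    # enumerating all 2^k binary combinations of the rows of V'.
    cand = (np.arange(2**k)[:, None] >> np.arange(k)) & 1   # (2^k, k)
    recon = cand @ V          # for the F_2 variant, reduce this product mod 2
    U_parts = []
    for rows in user_rows:
        rows = np.asarray(rows, dtype=float)
        cost = ((rows[:, None, :] - recon[None, :, :])**2).sum(axis=2)
        U_parts.append(cand[cost.argmin(axis=1)])
    return np.vstack(U_parts), V
```

Note that round two is embarrassingly parallel: each user needs only \(\mathbf{V}^{\prime}\) and its own rows, which matches the \(\tilde{\mathcal{O}}\left(nk+kd\right)\) communication bound above.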
## 6 Experiments

In this section, we aim to evaluate the feasibility of the algorithmic ideas of our paper against existing algorithms for binary matrix factorization from previous literature. The running time of our full algorithms for BMF is prohibitive, even for small \(k\), so our algorithm will be based on the idea of [13], who only run their algorithms in part, obtaining weaker theoretical guarantees. Indeed, by simply performing \(k\)-means clustering, they obtained a simple algorithm that outperformed more sophisticated heuristics in practice. We perform two main types of experiments, first comparing the algorithm presented in the next section against existing baselines and then showing the feasibility of using coresets in the BMF setting. Baseline and algorithm. We compare several algorithms for binary matrix factorization that have implementations available online, namely the algorithm by Zhang et al. [13], which has been implemented in the NIMFA library [13], the message passing algorithm of Ravanbakhsh et al. [12], as well as our implementation of the algorithm used in the experiments of [13]. We refer to these algorithms as Zh, MP, and kBMF, respectively. We choose the default parameters provided by the implementations. We chose the maximum number of rounds for the iterative methods so that the runtime does not exceed 20 seconds, as all methods besides [13] are iterative. However, in our experiments, the algorithms usually converged to a solution below the maximum number of rounds. We let every algorithm use the matrix operations over the preferred semiring, i.e., Boolean, integer, or and-or matrix multiplication, in order to achieve the best approximation. We additionally found a binary matrix factorization algorithm for sparse matrices based on subgradient descent and random sampling1 that is not covered in the literature. This algorithm was excluded from our experiments as it did not produce binary factors. Specifically, we found that it produces real-valued \(\mathbf{U}\) and \(\mathbf{V}\), and requires binarizing the product \(\mathbf{UV}\) after multiplication, therefore not guaranteeing that the binary matrix is of rank \(k\). Motivated by the idea of partially executing a more complicated algorithm with strong theoretical guarantees, we build upon the idea of finding a \(k\)-means clustering solution as a first approximation and mapping the Steiner points to their closest neighbors in \(\mathbf{A}\), giving us a matrix \(\mathbf{V}\) of \(k\) binary points, and a matrix \(\mathbf{U}\) of assignments of the points of \(\mathbf{A}\) to their nearest neighbors. This solution restricts \(\mathbf{U}\) to have a single non-zero entry per row. Instead of outputting this \(\mathbf{U}\) as [11] did, we solve the minimization problem \(\min_{\mathbf{U}\in\{0,1\}^{n\times k}}\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}^{2}\) exactly at a cost of \(2^{k}\) per row, which is affordable for small \(k\). For a qualitative example of how this step improves the solution quality, see Figure 1. We call this algorithm kBMF+.
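A minimal sketch of kBMF+ as just described, assuming numpy and scikit-learn are available; the function name and defaults are ours:

```python
import numpy as np
from sklearn.cluster import KMeans

def kbmf_plus(A, k, random_state=0):
    """Sketch of kBMF+: k-means, snap centers to binary rows of A,
    then re-solve for U exactly, row by row, over 2^k candidates."""
    A = np.asarray(A, dtype=float)

    # Step 1: k-means on the rows of A; centers are fractional Steiner points.
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(A)
    centers = km.cluster_centers_

    # Step 2: snap each center to its nearest row of A, giving binary V.
    d2 = ((centers[:, None, :] - A[None, :, :])**2).sum(axis=2)   # (k, n)
    V = A[d2.argmin(axis=1)].astype(int)                          # (k, d)

    # Step 3: for each row of A, pick the best of all 2^k binary
    # combinations of rows of V -- cost 2^k per row, fine for small k.
    cand = (np.arange(2**k)[:, None] >> np.arange(k)) & 1         # (2^k, k)
    recon = cand @ V                                              # (2^k, d)
    cost = ((A[:, None, :] - recon[None, :, :])**2).sum(axis=2)   # (n, 2^k)
    U = cand[cost.argmin(axis=1)]                                 # (n, k)
    return U, V
```

The exact per-row step (Step 3) is the only difference from kBMF, which instead keeps the one-hot cluster assignments as \(\mathbf{U}\).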
Using \(k\)-means as the first step in a binary matrix factorization algorithm is well-motivated by the theoretical and experimental results of [11], but does not guarantee a \((1+\varepsilon)\)-approximation. However, as we do not run the full algorithm, we are not guaranteed a \((1+\varepsilon)\)-approximation either way, as, unfortunately, guessing the optimal matrix \(\mathbf{V}\) is very time-consuming: we would first have to solve the sketched problem \(\|\mathbf{S}\widetilde{\mathbf{A}}-\mathbf{SU}\mathbf{V}\|_{F}^{2}\) for all guesses of \(\mathbf{SA}\) and \(\mathbf{SU}\). We implement our algorithm and the one of [11] in Python 3.10 and numpy. For solving \(k\)-means, we use the implementation of Lloyd's algorithm with \(k\)-means++ seeding provided by the scikit-learn library [12]. All experiments were performed on a Linux notebook with a 3.9 GHz 12th generation Intel Core i7 six-core processor with 32 gigabytes of RAM.

Figure 1: A demonstration of the improved approximation of our algorithm over the algorithm used in the experiments of [11]. In the first column, we show the first 50 rows of the Congress dataset, where purple indicates 0 and yellow indicates 1. The next columns show the approximation of [11] and our algorithm's approximation, both with \(k=10\). The second row indicates in yellow the entries in which the respective approximations differ from the original dataset. In our experiments, the number of wrongly reconstructed entries almost halved from the kBMF to the kBMF+ algorithm on this dataset for \(k=10\).

Datasets. We use both real and synthetic data for our experiments. We choose two datasets from the UCI Machine Learning Repository [1], namely the voting record of the 98th Congress, consisting of 435 rows of 16 binary features representing each congressperson's vote on one of 16 bills, and the Thyroid dataset2, of 9371 patient records comprising 31 features. We restricted ourselves to only binary features, leaving us with 21 columns. Finally, we use the ORL dataset of faces, which we binarize using a threshold of 0.33, as in [11]. Footnote 2: [https://www.kaggle.com/datasets/emmanuelfwerr/thyroid-disease-data](https://www.kaggle.com/datasets/emmanuelfwerr/thyroid-disease-data) For our synthetic data, we generate random matrices, where each entry is set to 1 independently with probability \(p\), at two different sparsity levels of \(p\in\{0.1,0.5\}\). Additionally, we generate low-rank matrices by generating \(\mathbf{U}\in\{0,1\}^{n\times k}\) and \(\mathbf{V}\in\{0,1\}^{k\times d}\) and multiplying them together in \(\mathbb{F}_{2}\). We generate \(\mathbf{U}\) and \(\mathbf{V}\) at different sparsity levels of 0.5 and 0.1, for \(k\in\{5,10,15\}\). Finally, we also use these matrices with added noise, where after multiplying, each bit is flipped with probability \(p_{e}\in\{0.01,0.001\}\). We generate 25 matrices of size \(250\times 50\) for each configuration. These classes are named, in order of introduction: full, lr, and noisy. Limitations. We opted to use only binary datasets, thus limiting the available datasets for our experiments.
Because of this, our largest dataset has fewer than 10,000 rows. Our algorithms are practical for these sizes and the parameters \(k\) we have chosen. Investigating the feasibility of algorithms for binary matrix factorization for large datasets may be an interesting direction for future research.

### Comparing Algorithms for BMF

Synthetic data. For each algorithm, Table 2 shows the mean Frobenius norm error (i.e., \(\operatorname{err}_{\mathbf{A}}(\mathbf{U},\mathbf{V})=\|\mathbf{U}\mathbf{V}-\mathbf{A}\|_{F}\)) across 10 runs of each algorithm and the mean runtime in milliseconds for the synthetic datasets described above. For our choices of parameters, we find that all algorithms terminate in under a second, with Zhang's algorithm and kBMF being the fastest and the message-passing algorithm generally being the slowest. This is, of course, also influenced by the fact that the algorithms' implementations use different technologies, which limits the conclusions we can draw from the data. We find that the kBMF+ algorithm slows down by a factor of 1.5 for small \(k\in\{2,3,5\}\), and by a factor of 15 when \(k=15\), compared to the kBMF algorithm. This is offset by the improved error: our algorithm kBMF+ generally achieves the best approximation for dense matrices, sometimes even finding a perfect factorization, for example, for a rank-5 matrix when using \(k\in\{10,15\}\). Even when the perfect factorization is not found, we see that the Frobenius norm error is 2-10 times lower. On sparse matrices, we find that Zhang's and the message-passing algorithms outperform kBMF+, yielding solutions that are about 2 times better in the worst case (matrix of rank 5, with sparsity 0.1 and \(k=5\)). The kBMF algorithm generally performs the worst across datasets, which is surprising considering the results of [11]. Another point of note is that Zhang's algorithm is tuned for sparse matrices, sometimes converging to factors that yield real-valued matrices. In such cases, we attempted to round the matrix as best we could. Real data. As before, Table 3 shows the algorithms' average Frobenius norm error and average running time. We observe that all algorithms are fairly close in Frobenius norm error, with the best and worst factorizations differing by up to a factor of about 3 across parameters and datasets. Zhang's algorithm performs best on the Congress dataset, while the message-passing algorithm performs best on the ORL and Thyroid datasets. The kBMF algorithm generally does worst, but the additional processing we do in kBMF+ can improve the solution considerably, putting it on par with the other heuristics. On the Congress dataset, kBMF+ is about 1.1-2 times worse than Zhang's, while on the ORL dataset, it is about 10-30% worse than the message-passing algorithm. Finally, on the Thyroid dataset, kBMF+'s error is about 10-20% worse than that of the competing heuristics. We note that on the Thyroid dataset, which has almost 10000 rows, Zhang's algorithm slows considerably, running about 10 times slower than kBMF and even slower than kBMF+ for \(k=15\). This suggests that for large matrices and small to moderate \(k\), the kBMF+ algorithm may actually run faster than other heuristics while providing comparable results. The message-passing algorithm slows tremendously, being almost three orders of magnitude slower than kBMF, but we believe this could be improved with another implementation. Discussion. In our experiments, we found that on dense synthetic data, the algorithm kBMF+ outperforms other algorithms for the BMF problem.
Additionally, we found that it is competitive for sparse synthetic data and real datasets. One inherent benefit of the kBMF and kBMF+ algorithms is that they are very easily adapted to different norms and matrix products, as the clustering step, the nearest neighbor search, and the enumeration step are all easily adapted to the setting we want. A further benefit is that the factors are guaranteed to be either 0 or 1, which is not true for Zhang's heuristic, which does not always converge. None of the existing heuristics consider minimization of \(L_{p}\) norms, so we omitted experimental data for this setting, but we note here that the results are qualitatively similar, with our algorithm performing best on dense matrices, and the heuristics performing well on sparse data.

\begin{table} \begin{tabular}{l l|c c c c|c c c c} \hline \hline & & \multicolumn{4}{c|}{Error [Frobenius norm]} & \multicolumn{4}{c}{Time [ms]} \\ Dataset & \(k\) & kBMF & kBMF+ & MP & Zh & kBMF & kBMF+ & MP & Zh \\ \hline Congress & 2 & 40.0 & 38.8 & 38.8 & **36.4** & 2.0 & 3.3 & 280.7 & 6.9 \\ & 3 & 38.4 & 36.6 & 35.9 & **32.7** & 2.3 & 4.1 & 311.2 & 13.6 \\ & 5 & 35.7 & 32.7 & 31.1 & **27.7** & 4.6 & 5.2 & 332.9 & 16.2 \\ & 10 & 32.7 & 23.9 & 22.5 & **18.4** & 3.2 & 16.9 & 407.1 & 22.6 \\ & 15 & 30.9 & 14.8 & 15.5 & **9.6** & 7.4 & 246.7 & 480.5 & 27.5 \\ \hline ORL & 2 & 39.4 & 37.8 & 35.9 & **33.5** & 2.0 & 2.9 & 203.7 & 11.6 \\ & 3 & 35.7 & 34.6 & 32.2 & **29.7** & 2.9 & 4.7 & 241.6 & 13.1 \\ & 5 & 31.7 & 30.7 & 27.7 & **25.6** & 3.8 & 5.8 & 289.4 & 15.4 \\ & 10 & 26.4 & 25.7 & 21.6 & **21.4** & 4.3 & 22.3 & 415.7 & 19.1 \\ & 15 & 23.4 & 22.8 & **17.8** & 19.7 & 6.1 & 318.0 & 575.5 & 22.2 \\ \hline Thyroid & 2 & 106.6 & 98.6 & **90.5** & 91.6 & 12.6 & 14.2 & 7063.6 & 44.3 \\ & 3 & 94.5 & 90.5 & 75.5 & **73.9** & 14.4 & 18.7 & 7822.0 & 92.9 \\ & 5 & 82.7 & 80.4 & 78.5 & **61.8** & 31.8 & 25.2 & 8860.2 & 132.1 \\ & 10 & 66.0 & 55.4 & 54.0 & **52.9** & 28.9 & 59.6 & 12686.3 & 241.4 \\ & 15 & 57.6 & **38.9** & 39.2 & 46.7 & 26.7 & 313.4 & 16237.7 & 432.7 \\ \hline \hline \end{tabular} \end{table} Table 3: The average running time and error for different Binary Matrix Factorization algorithms on real datasets, minimum Frobenius norm error highlighted in bold.

### Using Coresets with our Algorithm

Motivated by our theoretical use of strong coresets for \(k\)-means clustering, we perform experiments to evaluate the increase in error using them. To this end, we run the kBMF+ algorithm on either the entire dataset, a coreset constructed via importance sampling [1, 1], or a lightweight coreset [1]. Both of these coreset algorithms were implemented in Python. The datasets in this experiment are a synthetic low-rank dataset with additional noise (size \(5000\times 50\), rank 5 and \(0.0005\) probability of flipping a bit), the Congress, and Thyroid datasets. We construct coresets of size \(rn\) for each \(r\in\{0.001,0.005,0.01,0.02,0.05,0.1,0.2,\ldots,0.9\}\). We sample 10 coresets at every size and use them when finding \(\mathbf{V}\) in our kBMF+ algorithm.
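For completeness, the lightweight coreset construction used above is only a few lines; the following is an illustrative sketch (function name and defaults are ours) of its sampling distribution, which is half uniform and half proportional to the squared distance from the mean:

```python
import numpy as np

def lightweight_coreset(X, m, rng=None):
    """Sample m points with q(x) = 1/(2n) + d(x, mean)^2 / (2 * sum of d^2),
    each carrying weight 1 / (m * q(x)). Assumes X is not constant."""
    rng = np.random.default_rng(0) if rng is None else rng
    X = np.asarray(X, dtype=float)
    n = len(X)
    d2 = ((X - X.mean(axis=0))**2).sum(axis=1)
    prob = 0.5 / n + 0.5 * d2 / d2.sum()
    idx = rng.choice(n, size=m, replace=True, p=prob)
    return X[idx], 1.0 / (m * prob[idx])
```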
Theory suggests that the quality of the coreset depends only on \(k\) and the dimension of the points \(d\), which is why, in Figure 2, we observe a worse approximation at a given coreset size for larger \(k\). We find that the kBMF+ algorithm performs just as well on lightweight coresets as on those produced by the sensitivity sampling framework. This is expected in the binary setting, as the additive error in the weaker guarantee provided by lightweight coresets depends on the dataset's diameter. Thus, the faster, lightweight coreset construction appears superior in this setting. We observe that using coresets increases the Frobenius norm error by about 35%; curiously, on the low-rank dataset, the average error decreased after using coresets. This may be due to the coreset constructions not sampling the noisy outliers that lie outside the low-dimensional subspace spanned by the non-noisy low-rank matrix, letting the algorithm better reconstruct the original factors instead. Our datasets are comparatively small, none exceeding 10,000 points, which is why, in combination with the fact that the coreset constructions are not optimized, we observe no speedup compared to the algorithm without coresets. However, even though constructing the coreset takes additional time, the running time between variants remained comparable. We expect to observe significant speedups for large datasets using an optimized implementation of the coreset algorithms. Using _off-the-shelf_ coresets provides a large advantage to this algorithm's feasibility compared to the iterative methods when handling large datasets.

Figure 2: A plot of the effect of different relative coreset sizes on the results of our algorithm.

## 7 Conclusion

In this paper, we introduced the first \((1+\varepsilon)\)-approximation algorithms for binary matrix factorization with a singly exponential dependence on the low-rank factor \(k\), which is often a small parameter. We consider optimization with respect to the Frobenius loss, finite fields, and \(L_{p}\) loss. Our algorithms extend naturally to big data models and perform well in practice. Indeed, we conduct empirical evaluations demonstrating the practical effectiveness of our algorithms. For future research, we leave open the question of \((1+\varepsilon)\)-approximation algorithms for \(L_{p}\) loss without bicriteria requirements.
2308.09999
Elementary Proofs of Congruences for POND and PEND Partitions
Recently, Ballantine and Welch considered various generalizations and refinements of POD and PED partitions. These are integer partitions wherein the odd parts must be distinct (in the case of POD partitions) or the even parts must be distinct (in the case of PED partitions). In the process, they were led to consider two classes of integer partitions which are, in some sense, the ``opposite'' of POD and PED partitions. They labeled these POND and PEND partitions, which are integer partitions wherein the odd parts cannot be distinct (in the case of POND partitions) or the even parts cannot be distinct (in the case of PEND partitions). In this work, we study these two types of partitions from an arithmetic perspective. Along the way, we are led to prove the following two infinite families of Ramanujan--like congruences: For all $\alpha \geq 1$ and all $n\geq 0,$ \begin{align*} pond\left(3^{2\alpha +1}n+\frac{23\cdot 3^{2\alpha}+1}{8}\right) &\equiv 0 \pmod{3}, \textrm{\ \ \ and} \\ pend\left(3^{2\alpha +1}n+\frac{17\cdot 3^{2\alpha}-1}{8}\right) &\equiv 0 \pmod{3} \end{align*} where $pond(n)$ counts the number of POND partitions of weight $n$ and $pend(n)$ counts the number of PEND partitions of weight $n$. All of the proof techniques used herein are elementary, relying on classical $q$-series identities and generating function manipulations, along with mathematical induction.
James A. Sellers
2023-08-19T12:32:28Z
http://arxiv.org/abs/2308.09999v1
# Elementary proofs of congruences for POND and PEND partitions ###### Abstract. Recently, Ballantine and Welch considered various generalizations and refinements of POD and PED partitions. These are integer partitions wherein the odd parts must be distinct (in the case of POD partitions) or the even parts must be distinct (in the case of PED partitions). In the process, they were led to consider two classes of integer partitions which are, in some sense, the "opposite" of POD and PED partitions. They labeled these POND and PEND partitions, which are integer partitions wherein the odd parts cannot be distinct (in the case of POND partitions) or the even parts cannot be distinct (in the case of PEND partitions). In this work, we study these two types of partitions from an arithmetic perspective. Along the way, we are led to prove the following two infinite families of Ramanujan-like congruences: For all \(\alpha\geq 1\) and all \(n\geq 0\), \[pond\left(3^{2\alpha+1}n+\frac{23\cdot 3^{2\alpha}+1}{8}\right) \equiv 0\pmod{3},\quad\text{and}\] \[pend\left(3^{2\alpha+1}n+\frac{17\cdot 3^{2\alpha}-1}{8}\right) \equiv 0\pmod{3}\] where \(pond(n)\) counts the number of POND partitions of weight \(n\) and \(pend(n)\) counts the number of PEND partitions of weight \(n\). All of the proof techniques used herein are elementary, relying on classical \(q\)-series identities and generating function manipulations, along with mathematical induction. Key words and phrases: partitions, congruences, generating functions, dissections 2010 Mathematics Subject Classification: 11P83, 05A17 ## 1. Introduction In the study of integer partitions, the partitions wherein the parts are distinct have long played a key role, due in large part to Euler's famous identity which states that the number of partitions of weight \(n\) into distinct parts equals the number of partitions of weight \(n\) into odd parts. One of the most obvious refinements in this regard is to require distinct parts based on parity; i.e., to require either all of the even parts to be distinct or all of the odd parts to be distinct (while allowing the frequency of the other parts to be unrestricted). This leads to two types of partitions, those that we will call PED partitions (wherein the even parts must be distinct and the odd parts are unrestricted) and POD partitions (wherein the odd parts must be distinct and the even parts are unrestricted). We then define two corresponding enumerating functions, \(ped(n)\) which counts the number of PED partitions of weight \(n,\) and \(pod(n)\) which counts the number of POD partitions of weight \(n.\) These two functions have been studied from a variety of perspectives; the interested reader may wish to see [1, 2, 3, 4, 6, 7, 8, 9, 10, 13, 14, 15, 18, 20, 22, 23] for examples of work on identities involving, and arithmetic properties satisfied by, \(ped(n)\) and \(pod(n).\) Recently, Ballantine and Welch [5] generalized and refined these two functions in numerous ways. One of the outcomes of their work was to consider integer partitions which are, in some sense, the "opposite" of PED partitions and POD partitions. Namely, they considered PEND partitions and POND partitions, wherein the even (respectively, odd) parts are **not allowed** to be distinct.
In a vein similar to that shared above, we let \(pend(n)\) denote the number of PEND partitions of weight \(n,\) and \(pond(n)\) denote the number of POND partitions of weight \(n.\) The first several values of \(pend(n)\) appear in the OEIS [19, A265254], while the first several values of \(pond(n)\) appear in [19, A265256]. It is worthwhile to share additional historical thoughts to place PEND and POND partitions in context. In his classic _Combinatory Analysis_ [16], P. A. MacMahon proved that, for all \(n\geq 0,\) the number of partitions of weight \(n\) wherein no part appears with multiplicity one equals the number of partitions of weight \(n\) where all parts must be even or congruent to \(3\) modulo \(6\). As an aside, we note that numerous mathematicians have since generalized this theorem of MacMahon and have provided proofs of these results using both generating functions (which was MacMahon's original approach) as well as combinatorial arguments. The first half of the statement of MacMahon's theorem involves the function which counts the number of partitions wherein no part appears with multiplicity one, i.e., no part is allowed to be distinct. It is in this sense that POND and PEND partitions provide a natural, parity-based refinement of the partitions considered by MacMahon. At the end of their paper, Ballantine and Welch [5] shared the following possibilities for future work: In particular, we note two areas of interest. The first is examining the arithmetic properties of these generalizations. Much work has been done in studying arithmetic properties of PED and POD partitions... Hence, this would be a natural topic of further study... In light of this suggestion from Ballantine and Welch, our overarching goal in this work is to study \(pond(n)\) and \(pend(n)\) from an arithmetic perspective. With this in mind, we will first prove the following Ramanujan-like congruences satisfied by \(pond(n)\) and \(pend(n)\): **Theorem 1**.: _For all \(n\geq 0,\)_ \[pond(3n+2) \equiv 0\pmod{2}, \tag{1.1}\] \[pond(27n+26) \equiv 0\pmod{3},\quad\text{and} \tag{1.2}\] \[pond(3n+1) \equiv 0\pmod{4}. \tag{1.3}\] **Theorem 2**.: _For all \(n\geq 0,\)_ \[pend(27n+19)\equiv 0\pmod{3}.\] We will then prove that each of these two functions satisfies an internal congruence modulo \(3\). **Theorem 3**.: _For all \(n\geq 0\), \(pond(27n+17)\equiv pond(3n+2)\pmod{3}\)._ **Theorem 4**.: _For all \(n\geq 0\), \(pend(27n+10)\equiv pend(3n+1)\pmod{3}\)._ Finally, with the above results in hand, we will prove the following infinite families of non-nested Ramanujan-like congruences modulo \(3\) by induction. **Theorem 5**.: _For all \(\alpha\geq 1\) and all \(n\geq 0,\)_ \[pond\left(3^{2\alpha+1}n+\frac{23\cdot 3^{2\alpha}+1}{8}\right)\equiv 0\pmod{3}.\] **Theorem 6**.: _For all \(\alpha\geq 1\) and all \(n\geq 0,\)_ \[pend\left(3^{2\alpha+1}n+\frac{17\cdot 3^{2\alpha}-1}{8}\right)\equiv 0\pmod{3}.\] Section 2 is devoted to providing the tools necessary for the remainder of the paper. In Section 3, we prove Theorems 1, 3, and 5. In Section 4, we prove Theorems 2, 4, and 6. All of the proof techniques used herein are elementary, relying on classical \(q\)-series identities and generating function manipulations, along with mathematical induction.
## 2. Preliminaries

Throughout this work, we will use the following shorthand notation for \(q\)-Pochhammer symbols: \[f_{r}:=(q^{r};q^{r})_{\infty}=(1-q^{r})(1-q^{2r})(1-q^{3r})\ldots\] In order to prove the congruences mentioned above, several important \(3\)-dissections of various \(q\)-series will be needed. These results will allow us to write the necessary generating functions in an appropriate fashion. We now catalog these results here. **Lemma 7**.: _We have_ \[\frac{f_{2}}{f_{1}f_{4}}=\frac{f_{18}^{9}}{f_{3}^{2}f_{9}^{3}f_{12}^{2}f_{36}^{3}}+q\frac{f_{6}^{2}f_{18}^{3}}{f_{3}^{3}f_{12}^{3}}+q^{2}\frac{f_{6}^{4}f_{9}^{3}f_{36}^{3}}{f_{3}^{4}f_{12}^{4}f_{18}^{3}}.\] Proof.: A proof of this identity appears in [21, Lemma 2.1]. **Lemma 8**.: _We have_ \[f_{1}f_{2}=\frac{f_{6}f_{9}^{4}}{f_{3}f_{18}^{2}}-qf_{9}f_{18}-2q^{2}\frac{f_{3}f_{18}^{4}}{f_{6}f_{9}^{2}}.\] Proof.: A proof of this identity can be found in [14]. **Lemma 9**.: _We have_ \[\frac{1}{f_{1}f_{2}}=\frac{f_{9}^{9}}{f_{3}^{6}f_{6}^{2}f_{18}^{3}}+q\frac{f_{9}^{6}}{f_{3}^{5}f_{6}^{3}}+3q^{2}\frac{f_{9}^{3}f_{18}^{3}}{f_{3}^{4}f_{6}^{4}}-2q^{3}\frac{f_{18}^{6}}{f_{3}^{3}f_{6}^{5}}+4q^{4}\frac{f_{18}^{9}}{f_{3}^{2}f_{6}^{6}f_{9}^{3}}.\] Proof.: This lemma is equivalent to [17, Equation (39)]. **Lemma 10**.: _We have_ \[\frac{f_{2}^{2}}{f_{1}}=\frac{f_{6}f_{9}^{2}}{f_{3}f_{18}}+q\frac{f_{18}^{2}}{f_{9}}.\] Proof.: For a proof of this result, see [11, (14.3.3)]. **Lemma 11**.: _We have_ \[\frac{f_{2}}{f_{1}^{2}}=\frac{f_{6}^{4}f_{9}^{6}}{f_{3}^{8}f_{18}^{3}}+2q\frac{f_{6}^{3}f_{9}^{3}}{f_{3}^{7}}+4q^{2}\frac{f_{6}^{2}f_{18}^{3}}{f_{3}^{6}}.\] **Remark 1**.: Note that \[\frac{f_{2}}{f_{1}^{2}}=\sum_{n=0}^{\infty}\overline{p}(n)q^{n}\] where \(\overline{p}(n)\) is the number of overpartitions of \(n\). Proof.: For a proof of Lemma 11, see [12, Theorem 1]. **Lemma 12**.: _We have_ \[\frac{f_{4}}{f_{1}}=\frac{f_{12}f_{18}^{4}}{f_{3}^{3}f_{36}^{2}}+q\frac{f_{6}^{2}f_{9}^{3}f_{36}}{f_{3}^{4}f_{18}^{2}}+2q^{2}\frac{f_{6}f_{18}f_{36}}{f_{3}^{3}}.\] **Remark 2**.: Note that \[\frac{f_{4}}{f_{1}}=\sum_{n=0}^{\infty}ped(n)q^{n}\] where \(ped(n)\) is the number of partitions of \(n\) wherein even parts are distinct (as mentioned in the introductory comments above). Proof.: Lemma 12 follows from [2, Theorem 3.1] and [11, (33.2.6)]. One additional \(q\)-series identity will be beneficial in the proof of Theorem 3. **Lemma 13**.: _We have_ \[\frac{f_{3}^{3}}{f_{1}}-q\frac{f_{12}^{3}}{f_{4}}=\frac{f_{4}^{3}f_{6}^{2}}{f_{2}^{2}f_{12}}.\] Proof.: This identity appears in [11, (22.7.5)]. Lastly, we will utilize the following result which, at its core, relies on the binomial theorem and the divisibility properties of various binomial coefficients. **Lemma 14**.: _For all primes \(p\) and all \(j,k,m\geq 1\), \(f_{m}^{p^{j}k}\equiv f_{pm}^{p^{j-1}k}\pmod{p^{j}}.\)_ With all of these tools in hand, we are now in a position to prove the theorems listed above.
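Each of these dissections can be sanity-checked numerically by truncating every \(f_{r}\) at a fixed order. A minimal sketch, assuming the sympy library (the helper names and the truncation order \(N\) are illustrative choices), checking Lemma 7 and an instance of Lemma 14:

```python
import sympy as sp

q = sp.symbols('q')
N = 30  # truncation order; all comparisons below are exact for exponents < N

def f(r):
    # truncated f_r = (q^r; q^r)_infinity; omitted factors only affect q^N and beyond
    return sp.prod([1 - q**(r*i) for i in range(1, N // r + 2)])

def upto(expr):
    # power-series expansion of expr, truncated below q^N
    return sp.expand(sp.series(expr, q, 0, N).removeO())

# Lemma 7: the 3-dissection of f_2 / (f_1 f_4)
lhs = upto(f(2) / (f(1) * f(4)))
rhs = upto(f(18)**9 / (f(3)**2 * f(9)**3 * f(12)**2 * f(36)**3)
           + q * f(6)**2 * f(18)**3 / (f(3)**3 * f(12)**3)
           + q**2 * f(6)**4 * f(9)**3 * f(36)**3 / (f(3)**4 * f(12)**4 * f(18)**3))
assert sp.expand(lhs - rhs) == 0

# Lemma 14 with p = 3 and j = k = m = 1: f_1^3 is congruent to f_3 mod 3
diff = upto(f(1)**3 - f(3))
assert all(diff.coeff(q, n) % 3 == 0 for n in range(N))
```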
## 3. Congruences for \(pond(n)\)

We begin by considering the function \(pond(n).\) Although one can derive the generating function for \(pond(n)\) from the work of Ballantine and Welch [5], we provide a proof of the result here for the sake of completeness. **Theorem 15**.: _We have_ \[\sum_{n=0}^{\infty}pond(n)q^{n}=\frac{f_{4}f_{6}^{2}}{f_{2}^{2}f_{3}f_{12}}.\] Proof.: By definition, \[\sum_{n=0}^{\infty}pond(n)q^{n} =\frac{1}{f_{2}}\prod_{i=1}^{\infty}\left(\frac{1}{1-q^{2i-1}}-q^{2i-1}\right)\] \[=\frac{1}{f_{2}}\prod_{i=1}^{\infty}\left(\frac{1-q^{2i-1}+q^{4i-2}}{1-q^{2i-1}}\right)\] \[=\frac{1}{f_{2}}\prod_{i=1}^{\infty}\left(\frac{1+q^{6i-3}}{(1+q^{2i-1})(1-q^{2i-1})}\right)\] \[=\frac{1}{f_{2}}\cdot\frac{(-q^{3};q^{6})_{\infty}}{(q^{2};q^{4})_{\infty}}\] \[=\frac{1}{f_{2}}\cdot\frac{f_{4}}{f_{2}}\cdot\frac{(q^{6};q^{12})_{\infty}}{(q^{3};q^{6})_{\infty}}\] \[=\frac{f_{4}}{f_{2}^{2}}\cdot\frac{f_{6}}{f_{12}}\cdot\frac{f_{6}}{f_{3}}\] \[=\frac{f_{4}f_{6}^{2}}{f_{2}^{2}f_{3}f_{12}}.\] We can now move to a proof of Theorem 1. Proof.: (of Theorem 1) Our first goal is to \(3\)-dissect the generating function for \(pond(n)\). Note that \[\sum_{n=0}^{\infty}pond(n)q^{n} =\frac{f_{4}f_{6}^{2}}{f_{2}^{2}f_{3}f_{12}}\] \[=\frac{f_{4}}{f_{2}^{2}}\cdot\frac{f_{6}^{2}}{f_{3}f_{12}}\] \[=\left(\frac{f_{12}^{4}f_{18}^{6}}{f_{6}^{8}f_{36}^{3}}+2q^{2}\frac{f_{12}^{3}f_{18}^{3}}{f_{6}^{7}}+4q^{4}\frac{f_{12}^{2}f_{36}^{3}}{f_{6}^{6}}\right)\cdot\frac{f_{6}^{2}}{f_{3}f_{12}}\] thanks to Lemma 11 (with \(q\) replaced by \(q^{2}\)). This means we know the following: \[\sum_{n=0}^{\infty}pond(3n)q^{3n} =\frac{f_{6}^{2}}{f_{3}f_{12}}\cdot\frac{f_{12}^{4}f_{18}^{6}}{f_{6}^{8}f_{36}^{3}},\] \[\sum_{n=0}^{\infty}pond(3n+1)q^{3n+1} =\frac{f_{6}^{2}}{f_{3}f_{12}}\cdot 4q^{4}\frac{f_{12}^{2}f_{36}^{3}}{f_{6}^{6}},\quad\text{and}\] \[\sum_{n=0}^{\infty}pond(3n+2)q^{3n+2} =\frac{f_{6}^{2}}{f_{3}f_{12}}\cdot 2q^{2}\frac{f_{12}^{3}f_{18}^{3}}{f_{6}^{7}}.\] This is equivalent to the following \(3\)-dissection for the generating function for \(pond(n)\): \[\sum_{n=0}^{\infty}pond(3n)q^{n} =\frac{f_{2}^{2}}{f_{1}f_{4}}\cdot\frac{f_{4}^{4}f_{6}^{6}}{f_{2}^{8}f_{12}^{3}}=\frac{f_{4}^{3}f_{6}^{6}}{f_{1}f_{2}^{6}f_{12}^{3}}, \tag{3.1}\] \[\sum_{n=0}^{\infty}pond(3n+1)q^{n} =4q\frac{f_{2}^{2}}{f_{1}f_{4}}\cdot\frac{f_{4}^{2}f_{12}^{3}}{f_{2}^{6}}=4q\frac{f_{4}f_{12}^{3}}{f_{1}f_{2}^{4}},\quad\text{and} \tag{3.2}\] \[\sum_{n=0}^{\infty}pond(3n+2)q^{n}=2\frac{f_{2}^{2}}{f_{1}f_{4}}\cdot\frac{f_{4}^{3}f_{6}^{3}}{f_{2}^{7}}=2\frac{f_{4}^{2}f_{6}^{3}}{f_{1}f_{2}^{5}}. \tag{3.3}\]
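As a quick numerical sanity check of this dissection, one can compare coefficients of the truncated \(q\)-series directly; a small sketch in the same style as before (again, sympy and the truncation order are illustrative choices):

```python
import sympy as sp

q = sp.symbols('q')
N = 32

def f(r):
    return sp.prod([1 - q**(r*i) for i in range(1, N // r + 2)])

def upto(expr):
    return sp.expand(sp.series(expr, q, 0, N).removeO())

pond = upto(f(4) * f(6)**2 / (f(2)**2 * f(3) * f(12)))         # Theorem 15
rhs = [upto(f(4)**3 * f(6)**6 / (f(1) * f(2)**6 * f(12)**3)),  # (3.1)
       upto(4 * q * f(4) * f(12)**3 / (f(1) * f(2)**4)),       # (3.2)
       upto(2 * f(4)**2 * f(6)**3 / (f(1) * f(2)**5))]         # (3.3)

for n in range((N - 3) // 3):        # all exponents used stay below N
    for r in range(3):
        assert pond.coeff(q, 3*n + r) == rhs[r].coeff(q, n)
```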
We pause here to note that (3.3) implies (1.1) while (3.2) implies (1.3). Thus, in order to complete the proof of Theorem 1, we simply need to prove (1.2), and this requires us to 3-dissect the generating function for \(pond(3n+2)\) which appears in (3.3): \[\sum_{n=0}^{\infty}pond(3n+2)q^{n}\] \[=2\frac{f_{4}^{2}f_{6}^{3}}{f_{1}f_{2}^{5}}\] \[\equiv 2\frac{f_{4}^{2}f_{2}^{9}}{f_{1}f_{2}^{5}}\pmod{3}\ \ \text{thanks to Lemma 14}\] \[=2\frac{f_{4}^{2}f_{2}^{4}}{f_{1}}\] \[=2(f_{2}f_{4})^{3}\cdot\frac{f_{2}^{2}}{f_{1}}\cdot\frac{1}{f_{2}f_{4}}\] \[\equiv 2f_{6}f_{12}\cdot\frac{f_{2}^{2}}{f_{1}}\cdot\frac{1}{f_{2}f_{4}}\pmod{3}\] \[\equiv 2f_{6}f_{12}\left(\frac{f_{6}f_{9}^{2}}{f_{3}f_{18}}+q\frac{f_{18}^{2}}{f_{9}}\right)\] \[\qquad\qquad\times\left(\frac{f_{18}^{9}}{f_{6}^{6}f_{12}^{2}f_{36}^{3}}+q^{2}\frac{f_{18}^{6}}{f_{6}^{5}f_{12}^{3}}+q^{6}\frac{f_{36}^{6}}{f_{6}^{3}f_{12}^{5}}+q^{8}\frac{f_{36}^{9}}{f_{6}^{2}f_{12}^{6}f_{18}^{3}}\right)\pmod{3}\] thanks to Lemmas 9 and 10. Thus, we know \[\sum_{n=0}^{\infty}pond(9n+8)q^{3n+2}\equiv 2f_{6}f_{12}\cdot\frac{f_{6}f_{9}^{2}}{f_{3}f_{18}}\left(q^{2}\frac{f_{18}^{6}}{f_{6}^{5}f_{12}^{3}}+q^{8}\frac{f_{36}^{9}}{f_{6}^{2}f_{12}^{6}f_{18}^{3}}\right)\pmod{3}.\] Therefore, \[\sum_{n=0}^{\infty}pond(9n+8)q^{n} \equiv 2\frac{f_{2}^{2}f_{3}^{2}f_{4}}{f_{1}f_{6}}\left(\frac{f_{6}^{6}}{f_{2}^{5}f_{4}^{3}}+q^{2}\frac{f_{12}^{9}}{f_{2}^{2}f_{4}^{6}f_{6}^{3}}\right)\pmod{3}\] \[=2\frac{f_{3}^{2}}{f_{1}f_{4}^{2}}\left(\frac{f_{6}^{5}}{f_{2}^{3}}+q^{2}\frac{f_{12}^{9}}{f_{4}^{3}f_{6}^{4}}\right)\] \[\equiv 2\frac{f_{3}^{2}}{f_{1}f_{4}^{2}}\left(\frac{f_{6}^{5}}{f_{6}}+q^{2}\frac{f_{12}^{9}}{f_{12}f_{6}^{4}}\right)\pmod{3}\quad\text{using Lemma 14}\] \[=2\frac{f_{3}^{2}f_{4}}{f_{1}f_{4}^{3}}\left(f_{6}^{4}+q^{2}\frac{f_{12}^{8}}{f_{6}^{4}}\right)\] \[\equiv 2\frac{f_{4}}{f_{1}}\cdot\frac{f_{3}^{2}}{f_{12}}\left(f_{6}^{4}+q^{2}\frac{f_{12}^{8}}{f_{6}^{4}}\right)\pmod{3}.\] We now use Lemma 12 to see that \[\sum_{n=0}^{\infty}pond(27n+26)q^{3n+2}\equiv 2\frac{f_{3}^{2}}{f_{12}}\left(q^{2}\frac{f_{12}^{8}}{f_{6}^{4}}\cdot\frac{f_{12}f_{18}^{4}}{f_{3}^{3}f_{36}^{2}}+2q^{2}\frac{f_{6}^{5}f_{18}f_{36}}{f_{3}^{3}}\right)\pmod{3}\] so that \[\sum_{n=0}^{\infty}pond(27n+26)q^{n} \equiv 2\frac{f_{1}^{2}}{f_{4}}\left(\frac{f_{4}^{9}f_{6}^{4}}{f_{1}^{3}f_{2}^{4}f_{12}^{2}}+2\frac{f_{2}^{5}f_{6}f_{12}}{f_{1}^{3}}\right)\pmod{3}\] \[=\frac{f_{2}^{5}}{f_{1}f_{4}}\left(2\frac{f_{4}^{9}f_{6}^{4}}{f_{2}^{9}f_{12}^{2}}+4f_{6}f_{12}\right)\] \[\equiv\frac{f_{2}^{5}}{f_{1}f_{4}}\left(2\frac{f_{12}^{3}f_{6}^{4}}{f_{6}^{3}f_{12}^{2}}+4f_{6}f_{12}\right)\pmod{3}\] \[\equiv\frac{f_{2}^{5}}{f_{1}f_{4}}\left(6f_{6}f_{12}\right)\pmod{3}\] \[\equiv 0\pmod{3}.\] This completes the proof of (1.2) and, therefore, Theorem 1. Equation (1.2) will serve as the base case for the proof by induction of Theorem 5. However, before we turn to the proof of Theorem 5, we first prove Theorem 3 which will be the "engine" for that proof by induction. Proof.: (of Theorem 3) Our goal is to prove that, for all \(n\geq 0\), \[pond(27n+17)\equiv pond(3n+2)\pmod{3}.\] From our work above, we know \[\sum_{n=0}^{\infty}pond(3n+2)q^{n}\equiv 2\frac{f_{4}^{2}}{f_{1}}(f_{2}^{4})\pmod{3}. \tag{3.4}\] Next, we need to determine a corresponding congruence for the generating function for \(pond(27n+17).\) In our earlier work, we showed that \[\sum_{n=0}^{\infty}pond(9n+8)q^{n}\equiv 2\frac{f_{4}}{f_{1}}\cdot\frac{f_{3}^{2}}{f_{12}}\left(f_{6}^{4}+q^{2}\frac{f_{12}^{8}}{f_{6}^{4}}\right)\pmod{3}.\] We can then use Lemma 12 to see that \[\sum_{n=0}^{\infty}pond(9(3n+1)+8)q^{3n+1}\equiv 2\frac{f_{3}^{2}}{f_{12}}\left(f_{6}^{4}\cdot q\frac{f_{6}^{2}f_{9}^{3}f_{36}}{f_{3}^{4}f_{18}^{2}}+q^{2}\frac{f_{12}^{8}}{f_{6}^{4}}\cdot 2q^{2}\frac{f_{6}f_{18}f_{36}}{f_{3}^{3}}\right)\pmod{3}\] or \[\sum_{n=0}^{\infty}pond(27n+17)q^{n} \equiv 2\frac{f_{1}^{2}}{f_{4}}\left(\frac{f_{2}^{6}f_{3}^{3}f_{12}}{f_{1}^{4}f_{6}^{2}}+2q\frac{f_{4}^{8}f_{6}f_{12}}{f_{1}^{3}f_{2}^{3}}\right)\pmod{3}\] \[\equiv 2\frac{f_{2}^{6}f_{3}^{3}f_{12}}{f_{1}^{2}f_{4}f_{6}^{2}}+4q\frac{f_{4}^{7}f_{6}f_{12}}{f_{1}f_{2}^{3}}\pmod{3}\] \[\equiv 2\frac{f_{2}^{6}f_{1}^{9}f_{4}^{3}}{f_{1}^{2}f_{4}f_{2}^{6}}+4q\frac{f_{4}^{7}f_{2}^{3}f_{4}^{3}}{f_{1}f_{2}^{3}}\pmod{3}\] \[\equiv 2f_{1}^{7}f_{4}^{2}+4q\frac{f_{4}^{10}}{f_{1}}\pmod{3}\] \[=2\frac{f_{4}^{2}}{f_{1}}\left(f_{1}^{8}+2qf_{4}^{8}\right).
\tag{3.5}\] Therefore, in order to prove this theorem, we know from (3.4) and (3.5) that we must show the following: \[2\frac{f_{4}^{2}}{f_{1}}\left(f_{1}^{8}+2qf_{4}^{8}\right)\equiv 2\frac{f_{4}^ {2}}{f_{1}}(f_{2}^{4})\pmod{3}\] or \[f_{1}^{8}+2qf_{4}^{8}\equiv f_{2}^{4}\pmod{3}.\] To complete this proof, we are reminded of Lemma 13: \[\frac{f_{3}^{3}}{f_{1}}-q\frac{f_{12}^{3}}{f_{4}}=\frac{f_{4}^{3}f_{6}^{2}}{f_ {2}^{2}f_{12}}.\] Note that this implies that \[\frac{f_{1}^{9}}{f_{1}}+2q\frac{f_{4}^{9}}{f_{4}}\equiv\frac{f_{12}f_{2}^{6}}{f _{2}^{2}f_{12}}\pmod{3}\] \[f_{1}^{8}+2qf_{4}^{8}\equiv f_{2}^{4}\pmod{3}\] which is the desired result. With Theorems 1 and 3 in hand, we can now turn to proving the infinite family of Ramanujan-like congruences modulo 3 satisfied by \(pond(n)\). Proof.: (of Theorem 5) We prove this theorem by induction on \(\alpha\). Note that the base case, \(\alpha=1\), which corresponds to the arithmetic progression \[3^{3}n+\frac{23\cdot 3^{2}+1}{8}=27n+26,\] has already been proved in Theorem 1 above. Thus, we assume that, for some \(\alpha\geq 1\) and all \(n\geq 0\), \[pond\left(3^{2\alpha+1}n+\frac{23\cdot 3^{2\alpha}+1}{8}\right)\equiv 0\pmod{ 3}.\] We then want to prove that \[pond\left(3^{2\alpha+3}n+\frac{23\cdot 3^{2\alpha+2}+1}{8}\right)\equiv 0\pmod{ 3}.\] Note that \[3^{2\alpha+1}n+\frac{23\cdot 3^{2\alpha}+1}{8} =3\left(3^{2\alpha}n\right)+\frac{23\cdot 3^{2\alpha}-15+16}{8}\] \[=3\left(3^{2\alpha}n\right)+3\left(\frac{23\cdot 3^{2\alpha-1}-5}{8 }\right)+2\] \[=3\left(3^{2\alpha}n+\frac{23\cdot 3^{2\alpha-1}-5}{8}\right)+2\] and it is easy to argue that \[3^{2\alpha}n+\frac{23\cdot 3^{2\alpha-1}-5}{8}\] is an integer for any \(\alpha\geq 1\). Therefore, we have the following: \[pond\left(3^{2\alpha+1}n+\frac{23\cdot 3^{2\alpha}+1}{8}\right)\] \[=pond\left(3\left(3^{2\alpha}n+\frac{23\cdot 3^{2\alpha-1}-5}{8} \right)+2\right)\] \[\equiv pond\left(27\left(3^{2\alpha}n+\frac{23\cdot 3^{2\alpha-1}-5}{8 }\right)+17\right)\pmod{3}\quad\text{thanks to Theorem 3}\] \[= pond\left(3^{2\alpha+3}n+\frac{23\cdot 3^{2\alpha+2}-27\cdot 5+17 \cdot 8}{8}\right)\] \[= pond\left(3^{2\alpha+3}n+\frac{23\cdot 3^{2\alpha+2}+1}{8}\right)\] \[\equiv 0\pmod{3}\] thanks to the induction hypothesis. This completes the proof. ## 4. Congruences for \(pend(n)\) We now turn our attention to proving Theorems 2, 4, and 6. We begin by finding the generating function for \(pend(n)\). **Theorem 16**.: _We have_ \[\sum_{n=0}^{\infty}pend(n)q^{n}=\frac{f_{2}f_{12}}{f_{1}f_{4}f_{6}}.\] Proof.: Using the definition of the partitions counted by \(pend(n)\), we know \[\sum_{n=0}^{\infty}pend(n)q^{n} =\frac{1}{(q;q^{2})_{\infty}}\prod_{i=1}^{\infty}\left(\frac{1}{ 1-q^{2i}}-q^{2i}\right)\] \[=\frac{f_{2}}{f_{1}}\prod_{i=1}^{\infty}\left(\frac{1-q^{2i}+q^{4 i}}{1-q^{2i}}\right)\] \[=\frac{f_{2}}{f_{1}}\prod_{i=1}^{\infty}\left(\frac{1+q^{6i}}{(1 +q^{2i})(1-q^{2i})}\right)\] \[=\frac{f_{2}}{f_{1}}\frac{(-q^{6};q^{6})_{\infty}}{f_{4}}\] \[=\frac{f_{2}}{f_{1}}\cdot\frac{f_{12}}{f_{4}f_{6}}\] \[=\frac{f_{2}f_{12}}{f_{1}f_{4}f_{6}}.\] We now turn our attention to proving Theorem 2. This will require that we 3-dissect the generating function for \(pend(n)\) in a particular way. 
Proof.: (of Theorem 2) Thanks to Theorem 16, we see that \[\sum_{n=0}^{\infty}pend(n)q^{n} =\frac{f_{2}f_{12}}{f_{1}f_{4}f_{6}}\] \[\equiv\frac{f_{4}^{2}}{f_{1}f_{2}^{2}}\pmod{3}\ \text{ from Lemma 14}\] \[=\frac{f_{4}^{3}}{f_{2}^{3}}\cdot\frac{f_{2}}{f_{1}f_{4}}\] \[\equiv\frac{f_{12}}{f_{6}}\cdot\frac{f_{2}}{f_{1}f_{4}}\pmod{3}\] \[\equiv\frac{f_{12}}{f_{6}}\left(\frac{f_{18}^{9}}{f_{3}^{2}f_{9}^{3}f_{12}^{2}f_{36}^{3}}+q\frac{f_{6}^{2}f_{18}^{3}}{f_{3}^{3}f_{12}^{3}}+q^{2}\frac{f_{6}^{4}f_{9}^{3}f_{36}^{3}}{f_{3}^{4}f_{12}^{4}f_{18}^{3}}\right)\pmod{3}\] thanks to another application of Lemma 14 followed by Lemma 7. Extracting the terms of the form \(q^{3n+1}\), we find that \[\sum_{n=0}^{\infty}pend(3n+1)q^{3n+1}\equiv q\frac{f_{6}f_{18}^{3}}{f_{3}^{3}f_{12}^{2}}\pmod{3}\] so that \[\sum_{n=0}^{\infty}pend(3n+1)q^{n}\equiv\frac{f_{2}f_{6}^{3}}{f_{1}^{3}f_{4}^{2}}\equiv\frac{f_{2}f_{4}f_{6}^{3}}{f_{3}f_{12}}\pmod{3} \tag{4.1}\] thanks to two applications of Lemma 14. Next, since \(f_{3}\), \(f_{6}^{3}\), and \(f_{12}\) are all functions of \(q^{3}\), we 3-dissect \(f_{2}f_{4}\) using Lemma 8 (with \(q\) replaced by \(q^{2}\)): \[\sum_{n=0}^{\infty}pend(3n+1)q^{n}\equiv\frac{f_{6}^{3}}{f_{3}f_{12}}\left(\frac{f_{12}f_{18}^{4}}{f_{6}f_{36}^{2}}-q^{2}f_{18}f_{36}-2q^{4}\frac{f_{6}f_{36}^{4}}{f_{12}f_{18}^{2}}\right)\pmod{3}.\] Extracting the terms of the form \(q^{3n}\) then yields \[\sum_{n=0}^{\infty}pend(9n+1)q^{3n}\equiv\frac{f_{6}^{2}f_{18}^{4}}{f_{3}f_{36}^{2}}\pmod{3}\] or \[\sum_{n=0}^{\infty}pend(9n+1)q^{n}\equiv\frac{f_{2}^{2}}{f_{1}}\cdot\frac{f_{6}^{4}}{f_{12}^{2}}\pmod{3}.\] From Lemma 10, we can rewrite this result as \[\sum_{n=0}^{\infty}pend(9n+1)q^{n}\equiv\left(\frac{f_{6}f_{9}^{2}}{f_{3}f_{18}}+q\frac{f_{18}^{2}}{f_{9}}\right)\frac{f_{6}^{4}}{f_{12}^{2}}\pmod{3}. \tag{4.2}\] Note that the power series representation of the right-hand side of the above congruence contains no terms of the form \(q^{3n+2}.\) Thus, \[\sum_{n=0}^{\infty}pend(9(3n+2)+1)q^{3n+2}\equiv 0\pmod{3}\] which means that, for all \(n\geq 0,\) \[pend(9(3n+2)+1)=pend(27n+19)\equiv 0\pmod{3}.\] We next consider the proof of Theorem 4. Proof.: (of Theorem 4) Our goal here is to prove that, for all \(n\geq 0,\) \[pend(27n+10)\equiv pend(3n+1)\pmod{3}.\] Thanks to (4.2), we see that \[\sum_{n=0}^{\infty}pend(27n+10)q^{3n+1}\equiv q\frac{f_{6}^{4}f_{18}^{2}}{f_{9}f_{12}^{2}}\pmod{3}\] which means \[\sum_{n=0}^{\infty}pend(27n+10)q^{n}\equiv\frac{f_{2}^{4}f_{6}^{2}}{f_{3}f_{4}^{2}}\pmod{3}.
\tag{4.3}\] From (4.1), we know \[\sum_{n=0}^{\infty}pend(3n+1)q^{n} \equiv\frac{f_{2}f_{4}f_{6}^{3}}{f_{3}f_{12}}\pmod{3}\] \[\equiv\frac{f_{2}f_{4}f_{6}^{3}}{f_{3}f_{4}^{3}}\pmod{3}\] \[=\frac{f_{2}f_{6}^{3}}{f_{3}f_{4}^{2}}\] \[\equiv\frac{f_{2}f_{2}^{3}f_{6}^{2}}{f_{3}f_{4}^{2}}\pmod{3}\] \[=\frac{f_{2}^{4}f_{6}^{2}}{f_{3}f_{4}^{2}}\] \[\equiv\sum_{n=0}^{\infty}pend(27n+10)q^{n}\pmod{3}\] thanks to (4.3). We are now in a position to prove the infinite family of congruences in Theorem 6. Proof.: (of Theorem 6) We prove this theorem by induction on \(\alpha\). Note that the base case, \(\alpha=1\), which corresponds to the arithmetic progression \[3^{3}n+\frac{17\cdot 3^{2}-1}{8}=27n+19,\] has already been proved in Theorem 2. Thus, we assume that, for some \(\alpha\geq 1\) and all \(n\geq 0\), \[pend\left(3^{2\alpha+1}n+\frac{17\cdot 3^{2\alpha}-1}{8}\right)\equiv 0\pmod{3}.\] We then want to prove that \[pend\left(3^{2\alpha+3}n+\frac{17\cdot 3^{2\alpha+2}-1}{8}\right)\equiv 0 \pmod{3}.\] Note that \[3^{2\alpha+1}n+\frac{17\cdot 3^{2\alpha}-1}{8} =3\left(3^{2\alpha}n\right)+\frac{17\cdot 3^{2\alpha}-9+8}{8}\] \[=3\left(3^{2\alpha}n\right)+3\left(\frac{17\cdot 3^{2\alpha-1}-3}{8 }\right)+1\] \[=3\left(3^{2\alpha}n+\frac{17\cdot 3^{2\alpha-1}-3}{8}\right)+1\] and it is easy to argue that \[3^{2\alpha}n+\frac{17\cdot 3^{2\alpha-1}-3}{8}\] is an integer for any \(\alpha\geq 1\). Therefore, we have the following: \[pend\left(3^{2\alpha+1}n+\frac{17\cdot 3^{2\alpha}-1}{8}\right)\] \[=pend\left(3\left(3^{2\alpha}n+\frac{17\cdot 3^{2\alpha-1}-3}{8} \right)+1\right)\] \[\equiv pend\left(27\left(3^{2\alpha}n+\frac{17\cdot 3^{2\alpha-1}-3} {8}\right)+10\right)\pmod{3}\mbox{ thanks to Theorem 4}\] \[=pend\left(3^{2\alpha+3}n+\frac{17\cdot 3^{2\alpha+2}-27\cdot 3+10 \cdot 8}{8}\right)\] \[=pend\left(3^{2\alpha+3}n+\frac{17\cdot 3^{2\alpha+2}-1}{8}\right)\] \[\equiv 0\pmod{3}\] thanks to the induction hypothesis. This completes the proof. ## 5. Closing Thoughts While it is very satisfying to see the proofs provided above, it would be interesting to see combinatorial proofs of these divisibility properties. We leave it to the interested reader to obtain such proofs. It may also be fruitful to consider further refinements of the functions \(pend(n)\) and \(pond(n)\). For example, rather than requiring that even parts must be repeated, one could restrict this requirement to only those parts which are divisible by 4 (with no such requirements on the other parts). It is certainly straightforward to find the generating functions for such refinements, which means that an analysis such as that above should be possible. Ballantine and Welch [5] share comments about such partitions (and their enumerating functions) near the end of their manuscript. The interested reader may wish to study such functions from an arithmetic perspective. ## Acknowledgements The author gratefully acknowledges Shane Chern for beneficial conversations during the development of this work. ## Declarations ### Ethical approval Not applicable. **Competing interests** The author declares that there are no competing interests. **Funding** Not applicable. **Availability of data and materials** Not applicable.
2302.05120
Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks
We propose a novel gradient-based attack against transformer-based language models that searches for an adversarial example in a continuous space of token probabilities. Our algorithm mitigates the gap between adversarial loss for continuous and discrete text representations by performing multi-step quantization in a quantization-compensation loop. Experiments show that our method significantly outperforms other approaches on various natural language processing (NLP) tasks.
Piotr Gaiński, Klaudia Bałazy
2023-02-10T08:50:51Z
http://arxiv.org/abs/2302.05120v1
# Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

###### Abstract

We propose a novel gradient-based attack against transformer-based language models that searches for an adversarial example in a continuous space of token probabilities. Our algorithm mitigates the gap between adversarial loss for continuous and discrete text representations by performing multi-step quantization in a quantization-compensation loop. Experiments show that our method significantly outperforms other approaches on various natural language processing (NLP) tasks.

## 1 Introduction

Deep neural networks achieve impressive results, but their vulnerability to adversarial attacks causes major security threats and is a concern when interpreting or explaining model predictions. In computer vision, the most successful attack methods use gradient-based optimization techniques (Carlini and Wagner, 2017; Madry et al., 2018). They minimize an adversarial loss function that encourages the prediction error and the imperceptibility of a generated example. The development of optimization-based attacks in NLP is much more challenging due to the discrete nature of text. Recent methods (Guo et al., 2021; Yuan et al., 2021) overcome this limitation by performing gradient descent in the continuous space of token representations and eventually quantizing them into discrete text. A quantization of a token can significantly change its embedding and cause an undesired change of the loss value, degrading the adversarial example. To our knowledge, all existing optimization-based NLP attacks quantize all tokens in a text at once, which creates a considerable gap between the adversarial loss for continuous and discrete text representations. In this paper, we propose MANGO (Multi-step quANtization Gradient-based adversarial Optimizer): a novel optimization-based attack against Transformer (Vaswani et al., 2017) language models that mitigates the aforementioned gap by performing multi-step quantization in a quantization-compensation loop. MANGO quantizes continuous token representations one by one and reoptimizes the adversarial example after each quantization to compensate for the undesired degradation of the adversarial loss value. The construction of MANGO introduces interesting problems that are addressed in Section 3. MANGO achieves superior performance in various NLP tasks, outperforming recent white-box (optimization-based) and black-box attacks.

Footnote 1: Code available at github.com/gmmum/MANGO.

## 2 Related Work

Adversarial attacks can be roughly divided into two categories: white-box attacks that have access to the internal model's states (e.g. gradients) and more common black-box attacks that only know the outputs of the model. In our paper, we focus on a white-box version of our MANGO attack. In Appendix D, we develop a version of MANGO that can be used in a loosened black-box setting.

**Black-Box Methods.** Most black-box NLP attacks define a space of character or word replacements and heuristically search it for an adversarial example (Yoo et al., 2020). The search space is limited with ad hoc semantic constraints (e.g. limiting edit distance or restricting possible replacements to synonyms) to preserve the attack's imperceptibility. Such constraints disallow some specific perturbations (e.g. replacing a word with its antonym even if the semantics is preserved in the context of other perturbations) and tend to generate semantically incorrect examples (Morris et al., 2020). 
**White-Box Methods.** Many white-box methods use gradients to guide a heuristic search in a space of text perturbations (Ebrahimi et al., 2018; Cheng et al., 2019; Xu and Du, 2020). Recent methods take a step further and perform gradient descent optimization. They aim to find an example that minimizes the adversarial loss function, which encourages the prediction error and the imperceptibility of the attack. Because the similarity and fluency of an example are controlled by a powerful external model used in the loss, optimization-based methods do not require hand-crafted semantic constraints, making them more flexible than black-box ones. Adapting gradient descent in NLP attacks is a challenging problem due to the discrete nature of the optimized text. Yuan et al. (2021) overcome this issue by performing optimization in the continuous space of token embeddings and replacing each token with a possibly new token whose embedding is the closest to the optimized one. An alternative approach is the GBDA method (Guo et al., 2021) that optimizes a continuous distribution of stochastic one-hot vectors and repeatedly samples adversarial examples from the optimized distribution until it fools the attacked model.

**Quantization.** Both methods mentioned above quantize all continuous representations of tokens to a text at once. Quantization of a single token may significantly change its embedding and cause an undesirable change of the adversarial loss value. When quantizing all tokens at once, the changes accumulate to a considerable gap between the adversarial loss for continuous and discrete text representations (see Section 6). Our MANGO mitigates this gap.

## 3 MANGO

This section describes our MANGO method. Unlike other optimization-based methods that quantize all token representations at once, MANGO constitutes an entirely new algorithm that quantizes a token and compensates for the resulting change in the adversarial loss value in a step-by-step manner. The construction of MANGO introduces interesting problems that are addressed in the **Optimization**, **Vector Selection** and **Candidates Selection** paragraphs and are further evaluated in Section 5.

**Continuous Token Representation.** The first learnable layer of a Transformer takes as input a sequence of tokens \(x=(t_{1},...,t_{n})\), where \(t_{i}\in\{0,1\}^{|V|}\) has a single non-zero binary value at index \(k\) indicating that it represents the \(k\)-th token in vocabulary \(V\). Similarly to Guo et al. (2021), we relax the input sequence \(x\) and replace one-hot encodings \(t_{i}\) with probability vectors \(\pi_{i}\). Because the first learnable Transformer layer is a simple linear layer, it can take probability vectors as input without any modification. A probability vector \(\pi_{i}\) constitutes a probability distribution over tokens from \(V\). In the embedding layer, the Transformer embeds probability vectors with the function \(e\):

\[e(\pi_{i})=\sum_{j=1}^{|V|}(\pi_{i})_{j}E_{j}, \tag{1}\]

where \(E_{j}\) is the embedding vector of the \(j\)-th token. If \(\pi_{i}\) is quantized, meaning it is a one-hot vector representing some token \(k\), the function \(e\) simply looks up the \(k\)-th embedding: \(e(\pi_{i})=E_{k}\). In MANGO, \(\pi_{i}\) is a probabilistic vector, and its embedding \(e(\pi_{i})\) is a mixture of the embeddings of all tokens weighted by their probabilities \(\pi_{i}\). 
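As a concrete illustration of Eq. (1), the minimal NumPy sketch below embeds a softmax-relaxed probability vector as a mixture of token embeddings and checks that a one-hot vector recovers the ordinary lookup; the vocabulary size, dimensions, and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 8, 4                      # toy vocabulary size and embedding dim
E = rng.normal(size=(V, d))      # embedding matrix, row j = E_j

def embed(pi):
    """Eq. (1): mixture of token embeddings weighted by pi."""
    return pi @ E

theta = rng.normal(size=V)                    # logits Theta_i
pi = np.exp(theta) / np.exp(theta).sum()      # softmax -> probability vector

one_hot = np.zeros(V); one_hot[3] = 1.0       # quantized vector for token 3

print(embed(pi))                              # continuous (mixture) embedding
print(np.allclose(embed(one_hot), E[3]))      # True: plain lookup is recovered
```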
We parameterize \(\pi_{i}\) with logits \(\Theta_{i}\) and a standard softmax function \(\sigma\), so that \(\pi_{i}=\sigma(\Theta_{i})\) and \(x=\sigma(\Theta)\) for \(\Theta=(\Theta_{1},...,\Theta_{n})\).

**Loss Function.** Let \(m:X\rightarrow\mathbb{R}^{|Y|}\) be a classifier that outputs logit vectors and properly predicts a label \(y\in Y\) for some datapoint \(x\in X\), meaning that \(\arg\max_{k}m(x)_{k}=y\). An adversarial example is a sample \(x^{\prime}\in X\) that is imperceptible (according to specified criteria) from \(x\) but changes the output of the model. In an optimization-based setting, searching for an adversarial example is usually defined as a minimization of an adversarial loss function. Following Guo et al. (2021), we compose our adversarial loss \(\mathcal{L}\) as a combination of margin loss \(l_{m}\), fluency loss \(l_{f}\), and similarity loss \(l_{s}\):

\[\mathcal{L}(x^{\prime})=l_{m}(m,x^{\prime},y)+\lambda_{f}l_{f}(g,x^{\prime})+\lambda_{s}l_{s}(g,x^{\prime},x), \tag{2}\]

where \(\lambda_{f}\) and \(\lambda_{s}\) are the coefficients used to balance the losses and \(g\) is a reference model. The margin loss \(l_{m}\) encourages model \(m\) to misclassify \(x^{\prime}\) by a margin \(\kappa\):

\[l_{m}(m,x^{\prime},y)=\max(m(x^{\prime})_{y}-\max_{k\neq y}m(x^{\prime})_{k}+\kappa,0).\]

The fluency loss \(l_{f}\) promotes \(x^{\prime}\) with a high probability of being generated by a causal language model \(g\) that predicts the next token distribution:

\[l_{f}(g,x^{\prime})=-\sum_{i=1}^{n}\sum_{j=1}^{|V|}(\pi_{i})_{j}g(\pi_{1},...,\pi_{i-1})_{j}.\]

The similarity loss \(l_{s}\) is based on BERTScore (Zhang et al., 2020) and captures the semantic similarity between \(x\) and \(x^{\prime}\) using contextualized embeddings of tokens \(\phi_{g}(x)=(v_{1},...,v_{n})\) and \(\phi_{g}(x^{\prime})=(v^{\prime}_{1},...,v^{\prime}_{n})\) produced by the reference model \(g\):

\[l_{s}(g,x^{\prime},x)=-\sum_{i=1}^{n}w_{i}\max_{j}v^{T}_{i}v^{\prime}_{j},\]

where \(w_{i}\) is the inverse frequency of token \(t_{i}\).

**Quantization-Compensation Loop.** The MANGO algorithm searches for an \(x^{\prime}\) that minimizes \(\mathcal{L}\), quantizing and compensating it step by step. Algorithm 1 introduces the idea of MANGO. In the first line, the parameters \(\Theta\) of \(x^{\prime}\) are initialized, so that \(\Theta^{\prime}_{ij}=C\cdot(x_{i})_{j}\) for some constant \(C\). Each loop starts with **optimization** of \(x^{\prime}\) with respect to \(\mathcal{L}\). Then **vector selection** is performed to select \(\pi^{\prime}_{i}\) from \(x^{\prime}\), which will be quantized in the current step. Given \(\pi^{\prime}_{i}\), MANGO performs **candidates selection** and selects the \(m\) most promising tokens \(c_{1},...,c_{m}\) to which \(\pi^{\prime}_{i}\) can be quantized. In line 6, each candidate \(c_{j}\) is evaluated by computing \(\mathcal{L}\) for a sequence \(x^{\prime}\) with vector \(\pi^{\prime}_{i}\) quantized to \(c_{j}\). Finally, \(\pi^{\prime}_{i}\) is quantized to the best \(c_{j}\) chosen in the previous step. A quantized \(\pi^{\prime}_{i}\) will no longer be updated during optimization. MANGO repeats lines 2-7 until all vectors in \(x^{\prime}\) are quantized.
```
Data: adversarial loss \(\mathcal{L}\) (Eq. 2)
Result: sentence \(x^{\prime}\) that minimizes \(\mathcal{L}\)
1: initialize \(x^{\prime}=(\pi^{\prime}_{1},...,\pi^{\prime}_{n})\)
2: while \(x^{\prime}\) is not fully quantized do
3:     optimization: optimize parameters of \(x^{\prime}\)
4:     vector selection: select probabilistic vector \(\pi^{\prime}_{i}\) from \(x^{\prime}\) for quantization
5:     candidates selection: select \(m\) token candidates from \(\pi^{\prime}_{i}\)
6:     evaluate these \(m\) candidates with loss \(\mathcal{L}\)
7:     quantize \(\pi^{\prime}_{i}\) to the best evaluated token
```
**Algorithm 1** MANGO

**Optimization.** We optimize \(x^{\prime}\) with the Adam optimizer (Kingma and Ba, 2014), which is reset after each quantization (see Section 5). This allows \(x^{\prime}\) to rapidly change its trajectory to compensate for the degradation of \(\mathcal{L}\). The initial number of optimization steps is \(S\), but it decreases by a factor of 2 in each loop to reduce computational costs.

**Vector Selection.** In line 4, we choose the vector \(\pi^{\prime}_{i}\) with the highest entropy (see Section 5), because its quantization will introduce the most significant change to \(x^{\prime}\) and is likely to increase the loss value the most. Intuitively, we want such degrading quantizations to occur early in the algorithm, because the more vectors are not yet quantized, the larger the capacity \(x^{\prime}\) has to compensate for the degradation by finding another local minimum of \(\mathcal{L}\).

**Candidates Selection.** In this phase, we select \(m\) tokens that can be used to quantize the probability vector \(\pi^{\prime}_{i}\) with a possibly small degradation of \(\mathcal{L}\). Quantization of \(\pi^{\prime}_{i}\) with token \(k\) is a step \(q_{k}=(-(\pi^{\prime}_{i})_{1},-(\pi^{\prime}_{i})_{2},...,1-(\pi^{\prime}_{i})_{k},...,-(\pi^{\prime}_{i})_{|V|})\) in the \(\pi^{\prime}_{i}\) space. As \(\pi^{\prime}_{i}\) is likely to be in the proximity of its local minimum with respect to \(\mathcal{L}\), we want the step \(q_{k}\) to (1) have the lowest norm \(\|q_{k}\|\) possible and (2) follow the direction of the local (minus) gradient. We use this intuition in the formulation of the token score \(s_{k}\), which is a weighted mean of the probability \((\pi^{\prime}_{i})_{k}\) and the direction score \(d_{k}\):

\[s_{k}=\lambda_{prob}(\pi^{\prime}_{i})_{k}+(1-\lambda_{prob})d_{k}. \tag{3}\]

Note that \((\pi^{\prime}_{i})_{k}\) is inversely proportional to \(\|q_{k}\|\). We define \(d_{k}\) as the cosine similarity between \(q_{k}\) and the local (minus) gradient (see Section 5):

\[d_{k}=\frac{q_{k}\left(-\nabla_{\pi^{\prime}_{i}}\mathcal{L}(x^{\prime})\right)^{T}}{\|q_{k}\|\cdot\|\nabla_{\pi^{\prime}_{i}}\mathcal{L}(x^{\prime})\|} \tag{4}\]

We then select the \(m\) tokens with the highest scores \(s_{k}\).
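The short NumPy sketch below illustrates the candidate-scoring step of Eqs. (3)-(4) for a single position: it forms the quantization steps \(q_{k}\), scores them against a (here randomly generated stand-in) gradient, and returns the top-\(m\) token indices. The helper name and toy sizes are ours, not from the paper.

```python
import numpy as np

def select_candidates(pi, grad, m=4, lambda_prob=0.5):
    """Score every vocabulary token for quantizing pi (Eqs. 3-4).

    pi:   probability vector over the vocabulary (shape [V])
    grad: gradient of the adversarial loss w.r.t. pi (shape [V])
    """
    V = pi.shape[0]
    # Row k of Q is q_k = one_hot(k) - pi, the step taken when quantizing to k.
    Q = np.eye(V) - pi[None, :]
    # Direction score d_k: cosine similarity between q_k and -grad (Eq. 4).
    d = (Q @ -grad) / (np.linalg.norm(Q, axis=1) * np.linalg.norm(grad))
    s = lambda_prob * pi + (1 - lambda_prob) * d      # Eq. (3)
    return np.argsort(s)[::-1][:m]                    # indices of the top-m tokens

rng = np.random.default_rng(1)
logits = rng.normal(size=10)
pi = np.exp(logits) / np.exp(logits).sum()
grad = rng.normal(size=10)                            # toy stand-in gradient
print(select_candidates(pi, grad))
```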
## 4 Experiments

In this section, we evaluate MANGO on various NLP tasks and compare it to recent NLP attacks.

**Baselines.** We compare our method with the latest white-box GBDA attack (Guo et al., 2021), as well as recent black-box attacks implemented in TextAttack (Morris et al., 2020): BERT-Attack (Li et al., 2020), BAE (Garg and Ramakrishnan, 2020) and TextFooler (Jin et al., 2020). To emphasize the importance of multi-step quantization, we evaluate the Naive version of MANGO that performs quantization in one step. The MANGO, Naive and GBDA attacks use an identical loss. All hyperparameters are listed in Appendix A.

**Tasks.** We attack BERT models from TextAttack fine-tuned on three text classification tasks: AG News (Zhang et al., 2015), Yelp Reviews (Zhang et al., 2015) and IMDB (Maas et al., 2011), and on the MNLI task for natural language inference (Williams et al., 2018). In MNLI p., an attack is allowed to modify only the premise, and in MNLI h., only the hypothesis. For each task, we randomly select 1000 attack targets from the training set. We use a training set as it provides more challenging targets and is more relevant to Adversarial Training (Bai et al., 2021).

**Results.** Results can be found in Table 1. Our MANGO substantially reduces the training accuracy of the BERT model in all tasks, while maintaining a high level of semantic similarity to the original input. The attacks of MANGO are difficult (low Adv. prob., which indicates that the model misclassifies an example by a large margin), fluent (low \(\Delta\) perp.) and do not flaw the grammatical correctness (low \(\Delta\) gram.). In almost all settings, MANGO outperforms other attacks in terms of training accuracy, which we believe to be the fairest metric for comparing optimization-based methods with black-box ones due to inherent design biases (see Appendix B). MANGO surpasses the recent state-of-the-art optimization-based GBDA attack in terms of most considered metrics: in terms of Adv. acc. and BERTScore on 4/5 tasks and in terms of USE sim., \(\Delta\) perpl. and \(\Delta\) gram. on 5/5 tasks. Moreover, MANGO achieves considerably better results than its Naive version, emphasizing the importance of multi-step quantization.

Table 1: Attack results on the AG News, Yelp, IMDB, MNLI p., and MNLI h. tasks for TextFooler, Bert-Attack, BAE, the Naive baseline, GBDA, and MANGO, reporting Adv. acc., Adv. prob., USE sim., BERTScore, \(\Delta\) perp., \(\Delta\) gram., and # queries.
**Qualitative Results.** We provide a qualitative analysis of a few adversarial examples generated by BAE, GBDA, and MANGO in Appendix C.

## 5 Ablation Study

In this section, we evaluate three solutions from Section 3 that improve the core idea of multi-step quantization:
1. selection of the probability vector for quantization by maximal entropy (instead of minimal entropy, which seems the more natural choice),
2. scoring token candidates by a weighted mean of the token probability and the gradient direction score (Eqs. 3-4),
3. resetting the optimizer after every quantization.

Figure 1 compares different MANGO settings. We may observe that selection of the probability vector for quantization by maximal entropy ("max entropy") is better than selection by minimal entropy ("min entropy"). Resetting the optimizer after every quantization enhances the performance for both the "max entropy" and "min entropy" settings. Finally, we see that MANGO benefits from using both the token's probability and the gradient direction to score token candidates.

## 6 Visualization of Quantization Gap

To visualize the quantization gap between adversarial loss for continuous and discrete text representations, we compared the adversarial losses of MANGO, GBDA, and a Naive version of MANGO that does not use multi-step quantization. The comparison can be found in Figure 2. We observe that the Naive method converges to the lowest loss value in the optimization phase, but the value explodes after quantization. The GBDA method, which samples probability vectors that resemble discrete one-hot vectors using Gumbel-softmax (Jang et al., 2017), reaches a higher minimum, but its quantization gap is much smaller than that of the Naive method. 
Finally, in the case of MANGO, we observe sudden peaks and slow declines of loss values that correspond to the quantization-compensation loop, in which the quantization of single tokens is followed by the compensation of the quantization gap. After optimization, MANGO continues to quantize tokens step by step, further decreasing the loss. MANGO obtains a significantly lower final adversarial loss than GBDA and Naive, avoiding the quantization gap.

Figure 1: Final adversarial losses for different MANGO settings. "max entropy + optimizer resets" stands for a version of MANGO that selects the probability vector for quantization by maximal entropy and resets the optimizer after every quantization. The rest of the names follow the same pattern. We also present the influence of the coefficient \(\lambda_{prob}\) used in the token candidates scoring function (Eq. 3). Loss values are averaged over 10 samples from the IMDB dataset.

Figure 2: Adversarial loss for epochs 50-200 of optimization for the Naive, GBDA and MANGO methods. The vertical dashed line shows the end of optimization. The Naive and GBDA methods immediately quantize the tokens, while MANGO does it step by step. The right-most points show the final adversarial loss values. We observe that after optimization, MANGO continues to quantize tokens step by step and eventually reaches the best adversarial loss value. Loss values are averaged over 9 samples from the IMDB dataset.

## 7 Conclusion

We developed MANGO, a novel optimization-based attack against Transformer models that mitigates the gap between adversarial loss for continuous and discrete text representations using a quantization-compensation loop. MANGO achieves superior results on various NLP tasks, outperforming recent black-box and optimization-based attacks.

## Limitations

One limitation is that the number of queries of MANGO to the attacked model depends on the length of the input sequence. Therefore, MANGO may suffer a long attack time on datasets with long sequences (like IMDB or Yelp). Moreover, MANGO is restricted only to token replacement. The inability to insert or remove tokens can lead to reduced attack performance. The most important limitation is the white-box nature of MANGO, which excludes it from applications where the internal model's states cannot be known. To partially circumvent this limitation, we propose Gray MANGO, a version of MANGO that can be used in the loosened black-box setting, which we call the gray-box setting (see Appendix D).

## Acknowledgements

The work of Klaudia Bałazy was carried out within the research project "Bio-inspired artificial neural network" (grant no. POIR.04.04.00-00-14DE/18-00) within the Team-Net program of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. Piotr Gaiński and Klaudia Bałazy are affiliated with the Doctoral School of Exact and Natural Sciences at the Jagiellonian University.
2308.10210
3D coupled tearing-thermal evolution in solar current sheets
Combined tearing-thermal evolution plays an important role in the disruption of current sheets, and formation of cool condensations within the solar atmosphere. However, this has received limited attention to date. We numerically explore a combined tearing and thermal instability that causes the break up of an idealized current sheet in the solar atmosphere. The thermal component leads to the formation of localized, cool condensations within an otherwise 3D reconnecting magnetic topology. We construct a 3D resistive magnetohydrodynamic simulation of a force-free current sheet under solar atmospheric conditions that incorporate the non-adiabatic influence of background heating, optically thin radiative energy loss, and magnetic field aligned thermal conduction with the open source code MPI-AMRVAC. Multiple levels of adaptive mesh refinement reveal the self-consistent development of finer-scale condensation structures within the evolving system. The instability in the current sheet is triggered by magnetic field perturbations concentrated around the current sheet plane, and subsequent tearing modes develop. This in turn drives thermal runaway associated with the thermal instability of the system. We find subsequent, localized cool plasma condensations that form under the prevailing low plasma-$\beta$ conditions, and demonstrate that the density and temperature of these condensed structures are similar to more quiescent coronal condensations. Synthetic counterparts at Extreme-UltraViolet (EUV) and optical wavelengths show the formation of plasmoids (in EUV), and coronal condensations similar to prominences and coronal rain blobs in the vicinity of the reconnecting sheet. Our simulations imply that 3D reconnection in solar current sheets may well present an almost unavoidable multi-thermal aspect, that forms during their coupled tearing-thermal evolution.
Samrat Sen, Jack Jenkins, Rony Keppens
2023-08-20T09:28:45Z
http://arxiv.org/abs/2308.10210v1
# 3D coupled tearing-thermal evolution in solar current sheets

###### Abstract

Context: The tearing instability plays a major role in the disruption of current sheets, whereas thermal modes can be responsible for condensation phenomena (forming prominences and coronal rain) in the solar atmosphere. However, how combined tearing-thermal unstable current sheets evolve within the solar atmosphere has received limited attention to date.

Aims: We numerically explore a combined tearing and thermal instability that causes the break up of an idealized current sheet in the solar atmosphere. The thermal component leads to the formation of localized, cool condensations within an otherwise 3D reconnecting magnetic topology.

Methods: We construct a 3D resistive magnetohydrodynamic simulation of a force-free current sheet under solar atmospheric conditions that incorporate the non-adiabatic influence of background heating, optically thin radiative energy loss, and magnetic field aligned thermal conduction with the open source code MPI-AMRVAC. Multiple levels of adaptive mesh refinement reveal the self-consistent development of finer-scale condensation structures within the evolving system.

Results: The instability in the current sheet is triggered by magnetic field perturbations concentrated around the current sheet plane, and subsequent tearing modes develop. This in turn drives thermal runaway associated with the thermal instability of the system. We find subsequent, localized cool plasma condensations that form under the prevailing low plasma-\(\beta\) conditions, and demonstrate that the density and temperature of these condensed structures are similar to more quiescent coronal condensations. Synthetic counterparts at Extreme-UltraViolet (EUV) and optical wavelengths show the formation of plasmoids (in EUV), and coronal condensations similar to prominences and coronal rain blobs in the vicinity of the reconnecting sheet.

Conclusions: Our simulations imply that 3D reconnection in solar current sheets may well present an almost unavoidable multi-thermal aspect that forms during their coupled tearing-thermal evolution.

## 1 Introduction

Magnetic reconnection is a fundamental process understood to play a critical role throughout the solar atmosphere. The change of magnetic field topology during reconnection leads to the conversion of magnetic energy into thermal and kinetic energies (Biskamp, 2000), frequently leading to the fast energy release of solar flares (Giovanelli, 1939, 1947, 1948; Priest & Forbes, 2000; Hesse & Cassak, 2020) and coronal mass ejections (Gosling et al., 1995; Schmidt & Cargill, 2003; Karpen et al., 2012) out into the heliosphere. It was suggested by Furth et al. (1963) that reconnection in an incompressible plasma may be triggered by small perturbations in a current layer, correspondingly breaking up the current sheet in the form of the tearing instability. Linear analysis by Loureiro et al. (2007) suggests that the tearing instability in a single current sheet may lead to the formation of a chain of plasmoids (secondary magnetic islands). This was later verified in 2D numerical simulations by Huang & Bhattacharjee (2013) and Huang et al. (2013). Extensions to 2D double current layer models were seen to give rise to the development and layer-layer interactions of tearing modes with smaller scale plasmoid formation (Zhang & Ma, 2011; Keppens et al., 2013; Akramov & Baty, 2017; Paul & Vaidya, 2021, and references therein). 
None of these previous studies considers the non-adiabatic effects of background heating, radiative energy loss, and thermal conduction, which are essential components of the solar atmosphere. It is well established that the solar corona is in an overall delicate thermal balance. If this balance between optically thin radiative loss and background heating, in combination with thermal conduction, is perturbed, an increase of the thermal energy loss cools down the plasma and may lead to an enhancement of the plasma density. This in turn radiates more energy (radiative loss in an optically thin medium is proportional to the square of the plasma density), and the material becomes cooler still. Hence, a catastrophic runaway process ensues, leading to a rapid rise in the density and a drop in temperature, which is in essence the thermal instability. A detailed linear analysis of the thermal instability is presented by Parker (1953) and Field (1965), who derived the criteria governing the onset of a catastrophic radiative loss in an infinite homogeneous medium. The linear magnetohydrodynamic (MHD) analysis was extended to a 1D slab configuration (van der Linden & Goossens, 1991; van der Linden et al., 1992), and to cylindrical geometry (van der Linden & Goossens, 1991; Soler et al., 2011), under solar coronal conditions. Linear and follow-up nonlinear theory of the thermal instability is a powerful tool to explain various fascinating features of the solar atmosphere. For example, the possible formation of a prominence in a current sheet is discussed by Smith & Priest (1977), and the dynamic thermal balance in a coronal arcade is studied in Priest & Smith (1979). The post-flare loop formation in a line-tied current sheet configuration with radiative energy loss was simulated in a 2D MHD setup by Forbes & Malherbe (1991). More recently, multidimensional simulations related to prominence formation emerged in a variety of magnetic topologies. Xia et al. (2012) reported the ab initio formation of a solar prominence in a 2.5D MHD simulation in a bipolar magnetic arcade due to chromospheric evaporation and thermal instability. This was revisited in a quadrupolar arcade setup by Keppens and Xia (2014), in which reconnection was induced by the condensing prominence. That prominences can also form by feeding chromospheric matter within plasmoids during a flux rope eruption is developed by Zhao and Keppens (2022). 3D models of prominence formation that establish the needed plasma cycle between chromosphere and corona are shown in Xia and Keppens (2016). More recently, prominence formation due to levitation-condensation (Kaneko and Yokoyama, 2015; Jenkins and Keppens, 2021) was demonstrated, where a 3D realization is needed to allow magnetic Rayleigh-Taylor instability (Jenkins and Keppens, 2022). The effect of thermal instability has also been explored for the formation and dynamics of coronal rain in magnetic arcades in 2.5D geometry (Fang et al., 2013, 2015), in a weak magnetic bipole in 3D geometry (Xia et al., 2017), in a more self-consistent 3D radiative-magnetohydrodynamic setup in Kohutova et al. (2020), and for randomly heated arcades by Li et al. (2022) in a 2.5D geometry. These works also triggered a renewed interest in more idealized studies of linear thermal instability and its nonlinear evolution, and in how the various linear MHD waves and instabilities may interact. 
Numerical analysis in the linear and non-linear domains of the interaction of the slow MHD and entropy (thermal) modes was carried out in recent studies by Claes and Keppens (2019) and Claes et al. (2020), while the effect of different radiative loss functions on the onset and far nonlinear behavior of thermal modes was analyzed by Hermans and Keppens (2021). However, the influence of thermal instability on the tearing mode of solar current sheets has not gained much attention to date. Linear analysis by Ledentsov (2021a,b) shows that the instability growth rate in a pre-flare current sheet is modified if the non-adiabatic effects of radiative energy loss, resistivity and thermal conductivities are included. Sen and Keppens (2022) (SK22 hereafter) extended this into the non-linear domain and incorporated background heating and optically thin radiative loss into a series of 2D resistive MHD simulations. This study finds that the instability growth rate of tearing modes in a solar current sheet increases by an order of magnitude when these non-adiabatic effects are incorporated, such that we can meaningfully speak of coupled tearing-thermal evolutions. The 2D current sheet produced a chain of plasmoid-trapped condensations with cool material, which are thermodynamically similar to prominences (or coronal rain) in the solar atmosphere. In this work, we extend our study to a 3D geometry and explore the tearing-thermal evolutionary process of an idealized current sheet model in the solar atmosphere, which is essentially non-adiabatic (with background heating, optically thin radiative loss, and thermal conduction). We demonstrate how the mutual reinforcement of these instabilities drives the complex evolution of the current layer, which disintegrates into finer structures with subsequent development of flux ropes, along with the formation of cool plasma condensations in the vicinity of this evolving current sheet. These localized, cool and condensed plasma regions share similarities with the prominence and coronal rain structures observed in the solar atmosphere. Our findings here augment the growing theoretical basis for the combined effect of current sheet fragmentation and the formation of cool, condensed plasma due to the coupled tearing-thermal instability. This multi-mode evolution of the system, occurring in association with or during reconnection in a current sheet, is an important aspect for understanding the dynamics and multi-thermal processes in the solar atmosphere. The paper is organized as follows. In Sect. 2, we describe the numerical model, its initial magnetic and thermodynamic configuration, and detail algorithmic aspects and boundary conditions. In Sect. 3, we discuss the main results of the study, and the relevance of the model in the solar atmosphere. Section 4 addresses the significance and novelty of the work for a typical coronal atmosphere, the scope for further development, and points out how this work will be useful for future studies.

## 2 Numerical setup

We construct a resistive MHD simulation using the MPI-parallelised Adaptive Mesh Refinement Versatile Advection Code, MPI-AMRVAC\({}^{1}\) (Keppens et al., 2012; Porth et al., 2014; Xia et al., 2018; Keppens et al., 2021, 2023), in 3D Cartesian geometry. The spatial domain of the simulation box spans from \(-10\) to \(10\) (in dimensionless units) along the \(x\), \(y\), and \(z\) directions. 
We activate the adaptive mesh refinement (AMR) up to level three, which gives a maximum resolution of \(512^{3}\). With the box size set in units of 10 Mm, this achieves a smallest cell size of 390 km in each direction. Automated refinement and derefinement are triggered based on the errors estimated from the instantaneous density (gradient) at each time step.

Footnote 1: Open source at http://amrvac.org/

To explore the influence of thermal instability on the tearing mode in a current sheet, the following normalized MHD equations are solved numerically,

\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0, \tag{1}\]

\[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot(\rho\mathbf{v}\mathbf{v}+p_{tot}\mathbf{I}-\mathbf{B}\mathbf{B})=\mathbf{0}, \tag{2}\]

\[\frac{\partial\mathcal{E}}{\partial t}+\nabla\cdot(\mathcal{E}\mathbf{v}+p_{tot}\mathbf{v}-\mathbf{B}\mathbf{B}\cdot\mathbf{v})=\eta\mathbf{J}^{2}-\mathbf{B}\cdot\nabla\times(\eta\mathbf{J})-\rho^{2}\Lambda(T)+H_{bgr}+\nabla\cdot(\kappa_{\parallel}\cdot\nabla T), \tag{3}\]

\[\frac{\partial\mathbf{B}}{\partial t}+\nabla\cdot(\mathbf{v}\mathbf{B}-\mathbf{B}\mathbf{v})+\nabla\times(\eta\mathbf{J})=\mathbf{0}, \tag{4}\]

\[\nabla\cdot\mathbf{B}=0, \tag{5}\]

\[\mathbf{J}=\nabla\times\mathbf{B}. \tag{6}\]

Note that we use magnetic units where the magnetic permeability is unity. Here, \(\mathbf{I}\) is the unit tensor, and \(\rho\), \(T\), \(\mathbf{B}\), \(\mathbf{v}\), and \(\eta\) represent mass density, temperature, magnetic field vector, velocity, and resistivity, respectively. A uniform resistivity, \(\eta=0.001\) (or \(1.2\times 10^{14}\) cm\({}^{2}\) s\({}^{-1}\) in physical units), is taken throughout the entire simulation domain. We adopt the Spitzer-type thermal conductivity, \(\kappa_{\parallel}=10^{-6}T^{5/2}\) erg cm\({}^{-1}\) s\({}^{-1}\) K\({}^{-1}\), which is purely aligned along the magnetic field. The total pressure \(p_{tot}\) is the sum of the plasma and magnetic pressure given by

\[p_{tot}=p+\frac{B^{2}}{2}, \tag{7}\]

where \(p\) is the gas pressure linked with the thermodynamic quantities through the ideal gas law. The total energy density is

\[\mathcal{E}=\frac{p}{\gamma-1}+\frac{\rho v^{2}}{2}+\frac{B^{2}}{2}, \tag{8}\]

where \(\gamma=5/3\) is the ratio of specific heats for a monoatomic gas (fully ionized hydrogen plasma). We set up a current sheet configuration using the magnetic field components

\[B_{x}=B_{0}\tanh(z/l_{s}), \tag{9}\]

\[B_{y}=\sqrt{B_{0}^{2}-B_{x}^{2}}, \tag{10}\]

\[B_{z}=0, \tag{11}\]

where \(B_{0}=1\) (corresponding to 2 G in physical units) is the magnetic field strength, which is comparable with observations in the solar corona, where field strengths at heights of 1.05-1.35 solar radii are reported between 1 and 4 G (Lin et al., 2004; Kumari et al., 2019; Yang et al., 2020). The unit plasma density, temperature, and length scales are set as \(\tilde{\rho}=2.34\times 10^{-15}\) g cm\({}^{-3}\), \(\tilde{T}=10^{6}\) K, and \(\tilde{L}=10^{9}\) cm, which are relevant for the solar corona. The initial width of the current sheet is set to \(l_{s}=0.5\) (5 Mm in physical units), which is comparable with the observed flare current sheet thickness (Li et al., 2018; Savage et al., 2010). The magnetic field configuration given by Eqns. (9-11) represents a force-free field, and the polarity reversal of the magnetic field occurs around the \(z=0\) plane, where the current sheet is. 
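As a quick sanity check of this initial state, the short NumPy sketch below evaluates Eqs. (9)-(11) on a 1D cut and confirms that the field has uniform magnitude \(B_{0}\) (so the magnetic pressure is constant and the configuration is force-free) while reversing polarity across \(z=0\); the grid and variable names are illustrative.

```python
import numpy as np

B0, ls = 1.0, 0.5                      # field strength and sheet width (code units)
z = np.linspace(-10, 10, 401)          # 1D cut across the sheet; B depends on z only

Bx = B0 * np.tanh(z / ls)              # Eq. (9): polarity reversal across z = 0
By = np.sqrt(B0**2 - Bx**2)            # Eq. (10): rotates the field, keeps |B| = B0
Bz = np.zeros_like(z)                  # Eq. (11)

print(np.allclose(Bx**2 + By**2 + Bz**2, B0**2))   # True: uniform magnitude
print(Bx[0], Bx[-1])                               # ~ -B0 and +B0 on either side
```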
In line with a fully force-balanced equilibrium of the system, we use isothermal and isobaric conditions as the initial setup of the model. The third term on the right-hand side (RHS) of Eq. (3) represents the radiative cooling in the optically thin corona, where \(\Lambda(T)\) is the cooling function developed by Colgan et al. (2008) and extended to lower temperatures following Dalgarno & McCray (1972). The precise temperature dependence of \(\Lambda(T)\) was shown in Figure 1 of our previous 2D simulation (SK22). To maintain the initial thermal balance between the optically thin radiative loss and the background heating \(H_{bgr}\) of the system, we prescribe a uniform, time-independent value,

\[H_{bgr}=\rho_{0}^{2}\ \Lambda(T_{0}). \tag{12}\]

The motivation for using the above form is that the radiative cooling term at the initial state exactly compensates the background heating term, and the heating/cooling (mis)balance in the system only occurs after a long-term evolution triggered by the external perturbations (magnetic field perturbations in this study). Note that this heating model is similar to our earlier study in SK22, although here it is uniform in space. However, the role of different heating models, based on power laws of the magnetic field strength and density, on the thermal runaway and condensation processes has been reported by Brughmans et al. (2022), who find that different heating models can change the evolution and morphology of the condensations. Therefore, how different heating rates change the thermal balance of our model will be interesting to study in the future. With a homogeneous density \(\rho_{0}=0.2\) (\(4.68\times 10^{-16}\) g cm\({}^{-3}\) in physical units) and an isothermal atmosphere \(T_{0}=0.5\) (0.5 MK in physical units) as initial condition, we have an initially uniform plasma-\(\beta=0.2\), less than unity as appropriate for the solar corona. We use the initial temperature \(T_{0}=0.5\) MK, which lies in the temperature regime where the adopted cooling function has a very sharp gradient, so that the heating-cooling misbalance is easily triggered by perturbations away from the equilibrium temperature. However, we also notice that the system reaches a thermal runaway state and cool, condensed material forms for another equilibrium temperature, \(T_{0}=1\) MK, as shown in Appendix A. Note from the last term on the RHS of Eq. (3) that thermal conduction plays no role at the initial time, as the system starts off isothermal. Therefore, the system is initially in thermal equilibrium, while the finite value of the resistivity drives the ideal force-balanced state away from its initial state, but only on the (slow) resistive timescale. This setup is liable to linear resistive tearing modes, for which finite resistivity is key, and has all the thermodynamic ingredients to allow for thermal instability. The equilibrium system is perturbed to trigger tearing modes, which can in turn trigger thermal modes, so that the two reinforce each other in a coupled tearing-thermal fashion. 
We use parametrically controlled, monopole-free magnetic field perturbations mainly confined in the vicinity of the \(z=0\) plane (where the initial current sheet is present) and exponentially decaying for \(|z|>0\),

\[\delta B_{x}=-\frac{2\pi}{l}\left[\psi_{01}\cos\left(\frac{2\pi n_{1}x}{l}\right)\sin\left(\frac{2\pi n_{1}y}{l}\right)+\psi_{02}\cos\left(\frac{2\pi n_{2}x}{l}\right)\sin\left(\frac{2\pi n_{2}y}{l}\right)\right]\exp(-z^{2}/l_{s}), \tag{13}\]

\[\delta B_{y}=+\frac{2\pi}{l}\left[\psi_{01}\sin\left(\frac{2\pi n_{1}x}{l}\right)\cos\left(\frac{2\pi n_{1}y}{l}\right)+\psi_{02}\sin\left(\frac{2\pi n_{2}x}{l}\right)\cos\left(\frac{2\pi n_{2}y}{l}\right)\right]\exp(-z^{2}/l_{s}). \tag{14}\]

Here, the parameter \(l=20\times L\) matches the geometric sizes of the simulation domain along the \(x\) and \(y\) directions, the perturbation amplitudes \(\psi_{01}=\psi_{02}=0.1\) ensure a variation of 10% of the magnetic field strength \(B_{0}\), and we take the multi-mode distribution of the perturbations using \(n_{1}=4\) and \(n_{2}=2\). The magnetic field distribution (Eqs. 9-11) and the perturbations (Eqs. 13-14) ensure the solenoidal condition, \(\nabla\cdot\mathbf{B}=\nabla\cdot\delta\mathbf{B}=0\). After the initial setup, the system is allowed to evolve as governed by Eqs. (1-6). The equations are solved numerically using a three-step Runge-Kutta time integration with a second-order slope-limited reconstruction method (Ruuth, 2006) with the 'vanleer' flux limiter (van Leer, 1974), and a Total Variation Diminishing Lax-Friedrichs (TVDLF) flux scheme. We follow the evolution of the system for up to 214.7 minutes and save the data at a cadence of 85.87 s, which gives 151 snapshots. We use periodic boundary conditions along the \(x\) and \(y\) directions, and open boundary conditions along the \(z\) direction. The wall clock time for the entire simulation run is \(\approx 90\) hours using 8 nodes with 288 processors in total, with the GNU Fortran (version 6.4.0) compiler and Open MPI 2.1.2. Note that there are some important differences in the initial conditions of this model with respect to SK22, as follows. (i) This is a force-free magnetic field configuration, and therefore an isobaric and isothermal medium ensures a fully force-balanced equilibrium, whereas the magnetic field configuration in SK22 was non-force-free, and therefore we used a non-uniform density profile (though the initial temperature was also uniform) in such a way that it maintained the force-balanced equilibrium. (ii) The magnetic field strength near the current sheet in SK22 is \(\ll 1\), which sets the plasma-\(\beta\gg 1\) near the current sheet; on the other hand, our current model has an initially uniform and low plasma-\(\beta=0.2\) throughout the entire simulation domain. (iii) The imposed magnetic field perturbations in SK22 were directed both parallel and perpendicular to the current sheet, but in the current work we set the perturbation only parallel to, and concentrated around, the current sheet plane. Besides the difference in affordable numerical resolution (SK22 has a maximum resolution of \(2048^{2}\), whereas the current setup has a maximum resolution of \(512^{3}\)), the system studied here is intrinsically 3D, and is hence more relevant for actual current sheet conditions. 
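The sketch below numerically verifies that the perturbation of Eqs. (13)-(14) is monopole-free, evaluating \(\partial\delta B_{x}/\partial x+\partial\delta B_{y}/\partial y\) at \(z=0\) on a small grid (\(\delta B_{z}=0\), and the Gaussian \(z\)-envelope multiplies both components, so the \(z\)-derivative does not contribute); the grid sizes and names are illustrative.

```python
import numpy as np

l, psi01, psi02, n1, n2 = 20.0, 0.1, 0.1, 4, 2
x, y = np.meshgrid(np.linspace(-10, 10, 256),
                   np.linspace(-10, 10, 256), indexing="ij")

def mode(psi0, n):
    """In-plane perturbation pair (dBx, dBy) of a single mode at z = 0."""
    k = 2 * np.pi * n / l
    dBx = -(2 * np.pi / l) * psi0 * np.cos(k * x) * np.sin(k * y)
    dBy = +(2 * np.pi / l) * psi0 * np.sin(k * x) * np.cos(k * y)
    return dBx, dBy

dBx = mode(psi01, n1)[0] + mode(psi02, n2)[0]
dBy = mode(psi01, n1)[1] + mode(psi02, n2)[1]

# Central-difference estimate of the in-plane divergence
dx = x[1, 0] - x[0, 0]
div = np.gradient(dBx, dx, axis=0) + np.gradient(dBy, dx, axis=1)
print(np.abs(div).max())   # ~ 0, up to the truncation error of the differences
```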
## 3 Results and discussions

### Global evolution

The spatial distribution of the current density squared \(J^{2}\), as well as representative corresponding field line evolutions, are shown in Fig. 1. The equilibrium configuration of the current sheet (at \(t=0\)) is formed due to the fact that the magnetic field shears across \(z=0\), as given by Eqs. 9, 10, and 11. The initial current distribution is mostly oriented in the \(x-y\) plane and concentrated around \(z=0\), with its main contribution from \(J_{x}=-dB_{y}/dz\) and \(J_{y}=dB_{x}/dz\) (while \(J_{z}\) is purely from the perturbed field). Due to the magnetic field perturbations, given by Eqs. 13 and 14, which are (mainly) confined near the \(z=0\) plane and extend along the \(x-y\) directions, linear resistive tearing modes start to develop, leading to the disintegration of the current sheet. The inhomogeneities of \(J^{2}\) in the current sheet plane due to the multimode perturbation (\(n_{1}=4\), \(n_{2}=2\)) appear at the initial stage of the evolution, as shown in Fig. 1a. Magnetic reconnections lead to pronounced magnetic topology changes, modifying the current sheet as shown in Fig. 1b. We see the perturbed field lines near \(z=0\) (see Fig. 1c) turn into extended flux rope-like structures while the system evolves (see Fig. 1d), yet the planar (\(x\)-oriented) field lines away from the current sheet plane remain unperturbed. The signature of flux ropes in the current density distribution in Fig. 1b can be clearly noticed. In a 2D setup, these would be the familiar magnetic islands or plasmoids, due to the development of tearing modes in the current sheet. We notice the self-consistent development of \(B_{z}\) due to the perturbed fields around the current sheet plane, as shown in Fig. 2a (note that the initial \(B_{z}\) was set to zero, with no magnetic field perturbation along \(z\)). Fig. 2b represents the plasma pressure distribution in the \(x-z\) plane (at \(y=0\)), while the variation along the vertical green dashed line is shown in Fig. 2c, which shows the expected linear eigenmode structure of the tearing mode, seen as the kinks in the pressure distribution around \(z=0\). This implies the development of the tearing modes at the initial stage of the current sheet evolution. The nonlinearly developing tearing evolution creates density perturbations in the surroundings of the current sheet, and the radiative cooling (in combination with thermal conduction) becomes dominant over the constant background heating in localized regions of the domain. This triggers the cooling of those regions, which in turn condenses the regions even more. Hence, a runaway process starts where tearing and thermal modes reinforce each other, which causes the spontaneous growth of density and temperature inhomogeneities in the surroundings of the current sheet. Note that our box is periodic along \(x\) and \(y\), so field-aligned thermal conduction does not play a big role in the heating-cooling misbalance of the system (in the sense that it cannot lead to heat fluxes down into lower-lying chromospheric regions: we have no stratification here). It just tries to homogenize the temperature along field lines, in competition with local resistive heating. The temporal variations of the instantaneous maximal plasma density and minimal temperature are shown in Fig. 3a. The peak density increases from the initial uniform \(4.68\times 10^{-16}\) g cm\({}^{-3}\) at \(t=0\) to \(7.11\times 10^{-14}\) g cm\({}^{-3}\) at \(t=214.7\) min. 
A sharp rise of the peak density starts from \(\approx 150\) min. The instantaneous minimum temperature starts from 0.5 MK (the initial uniform equilibrium temperature), drops sharply at the same time as the density peak rises (\(t\approx 150\) min), and reaches down to \(10,148\) K at \(t=214.7\) min. This signals the formation of cool plasma condensations in the system at \(\approx 150\) min, or more than two-and-a-half hours following the initial tearing onset. The variation of the instantaneous maximal velocities (\(v_{x}\), \(v_{y}\) and \(v_{z}\)) in Fig. 3b demonstrates that a dynamical instability of the system occurs at the condensation onset time (\(\approx 150\) min). Here, the velocities are scaled in terms of the Alfvén velocity, \(v_{a}=261\) km s\({}^{-1}\), which is calculated based on the initial equilibrium density and the magnetic field strength of the system. We notice from Figure 4 that the \(B_{z}\) field grows up to \(\approx 0.4\) G in the \(x-z\) plane (at \(y=0\)) at \(t=207.5\) min, which is around an order of magnitude higher than the \(B_{z}(x,z)\) field at \(t=14.3\) min shown in Figure 2a. The kinks that appear in the bottom panel of Figure 4 in the \(B_{z}\) field around the \(z=0\) plane (at \(y=0\)), along \(x=\pm 2\) Mm, show the evolution of the tearing mode around the current sheet plane. The evolution of the density and temperature distributions along three orthogonal planes at \(x=0\), \(y=0\), and \(z=0\), and its 3D visualization, are shown in Fig. 5. Density and temperature inhomogeneities appear in and around the current sheet plane that reflect the initial multimode (\(n_{1}=4\), \(n_{2}=2\)) magnetic field perturbation, as shown at \(t=14.3\) min in Figs. 5a and 5d, respectively. Due to the tearing-associated thermal instability, localized condensed structures can be clearly seen in the \(x=0\), \(y=0\), and \(z=0\) planes at \(t=207.5\) min (see Fig. 5b). The 3D visualization of the density distribution is shown in Fig. 5c, where we see that the condensed structures form around the current sheet plane. These condensed structures correspond to regions that are cooler (\(\sim 10^{4}\) K) than the background medium (see Figs. 5e and 5f). Their order-100 density and temperature contrasts are similar to coronal rain or prominence features. Note that they occur near the evolving current sheet, which is heated up to several million degrees due to effective Ohmic heating. The periodic boundary treatments that we use in this study are acceptable for the entire evolution, since the plasmoid sizes do not reach the lateral domain size of the simulation box. The condensations from thermal instability develop locally, and are expected not to be influenced by the type of boundary used laterally. Histograms of the mass and temperature distributions in the entire domain at \(t=207.5\) min are shown in Fig. 6, where we see in Fig. 6a that 98.8% of the cells within the simulation box contain a mass within the range of \(4.84\times 10^{4}\) to \(1.43\times 10^{5}\) kg (the total number of cells in the simulation box used here is the effective resolution \(512^{3}\)), while the number of cells with mass \(\gtrsim 1.43\times 10^{5}\) kg is very low (\(\approx 1.2\%\)). The mass range containing the most cells is in accord with the mass determined by the initial equilibrium density of the medium (since the density remains almost unperturbed away from the current sheet plane). 
Similarly, most cells contain temperatures in the range of \(\approx 0.32-0.63\) MK, namely 75.1% of the total cells in the box, as shown in Fig. 6b. Note that the initial equilibrium 0.5 MK temperature of the system lies within this range. The fractions of cells with temperatures \(\lesssim 10^{5}\) K and \(\gtrsim 1\) MK are low, 2.1% and 1.2% respectively. This implies that the cool, condensed structures, as well as the hot regions with \(\gtrsim\) MK temperatures (which appear around the current sheet plane due to reconnection-induced heating), are very localised in the medium. To appreciate the thermodynamics of the cool condensed structures, we show slices of plasma pressure and different velocity components in Fig. 7. We notice that the velocities that develop in the different cutting planes in Figs. 7d-7f are associated with pressure gradient driven flows (see Figs. 7a-7c), also called siphon flows. They are directed from higher to lower pressure regions and demonstrate sub-Alfvénic speeds (the Alfvénic Mach number reaches up to \(\approx 0.075\)). Due to plasma accumulation, as evident from the velocity maps, the cool condensation sites develop in these same regions, as shown in Fig. 5. When the thermal runaway sets in after a long-term evolutionary process, the velocities are dominated by pressure gradient flows. In this stage, initial condensation seeds merge with each other to form larger condensation sites, and drag the magnetic field lines along, which entangle and form flux ropes. The heating/cooling (mis)balance in different planes is shown in Fig. 8. We use a uniform (and constant in time) background heating of \(6.244\times 10^{-53}\) erg g\({}^{2}\) cm\({}^{-3}\) s\({}^{-1}\), which is equal to the radiative loss at the initial equilibrium state of the system (there is no field-aligned thermal conduction at the initial state due to the isothermal condition). But this balance breaks down due to tearing-influenced thermal changes around the current sheet, and the radiative loss in some regions near the current sheet plane dominates over the heating. These regions correspond to the cool condensed structures shown in Fig. 5. The regions away from the current sheet plane maintain the heating/cooling balance, as the initial perturbation was concentrated around the current sheet plane, and therefore those regions maintain the initial equilibrium density and temperature of the system. The energetic evolution of the system is shown in Fig. 9. As expected (despite the open boundaries along \(z\)), this shows that the mean total energy density (\(E_{T}\)), which is the sum of the mean kinetic (\(E_{k}\)), magnetic (\(E_{M}\)), and internal (\(E_{int}\)) energy densities (see Eqn. 8), given by

\[E_{k}=\frac{1}{V}\iiint_{V}\frac{\rho v^{2}}{2}\,\mathrm{d}x\mathrm{d}y\mathrm{d}z, \tag{15}\]

\[E_{M}=\frac{1}{V}\iiint_{V}\frac{B^{2}}{2}\,\mathrm{d}x\mathrm{d}y\mathrm{d}z, \tag{16}\]

\[E_{int}=\frac{1}{V}\iiint_{V}\frac{p}{\gamma-1}\,\mathrm{d}x\mathrm{d}y\mathrm{d}z, \tag{17}\]

respectively (where \(V\) is the total volume of the simulation box), is nearly conserved in time. The resistivity and thermal conduction effects do not cause any deviation from total energy conservation, and only the heating/cooling misbalance may lead to net energy losses (or gains). 
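As an aside on post-processing, the following NumPy sketch shows how the volume-averaged energy densities of Eqs. (15)-(17) reduce to plain means over cells on a uniform grid (magnetic units with unit permeability); the toy arrays are illustrative, and a real AMR snapshot would require cell-volume weighting.

```python
import numpy as np

gamma = 5.0 / 3.0
shape = (64, 64, 64)                     # toy uniform grid
rng = np.random.default_rng(2)
rho = 0.2 + 0.01 * rng.random(shape)     # mock density, pressure, v and B snapshots
p = 0.1 * np.ones(shape)
v = 0.01 * rng.standard_normal((3, *shape))
B = 0.1 * rng.standard_normal((3, *shape))

# Eqs. (15)-(17): on a uniform grid, the volume integrals divided by V
# reduce to straight means over all cells.
E_k   = np.mean(0.5 * rho * np.sum(v**2, axis=0))
E_M   = np.mean(0.5 * np.sum(B**2, axis=0))
E_int = np.mean(p / (gamma - 1.0))
print(E_k, E_M, E_int, E_k + E_M + E_int)
```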
Due to the resistive MHD evolution of the system, the energy exchange between the magnetic and internal energies shows an anti-correlated nature while the system evolves. This energy exchange occurs due to the mean Ohmic dissipation, which is quantified as \[E_{ohm}=\frac{1}{V}\iiint_{V}\eta J^{2}\mathrm{d}x\mathrm{d}y\mathrm{d}z. \tag{18}\] We notice that the mean magnetic energy density decreases up to \(\approx 150\) min due to the release of magnetic energy through reconnection. Thereafter, when the thermal runaway process happens in the system in a coupled tearing-thermal fashion, the field lines start to entangle with each other and form flux ropes. This generates magnetic stress, and leads to the enhancement of the magnetic energy density. However, the open boundaries along the \(z-\)direction, which are sufficiently far from the central current sheet plane, allow magnetic energy flux to flow through the boundaries. The current density distributes rapidly in space due to the implemented perturbation, and therefore we see a sharp drop of the volume averaged \(J^{2}\) initially. The temporal evolution of the mean kinetic energy density shows that the system stays in a (nearly) quasi-equilibrium state up to \(\approx 150\) min (which is the onset time of condensations), after which the kinetic energy rises sharply due to the tearing-thermal coupled unstable evolution.

Figure 4: Signature of tearing modes around the fragmented current sheet plane at \(t=207.5\) min. The top panel represents the variation of \(B_{z}\) along \(x-z\) at the \(y=0\) plane. The bottom panel shows the variation of \(B_{z}\) along the dashed vertical lines in the top panel at \(x=\pm 2\) Mm.

Figure 3: Temporal variation of the (a) instantaneous maximal plasma density (blue line) and minimum temperature (red line), and (b) instantaneous absolute peak velocities. The sharp rise of the density, and drop of the minimum temperature over two orders of magnitude at \(\approx 150\) min, signal the runaway thermal instability causing local condensations, when the absolute peak velocities \(v_{x}\), \(v_{y}\), and \(v_{z}\) also rise sharply. Here, the velocities are scaled in terms of the Alfvén velocity, \(v_{a}\approx 261\) km s\({}^{-1}\).

### Synthetic Observation

The _Solar Dynamics Observatory_ (Pesnell et al. 2012) is a near-Earth orbiting satellite suite capable of routinely observing the Sun from its photosphere to corona. The _Atmospheric Imaging Assembly_ (Lemen et al. 2012) on board captures images of the solar atmosphere with a temporal cadence of \(\sim 12\) s at a spatial resolution of \(\sim 0.6^{"}\) per pixel across a range of ultraviolet and extreme ultraviolet wavelengths, the latter mainly associated with the different ionisation states of Iron, namely Fe xii - xxiv. Emission from such highly ionised Iron corresponds to coronal temperatures in the broad range of a few hundred kK to around 20 MK.

Figure 5: Spatial distribution of density (top panel) and temperature (bottom) in the 3D domain, where the distances along the \(x,y,\) and \(z\) directions are in units of \(10^{4}\) km. Density (a and b) and temperature (d and e) distributions along three orthogonal slices along the \(x=0,\)\(y=0,\) and \(z=0\) planes for two different times, \(t=14.3\) and 207.5 min, are shown. (c) and (f) represent isosurface views (five isosurfaces ranging from minimum to maximum values) of density and temperature, respectively. Panels (a) and (d) are the early phase of the evolution, where the density and temperature inhomogeneities appear around the current sheet plane due to the multimode magnetic field perturbation. Panels (b) and (c) illustrate where high density structures appear, cospatial with the cool (\(\sim 10^{4}\) K) regions in (e) and (f). (An animation is available online.)

The emission coefficient for such coronal plasma is given by,
\[j_{\lambda}(\tau)=\frac{A_{b}}{4\pi}\,n_{e}^{2}(\tau)\,G_{\lambda}(n_{e}(\tau),T(\tau))\,, \tag{19}\] where \(A_{b}\) is the abundance of the emitting species, \(n_{e}\) is the ambient electron number density, and \(G_{\lambda}\) is the contribution function for a specific wavelength, indicated to be additionally dependent on \(n_{e}\) as well as temperature \(T\). In the absence of a modelled \(n_{e}\), it is instead approximated using local thermodynamic equilibrium Saha-Boltzmann statistics. This contribution function is precomputed for each of the _Atmospheric Imaging Assembly_ passbands using the CHIANTI atomic package for a range of electron number densities and temperatures between \(10^{6}-10^{12}\) cm\({}^{-3}\) and \(10^{4}-10^{8}\) K, respectively (Landi & Reale 2013; Verner et al. 1996). Here, \(\tau\) denotes the local optical depth, computed within each voxel that a given line of sight intersects as the product of the local absorption coefficient \(\alpha_{\lambda}\) and the length of the ray within that voxel.

Figure 6: Histograms for the (a) mass and (b) temperature distributions at \(t=207.5\) min. The cells containing the minimum and maximum masses are \(4.8\times 10^{4}\) and \(2.7\times 10^{6}\) kg respectively, and the temperature minimum and maximum for the cells are 5000 K and 3.1 MK respectively. The number of bins for (a) and (b) is 20, with bin sizes of \(1.38\times 10^{5}\) kg and 0.15 MK respectively.

The atmosphere of the Sun is optically thin at the wavelengths corresponding to the emission by these Fe lines. As such, the standard approach to synthesizing simulations so as to resemble the appearance of structures as seen by _Solar Dynamics Observatory/Atmospheric Imaging Assembly_ is to employ Eq. 19 in a local manner and apply an arbitrary line of sight integration according to the position of an observer. For structures that are majority-comprised of material at coronal temperatures, such as coronal loops, this is deemed a sufficient approach (Van Doorsselaere et al. 2016; Gibson et al. 2016).

Figure 8: Distribution of radiative cooling (RC) losses for optically thin coronal conditions at \(t=207.5\) min in different planes.

Figure 7: Distribution of plasma pressure (top panels) and velocity (bottom panels) at \(t=207.5\) min in different planes, where the velocities are scaled in units of the Alfvén velocity, \(v_{a}\approx 261\) km s\({}^{-1}\).

Cool condensations within the solar corona appear dim in extreme ultraviolet contrast (cf. Carlyle et al. 2014), and indeed the value of \(G_{\lambda}\) for the _Atmospheric Imaging Assembly_ passbands that we consider is many orders of magnitude lower at condensation temperatures than at coronal temperatures. However, the strong contrast is not only due to small \(G_{\lambda}\), but also the direct absorption and removal of background EUV photons from the light beam (Kucera et al. 1998).
This is due to a number of the wavelengths observed by _Solar Dynamics Observatory/Atmospheric Imaging Assembly_ lying below the head of the Hydrogen Lyman continuum at 912 Å, and so H i, He i, and He ii (with characteristic temperatures \(<10\) kK) are photo-ionised by this extreme ultraviolet emission up to the ionisation continuum of He ii at 227 Å (Williams et al. 2013). Hence, extreme ultraviolet photons are progressively removed from the line of sight if such cool material is encountered. The absorption coefficient as a consequence of this extreme ultraviolet photo-ionisation can be approximated in local thermodynamic equilibrium by, \[\alpha_{\lambda}=(n_{H}(\tau)+n_{He}(\tau))\sum_{s}w_{s}(\tau)\,A_{b,s}\,\sigma_{s}, \tag{20}\] where \(s\) refers to the photo-ionised species, and \(A_{b,s}\) and \(\sigma_{s}\) are the assumed abundance and ionisation cross-section of species \(s\), measured observationally and experimentally/theoretically, respectively. The summation weights \(w_{s}(\tau)\) are the ratios of the number densities, \(w_{\rm H\,i}=1-n_{\rm H\,ii}/n_{\rm H}\), \(w_{\rm He\,i}=1-(n_{\rm He\,ii}+n_{\rm He\,iii})/n_{\rm He}\), and \(w_{\rm He\,ii}=n_{\rm He\,ii}/n_{\rm He}\). To obtain an approximation to these weights, and in accordance with our previous assumption of local thermodynamic equilibrium, we iteratively solve for each of the considered population densities using the Saha equation and associated partition functions. We consider convergence under the local thermodynamic equilibrium assumption to be achieved once the absolute relative difference of \(n_{e}\) between iterations drops below an arbitrary value of \(10^{-4}\) (a method developed by Zhou et al. 2019). One must necessarily consider this photo-ionisation to correctly approximate the appearance of cold plasma condensations, if present, when synthesizing simulations of the solar atmosphere (Jenkins and Keppens 2022).

Both the emission and absorption quantities as defined above are purely local properties; the total emergent intensity \(I_{\lambda}(\tau_{\lambda})\) along a given line of sight through these local voxels is then given by the integral form of the transport equation, \[I_{\lambda}\left(\tau_{\lambda}\right)=I_{\lambda}(0)\,\mathrm{e}^{-\tau_{\lambda}}+\int_{0}^{\tau_{\lambda}}S_{\lambda}\left(\tau_{\lambda}^{\prime}\right)\mathrm{e}^{-\left(\tau_{\lambda}-\tau_{\lambda}^{\prime}\right)}\mathrm{d}\tau_{\lambda}^{\prime}, \tag{21}\] where the combined influence of \(j_{\lambda}\) and \(\alpha_{\lambda}\) is taken into account in the source function \(S_{\lambda}=j_{\lambda}/\alpha_{\lambda}\), \(\tau_{\lambda}\) now represents the total optical thickness along the chosen line of sight, and \(I_{\lambda}(0)\) is the intensity of any background illumination entering the line of sight (Rybicki and Lightman 1986). The non-standard inclusion of the absorption coefficient requires every local voxel, with its respective optical depth \(\tau_{\lambda}^{\prime}\), to have access to a globally integrated and line of sight-specific optical depth \(\tau_{\lambda}\). Such a requirement is not compatible with the block-based architecture of MPI-AMRVAC and is hence completed in post processing using a combination of yt-project, numpy, scipy, and matplotlib in python. The implementation here represents an update of that previously presented in Jenkins and Keppens (2022).
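As a concrete illustration of this post-processing step, below is a minimal numpy/scipy sketch of the discretised Eq. (21) along one ray, with local emissivities from Eq. (19). The table axes and placeholder \(G_{\lambda}\) values are our assumptions; the actual pipeline uses CHIANTI tables and the yt-project machinery described above.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder contribution-function table G(n_e, T) for one AIA passband,
# tabulated on log axes spanning 1e6-1e12 cm^-3 and 1e4-1e8 K (cf. Eq. 19).
log_ne, log_T = np.linspace(6, 12, 61), np.linspace(4, 8, 41)
G_tab = np.full((61, 41), 1e-26)          # dummy values, not CHIANTI data
G = RegularGridInterpolator((log_ne, log_T), G_tab,
                            bounds_error=False, fill_value=0.0)

def emergent_intensity(ne, T, alpha, ds, A_b=1.0, I0=0.0):
    """Discrete Eq. (21) along one ray; voxel index 0 is nearest the observer.

    ne, T, alpha are 1D arrays sampled along the ray; ds is the path
    length per voxel. Local emission follows Eq. (19).
    """
    pts = np.stack([np.log10(ne), np.log10(T)], axis=-1)
    j = A_b / (4.0 * np.pi) * ne**2 * G(pts)       # local emissivity
    dtau = alpha * ds                              # per-voxel optical depth
    tau_front = np.cumsum(dtau) - dtau             # depth between voxel and observer
    S = np.where(alpha > 0.0, j / np.maximum(alpha, 1e-300), 0.0)
    return I0 * np.exp(-dtau.sum()) + np.sum(S * (1.0 - np.exp(-dtau))
                                             * np.exp(-tau_front))
```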
Ground-based observatories do not have access to the extreme ultraviolet wavelengths recorded by _Solar Dynamics Observatory/Atmospheric Imaging Assembly_, and instead commonly observe, amongst others, the strong Hydrogen \(n=3\,\rightarrow\,n=2\) (H\(\alpha\)) line at 6563 Å. This line is known to straddle the optically thick - thin divide under solar atmospheric conditions, and a complete handling of the plasma-light interaction of such photons through the simulation domain would require non-local thermodynamic equilibrium modelling, outside the scope of this study (but we point an interested reader to the recent work of Jenkins et al. 2023, for comparison). Instead, Heinzel et al. (2015) reported an approximate approach to relating local pressure and temperature conditions to the H\(\alpha\) opacity (\(\alpha_{\lambda}\)) according to their series of 1.5D radiative transfer models. A key property of these models considers the source function of Eq. 21 to remain constant along a given line of sight, which enables the following simplification, \[I_{\lambda}(\tau_{\lambda})=I_{\lambda}(0)\,\mathrm{e}^{-\tau_{\lambda}}+S_{\lambda}(1-\mathrm{e}^{-\tau_{\lambda}}). \tag{22}\] The resulting emergent (specific) intensity of the H\(\alpha\) line is therefore found through a line of sight integration of the approximate \(\alpha_{\lambda}\) according to the tables of Heinzel et al. (2015), wherein the authors also provide a coarse height-dependent estimate of the constant value of \(S_{\lambda}\).

To convert the physical variables (plasma density and temperature) into spectroscopic observables (namely specific intensity), we generate the synthetic maps of the simulation output using forward modeling. The LOS-integrated specific intensity depends on the (theoretical) viewing position of the observer. Thus, owing to the spatial distribution of the condensation substructures, the synthetic maps show differences for different LOS views. Fig. 10 shows the synthetic maps of the reoriented spatial domain (we now show the current sheet vertically) capturing the internal structures as viewed sideways, representative of the solar limb. We do so for three different extreme ultraviolet passband filters of AIA, 171, 193, and 304 Å, using the contribution functions of their specific spectral lines, which highlight material at \(\approx 0.8\) MK, 1.5 MK, and 80 kK respectively, for a LOS integration along our \(y\) direction at \(t=207.5\) min. An animation of the synthetic maps for different LOS directions around the \(z\) axis is available online. Due to the absorption features of the condensed plasma regions, the strongest absorption corresponds to the LOS direction along the \(y\) direction, as the condensations are aligned with it (see Fig. 5). The cool, condensed plasma appears darker in the AIA 171, 193, and 304 Å passband filters due to the photo-ionisation of H i, He i, and He ii as previously outlined. For the remaining optical H\(\alpha\) line, we obtain a positive intensity in the absence of any background illumination, as would be the case for, amongst others, prominences and jets positioned at or above the limb.
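Under the constant-source-function assumption of Eq. (22), the H\(\alpha\) synthesis needs only the total optical thickness of each ray. A minimal sketch (the tabulated opacities and the height-dependent \(S_{\lambda}\) from Heinzel et al. 2015 are assumed to be supplied as inputs):

```python
import numpy as np

def halpha_intensity(alpha, ds, S_const, I0=0.0):
    """Eq. (22): emergent H-alpha intensity for one line of sight.

    alpha is the per-voxel H-alpha opacity sampled along the ray (from the
    Heinzel et al. 2015 tables), ds the path length per voxel, S_const the
    assumed constant source function, and I0 any background illumination.
    """
    tau = np.sum(alpha * ds)                  # total optical thickness
    return I0 * np.exp(-tau) + S_const * (1.0 - np.exp(-tau))
```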
The island-like structures near \(z=0\) in the EUV maps in Fig. 10 represent plasmoids, which are the manifestation of the extended flux ropes along the \(y\) direction, as shown in Fig. 1d. The central current sheet is hotter than the surroundings, and therefore we see the widened area of the bright band in the AIA 171 and 193 filters. The small dark features that appear in the EUV maps are due to absorption by the dense materials, as those are located along the \(y\)-direction. These cool materials with temperature \(\sim 10^{4}\) K appear bright in the H\(\alpha\) map. The synthetic observations for other LOS directions (shown in the animation) demonstrate a wide range of cool dense structures distributed along the \(x\) and \(y\) directions.

Figure 9: Time series of the mean kinetic (\(E_{k}\)), magnetic (\(E_{M}\)), internal (\(E_{int}\)), and total (\(E_{T}\)) energy densities, and the ohmic heating rate (\(E_{ohm}\)), normalised with respect to their maximum values, which are \(1.89\times 10^{-3}\), \(1.61\times 10^{-1}\), \(7.40\times 10^{-2}\), \(2.13\times 10^{-1}\) erg cm\({}^{-3}\), and \(2.23\times 10^{-9}\) erg cm\({}^{-3}\) s\({}^{-1}\), respectively.

The resolution of the presented simulation is 390 km\({}^{2}\) per pixel. In all cases, the spatial resolution of our synthesized images has been modified to match that of an equivalent observatory. For the EUV passbands, this is set to the instrumental resolution of the AIA filters (\(\approx 430\) km\({}^{2}\) per pixel), and leads to a minor smearing of the finer structures inside the finest condensations. This blurring is stronger for the H\(\alpha\) line, where the resolution is set to \(1^{{}^{\prime\prime}}=725.27\) km so as to match the GONG ground-based instruments (Harvey et al., 2011). It will be useful to increase our model resolution for comparisons against the state-of-the-art observations anticipated from Solar Orbiter High Resolution Imager (HRI) campaigns (Rochus et al., 2020).

## 4 Summary and Conclusions

The study of the combined tearing-thermal instability of an idealized 3D current sheet configuration addressed in this work aims to understand the theoretical basis of the multi-mode evolution of a current sheet, which is an important aspect of the dynamics and multi-thermal behaviour of the solar atmosphere. In contrast to our earlier 2D simulations of a non-force-free current sheet (SK22), where thermal runaway was happening simultaneously with chaotic tearing, and condensations were trapped and gathered within coalescing plasmoid structures, the current setup shows a clear tearing evolution at first, leading to 3D topological changes in the magnetic field, and later on shows runaway condensations near the central current sheet. Note that there are some important differences between the initial setup of the current model and the one used in SK22, as explained in section 2, most notably perhaps the plasma beta regime, which is uniformly low in the current 3D simulation. Here, the condensation onset time in the current model is much later than in SK22, because the initial setup and adopted perturbations do not directly modify the heat-loss balance, and a purely tearing evolution starts at first. In a 3D long-term simulation of a macroscopic current sheet, we find that cool plasma condensations are produced in the vicinity of the current sheet due to tearing-influenced thermal instability.
Our findings are based on a 3D resistive MHD simulation with the non-adiabatic effects of radiative cooling (for an optically thin medium), background heating, and magnetic field-aligned thermal conduction, which are relevant for the solar corona. We find that the plasma density of the condensations can go up to \(\sim 10^{-14}\) g cm\({}^{-3}\), which is two orders of magnitude more than the initial background density (\(4.6\times 10^{-16}\) g cm\({}^{-3}\)), and the temperature of the condensations can drop down to \(\sim 10^{4}\) K, which is an order of magnitude less than the initial equilibrium temperature (0.5 MK) of the medium. These locally, in-situ forming cool condensed structures hence show thermodynamic contrasts with their surroundings similar to those of coronal rain or prominences observed in the solar atmosphere. This is highlighted by synthetic views, which account for important absorption effects within the synthesis of the EUV channels of _Solar Dynamics Observatory/Atmospheric Imaging Assembly_. However, our model ignores the effect of the stratified solar atmosphere due to solar gravity. The condensation time scale in this idealized current sheet model is \(\approx 150\) min, which is longer than the \(\approx 30\) min time scale found in the earlier study of a post-flare coronal rain model by Ruan et al. (2021), who use a stratified atmosphere and reveal the multi-thermal aspects of a post-flare loop underneath the current sheet and reconnection sites. In contrast, our study demonstrates the development of multi-thermal plasma (\(\sim 10\) kK - MK) in and around the current sheet and reconnection sites. Also, earlier models of coronal rain in arcades show a clear tendency to develop strong shearing motions (Li et al., 2022; Zhou et al., 2021; Fang et al., 2015), and velocity shear can alter the tearing mode growth, which we plan to explore by incorporating velocity shear flows into the model. Future efforts should exploit a more realistic 3D model by using an initial magnetic field configuration extrapolated from an active region vector magnetogram. Nevertheless, the current study sheds new light on the instability of solar current sheets due to the combined effect of tearing and thermal modes, and unifies multi-thermal processes in current sheets with the formation mechanism of cool condensations such as prominences and coronal rain in the solar atmosphere, which are important aspects in the understanding of broader solar coronal heating.

###### Acknowledgements.

Data visualization and analysis are performed using VisIt and yt-project. SS and RK acknowledge support by the C1 project TRACESpace funded by KU Leuven. JJM and RK acknowledge support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 833251 PROMINENT ERC-ADG 2018) and a FWO project G0804521N.

Figure 10: Specific intensity counterparts to the simulation, where we account for emission and absorption. Synthetic maps for the broadband 171, 193, 304 Å SDO/AIA filters, and a narrowband Hydrogen-H\(\alpha\) filter, for a LOS view along the \(y\) direction at \(t=207.5\) min, are shown from the left to right panels, respectively. An animation of this figure for a rotating LOS view around the \(z\) axis (from 0 to 360\({}^{\circ}\)) is available online. This shows the limb view of a current sheet that undergoes thermal-tearing evolution.
The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation Flanders (FWO) and the Flemish Government - department EWI. RK acknowledges the International Space Science Institute (ISSI) in Bern, ISSI international team project #545.
2304.08952
A Hyper-network Based End-to-end Visual Servoing with Arbitrary Desired Poses
Recently, several works achieve end-to-end visual servoing (VS) for robotic manipulation by replacing traditional controller with differentiable neural networks, but lose the ability to servo arbitrary desired poses. This letter proposes a differentiable architecture for arbitrary pose servoing: a hyper-network based neural controller (HPN-NC). To achieve this, HPN-NC consists of a hyper net and a low-level controller, where the hyper net learns to generate the parameters of the low-level controller and the controller uses the 2D keypoints error for control like traditional image-based visual servoing (IBVS). HPN-NC can complete 6 degree of freedom visual servoing with large initial offset. Taking advantage of the fully differentiable nature of HPN-NC, we provide a three-stage training procedure to servo real world objects. With self-supervised end-to-end training, the performance of the integrated model can be further improved in unseen scenes and the amount of manual annotations can be significantly reduced.
Hongxiang Yu, Anzhe Chen, Kechun Xu, Zhongxiang Zhou, Wei Jing, Yue Wang, Rong Xiong
2023-04-18T12:45:55Z
http://arxiv.org/abs/2304.08952v1
# A Hyper-network Based End-to-end Visual Servoing with Arbitrary Desired Poses

###### Abstract

Recently, several works achieve end-to-end visual servoing (VS) for robotic manipulation by replacing the traditional controller with differentiable neural networks, but lose the ability to servo arbitrary desired poses. This letter proposes a differentiable architecture for arbitrary pose servoing: a hyper-network based neural controller (HPN-NC). To achieve this, HPN-NC consists of a hyper net and a low-level controller, where the hyper net learns to generate the parameters of the low-level controller and the controller uses the 2D keypoint error for control like traditional image-based visual servoing (IBVS). HPN-NC can complete 6 degree of freedom visual servoing with large initial offset. Taking advantage of the fully differentiable nature of HPN-NC, we provide a three-stage training procedure to servo real world objects. With self-supervised end-to-end training, the performance of the integrated model can be further improved in unseen scenes and the amount of manual annotations can be significantly reduced.

## I Introduction

Visual servoing (VS) is a technique that uses vision feedback to guide the robot to achieve high-precision positioning. In classical VS[1, 2, 3, 4], a set of handcrafted visual features such as points, lines, contours, and moments are extracted and compared with the features of a pre-defined desired pose. A manually designed controller then moves the camera to the pre-defined desired pose by reducing the feature error between the desired and current poses. With the development of deep learning, some learning-based methods have emerged to reduce the excessive manual effort in VS. [5, 6, 7, 8] use Convolutional Neural Networks (CNNs) to process the images observed at the current and the desired pose separately and estimate the relative pose, followed by a position-based visual servo (PBVS) controller[1]. They get rid of expensive manual feature annotations by considering the whole image as a feature. However, the pose estimation performance depends on the similarity of the input image pairs, so the offset between the initial pose and the desired pose cannot be large. [9, 10, 11] use deep learning to improve the reliability of 2D correspondence extraction, followed by an image-based visual servo (IBVS) controller[1] or IBVS-based MPC[11]. The consistency of features enables VS with large pose offset. But IBVS has inherent drawbacks such as a small convergence region and local minima[12, 13]. Recently, several works achieve self-supervised end-to-end VS for robotic manipulation tasks[14, 15, 16]. Given a real world target object, they take the 2D keypoints extracted in an unsupervised manner as features, which avoids both manual keypoint annotations and pose estimation. By replacing IBVS with a neural controller, they make the whole architecture differentiable, which enables self-supervised end-to-end training. However, these methods have an obvious weakness in that they are designed to servo a fixed desired pose, thus losing the ability of the traditional VS controller to handle arbitrary desired poses. By only taking the image observed at the current pose as input, they lack information about the desired pose. This means that if the number of desired poses is limited, they will have to train several neural controllers and select the appropriate one according to the given desired pose.
But considering a scenario that needs to change the desired pose constantly, these methods would have to train infinitely many controllers or frequently fine-tune the controller to ensure VS performance. _How to implement a lightweight differentiable neural controller that can servo arbitrary desired poses remains a challenging problem._

In this letter, we investigate an appropriate neural controller architecture capable of servoing arbitrary desired poses. To servo randomly sampled desired poses in the 6 degree of freedom (DOF) space, it is not practical to train an infinite number of controllers. Simply adding an input that encodes information of the desired pose would inevitably enlarge the network volume and prolong the inference time. In contrast, we define servoing a single desired pose as a task, and state servoing arbitrary desired poses as a multi-task learning problem. We propose a hyper-network (HPN)[17] based neural controller (HPN-NC) to tackle this problem. HPN-NC consists of a hyper net and a low-level neural controller. Following [14, 15, 16], we use 2D keypoints as features. As shown in Fig. 1, given the 2D keypoints extracted at a desired pose, the hyper net generates the parameters for the low-level controller corresponding to this desired pose. The modulated low-level controller takes the error between the current and desired 2D keypoints for control inference. In this way, we can generate a unique controller for each desired pose, which avoids endless fine-tuning. HPN-NC outperforms traditional IBVS and other neural controllers (NCs). To servo real world objects, we connect it with a supervised neural observer (NO) and provide a three-stage training procedure: training HPN-NC in simulation with synthetic data, training NO with manual keypoint annotations, and end-to-end training of the integrated model (IM: NO with NC) with all synthetic, annotated and robot self-supervision data. Taking advantage of the fully differentiable nature of HPN-NC, IM can be further improved fully automatically in an end-to-end manner when transferring to unseen scenes. Note that the amount of manual annotations can also be significantly reduced through end-to-end training, as they are only used as a regularizer.

Fig. 1: The upper coordinate frame represents the camera's frame and the lower represents the target object's frame. Given the keypoints of desired poses, HPN generates a unique neural controller for each pose. For example, given the keypoints observed at desired Pose 1, HPN generates Controller 1 that helps the camera move to Pose 1 from arbitrary initial poses.

Overall, the contributions of this paper are three-fold:

* We are the first to state servoing arbitrary desired poses with a neural network as a multi-task learning problem. To solve this problem, we propose HPN-NC. It outperforms other network structures when servoing arbitrary desired poses in both simulation and real world experiments.
* Neural controllers can be further fine-tuned fully automatically with self-supervised end-to-end training. HPN-NC's fine-tuning ability outperforms other NCs given error-free 2D keypoints in simulation. Given imprecise 2D keypoints in unseen real world scenes, the performance of the IM consisting of NO and HPN-NC can be further improved.
* We also consider a situation with insufficient manual annotations. Self-supervised end-to-end training enables IM to achieve a 92\(\%\) VS success rate with only 30 annotations.
## II Related Work

**Traditional VS Methods:** Classical VS methods can help the robot achieve high-precision positioning through vision feedback, but rely heavily on handcrafted visual features, manual labeling or recognizable QR codes [3, 4]. Traditional VS controllers include IBVS[1, 18], PBVS[19, 20] and hybrid approaches [21, 22]. PBVS uses the relative pose between the current and desired pose as the visual feature and plans a globally asymptotically stable straight trajectory in 3D Cartesian space. IBVS uses matched keypoints on the 2D image plane, which is insensitive to calibration error, but suffers from a small convergence region due to high non-linearity [12, 13]. It may also meet the feature loss problem[23] when dealing with large initial pose offsets.

**Deep Learning Based Methods:** Deep Neural Networks[24, 25] have shown remarkable feature extraction ability on various tasks such as detection, segmentation or tracking, and also alleviate the dependency on manual effort for the VS task. Pose estimation methods[6, 5, 7] usually bypass 2D keypoint prediction and estimate the relative pose, or directly predict control commands, from image pairs observed at the current and the desired poses. [6] implements a deep neural network to estimate the relative pose between the current camera pose and the desired camera pose, then performs PBVS based on the relative pose. [5] trains a convolutional neural network over the whole image with synchronised camera poses to guide the quadrotor. [7] proposes a new neural network based on a Siamese architecture which outputs the relative pose between any pair of images and realizes VGA connector insertion with submillimeter accuracy. As the input image pairs have to be similar to ensure the performance of the learning-based pose estimator, camera pose offsets between the desired and the initial poses are limited. Keypoint based methods[9, 10, 11, 16, 14, 15] extract 2D keypoints for the subsequent controller. [9, 10, 11] use neural networks to predict matched visual features or optical flow, then calculate control commands through an IBVS controller. However, IBVS has inherent deficiencies and may fail to servo the desired pose with a large initial pose offset. Other methods[16, 14, 15] use neural controllers instead of the IBVS controller. [16] learns policies that map raw image observations directly to torques at the robot's motors through deep convolutional neural networks with self-supervised learning. [14] learns the 2D keypoint representation from the image with an auto-encoder and learns the motion based on the extracted keypoints. The controllers in the above two methods are trained end-to-end by self-supervised learning. The extracted keypoints can also be used to learn robot motion with end-to-end reinforcement learning [15].

## III Methods

HPN-NC generates a unique neural controller for each desired pose in 6 DOF space. In this section, we first introduce the implementation details of HPN-NC in Section III-A. In Section III-B, we introduce a three-stage training procedure that enables HPN-NC to servo real world objects and to adapt to unseen scenes.

### _Hyper-network Neural Controller_

As shown in the right part of Fig. 2, HPN-NC has a hyper network (HPN, in pink) specialized in encoding the information of desired poses, and a low-level neural controller (NC, in blue) responsible for servoing the given pose. Both the upper HPN and the low-level NC are lightweight three-layer fully connected neural networks.
HPN takes the pixel coordinates \(\mathbf{s}^{*}\in\mathbb{R}^{2\times n}\) of the \(n\) 2D keypoints extracted at the desired pose as input and outputs the weights and biases of the last layer of the low-level NC. The last two layers of NC have 128 and 6 units respectively, so HPN outputs 128\(\times\)6 weights and 6 biases. The low-level NC takes \(\mathbf{e}=\mathbf{s}^{*}-\mathbf{s}\) as input, just like the traditional IBVS controller, which is the coordinate error of the 2D keypoints between the desired pose and the current pose. The output of NC is the 6-dimensional control command \(\bar{{}^{c}}\mathbf{V}_{c}=\left[\begin{array}{cc}v_{c}&\omega_{c}\end{array}\right]^{T}\in\mathbb{R}^{6}\), consisting of the instantaneous camera linear velocity and angular velocity under the camera frame. Since the last layer parameters of NC are determined by the desired pose, we are able to generate an independent controller for each desired pose and avoid any fine-tuning when switching the desired pose.

HPN-NC is trained in Pybullet simulation automatically. As shown in the left part of Fig. 2, like traditional VS, we first set the virtual camera to a random desired pose under the object frame \({}^{o}\mathbf{T}_{c^{*}}\) to get the desired 2D keypoints \(\mathbf{s}^{*}\), \[{}^{o}\mathbf{T}_{c^{*}}=\left[\begin{array}{cc}{}^{o}\mathbf{R}_{c^{*}}&{}^{o}\mathbf{t}_{c^{*}}\\ 0&1\end{array}\right]\in\mathbb{R}^{4\times 4} \tag{1}\] where \(o\) represents the object frame and \(c^{*}\) represents the desired camera frame. Given the 3D model of the object and the camera intrinsic matrix, we can obtain \(\mathbf{s}^{*}\) by projection. Taking the 2D keypoints \(\mathbf{s}^{*}\) of the desired pose as input, HPN infers the parameters of NC's last layer, while the other two layers of NC are identical for any desired pose, \[\mathbf{\theta}_{NC}=f^{HPN}_{\mathbf{\theta}_{HPN}}\left(\mathbf{s}^{*}\right) \tag{2}\] Then, we set the camera to a random initial pose \({}^{o}\mathbf{T}_{c}\), and get the current keypoints \(\mathbf{s}\). The low-level NC takes the keypoint error \(\mathbf{e}=\mathbf{s}^{*}-\mathbf{s}\) as input and outputs the camera velocity, \[\bar{{}^{c}}\mathbf{V}_{c}=f^{NC}_{\mathbf{\theta}_{NC}}\left(\mathbf{s}^{*}-\mathbf{s}\right)=f^{NC}_{\mathbf{\theta}_{HPN},\mathbf{s}^{*}}\left(\mathbf{s}^{*}-\mathbf{s}\right) \tag{3}\] PBVS provides the supervision, as its trajectory in 3D space is an efficient and secure straight line. We get the supervision through the desired pose \({}^{o}\mathbf{T}_{c^{*}}\) and the current pose \({}^{o}\mathbf{T}_{c}\): \[\bar{{}^{c}}\mathbf{V}^{PBVS}_{c}=-\lambda\left[\begin{array}{c}{}^{c^{*}}\mathbf{R}^{T}_{c}\cdot{}^{c^{*}}\mathbf{t}_{c}\\ \theta u\end{array}\right] \tag{4}\] where \(\theta u\) is the axis-angle of the rotation between the current and the desired pose, and \(\lambda\) is a coefficient uniformly set to 0.4 in this work. We denote the input and output tuples to be \(q_{NC}\) and the collected training dataset to be \(D_{NC}\): \[\begin{split} q_{NC}\triangleq(\mathbf{s}^{*},\mathbf{s},\bar{{}^{c}}\mathbf{V}^{PBVS}_{c})\\ D_{NC}=\{q_{NC}\}\end{split} \tag{5}\] We use the MSE loss \(\mathcal{L}_{NC}\) for training and the dataset aggregation (DAgger) technique[26] for training acceleration: \[\mathcal{L}_{NC}=\|\bar{{}^{c}}\mathbf{V}^{PBVS}_{c}-f^{NC}_{\mathbf{\theta}_{HPN},\mathbf{s}^{*}}\left(\mathbf{s}^{*}-\mathbf{s}\right)\|^{2}_{2} \tag{6}\]
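A minimal PyTorch sketch of this architecture is given below. The hyper net's hidden widths are our assumptions (the paper only fixes the last two NC layers at 128 and 6 units); the sketch illustrates Eqs. (2)-(3), and is not the authors' implementation.

```python
import torch
import torch.nn as nn

class HPNNC(nn.Module):
    """Hyper net maps the 2n desired-keypoint coordinates to the last-layer
    weights/biases of the low-level NC (hidden sizes assumed)."""
    def __init__(self, n_kp=4, hidden=128):
        super().__init__()
        self.hyper = nn.Sequential(                   # upper HPN
            nn.Linear(2 * n_kp, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, hidden * 6 + 6))           # 128x6 weights + 6 biases
        self.nc_shared = nn.Sequential(               # first two NC layers,
            nn.Linear(2 * n_kp, hidden), nn.ReLU(),   # identical for all poses
            nn.Linear(hidden, hidden), nn.ReLU())
        self.hidden = hidden

    def forward(self, s_star, s):                     # (B, 2n) keypoint vectors
        theta = self.hyper(s_star)                    # Eq. (2)
        W = theta[:, : self.hidden * 6].view(-1, 6, self.hidden)
        b = theta[:, self.hidden * 6:]
        h = self.nc_shared(s_star - s)                # keypoint-error input
        return torch.bmm(W, h.unsqueeze(-1)).squeeze(-1) + b   # Eq. (3), 6-DOF twist
```

Given batches of desired and current keypoints, the forward pass yields the 6-DOF velocity command that the loss of Eq. (6) regresses onto the PBVS label.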
The upper HPN could be a large and powerful network that has strong encoding ability. The lower NC is a lightweight fully connected network without any complicated structure. For a given desired pose, the parameter inference of NC is done before visual servoing, so the control command inference by NC during VS is efficient. Therefore, HPN-NC takes both the strong modulation for the variation of desired poses and the efficient control inference into account.

Fig. 2: The left part gives the pipeline of training HPN-NC in simulation. HPN-NC is supervised by PBVS to ensure a satisfactory servo performance. DAgger helps the model learn more quickly. The right part shows the detail of how HPN generates a NC for a given desired pose. To switch between different desired poses, HPN infers the weights and biases of the low-level NC's last layer with the 2D keypoints obtained at the desired pose.

### _Real world VS with HPN-NC_

We trained a neural observer (NO) to obtain the 2D keypoints of ordinary objects in the real world (see Fig. 4), so that it can be used as the front end of VS controllers. In order to servo objects in a large range in Cartesian space, NO needs to ensure the consistency between the 2D keypoints extracted at different poses and the ground truth keypoints projected by the pre-defined 3D model onto the camera plane, and also be able to adapt to various backgrounds. We train NO with manually annotated 2D keypoints. To improve its robustness, we expand the dataset with techniques such as translation, rotation, scaling, background replacement and homography matrix stretching. But some shortcomings remain: due to the limited manual annotations, it is impossible to cover all the viewing angles in the workspace and NO may fail at certain camera poses, affecting the performance of VS; servoing the target object in a new scenario may cause the performance degradation of NO; and manually annotating 2D keypoints is costly. These shortcomings are fatal for traditional controllers, but not for neural controllers. Being fully differentiable, the integrated model (IM, shown in Fig. 3) consisting of NO and HPN-NC can be fine-tuned in an end-to-end manner in unseen scenarios. As the supervision is calculated automatically from camera poses, the training process can be self-supervised, which leads to a lower data acquisition cost than manual labeling. Therefore, we can utilize a large amount of end-to-end data to improve IM, and use the manual annotations only as a regularizer. This greatly reduces the amount of manual annotations required for training.

**Stage 1 Training of Controller:** The training procedure of HPN-NC is described in Section III-A.

**Stage 2 Training of Observer:** The input of the neural observer is an RGB image \(I\), and the output is the pixel coordinates of \(n\) 2D keypoints. The backbone of NO can be SpatialConfiguration-Net (SCN)[27] or a pre-trained ResNet[28]. For the training dataset \(D_{NO}\), we collect RGB images from various perspectives, distances and illuminations, and manually annotate the ground truth 2D keypoint coordinates \(\mathbf{s}_{i}^{MA}\) of the pre-defined 3D model. We have \[\begin{split} q_{NO}\triangleq(I,\mathbf{s}_{i}^{MA})\;\;for\;i=1,2,...,n\\ D_{NO}=\{q_{NO}\}\end{split} \tag{7}\] We use the data augmentation techniques described above to improve the generalization of NO. NO generates a heatmap \(h_{i}(x)\) for each keypoint \(i\): \[h_{i}(x)=f_{\mathbf{\theta}_{NO}}^{NO}(x)\;\;for\;i=1,2,...,n \tag{8}\] where \(x\) is the pixel coordinate in \(I\).
We try to minimize the difference between the predicted heatmap and the ground truth heatmap \(g_{i}(x)\) peaking at \(\mathbf{s}^{MA}\). At the same time, in order to improve the accuracy of the predicted keypoints, we minimize the \(\mathbf{L_{2}}\) norm between the keypoint coordinates calculated by the spatial-softmax operation and the ground truth coordinates \(\mathbf{s}_{i}^{MA}\). Therefore, the total loss \(\mathcal{L}_{NO}\) to train the observer is: \[\begin{split}\mathcal{L}_{NO}=\sum_{i=1}^{n}(\gamma_{h}\sum_{x\in I}\left\|g_{i}(x)-h_{i}(x)\right\|_{2}^{2}\\ +\gamma_{k}\left\|\mathbf{s}_{i}^{MA}-\sum_{x\in I}xh_{i}(x)\right\|_{2}^{2})\end{split} \tag{9}\] where \(\gamma_{h}=10\) and \(\gamma_{k}=0.00001\) are scale factors that facilitate the convergence of the learning process.

**Stage 3 End-to-end Training of Integrated Model:** We fine-tune IM with a large amount of the robot's self-supervision data in an end-to-end manner. As shown in Fig. 3, a random desired pose under the robot's base frame \({}^{b}\mathbf{T}_{c^{*}}\) is sampled, and the robot first moves the camera to this pose. An image \(I^{*}\) observed at the desired pose \({}^{b}\mathbf{T}_{c^{*}}\) is sent to NO to obtain the desired 2D keypoints \(\mathbf{s}^{*}\). Afterwards the robot moves the camera to a randomly sampled initial pose \({}^{b}\mathbf{T}_{c}\) to get the initial 2D keypoints \(\mathbf{s}\), \[\mathbf{s}^{*}=f_{\mathbf{\theta}_{NO}}^{NO}\left(I^{*}\right),\mathbf{s}=f_{\mathbf{\theta}_{NO}}^{NO}(I) \tag{10}\] HPN infers a unique NC for \({}^{b}\mathbf{T}_{c^{*}}\) according to \(\mathbf{s}^{*}\), \[\mathbf{\theta}_{NC}=f_{\mathbf{\theta}_{HPN}}^{HPN}\left(\mathbf{s}^{*}\right) \tag{11}\] NC outputs the velocity of the camera according to the 2D keypoint error \(\mathbf{e}=\mathbf{s}^{*}-\mathbf{s}\), \[\bar{{}^{c}}\mathbf{V}_{c}=f_{\mathbf{\theta}_{HPN},\mathbf{s}^{*}}^{NC}\left(\mathbf{s}^{*}-\mathbf{s}\right) \tag{12}\] We use DAgger to deal with the out-of-distribution problem. When the robot is wandering in the workspace following \(\bar{{}^{c}}\mathbf{V}_{c}\), it automatically collects the end-to-end (E2E) training dataset \(D_{E2E}\). \(D_{E2E}\) consists of image and control tuples \(q_{E2E}\) at different poses: \[\begin{split} q_{E2E}\triangleq(I,I^{*},\bar{{}^{c}}\mathbf{V}_{c}^{PBVS})\\ D_{E2E}=\{q_{E2E}\}\end{split} \tag{13}\] where \(\bar{{}^{c}}\mathbf{V}_{c}^{PBVS}\) is calculated by the PBVS controller according to the desired pose \({}^{b}\mathbf{T}_{c^{*}}\) and the current pose \({}^{b}\mathbf{T}_{c}\): \[\bar{{}^{c}}\mathbf{V}_{c}^{PBVS}=-\lambda\left[\begin{array}{c}{}^{c^{*}}\mathbf{R}_{c}^{T}\cdot{}^{c^{*}}\mathbf{t}_{c}\\ \theta u\end{array}\right] \tag{14}\]

Fig. 3: The training procedure for real world VS. Image and control tuples are automatically collected to optimize the integrated model consisting of a neural observer and a neural controller. Through end-to-end training, the performance of the integrated model can be improved. Note that the manual annotations and the simulation data are respectively used as regularizers for the observer and controller, so the amount of manual annotations can be significantly reduced.
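The self-supervised label of Eq. (14) only needs the two camera poses, which the robot knows from its kinematics and the calibrated extrinsics. A minimal sketch of this twist computation (function and variable names are ours):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pbvs_supervision(T_b_cstar, T_b_c, lam=0.4):
    """PBVS twist label of Eq. (14) from the desired and current camera
    poses in the robot base frame (4x4 homogeneous transforms)."""
    T_cstar_c = np.linalg.inv(T_b_cstar) @ T_b_c     # current pose in desired frame
    R, t = T_cstar_c[:3, :3], T_cstar_c[:3, 3]       # ^{c*}R_c and ^{c*}t_c
    theta_u = Rotation.from_matrix(R).as_rotvec()    # axis-angle theta*u
    return np.concatenate([-lam * (R.T @ t), -lam * theta_u])
```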
We use the MSE loss \(\mathcal{L}_{E2E}\) for training: \[\mathcal{L}_{E2E}=\left\|\bar{{}^{c}}\mathbf{V}_{c}^{PBVS}-f_{\mathbf{\theta}_{HPN},I^{*}}^{NC}\left(f_{\mathbf{\theta}_{NO}}^{NO}\left(I^{*}\right)-f_{\mathbf{\theta}_{NO}}^{NO}(I)\right)\right\|_{2}^{2} \tag{15}\] However, there are actually infinitely many IMs that satisfy the constraints from \(D_{E2E}\), which results in a drift in the outputs of the observer NO and the controller HPN-NC. To prevent drift, we want NO and HPN-NC to satisfy the constraints from \(D_{NO}\) and \(D_{NC}\). Thus, we use \(D_{NO}\) and \(D_{NC}\) to co-train NO and HPN-NC with \(D_{E2E}\), so the data in \(D_{NO}\) and \(D_{NC}\) regularize NO and HPN-NC. Since the data in \(D_{NO}\) only acts as the regularizer, only a small number of manual annotations is needed. The total loss function of Stage 3 is: \[\mathcal{L}=\mathcal{L}_{NC}+\mathcal{L}_{NO}+\mathcal{L}_{E2E} \tag{16}\] Note that when a calibrated camera extrinsic matrix is given, the robotic arm can move the camera to a specified pose automatically, and the end-to-end supervision does not require any manual labeling, so the entire learning process can be fully automated.

## IV System Implementation

### _Simulation Settings_

An environment including a virtual camera and the target object's 3D model is built in Pybullet. In each data collection or model evaluation episode, a random desired camera pose \({}^{o}\mathbf{T}_{c^{*}}\) is sampled in the space 15 cm above the target object with 0 to 5 cm disturbance in \(XYZ\) translation, and a random initial camera pose \({}^{o}\mathbf{T}_{c}\) is sampled in the space 30 cm above the target object with 0 to 10 cm disturbance in \(XYZ\) translation. Both \({}^{o}\mathbf{T}_{c}\) and \({}^{o}\mathbf{T}_{c^{*}}\) ensure that all keypoints stay within the camera's field of view (FoV). The maximum initial pose offset between the initial and desired pose is \(\Delta\mathbf{r}_{0}=({}^{c^{*}}\mathbf{t}_{c},\theta\mathbf{u})\): \({}^{c^{*}}\mathbf{t}_{c}=(15\,\mathrm{cm},15\,\mathrm{cm},30\,\mathrm{cm})\), \(\theta\mathbf{u}=(53.1^{\circ},53.1^{\circ},180^{\circ})\). An episode is considered to be finished successfully if the total error between the current and the desired keypoints is lower than a specified threshold \(\delta_{f}\): \[\sum_{k=1}^{n}|u_{k}-u_{k}^{*}|+|v_{k}-v_{k}^{*}|\leq\delta_{f} \tag{17}\] Before the keypoint error fully converges, there are several situations that trigger the early termination of the episode:

* Every episode has a maximum of \(\delta_{s}\) steps, with each step taking \(0.1s\). An episode will finish if the keypoint error has not reached the threshold \(\delta_{f}\) within \(\delta_{s}\) steps.
* The camera moves out of the workspace.
* Any 2D keypoint is out of the camera's FoV.
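The success test of Eq. (17) is a summed L1 pixel error over all \(n\) keypoints; a minimal sketch (array shapes are our assumption):

```python
import numpy as np

def servo_success(s, s_star, delta_f):
    """Eq. (17): the episode succeeds when the summed per-keypoint L1
    pixel error drops below delta_f; s and s_star have shape (n, 2)."""
    return np.abs(s - s_star).sum() <= delta_f
```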
We use four criteria to analyse the servo performance: servo success rate (**SR**), servo efficiency (timesteps, **TS**), final rotation error (**RE**) and final translation error (**TE**). We calculate the transformation between the final camera pose and the desired pose \({}^{c^{*}}\mathbf{T}_{c}\).

**Rotation Error (RE):** The relative rotation \({}^{c^{*}}\mathbf{R}_{c}\) is converted into an axis-angle representation \({}^{c^{*}}\theta_{c}\,{}^{c^{*}}\mathbf{u}_{c}\). The rotation accuracy of the camera is considered satisfactory if the deflection angle between the final pose and the desired pose is less than the threshold \(\delta_{r}\).

**Translation Error (TE):** The translation accuracy of the camera is considered satisfactory if the displacement between the final position and the desired position is less than \(\delta_{t}\).

If RE and TE are less than the corresponding thresholds, an episode is also regarded as a successful trial. For the threshold values, please refer to the supplementary material.

### _Real World Settings_

The real world experiment is carried out on a UR5 robot. The settings of the real world experiment are the same as those of the simulation except for some thresholds. For the specific parameter values of the real world experiments, please refer to the supplementary material.

### _Target Objects and 3D Models_

Compared with some pioneering works[14, 9] that need accurate object meshes for rendering and training, we only need to define \(n\) 3D feature points on the object and roughly measure their coordinates under the object's frame as the 3D model. Fig. 4 shows several objects we used as target objects. For the specific coordinates of their 3D models, please refer to the supplementary material.

## V Experimental Results

In this section, we carried out a series of experiments to evaluate our method. Simulation experiments are performed on a computer with 16 Intel(R) Core(TM) i9-9900K 3.60GHz cores and one NVIDIA GeForce RTX 2080 SUPER. Real world experiments are performed on a computer with 12 Intel(R) Core(TM) i7-8700 3.20GHz cores and one NVIDIA GeForce GTX 1060. The goals of the experiments are:

* to validate that the performance of the proposed HPN-NC is better than the traditional IBVS controller and other neural controllers in multiple desired poses VS tasks.
* to validate that the integrated model (IM) is able to servo real world objects with no obvious features in unseen scenes.
* to validate that IM can further improve the servo performance in unseen scenes, promote the keypoint extraction ability and reduce the manual annotation cost with self-supervised end-to-end training.

Fig. 4: Target objects: the Apriltag is used for the evaluation of HPN-NC in simulation. The charging port and the toy horse are real world objects without obvious features. Specific values of the 3D models are in the supplementary materials.

### _Introduction of Baseline_

We compare the performance of HPN-NC with IBVS[1] and three neural controllers. These NCs have different structures but are all supervised by the same PBVS teacher. The fully connected neural controller (FCN-NC) is a three-layer fully connected neural network derived from [16], whose input is the 2D keypoint error between the current and desired pose. The DenseNet based neural controller (DenseNet-NC) is the controller used in [14]. DenseNet-NC's main structure is a DenseNet followed by a fully connected layer. It takes the concatenated vector of the current and desired keypoints as input. The auto-encoder based neural controller (AE-NC) draws on the structure of an auto-encoder[29] to encode the information of the desired pose. Its low-level controller is similar to FCN-NC except for an additional input: a latent vector from the auto-encoder. For implementation details, please refer to the supplementary material.

### _Simulation Evaluations_

**Performance Comparison:** All the NCs are trained for 100 epochs. Each epoch first runs 10 thousand data collection steps. Then the NCs are trained for 500 batches with a batch size of 512.
The evaluation is performed on the Apriltag shown in Fig. 4. Tab. I shows the performance of the controllers for 500 trials with different desired poses. Supervised by a perfect PBVS teacher, all of the NCs have a higher SR than IBVS. Among these NCs, HPN-NC has the highest SR, the best servo accuracy (RE and TE) and the shortest inference time. We believe that the advantage of HPN-NC stems from the fact that it uses a complex hyper network to model the modulation mechanism of different VS tasks, but only the lightweight low-level fully connected NC is used during the servo process, which ensures efficient inference.

Fig. 5 shows the 2D, 3D and error trajectories for two visual servoing tasks with different desired and initial poses. All controllers reach the desired pose, but only DenseNet-NC, AE-NC and HPN-NC complete within 200 steps. The terminal error of HPN-NC is smaller than that of DenseNet-NC and AE-NC because of its stronger modulation ability. With such a strong hyper net, HPN-NC's 2D and 3D trajectories are more similar to the ground truth PBVS's. For more cases, please refer to the supplementary material.

**Fine-tuning with Self-supervised End-to-end Training:** No matter how powerful the neural controllers are, they cannot guarantee that all desired poses can be successfully servoed. But since these NCs are completely differentiable, for those desired poses that cannot be servoed, the NCs can be fine-tuned through self-supervised end-to-end training. The fine-tuning process is similar to the training process described in Section III-A, except that the desired pose is fixed. We select 10 desired poses that all of the NCs in Tab. I failed to servo and compare the performance of the fine-tuned NCs. As shown in Fig. 6, we compare the average SR and TS after 1-step and 3-step fine-tuning for the 10 desired poses. Each step of fine-tuning runs 10 thousand data collection steps. Then the model is fine-tuned for 500 batches with a batch size of 512. For evaluation, the fine-tuned models try to servo the selected desired poses from 500 different initial poses. After 1-step fine-tuning, HPN-NC's average servo SR is about 90\(\%\) for the 10 desired poses, which is the highest among all NCs'. After 3-step fine-tuning, HPN-NC, AE-NC and FCN-NC all have high SR, but HPN-NC needs about 200 fewer timesteps, which means it can reach the desired poses faster. In other words, only HPN-NC can achieve high success rate and high efficiency VS with efficient fine-tuning.

### _Real World Evaluations_

**Performance Comparison:** Servoing real world objects in unseen scenes inevitably has to face the recognition error of the observer. We first compare the performance of different controllers given the same neural observer. We train the NO with SpatialConfiguration-Net[27] (SCN) to extract the pre-defined 2D keypoints of the charging port on 1000 annotations. For specific training details, please refer to the supplementary material. The 2D keypoints predicted by NO are used as the input of the controllers.

Fig. 5: 2D, 3D and error trajectories for a VS task. For 2D trajectories, black dots represent the initial keypoints and red dots represent the desired keypoints. For 3D trajectories, purple triangles represent the initial camera poses and red triangles represent the desired camera poses. Error trajectories visualize the TE and RE between the current and the desired camera poses. TE is in meters and RE is represented by the axis angle.

Tab. II shows the results of different integrated models servoing 50 different desired poses.
As discussed in Section III-B, limited annotations and an unseen scene lead to recognition errors in NO. Although affected by recognition error, HPN-NC has the highest SR and smallest TS compared with IBVS and the other NCs. The same degradation happens to PBVS. In Row 7, the SR of PBVS, which uses the relative camera pose estimated by Perspective-n-Point[30] (PnP) with 2D keypoints extracted by NO, drops by 10\(\%\) compared with ground truth PBVS. The relative camera pose of ground truth PBVS is calculated from the robot's tool center point and the calibrated camera extrinsics, so it is not affected by NO's recognition error.

**Performance Improvement with Self-supervised End-to-end Training:** Unfortunately, PnP is not differentiable, so the IM consisting of NO and PBVS cannot be further improved to deal with recognition error. Taking advantage of the fully differentiable nature of HPN-NC, the IM consisting of NO and HPN-NC can be improved by self-supervised end-to-end training with DAgger. IM is fine-tuned for 5 epochs. Each epoch runs 2000 data collection steps to get \(D_{E2E}\). \(D_{NC}\), \(D_{NO}\) and \(D_{E2E}\) are divided into a training set with 80\(\%\) of the data and a validation set with 20\(\%\) of the data. Then, IM is fine-tuned for 2000 epochs with batch sizes of 512, 2 and 1 for \(D_{NC}\), \(D_{NO}\) and \(D_{E2E}\) respectively. Lastly, the model with the smallest end-to-end loss is selected for the next DAgger epoch. As shown in Tab. II, end-to-end training improves IM's SR from 78\(\%\) to 98\(\%\). Servo efficiency (TS) and accuracy (RE and TE) are also improved. As discussed in Stage 3 of Section III-B, during the end-to-end training, the robot is self-supervised and no manual effort is introduced.

**Manual Annotation Reduction with Self-supervised End-to-end Training:** A more realistic problem is that annotations of real world objects are often insufficient due to their high production cost. Less training data introduces larger recognition errors, which degrade VS performance. Another advantage of self-supervised end-to-end training is to reduce the amount of manual annotations needed for VS. By using those annotations only as a regularizer, we replace the expensive manual annotations with cheap self-supervised end-to-end data with control labels. We choose a pre-trained ResNet-18[24] as NO to avoid the failure of training NO with too few annotations. We use 600, 300 and 30 annotations, respectively, to train NO. As shown in Tab. III, the SR of IM with PBVS (PnP) gradually decreases as the amount of manual annotations decreases. Through the end-to-end training described in Section III-B, the SR of the IMs can be promoted to 94\(\%\) (300 annotations) and 92\(\%\) (30 annotations) respectively. From Fig. 7, we can see that the recognition error is reduced after end-to-end training.

## VI Conclusions

In this paper, we show that the hyper-network is an appropriate architecture for servoing multiple desired poses. It outperforms IBVS and other neural controllers in success rate, servo efficiency, network volume, inference time and adaptation efficiency. We evaluate the proposed model in both simulation and real world experiments. For the real world VS task, we propose a three-stage training procedure that can further improve the model's servo performance and reduce the amount of manual annotation. It is fully automatic and achieves a 92\(\%\) success rate with only 30 manual annotations. With the proposed training procedure, VS can be efficiently applied to similar scenarios in the real world.
In the future, we will address the task with more matched correspondences such as eye-to-hand VS with optical flow.
2307.00686
Neural network execution using nicked DNA and microfluidics
DNA has been discussed as a potential medium for data storage. Potentially it could be denser, could consume less energy, and could be more durable than conventional storage media such as hard drives, solid-state storage, and optical media. However, computing on data stored in DNA is a largely unexplored challenge. This paper proposes an integrated circuit (IC) based on microfluidics that can perform complex operations such as artificial neural network (ANN) computation on data stored in DNA. It computes entirely in the molecular domain without converting data to electrical form, making it a form of in-memory computing on DNA. The computation is achieved by topologically modifying DNA strands through the use of enzymes called nickases. A novel scheme is proposed for representing data stochastically through the concentration of the DNA molecules that are nicked at specific sites. The paper provides details of the biochemical design, as well as the design, layout, and operation of the microfluidics device. Benchmarks are reported on the performance of neural network computation.
Arnav Solanki, Zak Griffin, Purab Ranjan Sutradhar, Amlan Ganguly, Marc D. Riedel
2023-07-02T23:42:27Z
http://arxiv.org/abs/2307.00686v1
Neural network execution using nicked DNA and microfluidics Arnav Solanki\({}^{*1}\), Zak Griffin\({}^{*2}\), Purab Ranjan Sutradhar\({}^{2}\), Amlan Ganguly\({}^{2}\), Marc Riedel\({}^{1\dagger}\) ## 1 Abstract DNA has been discussed as a potential medium for data storage. Potentially it could be denser, could consume less energy, and could be more durable than conventional storage media such as hard drives, solid-state storage, and optical media. However, computing on data stored in DNA is a largely unexplored challenge. This paper proposes an integrated circuit (IC) based on microfluidics that can perform complex operations such as artificial neural network (ANN) computation on data stored in DNA. It computes entirely in the molecular domain without converting data to electrical form, making it a form of _in-memory_ computing on DNA. The computation is achieved by topologically modifying DNA strands through the use of enzymes called nickases. A novel scheme is proposed for representing data stochastically through the concentration of the DNA molecules that are nicked at specific sites. The paper provides details of the biochemical design, as well as the design, layout, and operation of the microfluidics device. Benchmarks are reported on the performance of neural network computation. ## 2 Introduction This paper presents a novel method for implementing mathematical operations in general, and artificial neural networks (ANNs) in particular, with molecular reactions on DNA in a microfluidic device. In what follows, we discuss the impetus to store data and perform computation with DNA. Then we outline the microfluidic technology that we will use for these tasks. ### Background The fields of _molecular computing_ and _molecular storage_ are based on the quixotic idea of creating molecular systems that perform computation or store data directly in molecular form. Everything consists of molecules, of course, so the terms generally mean computing and storage in _aqueous_ environments, based on chemical or biochemical mechanisms. This is in contrast to conventional computers, in which computing is effected _electrically_ and data is either stored _electrically_, in terms of voltage, in solid-state storage devices; or _magnetically_, in hard drives; or _optically_ on CDs and DVDs. Given the maturity of these conventional forms of computing and storage, why consider chemical or biochemical means? The motivation comes from distinct angles: 1. Molecules are very, very **small**, even compared to the remarkable densities in our modern electronic systems. For instance, DNA has the potential to store approximately 1,000 times more data per unit volume compared to solid-state drives. Small size also means that molecular computing can be _localized_, so it can be performed in confined spaces, such as inside cells or on tiny sensors. 2. In principle, molecular computing could offer unprecedented **parallelism**, with billions of operations occurring simultaneously. 3. In principle, molecular computing could consume much less **energy** than our silicon systems, which always need a bulky battery or wired power source. 4. The use of naturally occurring molecules with enzymes results in a more **sustainable** computer design without the use of toxic and unethically sourced raw materials. 5. Finally, molecular computing could be deployed **in situ** in our bodies or our environment. 
Here the goal is to perform sensing, computing, and actuating at a molecular level, with no interfacing at all with external electronics. The inherent biocompatibility of molecular computing components offers the possibility of seamless integration into biological systems. #### DNA Storage The leading contender for a molecular storage medium is DNA. Ever since Watson and Crick first described the molecular structure of DNA, its information-bearing potential has been apparent to computer scientists. With each nucleotide in the sequence drawn from the four-valued alphabet of \(\{A,T,C,G\}\), a molecule of DNA with \(n\) nucleotides can encode \(4^{n}\) distinct sequences, i.e., \(2n\) bits of data. Indeed, this information storage underpins life as we know it: all the instructions on how to build and operate a life form are stored in its DNA, honed over eons of evolutionary time. In a highly influential Science paper in 2012, the renowned Harvard genomicist George Church made the case that we will eventually turn to DNA for information storage, based on the ultimate physical limits of materials [1]. He delineated the theoretical storage **capacity** of DNA: 200 petabytes per gram; the read-write **speed**: less than 100 microseconds per bit; and, most importantly, the **energy**: as little as \(10^{-19}\) joules per bit, which is orders of magnitude below the femtojoules/bit (\(10^{-15}\) J/bit) barrier touted for other emerging technologies. Moreover, DNA is stable for decades, perhaps even millennia, as DNA extracted from the carcasses of woolly mammoths can attest. In principle, DNA could outperform all other types of media that have been studied or proposed. Of course, no one has yet built a DNA storage system that comes close to beating existing media (magnetic, optical, or solid-state storage). The practical challenges are formidable. Fortunately, DNA technology is not exotic. Spurred by the biotech and pharma industries, the technology for both sequencing (_reading_) and synthesizing (_writing_) DNA has followed a Moore's law-like trajectory for the past 20 years. Sequencing 3 billion nucleotides in a human genome can be done for less than $1,000. Synthesizing a megabyte of DNA data can be done in less than a day. Inspired no doubt by Church's first-principles thinking, but also motivated by the trajectory of sequencing and synthesis technology, there has been a groundswell of interest in DNA storage. The leading approach is the synthesis of DNA based on phosphoramidite chemistry [2]. However, many other creative ideas and novel technologies, ranging from nanopores [3] to DNA origami [4], are being deployed. #### DNA Computing Beginning with the seminal work of Adleman a quarter-century ago [5], DNA computing has promised the benefits of massive parallelism in operations. Operations are typically performed on the _concentration_ of DNA strands in solution. For instance, with DNA strand displacement cascades, single strands displace parts of double strands, releasing single strands that can then participate in further operations [6, 7, 8]. The inputs and outputs are the concentration values of specific strands. It is fair to say that in the three decades since Adleman first proposed the idea, the practical impact of research on this topic has been modest. A practical DNA storage system, particularly one that is inherently programmable, changes this. Such storage opens up the possibility of "in-memory" computing, that is computing directly on the data stored in DNA [9, 10, 11]. 
One performs such computation not on data stored in the sequence of nucleotides, but rather by making topological modifications to the strands: breaks in the phosphodiester backbone of DNA that we call "nicks" and gaps in the backbone that we call "toeholds." The nicking can be performed enzymatically with a system such as CRISPR/Cas9 [12, 13]. Note that the data that we operate on with this form of DNA computing is encoded in a different dimension than the data encoded in the sequence data of the DNA. The **underlying data** - perhaps terabytes' worth of it - is stored as the sequence of \(A\)'s, \(C\)'s, and \(G\)'s in synthesized strands. Superimposed on this, we store **metadata** via topological modifications. This is illustrated in Fig. 1. This metadata is rewritable. Accordingly, it fits the paradigm of "in-memory" computing [14]. The computation is of SIMD form.1 SIMD provides a means to transform stored data, perhaps large amounts of it, with a single parallel instruction. Footnote 1: SIMD is a computer engineering acronym for Single Instruction, Multiple Data [15], a form of computation in which multiple processing elements perform the same operation on multiple data points simultaneously. It contrasts with the more general class of parallel computation called MIMD (Multiple Instructions, Multiple Data). Much of the modern progress in electronic computing power has come by scaling up SIMD computation with platforms such as graphical processing units (GPUs). ### Stochastic Logic The form of molecular computing that we present in this paper is predicated on a novel encoding of data. A link is made between the representation of random variables with a paradigm called _stochastic logic_ on the one hand, and the representation of variables in molecular systems as the concentration of molecular species, on the other. Stochastic logic is an active topic of research in digital design, with applications to emerging technologies [16, 17, 18]. Computation is performed with familiar digital constructs, such as AND, OR, and NOT gates. Figure 1: Data is stored in multiple dimensions. The sequence of nucleotides stores data in the form of the \(A\)'s, \(C\)'s, \(T\)'s, and \(G\)'s, with 2 bits per letter. Superimposed on this, we store data via topological modifications to the DNA, in the form of nicks and exposed toeholds. This data is **rewritable**, with techniques developed for DNA computation. However, instead of having specific Boolean values of 0 and 1, the inputs are random bitstreams. A number \(x\) (\(0\leq x\leq 1\)) corresponds to a sequence of random bits. Each bit has _probability_ \(x\) of being one and probability \(1-x\) of being zero, as illustrated in Figure 2. Computation is recast in terms of the probabilities observed in these streams. Research in stochastic logic has demonstrated that many mathematical functions of interest can be computed with simple circuits built with logic gates [17, 19]. Consider basic logic gates. Given a stochastic input \(x\), a NOT gate implements the function \[\text{NOT}(x)=1-x. \tag{1}\] This means that while an individual input of 1 results in an output of 0 for the NOT gate (and vice versa), statistically, for a random bitstream that encodes the stochastic value \(x\), the NOT gate output is a new bitstream that encodes \(1-x\). The output of an AND gate is 1 only if all the inputs are simultaneously 1. The probability of the output being 1 is thus the probability of all the inputs being 1. 
Therefore, an AND gate implements the stochastic function: \[\text{AND}(x,y)=xy, \tag{2}\] that is to say, multiplication. Probabilities, of course, are values between 0 and 1, inclusive. If we express them as rational numbers, given some positive integer \(n\) as the denominator, we have fractions \[x=\frac{a}{n},\,y=\frac{b}{n}\] where \(0\leq a\leq n\) and \(0\leq b\leq n\). So an AND gate computes _a fraction of a fraction._ We can implement other logic functions. The output of an OR gate is 0 only if all the inputs are 0. Therefore, an OR gate implements the stochastic function: \[\text{OR}(x,y)=1-(1-x)(1-y)=x+y-xy. \tag{3}\] The output of an XOR gate is 1 only if the two inputs \(x,y\) are different. Therefore, an XOR gate implements the stochastic function: \[\text{XOR}(x,y)=(1-x)y+x(1-y)=x+y-2xy. \tag{4}\] The NAND, NOR, and XNOR gates can be derived by composing the AND, OR, and XOR gates each with a NOT gate, respectively. Please refer to Table 1 for a full list of the algebraic expressions of these gates. It is well known that any Boolean function can be expressed in terms of AND and NOT operations (or entirely in terms of NAND operations). Accordingly, any function can be expressed as a nested sequence of multiplications and \(1-x\) type operations. Fig 2: Stochastic representation: A random bitstream. A value \(x\in[0,1]\), in this case \(3/8\), is represented as a bitstream. The probability that a randomly sampled bit in the stream is one is \(x=3/8\); the probability that it is zero is \(1-x=5/8\). There is a large body of literature on the topic of stochastic logic. We point to some of our prior work in this field. In [20] we proved that any multivariate polynomial function with its domain and codomain in the unit interval \([0,1]\) can be implemented using stochastic logic. In [17], we provide an efficient and general synthesis procedure for stochastic logic, the first in the field. In [21], we provided a method for transforming probability values with digital logic. Finally, in [22, 23] we demonstrated how stochastic computation can be performed deterministically. ### DNA Strand Displacement DNA is generally present in double-stranded form (**dsDNA**), in a double helix, with A's pairing with T's, and C's with G's. Without qualification, when we refer to "DNA" we mean double-stranded. However, for the operation we describe here, DNA in single-stranded form (**ssDNA**) plays a role. The molecular operation that we exploit in our system is called DNA strand displacement [24, 6]. It has been widely studied and deployed. Indeed, prior work has shown that such a system can emulate _any_ abstract set of chemical reactions. The reader is referred to Soloveichik et al. and Zhang et al. for further details [25, 7]. Here we illustrate a simple, generic example. In Section 5, we discuss how to map our models to such DNA strand-displacement systems. We begin by defining a few basic concepts. DNA strands are linear sequences of four different nucleotides \(\{A,T,C,G\}\). A nucleotide can bind to another following _Watson-Crick_ base-pairing: A binds to T, C binds to G. A pair of single DNA strands will bind to each other, a process called _hybridization_, if their sequences are complementary according to the base-pairing rule, that is to say, wherever there is an \(A\) in one, there is a \(T\) in the other, and vice versa; and whenever there is a \(C\) in one, there is a \(G\) in the other and vice-versa. 
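Before moving on to the mechanics of strand displacement, it is worth noting that the gate identities above are easy to check numerically. The following Python sketch (our illustration, not code from the paper) samples random bitstreams and verifies the algebra in Table 1:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000        # bits per stream; longer streams shrink the sampling error
x, y = 0.375, 0.6    # values in [0, 1], encoded as bit probabilities

# A value v is represented by a stream whose bits are 1 with probability v.
X = rng.random(N) < x
Y = rng.random(N) < y

print("NOT:", (~X).mean(),    "expected", 1 - x)
print("AND:", (X & Y).mean(), "expected", x * y)
print("OR :", (X | Y).mean(), "expected", x + y - x * y)
print("XOR:", (X ^ Y).mean(), "expected", x + y - 2 * x * y)
```

Each empirical mean converges to the corresponding expression in Table 1 as \(N\) grows.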
The binding strength depends on the length of the complementary regions. Longer regions will bind strongly, shorter ones weakly. Reaction rates match binding strength: hybridization completes quickly if the complementary regions are long and slowly if they are short. If the complementary regions are very short, hybridization might not occur at all. (We acknowledge that, in this brief discussion, we are omitting many relevant details such as temperature, concentration, and the distribution of nucleotide types, i.e., the fraction of paired bases that are A-T versus C-G. All of these parameters must be accounted for in realistic simulation runs.) Figure 3 illustrates strand displacement with a set of reversible reactions. The entire reaction occurs as reactant molecules \(A\) and \(B\) form products \(E\) and \(F\), with each intermediate stage operating on molecules \(C\) and \(D\). In the figure, \(A\) and \(F\) are single strands of DNA, while \(B\), \(C\), \(D\), and \(E\) are double-stranded complexes. \begin{table} \begin{tabular}{c|c|c} gate & inputs & function \\ \hline NOT & \(x\) & \(1-x\) \\ \hline AND & \(x,y\) & \(xy\) \\ \hline OR & \(x,y\) & \(x+y-xy\) \\ \hline NAND & \(x,y\) & \(1-xy\) \\ \hline NOR & \(x,y\) & \(1-x-y+xy\) \\ \hline XOR & \(x,y\) & \(x+y-2xy\) \\ \hline XNOR & \(x,y\) & \(1-x-y+2xy\) \\ \end{tabular} \end{table} Table 1: Stochastic Function Implemented by Basic Logic Gates Each single-strand DNA molecule is divided, conceptually, into subsequences that we call **domains**, denoted as 1, 2, and 3 in the figure. The complementary sequences for these domains are \(1^{*},2^{*}\) and \(3^{*}\). (We will use this notation for complementarity throughout.) All distinct domains are assumed to be _orthogonal_ to each other, meaning that these domains do not hybridize. **Toeholds** are a specific kind of domain in a double-stranded DNA complex where a single strand is exposed. For instance, the molecule \(B\) contains a toehold domain at \(1^{*}\) in Figure 3. Toeholds are usually 6 to 10 nucleotides long, while the lengths of regular domains are typically 20 nucleotides. The exposed strand of a toehold domain can bind to the complementary domain from a longer ssDNA, and thus toeholds can trigger the binding and displacement of DNA strands. The small length of the toehold makes this hybridization reversible. In the first reaction in Figure 3, the open toehold \(1^{*}\) in molecule \(B\) binds with domain 1 from strand \(A\). This forms the molecule \(C\), where the duplicate 2 domain section from molecule \(A\) forms an overhanging flap. This reaction shows how a toehold triggers the binding of DNA strands. In molecule \(C\), the overhanging flap can stick onto the complementary domain \(2^{*}\), thus displacing the previously bound strand. This type of branch migration is shown in the second reaction, where the displacement of one flap to the other forms the molecule \(D\). This reaction is reversible, and the molecules \(C\) and \(D\) exist in a dynamic equilibrium. The process of branch migration of the flap is essentially a random walk: at any time when part of the strand from molecule \(A\) hybridizes with strand \(B\), more of \(A\) might bind and displace a part of \(F\), or more of \(F\) might bind and displace a part of \(A\). Therefore, this reaction is reversible. 
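Since branch migration is described as a random walk, a toy simulation makes this concrete. The sketch below (ours, an idealized gambler's-ruin model of the migrating junction, not the paper's) estimates how often a walk that starts one step in completes the displacement rather than unwinding:

```python
import numpy as np

rng = np.random.default_rng(1)

def branch_migration(L=20, trials=20_000):
    """Model the migrating branch point as an unbiased random walk on
    positions 0..L: absorbing at 0 (the flap peels back off) and at L
    (displacement completes). The walk starts at position 1."""
    completed = 0
    for _ in range(trials):
        pos = 1
        while 0 < pos < L:
            pos += 1 if rng.random() < 0.5 else -1
        completed += (pos == L)
    return completed / trials

# Gambler's-ruin theory predicts a completion probability of 1/L.
print(branch_migration(L=20))   # ~0.05
```

The walk has no preferred direction, which is why \(C\) and \(D\) sit in dynamic equilibrium.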
The third reaction is the exact opposite of reaction 1 - the new flap in molecule \(D\) can peel off from the complex and thus create the single-strand molecule \(F\) and leave a new double-stranded complex \(E\). Molecule \(E\) is similar to molecule \(B\), but the toehold has migrated from \(1^{*}\) to \(3^{*}\). The reaction rate of this reaction depends on the length of the toehold \(3^{*}\). If we reduce the length of the toehold, the rate of reaction 3 becomes so small that the reaction can be treated as a forward-only reaction. This bias in the direction of the reaction means that we can model the entire set of reactions as a single DNA strand displacement event, where reactants \(A\) and \(B\) react to produce \(E\) and \(F\). Note that the strand \(F\) can now participate in further toehold-mediated reactions, allowing these DNA strand displacement systems to be cascaded. ### Chemical Model Recent research has shown how data can be encoded via _nicks_ on DNA using gene-editing enzymes like CRISPR-Cas9 and PfAgo [26]. _Probabilistic switching_ of concentration values has been demonstrated by the DNA computing community [27]. In previous work, we demonstrated how a concept from computer engineering called _stochastic logic_ can be adapted to DNA computing [28]. In this paper, we bring these disparate threads together: we demonstrate how to perform stochastic computation on _fractionally-encoded_ data stored on nicked DNA. The conventional approach to storing data in DNA is to use a single species of strand to represent a value. It is either encoded as a binary value, where the presence of the specific strand represents a 1 and its absence a 0 [29]; or as a non-integer value, encoded according to its concentration, called a _direct representation_ [30]. In recent research, we have shown how a _fractional representation_ can be used [31, 11, 28]. The idea is to use the concentrations of two species of strand, \(X_{0}\) and \(X_{1}\), to represent a value \(x\) with \[x=\frac{X_{1}}{X_{0}+X_{1}}\] where \(x\in[0,1]\). This encoding is related to the concept of _stochastic logic_ in which computation is performed on randomized bit streams, with values represented by the fraction of 1's versus 0's in the stream [32], [33], [17]. In this work, we store values according to nicking sites on double DNA strands. For a given site, we will have some strands nicked there, but others not. Let the overall concentration of the double strand equal \(C_{0}\), and the concentration of strands nicked at the site equal \(C_{1}\). The ratio of the concentration of strands nicked versus the overall concentration is \[x=\frac{C_{1}}{C_{0}}\] So this ratio is the relative concentration of the nicked strand at this site. We use it to represent a variable \(x\in[0,1]\). Setting this ratio can be achieved by two possible methods. One is that we nick a site using a gene-editing guide that is not fully complementary to the nicking site. The degree of complementarity would control the rate of nicking and so set the relative concentration of strands that are nicked. A simpler method is to split the initial solution containing the strand into two samples; nick all the strands in one sample; and then mix the two samples with the desired ratio \(x\). ### Microfluidics and Lab-on-Chip Microfluidics is a rapidly developing discipline where small volumes of fluids are manipulated and transferred over channels whose dimensions range from one to hundreds of microns [34]. 
Typically, such channels leverage principles of fluid dynamics, enabling the modeling and design of systems where small volumes of fluids are moved to achieve a variety of purposes such as information and energy transfer. Due to their small form factors and need for very small amounts of fluids, this discipline is finding application in a variety of domains such as cell sorting, DNA analysis, chemical synthesis and medical applications. Fig 3: A set of DNA strand displacement reactions. Each DNA single strand is drawn as a continuous arrow, consisting of different colored domains numbered 1 through 3. DNA domains that are complementary to each other due to A–T, C–G binding are paired as 1 and \(1^{*}\). The first reaction shows reactants A and B hybridizing together via the toehold at domain \(1^{*}\) on molecule \(B\). The second reaction depicts branch migration of the overhanging flap of DNA in molecule \(C\), thereby resulting in the nick migrating from after domain 1 to 2. The third reaction shows how an overhanging strand of DNA can be peeled off of molecule \(D\), thereby exposing a toehold at domain \(3^{*}\) on molecule \(E\) and releasing a freely floating strand \(F\). All reactions are reversible. The only domains that are toeholds are \(1^{*}\) and \(3^{*}\). Utilizing these advances in microfluidics, a practical device concept was envisioned: the Lab-on-Chip (LoC) [35]. A LoC is a device consisting of a network of microfluidic channels and microcells capable of transferring fluids to perform several functions such as chemical analysis, reactions, and sorting. Typical applications were in the area of medical sciences, where small amounts of samples were needed to perform tests and diagnoses [35]. While the dominant application area of LoCs remains efficient medical diagnosis, with advances in manufacturing capability using Integrated Circuit (IC) fabrication methodologies or 3D printing, their applicability is expanding into sensing and processing more widely. In this paper, we envision an LoC device enabled by microfluidics to perform neural network computations using DNA molecules as the medium. ### Organization The rest of this paper is organized as follows. Section 3 describes how we implement our core operation, namely multiplication. We do so by computing a _fraction_ of a _fraction_ of concentration values. Section 4 presents the architecture of the microfluidic system that we use to implement computation on data stored in DNA. Section 5 discusses the implementation of an artificial neural network (ANN) using our microfluidic neural engine. Section 6 presents simulation results of the ANN computation. Finally, Section 7 presents conclusions and discusses future work. Fig 4: Microcell operation sequence. The microfluidic channels are painted blue, with arrows showing flow direction induced by pressure differentiation. The gray and red boxes respectively represent Quake valves open and closed. ## 3 Multiplication The core component of our design is the multiplication operation, computed as a fraction of a fraction of a concentration value of nicked DNA. ### Encoding Scheme Nicking enzymes such as CRISPR-Cas9 can be used to effectively "nick" dsDNA at a particular site [12, 13]. Since DNA is double-stranded, with strong base pairing between the A's and T's and the C's and G's, the molecule does not fall apart. Indeed, the nicking can be performed at multiple sites, and this process can be conducted independently. 
Suppose a DNA molecule with a particular nicking site labeled \(A\) is in a solution. We separate the solution into two parts with a volume ratio \(a\) to \(1-a\) for some fraction \(a\). Now site \(A\) is nicked on all DNA molecules in the first solution, while the second solution is left untouched. These two solutions are mixed back to obtain a single solution. Some molecules in this solution are nicked, while others are not. The relative concentration of DNA molecules with a nick at site \(A\) is \(a\), while that of the molecules that are not nicked is \(1-a\). Thus, any arbitrary fraction \(a\) can be encoded in a solution of DNA molecules with a nicking site. In our framework, the stochastic value encoded at a particular site in DNA is the relative concentration (between 0 and 1) of DNA molecules with a nick at that site. ### Multiplying two values Consider a DNA molecule with two unique nicking sites, \(A\) and \(B\). First, a stochastic value \(a\) is encoded at site \(A\), as was discussed in Section 3.1. Now the single solution is again split into two parts, of volume ratio \(b\) to \(1-b\). All molecules are nicked at site \(B\) in the first solution, while the second solution is again left untouched. Mixing these two solutions yields a solution containing DNA molecules that are either nicked at site \(B\) or not. Thus, site \(B\) now encodes the stochastic value \(b\). Now both sites \(A\) and \(B\) are being used to independently store stochastic values \(a\) and \(b\). Since either site could be nicked or not nicked, there are 4 different possible molecules, as shown in Fig. 5. Fig 5: Multiplying two values, \(a\) and \(b\), through nicking DNA. We start with a solution containing the DNA molecule shown on the top row. A fraction \(a\) of these molecules are nicked at site \(A\), and a fraction \(b\) of all DNA molecules are nicked at site \(B\). This results in a solution of 4 different possible DNA molecule types (as shown on each row). Assuming independent nicking on both sites, the concentration of each of these molecules is shown on the right. The molecule with nicks on both sites \(A\) and \(B\) has a concentration of \(a\times b\), that is, the product of the two fractions. Most significantly, the molecule containing two nicks, both at site \(A\) and \(B\), has a relative concentration of \(a\times b\). That is the product of the two fractional values - a fraction of a fraction. The concentrations of all other molecules are also listed in Fig. 5. Note that these values only hold if both sites are nicked independently. Thus, our encoding approach allows us not only to store data but also to compute on it. This is ideal for computing a scalar multiplication in a neural network - input data is initialized at site \(A\) in a given solution, and then the scalar weight it is to be multiplied with is stored at site \(B\). In this approach, it is necessary for sites \(A\) and \(B\) to be neighboring each other (i.e., no other nicking sites lie between them) to allow for readout. ### Reading Out Having covered storing two stochastic values in a single solution, we now discuss multiplying these values. Assume a solution storing two stochastic values \(a\) and \(b\), as detailed in Section 3.2. This solution is gently heated to initiate denaturing of DNA. That is, the DNA starts to break apart into two strands. By restricting the temperature, only short regions with low G-C content will fully denature, while longer strands remain bound. 
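Before completing the readout description, the arithmetic of Sections 3.1 and 3.2 can be sanity-checked in a few lines. This sketch (ours; it idealizes nicking as independent and complete in each split) builds a virtual population of molecules and confirms that the doubly-nicked fraction is \(a\times b\), matching Fig. 5:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000           # virtual dsDNA molecules in the solution
a, b = 0.30, 0.70       # values to encode at sites A and B

# Encoding a at site A is equivalent to nicking site A on a random
# fraction a of the molecules (split a : 1-a, nick one part, remix).
nicked_A = rng.random(n) < a
# Site B is nicked independently, encoding b.
nicked_B = rng.random(n) < b

print("nicked at A only :", (nicked_A & ~nicked_B).mean())  # ~ a(1-b)
print("nicked at B only :", (~nicked_A & nicked_B).mean())  # ~ (1-a)b
print("nicked at A and B:", (nicked_A & nicked_B).mean())   # ~ ab = 0.21
```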
For our starting molecule, the short region between the nicking sites \(A\) and \(B\) will fully break apart into a single-stranded region. That is, a toehold will be formed between these two sites [36]. This toehold will only be formed on DNA molecules with nicks on both sites, so only an \(a\times b\) fraction of the molecules will have a toehold. Now a probe strand is supplied that will bind to the newly exposed toehold. This probe strand is used to displace the DNA strand adjacent to the toehold. The amount of single-stranded DNA (ssDNA) that is displaced through this process is again \(a\times b\) times the amount of the starting dsDNA. Thus, the product of two stochastic variables can be read out _in vitro_. This procedure is shown in Fig. 6. In Section 5, we discuss how these single strands can then participate in further strand-displacement operations. Fig 6: Reading out the multiplication results. (a) The DNA solution storing stochastic values \(a\) and \(b\) on sites \(A\) and \(B\) is gently heated. This creates a toehold only on the molecules with nicks on both sites, i.e., the \(a\times b\) molecules. (b) A probe strand (the first reactant) can then bind with the newly exposed toehold and displace ssDNA (the first product). The concentration of this ssDNA stores the product \(a\times b\). It is important to cleanly separate the dsDNA molecules from the ssDNA extracted above. To achieve this, the dsDNA molecules and probe strands can have magnetic beads attached to them. When a magnetic field is applied to the solution, the dsDNA molecules and any excess probe strands can be pulled down from the solution, allowing the displaced ssDNA to be separated. These magnetic beads are shown in Fig. 9. ## 4 DNA-based Neural Engine The ANN computational workload consists primarily of matrix operations and activation functions. Among the matrix operations, matrix-matrix multiplication (GEMM) and matrix-vector multiplication (GEMV) make up almost the entirety of the workload, which can be performed via repeated multiply-and-accumulate (MAC) operations. In the proposed DNA Neural Engine the process of performing a multiplication will take advantage of the stochastic representation of the operands. The input to a single neuron can be stochastically represented by the proportion of DNA strands nicked at a consistent site, compared to the total number of DNA strands in a solution (_i.e.,_ the concentration of specifically nicked DNA strands). In this paper, molecules with 2 nicks as shown in Fig. 7 represent the value 1, while all other molecule types correspond to 0. The relative concentration of doubly-nicked DNA molecules is the stochastic value stored in the solution. The neuron weights, on the other hand, are represented by the concentration of enzymes in a droplet intended to create a second nick on the already-nicked DNA molecules. To perform the stochastic multiplication for each neuron's input-weight pair, the droplet with a concentration of enzymes, representing the weight value, is mixed with the droplet of the nicked DNA strands to create a second nick in the DNA strands. The second nicking site is required to be within around 18 base pairs of the first nick to allow a small fragment between the two nicked sites to be detached upon the introduction of probe strands. Fig 7: Storing data on DNA molecules using nicks. (a) The DNA template molecule consists of domains 0 to 4 (in color), with an additional unnamed domain (black) preceding them and a magnetic bead attached (on the left). 0*-4* denote the complementary top strand sequences for these domains. 
(b) The DNA molecule with a nick at nicking site \(A\) between the black domain and 0*. (c) The DNA molecule with a nick at nicking site \(B\) between the 0* and 1* domains. (d) The DNA molecule with nicks at both nicking sites. Only this DNA molecule with two nicks represents the data value 1; the other three configurations (a)-(c) correspond to 0. The product of the input and weight for this particular neuron is represented by the relative concentration of double-nicked strands compared to the total concentration of DNA strands. It may be noted that, at the beginning of processing, the inputs to the neural engine may also be set by this multiplication process: a solution of un-nicked DNA strands is nicked at a single site by nickase enzymes whose concentrations are set to represent the input values, thereby creating an array of solutions in which the concentration of singly-nicked DNA strands reflects the concentration of the nickase and therefore the value of each input. Next, we describe the DNA-based neural engine hardware proposed in this work, followed by the execution of the basic operations for an ANN. ### Neural Engine Architecture For the implementation of this process, we adopt a lab-on-chip (LoC) architecture. The LoC emulates the electric signals in a digital chip with a set of controlled fluid channels, valves, and similar components. In our implementation, we will be using microfluidics, where components are on the scale of 1-100 \(\mu m\). Our system will operate using droplet-based microfluidics, meaning the fluid that holds data such as DNA or enzymes will move in small packages called droplets. The movement of droplets through the system will be controlled by creating pressure differentials. Fig 8: (a) Microcell operation sequence, and (b) Microcell assembly for Matrix Multiplications. The microfluidic channels are painted blue, with arrows showing flow direction induced by pressure differentiation. The gray and red boxes respectively represent Quake valves open and closed. One critical component for controlling the flow of the microfluidic channels is the Quake valve, which operates by running a pneumatic channel perpendicularly over a microfluidic channel. When the pneumatic channel is pressurized, it expands, closing the flow across the two sides of the microfluidic channel. To contain each stochastically nicked DNA droplet and merge these with weight enzymes, a small droplet storage container, which we will call a microcell, will be used, as seen in Figure 8(a). ### Microcell Function Figure 8(a) shows the sequence used to load and mix the two droplets holding the stochastically nicked DNA and weight enzymes. Throughout the loading, mixing, and release processes, there will be a constant pressure difference between the bottom and the top of the microcells shown in the figure, creating the upward flow into the next microcell. The steps, as demonstrated in Figure 8(a), are described below: 1. The right valve R is closed, and the left valve L is kept open. This has the effect of routing the fluid through the left side of the microcell, leaving the fluid on the right side static. 2. The droplet of stochastically nicked DNA enters the microcell and continues until it is known to be at a predefined, timed distance along the left channel. 3. The left valve is closed, and the right valve is opened, rerouting the fluid to flow along the right channel. 
4. The weight enzyme droplet is inserted into the microcell and continues until it is known to be approximately the same distance along the right channel. It can be observed that the DNA droplet does not move since it is in static fluid. 5. Both valves are opened, pushing both droplets simultaneously. 6. The two droplets exit the microcell together, mixing them as the channels merge. ### Microcell Assembly The microcells will be arranged in a \(k\times k\) formation, each capable of holding and mixing two droplets. These \(k^{2}\) microcells are interconnected with a mesh of microfluidic channels, as shown in Figure 8(b). In this figure, M, S, and P respectively represent the microcells, the merge modules, and the closing reaction pipelines. When delivering the nicked DNA droplets, all right valves are closed, and all left valves are open. The droplets are arranged at fixed distances and so will travel across the microcells until each contains a single droplet. The weight enzyme droplets will similarly be inserted as in steps 3 and 4 of the microcell operation, with the exception that the left and right valve states are swapped this time. All left and right valves are then opened to perform steps 5 and 6 of the microcell operation shown previously in Figure 8(a) and described in Section 4.2. ## 5 Implementation of ANN Operation in the Neural Engine Using the principles of stochastic computing with DNA nicking, we implement the operations involved in an ANN using the above microfluidic neural engine. ### Execution of a Multiplication in a Neuron We demonstrate the execution of a single multiplication within a microcell by mixing two droplets containing our operands. The multiplicand is a concentration of \(t\) DNA strands, nicked at a known site \(A\) with relative concentration \(a\) (as shown in Fig. 7). The multiplier \(b\) is represented by the concentration of nicking enzymes. The nicking enzymes are responsible for weakening the bonds holding the strands together so that, after mixing and reacting, the strands nicked at both sites are our product, \(a\times b\). The multiplier is a droplet of the weight enzyme with a concentration: \[E=b\times t\times(1/k). \tag{5}\] Here, \(k\) represents the number of neurons present in the ANN layer, processed across \(k\) microcells, and the factor \(1/k\) is a consequence of distributing the nicking enzymes over \(k\) microcells. To compensate for this \(1/k\) factor, each of these nicking enzymes will be given enough time to react with \(k\) DNA strands. This new nick will be at a second known site, \(B\), near the first site \(A\), as shown in Fig 7d. This will result in \(a\times t\) of the strands nicked at site \(A\) and \(b\times t\) of the strands nicked at site \(B\). This means that the proportion of strands nicked at both sites will be the product of the two operands. A concentration of _probe strands_ is then introduced to displace the small ssDNA fragment from each of the aforementioned DNA product strands, as shown in Fig. 9. The resulting proportion of free-floating ssDNA fragments with respect to the total DNA (\(t\)) strands represents the product, \(ab\). ### Execution of Dot Product The above method for scalar multiplication can be used to compute the dot product for \(k\) microcells, where each microcell contains the corresponding element of both input and weight vectors. 
Each of these \(k\) microcells will undergo the multiplication as described, with the multiplier, \(b\), being a unique weight enzyme concentration representing the weight value for each input pair. Fig 9: Extracting ssDNA from dsDNA molecules using probe strands. (a) The DNA template molecule with two nicks at sites \(A\) and \(B\). After applying gentle heat, the ssDNA between the two nicks is selectively denatured to create a toehold at domain 0. (b) A probe strand is used to displace the ssDNA spanning domains 1 to 3 from the DNA molecule. The ssDNA is separated from all the other DNA molecules (i.e., the DNA and any excess probe strands) as the other molecules can all be pulled out. The products in each row of the microcell array as shown in Figure 8(b) are then aggregated by mixing the droplets row-wise into one large combined droplet. This large combined droplet contains the sum of the numbers of fragments from each microcell, which represents the dot product. Since the multiplicand in subsequent multiplications must be in the form of nicked DNA strands, this concentration of fragments must be transformed. Each fragment within the large droplet is mapped one-to-one to a nicking enzyme. This nicking enzyme is designed to nick at the primary site along a fresh, un-nicked DNA strand using a method known as strand displacement. The aforementioned method for dot product is implemented in the proposed microcell architecture using the following steps. #### 5.2.1 Droplet Merging The droplet merging module, S, shown in Figure 8(b), adds the individual products of the elements of the two vectors to create the dot product. To compute the dot products as described, the mixed droplets from each microcell must be merged row-wise. Each droplet will exit the microcell, then take an immediate right turn, and remain on this horizontal path until entering the merging module, S. The two-step process is outlined as follows. Please refer to Figure 10. 1. All droplets to be merged are routed rightward with the Y valves kept open (shown in green) and the Z valves closed (shown in red). This ensures a rightward flow and no vertical pressure difference. This is shown in Figure 10(a). 2. Next, the Y valves are closed (red), and the Z valves are opened (green), causing a pressure difference that forces each droplet upward through the merge channels. The construction of the merge channels is such that each droplet reaches the final merge point at the same time. This is shown in Figure 10(b). Once each row of droplets has been mixed, they will go through the three-step closing reaction pipeline to apply the necessary transformations, as discussed below. #### 5.2.2 Reaction Pipeline The Reaction Pipeline module applies an activation function to the previously computed dot products in the DNA Neural Engine. In addition to implementing the activation, it also transforms the nicked fragments into singly-nicked DNA molecules so that the process can be repeated iteratively to implement multiple ANN layers, using the following steps. After merging all the droplets, the fraction of doubly-nicked DNA molecules to all DNA molecules represents the dot product stored in the merged droplet, as shown in Section 3. Fig 10: The merging module, S. By applying gentle heat to this droplet, toeholds are created on DNA molecules with two nicks due to partial denaturing. The ssDNA next to this toehold can be displaced using probe strands as shown in Fig. 9. 
Assuming complete displacement of these ssDNA molecules, the relative concentration (or, to be even more precise, the relative number of molecules) of the ssDNA still represents the same fraction as the double-nicked DNA. Following this, we must apply an activation function on this ssDNA value to incorporate the non-linear computations necessary in neural networks. Our approach utilizes a sharp sigmoid function with a user-defined transition point - i.e., the activation function is a step function with domain and range \([0,1]\), and the transition point can be set in the range \((0,1)\). This is achieved with the DNA seesaw gates presented by Qian and Winfree [37]. This approach utilizes a basic DNA gate motif, which relies on a reversible strand-displacement reaction based on the concept of toehold exchange. The seesawing process allows for the exchange of DNA signals, with a pair of seesawing steps completing a catalytic cycle. The reader is referred to [37] for further details. We use different DNA strands for thresholding and replenishing the output. The threshold molecule binds with the input ssDNA to generate waste (Fig 11a), so the input ssDNA concentration must be larger than the threshold molecule concentration to preserve some residual amount of input ssDNA for the next stage. In the next stage, the gate reaction, the input ssDNA is used to generate output ssDNA (Fig 11b). The replenishment strand (Fig 11c) drives the gate reaction since it frees up more input ssDNA. That is, increasing the replenishment strand concentration maximizes the concentration of the output ssDNA [37]. With these DNA reactions, a gate can be designed that applies a threshold to the input ssDNA value (in detail, the input ssDNA concentration must be greater than the threshold DNA concentration) and then generates an output ssDNA value of 1 due to excess replenishment molecules. This allows us to implement a sigmoid activation function. If desired, the concentration of the replenishment molecules (Fig 11c) can be limited to also apply an upper bound to the output ssDNA concentration. With an activation function applied to the ssDNA concentration, we must now transform this value of DNA molecules to a value of nicking enzymes that can be used to trigger the next level of computation in the network. To achieve this, we will use a DNA strand displacement-based protein switch. First, we will conjugate the nicking enzyme with a DNA tag. This DNA tag will have one strand (called the _major strand_) attached to the protein and containing a toehold, while the other strand (the _minor strand_) will have a magnetic bead attached but will not connect with the protein directly. This is shown in Fig. 11d. The DNA tag sequence will be constructed such that the toehold on the major strand will recruit the displaced DNA strands from the previous step, and the resulting strand-displacement reaction will entirely release the minor strand. The design of the protein-DNA tag allows individual displaced DNA strands to "untag" nicking enzyme molecules. The remaining nicking enzymes (those that did not get to react with the DNA strands) will still be "tagged" with magnetic beads and can be pulled out from the solution through the application of a magnetic field. After the pull-down process, the solution contains only untagged nicking enzymes at a specific concentration (this is discussed in detail below). 
This solution of nicking enzyme can now be used to nick site \(A\) on a new droplet of DNA in the neuron downstream in the network. 1. Gentle heat is applied to the large, merged droplet. This allows denaturing of short DNA molecules and creates toeholds. 2. A droplet containing excess probe strands is mixed in to release the input ssDNA fragments. The input ssDNA is separated from the remaining molecules through the application of a magnetic field. 3. A droplet containing the DNA seesaw gate, the threshold DNA (this amount is controlled by the user-defined sigmoid function), and the replenishment DNA (in excess) molecules is mixed with the ssDNA fragments. This applies a sigmoidal activation function on the ssDNA concentration. 4. The ssDNA strands are now mapped to a specific nicking enzyme concentration. For this, a drop containing an excess of the DNA-tagged nicking enzyme will be mixed with the ssDNA. After completion of the reaction, the drop will be subjected to a magnetic field to pull down the surplus nicking enzyme molecules. The resulting solution will contain the nicking enzyme with a concentration proportional to the particular concentration of the ssDNA strands after the activation function. 5. The droplet containing the nicking enzyme is now mixed with un-nicked DNA strands to prepare the inputs to the next layer of neurons in the ANN. Fig 11: The set of reactions used to apply the activation function on ssDNA and generate an equivalent concentration of nicking enzyme. (a) The threshold reaction: the threshold molecule reacts with the input ssDNA to generate products that do not participate in any further reactions. (b) The gate reaction: the input ssDNA reacts with the seesaw gate molecule to create the output ssDNA and an intermediate molecule. (c) The replenishment reaction: the replenishment strand reacts with the intermediate molecule to release more input ssDNA. This replenishes the concentration of input ssDNA and drives the production of more output ssDNA. (d) The translation reaction: the output ssDNA (domain 3* is not shown for clarity) reacts with the "tagged" nicking enzyme (provided in excess) to produce an "untagged" nicking enzyme. The concentration of untagged nicking enzyme is proportional to the concentration of the output ssDNA. After each stage of the reaction pipeline is completed, the merged droplets from each row must be broken down into a collection of \(k\) smaller droplets to be entered column-wise into the microcell array. This is accomplished using a droplet separator, which functions by applying a pinching pressure at regular intervals to the channels carrying the merged droplets [38]. This results in a series of equally spaced droplets, which can then be placed back into the microcells column-wise. ### Layer-wise Execution of an ANN Using the \(k\times k\) array of microcells and the S and P modules, an entire layer of an ANN with \(k\) neurons can be implemented. In this array, each column implements a single neuron of the layer, and all the columns collectively form a single layer of the ANN. All microcells in the same column contain an equal concentration of nicked double-stranded DNA molecules, \(A_{1}-A_{k}\). The large droplet resulting from the output of each row's activation function is now divided back into \(k\) originally sized droplets, which are then entered back into the microcell array column-wise, to repeat the computations for the next layer of the ANN, with the new inputs to each neuron held within the microcells. 
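To summarize the layer-wise flow, the following behavioral sketch (ours) mimics one layer end to end: per-microcell stochastic multiplications, a row-wise merge that averages the \(k\) products (the \(1/k\) scaling from Eq. (5)), and a step-function activation standing in for the seesaw gate. The threshold value is a placeholder, and reaction kinetics, leakage, and droplet losses are ignored:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000   # virtual DNA molecules per microcell droplet

def layer(x, W, threshold=0.1):
    """One ANN layer on a k x k microcell array (idealized behavioral model).

    x: k inputs in [0, 1]; W: k x k weights in [0, 1].
    Each microcell forms the doubly-nicked fraction x[j] * W[i, j];
    merging a row of k droplets averages the k products; the seesaw
    activation is modeled as a step at `threshold` (placeholder value).
    """
    k = len(x)
    out = np.empty(k)
    for i in range(k):
        products = []
        for j in range(k):
            nicked_A = rng.random(n) < x[j]      # site A encodes the input
            nicked_B = rng.random(n) < W[i, j]   # site B encodes the weight
            products.append((nicked_A & nicked_B).mean())
        merged = np.mean(products)               # row-wise droplet merge
        out[i] = 1.0 if merged > threshold else 0.0
    return out

x = rng.random(4)
W = rng.random((4, 4))
print(layer(x, W))   # binary activations of the k = 4 neurons
```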
## 6 Results In this work, we evaluate the proposed DNA Neural Engine while processing a simple ANN using the microfluidics-based DNA computing architecture, in terms of the processing latency and the area footprint of the device. The time for execution of a single layer, \(t_{\text{layer}}\), can be modelled as follows: \[t_{\text{layer}}=t_{\text{transport}}+t_{\text{mult}}+t_{\text{merge}}+t_{\text{activation}}. \tag{6}\] And, \[t_{\text{activation}}=t_{\text{displacement}}+t_{\text{threshold}}+t_{\text{gate}}+t_{\text{translation}}+t_{\text{nick}}. \tag{7}\] Here: 1. \(t_{\text{transport}}\) is the time it takes for all droplets to travel throughout the microfluidic channels for all stages in the process. It is assumed that the time taken just for transportation is not the dominant bottleneck, and so it has been estimated to be around 2 minutes. 2. \(t_{\rm mult}\) is the time taken to perform a multiplication. This is the time taken for the second nicking of the strands, the second factor in the multiplication. 3. \(t_{\rm merge}\) is the time taken to merge each of the small droplets per row into a single large droplet, the major step of the dot product summation. 4. \(t_{\rm activation}\) can be broken up into several parts, per Eq. (7): displacement, thresholding, gating, translation, and nicking. 5. \(t_{\rm displacement}\) is the time it takes to displace each of the ssDNA fragments from the doubly nicked strands. 6. \(t_{\rm threshold}\) is the time it takes for some input ssDNA strands to react with the threshold DNA. 7. \(t_{\rm gate}\) is the time it takes for the displacement of the output ssDNA alongside the replenishment reaction being used to drive the gate reaction. 8. \(t_{\rm translation}\) is the time it takes for "untagging" the right concentration of nicking enzyme and separating it. 9. \(t_{\rm nick}\) is the time it takes for the untagged nicking enzyme to react with the fresh DNA strands for the resultant node value. The size of the proposed microfluidic device will scale quadratically with the number of neurons in a layer of the ANN, \(k\), to support parallel execution of all neurons. This is because any layer with \(k\) neurons requires an array of \(k\times k\) microcells. As a pessimistic estimate, we assume each microcell will occupy an area equivalent to 6 channel widths of space in both length and breadth, given their structure with 2 microfluidic channel tracks in both the horizontal and vertical directions as well as an empty track for separation between the channels. Each track is assumed to be twice the width of a channel to allow for manufacturability of the system. The following expression shows the area \(W\) of a microcell array with \(k\times k\) microcells, where \(c\) represents the microfluidic channel width: \[W=(6kc)^{2}. \tag{8}\] A pessimistic channel width of 200 \(\upmu\)m yields a resulting area of \((6\times 0.2\times k)^{2}=1.44k^{2}\,\mathrm{mm}^{2}\) for the array [39]. For an optimistic estimate, assuming a channel width of 35 \(\upmu\)m and a condensed microchamber estimate of \(3\times 3\) channel widths per cell, we get an area estimate of \(0.01k^{2}\,\mathrm{mm}^{2}\) for the microcell array [39]. So, depending on the manufacturing technology and fabrication node adopted, the parallelism of the device can be scaled up significantly to accommodate large hidden layers. Table 1 shows the size and timing parameters of the microfluidic architecture [39]. 
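As a quick check of Eq. (8) and the two fabrication assumptions, the sketch below (ours) evaluates the array area for both channel widths:

```python
def array_area_mm2(k, channel_width_um, widths_per_cell):
    """Area of a k x k microcell array per Eq. (8): W = (widths_per_cell * k * c)^2."""
    c_mm = channel_width_um / 1000.0
    return (widths_per_cell * k * c_mm) ** 2

for k in (196, 784):
    pessimistic = array_area_mm2(k, 200, widths_per_cell=6)  # 1.44 k^2 mm^2
    optimistic = array_area_mm2(k, 35, widths_per_cell=3)    # ~0.01 k^2 mm^2
    print(f"k = {k}: pessimistic {pessimistic:,.0f} mm^2, "
          f"optimistic {optimistic:,.0f} mm^2")
```

For \(k=784\) the pessimistic estimate approaches a square meter, which is why the configurations below serialize the computation over smaller arrays.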
Here we assume that all neurons of a single layer of the ANN can be accommodated in the device simultaneously. Using these parameters, we estimate the area requirements and delay for the implementation of a simple ANN capable of classifying MNIST digits [refs]. In Table 2 we show the area and delay of the ANN for various device dimensions. The area estimate considers both a pessimistic and an optimistic dimension of the microfluidic channels and chambers from a fabrication perspective. We have considered multiple configurations (Config-1 to Config-4) corresponding to different device dimensions capable of accommodating varying numbers of microcells. These configurations offer a trade-off between device size and delay in ANN processing. In Config-1, we consider the number of microcells in the microfluidic system to be 196 \(\times\) 196, which is capable of accommodating an ANN layer with 196 neurons. Therefore, to accommodate the input layer for the ANN that receives the 28 \(\times\) 28 MNIST frames, the computations are serialized by a factor of 4 to compute the whole frame. Similarly, the other configurations require serialization by factors of 16, 49 and 196, respectively. Besides the input layer, the designed ANN has a single hidden layer of 784 neurons and an output layer with 10 neurons. The hidden layer is serialized by the same factor as the input layer, while the output layer does not need any serialization as it has only 10 neurons, except for Config-4, where it is serialized by a factor of 3. Based on the required serialization factor, and due to the limited number of microcells in a die, the delay of executing a single layer is modified as follows: \(t_{layer}=(k_{layer}/k_{physical})\times(t_{transport}+t_{mult})+t_{merge}+t_{activation}\), where \(k_{layer}\) and \(k_{physical}\) are the number of neurons in an ANN layer and the number of neurons that can be computed simultaneously on the microfluidic die, respectively. The Python model of the ANN was constrained to consider only positive inputs and weights, and it yielded an accuracy of 96% in all the configurations, as the computation model was not altered in any of them. We use a sigmoid activation function in all the layers, implemented with "seesaw" gates [37], as discussed above. This enables signal amplification in the form of a sigmoid function - precisely what we need. Again, the reader is referred to [37] for further details. We assume that the partial results of the serialized computation can be stored in the DNA solution medium in an external reservoir array [40] that communicates with the microfluidic ANN system through a microfluidic bus interface, where the reservoirs are indexed and routed, using the valve system of the microfluidic device, to the micro-chamber corresponding to the appropriate neuron. Note that a configuration that minimizes the computational delay of the ANN for the MNIST classification evaluated here would need a system with an array of \(784\times 784\) microcells to accommodate the entire input layer simultaneously. However, that would make the die size unrealistic. Therefore, such a system could consist of multiple smaller microfluidic dies integrated on a microfluidic interposer substrate capable of communicating between the dies, enabling a scalable solution [41]. This system with \(784\times 784\) microcells would reduce the delay per layer of the ANN to 8.07 hours. 
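The serialized delay model can be tabulated the same way. In the sketch below (ours), only \(t_{transport}\approx 2\) minutes comes from the text; the other component times are hypothetical placeholders standing in for the paper's Table 1 values:

```python
import math

def t_layer_hours(k_layer, k_physical,
                  t_transport=2 / 60,   # ~2 minutes, per the text
                  t_mult=1.0,           # placeholder (hours)
                  t_merge=0.5,          # placeholder (hours)
                  t_activation=2.0):    # placeholder (hours)
    """Serialized single-layer delay:
    t_layer = (k_layer / k_physical) * (t_transport + t_mult)
              + t_merge + t_activation."""
    serialization = math.ceil(k_layer / k_physical)
    return serialization * (t_transport + t_mult) + t_merge + t_activation

# Neurons computed simultaneously per die: full 784, then Config-1..4
# (serialization factors 1, 4, 16, 49, 196 for a 784-neuron layer).
for k_physical in (784, 196, 49, 16, 4):
    print(f"{k_physical:4d} neurons/die -> "
          f"{t_layer_hours(784, k_physical):6.2f} h per layer")
```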
A distinct advantage of using the DNA-based approach is that the variability of DNA as a computing medium adds an interesting new factor to ANN training. Slight variations in any reaction in the process could be used as a natural source of drift in training. Iterative feedback from executing the model could be used to correct the errors and further train the model indefinitely. This is not something reflected in traditional digital implementations without the artificial introduction of variation or noise between the models. ## 7 Conclusions Conventional silicon computing systems generally have centralized control with a CPU that can aggregate sensory data, execute arbitrarily complex analysis, and then actuate. For molecular applications, the actions of sensing, processing, and actuating must all be performed _in situ_, in a decentralized way. Our goal in this paper was to devise molecular computing in which data processing occurs in the storage system itself using the natural properties of the molecules, with no need for readout and external electronic processing. \begin{table} \begin{tabular}{|c|c|} \hline **Attribute** & **Value** \\ \hline Delay of single ANN layer (t\({}_{layer}\)) & 8.07 hrs \\ \hline Channel Width (Optimistic) & 35\(\upmu\)m \\ \hline Channel Width (Pessimistic) & 200\(\upmu\)m \\ \hline Microcell Area (Optimistic) (\(W_{min}\)) & \(0.01mm^{2}\) \\ \hline Microcell Area (Pessimistic) (\(W_{max}\)) & \(1.44mm^{2}\) \\ \hline \end{tabular} \end{table} Table 2: Summary of the estimated system performance _In situ_ molecular processing of data is critical from the standpoint of I/O: reading and writing data will always be a bottleneck for molecular systems. Computing "in-memory" is, therefore, a prerequisite. We are collaborating with an industrial partner, Seagate, on the development of microfluidics technology for DNA storage. This technology will take many years to mature; however, when it does, techniques for computing _on_ the data that is stored in DNA will be needed. While conceptual in nature, this paper demonstrates how such computation could be performed. In this paper, we presented a methodology for implementing complex operations, including ANN computation, on data stored in DNA. The paper weaves together two distinct strands: a conceptual representation of data, on the one hand, and the technology to compute with this representation, on the other hand. The representation is a fractional encoding on the concentration of nicked DNA strands. With this representation, we can compute a _fraction_ of a _fraction_ - so the operation of multiplication - borrowing ideas from stochastic logic. The "read-out" process is effected by releasing single strands via DNA toehold-mediated strand displacement. The technology is microfluidics. We described the microcell layout used in a pneumatic lab-on-a-chip (LoC) to control mixing. Mixing allows us to compute a fraction of a fraction of a concentration value. Based on this core operation, we presented a full architecture to implement neural computation. There are a number of practical challenges. One of the concerns, ubiquitous with DNA strand displacement operations, is "leakage", that is to say, errors in transforming concentrations. This occurs because we never have 100% of DNA strands participating in designated reactions. Based upon the actual experimental results, we might have to mitigate leakage with error correction methods or adopt so-called "leakless" designs [42]. 
In future work, we will investigate ambitious applications of small molecule storage and computing. Our goal is to devise _in situ_ computing capabilities, where sensing, computing, and actuating occur at the molecular level, with no interfacing at all with external electronics. The applications include: * **Image processing and classification**: We will implement a full-scale molecular image classifier using neural network algorithms. Performing the requisite image processing _in situ_, in molecular form, eliminates data transfer bottlenecks. We will quantify the accuracy of image processing in terms of the _signal-to-noise_ ratio and the _structural similarity index_. * **Machine learning**: We will explore a common data representation for integrating sensing, computing, and actuation _in situ_: hyperdimensional random vectors. Data is represented by long random vectors of integer or Boolean values. We will deploy this paradigm for machine learning, exploiting the randomness of molecular mixtures for encoding, which can naturally map to large vector representations.
2303.04117
Validation of a Hospital Digital Twin with Machine Learning
Recently there has been a surge of interest in developing Digital Twins of process flows in healthcare to better understand bottlenecks and areas of improvement. A key challenge is in the validation process. We describe a work in progress for a digital twin using an agent based simulation model for determining bed turnaround time for patients in hospitals. We employ a strategy using machine learning for validating the model and implementing sensitivity analysis.
Muhammad Aurangzeb Ahmad, Vijay Chickarmane, Farinaz Sabz Ali Pour, Nima Shariari, Taposh Dutta Roy
2023-03-07T18:28:45Z
http://arxiv.org/abs/2303.04117v2
# Validation of a Hospital Digital Twin with Machine Learning ###### Abstract Recently there has been a surge of interest in developing Digital Twins of process flows in healthcare to better understand bottlenecks and areas of improvement. A key challenge is in the validation process. We describe a work in progress for a digital twin using an agent based simulation model for determining bed turnaround time for patients in hospitals. We employ a strategy using machine learning for validating the model and implementing sensitivity analysis. Digital Twin, Simulation Modeling, Healthcare ## I Introduction Digital twins are virtual copies or simulations of systems. These systems can be used throughout the life-cycle of the physical system being digitized, from inception to decommissioning. Digital twins are often used to optimize operations, reduce costs, and improve efficiency. They can be used to test and optimize the design of a system before it is built, to monitor and diagnose problems with the system while it is in operation, and to predict and prevent failures. Digital twins are increasingly being applied in various fields such as manufacturing, healthcare, public health, governance, and meteorology [8][9]. Validation of digital twins, and of simulation models in general, poses a number of challenges. While simulations can be validated on retrospective longitudinal data, alternate scenarios by definition are not present in the ground truth. In this paper we investigate how machine learning can be used for validation of simulations in the context of large hospital systems. Simulations are often used in scenarios where there is some knowledge about a phenomenon but not enough information regarding how the outputs of a system would change for a given sub-space of inputs or perturbations, since these have not been observed before. Consider how patterns of resource usage changed during COVID-19, which rendered most prediction models ineffective. The goal of employing machine learning models in the context of simulations is two-fold. First, to create a model that can predict the outcomes with some level of fidelity and can be used as a benchmark to compare simulated outcomes based on the same parameters that the ML model used. Second, to use machine learning for sensitivity analysis of the simulation, which is normally computationally intensive [24]. This is achieved in two steps: (1) the simulation model is used to generate synthetic data that can be used to train an ML model; (2) post-hoc explanation models like SHAP [13] can then be employed to determine contributions from individual parameters/features. This winter (Nov 2022–Feb 2023), COVID-19, RSV, flu and other ILI (influenza-like illness) have pushed our hospital systems to the brink. Studying the flow of patients, from their arrival into the in-patient setting to a bed becoming available to the next patient, exposes the various bottlenecks in the process and opportunities for improvement. In the past, hospitals performed process improvement (PI) efforts to address this. However, these efforts take a long time and are generally limited to a small subset of questions. Further, they cannot be queried to enable what-if type scenarios. Building a digital twin that models the process from patient arrival to bed turnaround enables a scalable, continuously running and long-term solution. A detailed throughput flow displayed in Figure 1 follows patients as they get treated and finally discharged.
Subsequently, the dirty beds they occupied are cleaned and recycled back to the various units in the hospital. The outcome of interest is the bed turnaround time (BTT), which indicates how efficiently beds can be made available to the next waiting patient. Understanding the factors that influence BTT is critical to minimizing any bottlenecks in the flow of clean beds and hence ultimately serving more patients quickly. The goal of this study is to develop a framework that can be used to validate a digital twin model of the discharge-to-bed-ready process in hospitals. The model will later be adapted according to feedback from operational leadership for individual hospitals. This framework can be used for validation as the simulation evolves. The main contributions of this paper are as follows: * Introduce a digital twin for the discharge-to-bed-ready process. * Develop a framework for validating the digital twin utilizing both machine learning and simulation modeling. ## II Simulation Models, Digital Twins and Machine Learning A simulation model is a replica of a real-world system on the computer and can be used to evaluate 'what-if?' scenarios before actually implementing changes in the real system. For example, a simulation model of a hospital's radiology department could be used to better understand the impact that a new Magnetic Resonance Imaging scanner might have on the hospital's quality of service [10]. The difference between a digital twin and a simulation is scale, although both fall under the rubric of simulation models. Simulations are meant to accurately represent the phenomenon that they are modeling; this is referred to as validation. Verification, on the other hand, corresponds to establishing that the simulation is correctly implemented. Verification answers the question "Have we built the model right?" whereas validation answers the question "Have we built the right model?" [11] One straightforward way to validate simulations is to compare the outputs of the simulation with ground truth, i.e., historical data. For new scenarios, which may not exist in the data, we require another method to validate the output of the simulation. Here AI/ML models that are trained on historical data can be used to provide predictions for the new scenarios, which can then be compared to the simulated outcomes. In machine learning, surrogate models are often used to create simplified or interpretable models [13]. A machine learning model built from synthetic data can be conceptualized as a surrogate model. Sensitivity analysis [11] is an important part of validation. Understanding which parameters of a model can lead to large variations in the outcomes allows the model to be tested against intuition and against the knowledge of experts who know the ground truth of the phenomena we are trying to model. Even in scenarios where sensitivity analysis may not capture non-linear interactions between the various combinations of inputs and outputs, it can still help the end user understand the relative importance of inputs to the model [13]. Interpretability of predictive models is an important factor in the adoption of such models in healthcare, since many end users, e.g., physicians and nurses, are interested in knowing the driving factors behind predictions [12]. At the same time, sensitivity analysis of complex simulation models is computationally hard using traditional methods [26].
We propose using model attribution methods widely used in machine learning, like SHAP [13], for sensitivity analysis and variable importance attribution. Training machine learning models with synthetic data may help the machine learning model learn about the internal causal structure of the phenomenon of interest [26], as opposed to just using the historical data. ## III Related Work There is a large body of work on simulation modeling in various domains. It is not possible to cover the literature comprehensively; we refer the reader to [14] and [15] for an overview of simulation modeling. One of the earliest works on simulation of hospital systems was published in 1965 by Fetter and Thompson [16], who first described the problem of using simulations to understand processes in hospital systems. There are multiple surveys of simulation modeling in healthcare: Klein et al. [17], Mielczarek et al. [18], Arisha et al. [19], and Vazquez et al. [20]. While there are several simulation modeling paradigms, much of the work on simulation modeling in healthcare has used discrete event simulation. Applications of simulation modeling in healthcare include patient admission models [18], patient flow in emergency rooms [21], bed utilization in hospitals and clinics [28], modeling chronic conditions [18], and allocation of human resources in hospital systems [8]. There is some previous work on combining simulation models with machine learning approaches in healthcare. Elbattah et al. [22] describe approaches for coupling machine learning with simulation modeling for elderly discharge planning. Olave-Rojas et al. describe a hybrid model combining machine learning and simulation for pre-hospital Emergency Medical Services. Mišić et al. [23] employed simulation methods for the evaluation of machine learning systems for hospital readmission. ## IV Experiments and Results Figure 1 shows the _Discharge to Bed Ready Model_ which simulates the process by which dirty beds get cleaned and are readied for the next patient. The process flow is as follows: after the patient is discharged, the bed that was being occupied is designated as dirty by a Unit Assistant (UA) resource. An Environmental Services (EVS) resource is then assigned to clean the bed. [Fig. 1: Dirty Bed to Bed Ready Process Simulation Model] The target, bed turnaround time (BTT), corresponds to how much time it takes for a bed, previously occupied by a patient, to be cleaned. ### _Data_ The dataset spans from April 11, 2021 to March 30, 2022. The data is available for 6 different hospital facilities and prediction is also done at the facility level. The features used in the ML/simulation models are daily averages of the following: * Number of discharges during the morning/evening/night shifts ("day", "eve", "night") * Number of Unit Assistant (UA) resources during the morning/evening/night shifts ("day ua", "eve ua", "night ua") * Number of Environmental Services (EVS) resources during the morning/evening/night shifts ("day evs", "eve evs", "night evs") * Time steps for the cleaning process (4 steps): * time for a dirty bed to be assigned ("Avg Dirty Wait Duration") * time for an EVS resource to be assigned for cleaning ("Avg Assigned Wait Duration") * time for the bed to be cleaned by an EVS resource ("Avg Clean Wait Duration") * time for the clean bed to be recycled back into the unit ("Avg In Progress Wait Duration") ### _Model Setup_ #### Iv-B1 Simulation Model The simulation model is implemented in AnyLogic [27], which is a widely used simulation software.
A schema of the simulation is given in Figure 1, which shows that the simulation models two interrelated processes: the patient arrival to wheel-out process and the dirty bed to bed ready process. The simulations are stochastic in nature, and for any set of inputs the simulation is run multiple times so that the output is not a single value but rather a set of outputs. For comparison with the ground truth we use the mean of the output as well as the standard deviation. The Hospital Throughput Simulation, which is the focal point of the digital twin system, mimics the patient flow process as follows: (i) patients are admitted from the Emergency Department (ED), Direct Admissions (DA) and Operating Room (OR) into the ICU, the Medical Surgery Department or the Medical Telemetry Department; (ii) after treatment, patients either transition among units or get discharged after a discharge order is given; (iii) the dirty beds are cleaned and recycled back into the units for newly arriving patients. The agent based model simulates each dirty bed coming in randomly in time during each of the shifts, sampled from an empirical distribution based on hospital data. The data also includes the average number of discharges per shift per day. For each bed that is cleaned there are 4 time steps which correspond to the steps in the cleaning process during which the UA and EVS resources are utilized. The time spent in each stage of the cleaning process is also sampled from empirical data. The simulation is typically run for 100 days, which allows the collection of enough observations for analysis. The target variable is the time between when the dirty bed first gets assigned to be cleaned and when it is finally cleaned, which is the bed turnaround time (BTT). A bottleneck can occur in the process flow when several discharges occur within a short time, since there are limited resources and each step takes time. This leads to a delay in the total time it takes for a bed to be cleaned (after averaging over the simulation time period). #### Iv-B2 Machine Learning To give confidence in the simulation, one requires a comparison of the simulated BTT average with another benchmark. We trained an ML model on the same data that was input to the simulation and regressed against the actual BTT observed. This ML model was then used to generate predictions for new scenarios (new values of the input parameters) and can serve as a benchmark for comparison with the simulated BTT. [Fig. 2: Error Distribution of Machine Learning Model. Fig. 3: Error Distribution of Simulation Model] We used a number of regression prediction models but obtained the best results from the Gradient Boosting Regressor. ### _Results_ #### Iv-C1 Comparison with Historical Data We used the historical empirical data as inputs to the simulation model. On average, one simulation takes about one minute, and thus it took several days to run the simulations for the whole dataset. The BTT of the simulation is then compared to the historical BTT. In addition to the MAE, we also look at how often the actual BTT falls within 1–2 standard deviations (SD) of the simulated BTT. Table I gives a summary of the prediction results. _Sim 1D_ and _Sim 2D_ correspond to whether the actual value was within one and two standard deviations of the simulated prediction, respectively. The results suggest that the simulation is able to cover \(>94\%\) of the actual results.
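The following sketch shows how such a benchmark and the coverage statistics could be computed with scikit-learn. The column names and data frame layout are hypothetical stand-ins for the daily-average features listed above; the paper names only the Gradient Boosting Regressor, so the rest of the scaffold is assumed.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical columns mirroring the daily-average features; "btt" is the
# observed bed turnaround time, and df holds one row per facility-day.
FEATURES = ["day", "eve", "night", "day_ua", "eve_ua", "night_ua",
            "day_evs", "eve_evs", "night_evs", "avg_dirty_wait",
            "avg_assigned_wait", "avg_clean_wait", "avg_in_progress_wait"]

def fit_benchmark(df: pd.DataFrame) -> GradientBoostingRegressor:
    """Train the ML benchmark on the same inputs the simulation receives."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        df[FEATURES], df["btt"], test_size=0.2, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("ML MAE:", mean_absolute_error(y_te, model.predict(X_te)))
    return model

def sd_coverage(actual, sim_mean, sim_sd, n_sd=1):
    """Fraction of days whose actual BTT falls within n_sd simulated standard
    deviations of the simulated mean (the Sim 1D / Sim 2D statistics)."""
    return float(np.mean(np.abs(actual - sim_mean) <= n_sd * sim_sd))
```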
#### Iv-C2 Comparison with ML Predictions In Table I we see that the errors for the ML model are in the same range as those of the simulation, which allows comparison of simulated outcomes with ML predictions for new scenarios. The reason why in some facilities the ML results are worse than the simulated ones can be attributed to both insufficient data and the quality of the data. This is also the reason that the simulation MAE is better than the ML MAE overall: a few facilities with low-quality data bias the model into making poorer predictions. In the future we hope to speed up the simulations, which will remove the limit on the number of instances we can generate and thereby give us more data. Data quality will also improve as more facilities embrace digital reporting. In Figure 2 and Figure 3 we plot the error distributions, where the error is defined as \(error=actual-predicted\), for the ML model and for the simulation. Compared to the ML error, the simulation error is skewed, suggesting that the simulation tends to over-predict. An analysis of the outliers (error \(<\) -60) showed that the corresponding inputs into the simulation have a smaller number of EVS resources compared to the "normal" cases. Incoming dirty beds have to wait in the queue until an EVS resource finishes cleaning the bed it has been assigned. If there are fewer EVS resources, this process takes more time, as it cannot run in parallel to the extent possible with additional resources. #### Iv-C3 Sensitivity Analysis through ML models Sensitivity analysis, of which model explanations are a subset, enables the ascription of changes in outcomes to changes in parameters [29]. This allows a domain expert to determine which factors have the largest impact on the outcomes. Afterwards the domain expert, in our case the hospital operational leadership team, can choose the levers that can be used to optimize the outcomes. However, in large simulations there are multiple parameters, which makes sensitivity analysis a computational challenge [30][31]. By using simulated data to train ML models, we can test the global sensitivity of the model parameters through attribution analysis such as SHAP [13]. Figure 4 shows the relative importance of input variables at the global level for the simulation model. It shows that the time steps for bed cleaning and evening EVS resources are most important. Whereas an increase in the time steps raises the BTT, a decrease in EVS resources leads to the same outcome. The number of evening discharges has a higher impact on BTT than discharges during the day or night. This accords with the intuition around how patient flow in such departments works. One can also observe that the impact of the average clean wait duration on BTT is linear. Adding more EVS resources reduces BTT, but its effect saturates beyond a certain limit. ## V Conclusion and Future Work In this paper we described how machine learning can augment simulation modeling for a digital twin system. The specific case we described was the discharge-to-bed-ready process, in which, after a patient is discharged, the dirty bed that was occupied is cleaned and made available to the next patient. We used historical data from six hospital facilities to validate the models. We are working closely with the hospital operations leadership team to deploy these models in a real world setting where they will be used for decision making for resource allocation.
In the future we plan to expand the current framework to include a full hospital simulation. We plan to extend this work to a multi-outcome regression problem, which will include patient wait times and other metrics related to hospital operational efficiency. We also plan to use emulation for expanding the scope of the sensitivity analysis and automating parts of the validation framework to take into account updates in the model as well as in the input data. ## Acknowledgment We would like to acknowledge Dr. Stephen Parodi, Dr. Chethna Vijay, Mitchell Winnik, Dr. Yu-te Lee, Tanya Scott, Vivian Tan, Sabrina Dahlgren, Wendy Lin, Mike Page, Sean Schuller, Nitin Roy, and Ilker Yaramis for their contributions to this work. [Fig. 4: Model Explanation for the Simulation Model]
2306.10964
Multilingual Few-Shot Learning via Language Model Retrieval
Transformer-based language models have achieved remarkable success in few-shot in-context learning and drawn a lot of research interest. However, these models' performance greatly depends on the choice of the example prompts and also has high variability depending on how samples are chosen. In this paper, we conduct a comprehensive study of retrieving semantically similar few-shot samples and using them as the context, as it helps the model decide the correct label without any gradient update in the multilingual and cross-lingual settings. We evaluate the proposed method on five natural language understanding datasets related to intent detection, question classification, sentiment analysis, and topic classification. The proposed method consistently outperforms random sampling in monolingual and cross-lingual tasks in non-English languages.
Genta Indra Winata, Liang-Kang Huang, Soumya Vadlamannati, Yash Chandarana
2023-06-19T14:27:21Z
http://arxiv.org/abs/2306.10964v1
# Multilingual Few-Shot Learning via Language Model Retrieval ###### Abstract Transformer-based language models have achieved remarkable success in few-shot in-context learning and drawn a lot of research interest. However, these models' performance greatly depends on the choice of the example prompts and also has high variability depending on how samples are chosen. In this paper, we conduct a comprehensive study of retrieving semantically similar few-shot samples and using them as the context, as it helps the model decide the correct label without any gradient update in the multilingual and cross-lingual settings. We evaluate the proposed method on five natural language understanding datasets related to intent detection, question classification, sentiment analysis, and topic classification. The proposed method consistently outperforms random sampling in monolingual and cross-lingual tasks in non-English languages. ## 1 Introduction Transformer-based language models (LMs) Devlin et al. (2019); Raffel et al. (2020); Xue et al. (2021); Lewis et al. (2020); Liu et al. (2020); Radford et al. (2019); Brown et al. (2020); Wang (2021); Zhang et al. (2022) have shown strong capability in few-shot learning with prompts. This capability allows them to adapt quickly to diverse tasks from a small amount of data without tedious tuning, and is particularly useful in low-resource settings where the data of the target task, domain or language is fairly limited Louvan and Magnini (2020); Schick and Schutze (2021); Lester et al. (2021); Perez et al. (2021); Winata et al. (2022). Among various studies that aim to understand and further improve few-shot prompt learning capabilities, some recent literature presents evidence that the model's performance on few-shot learning can vary greatly according to the choice of the prompting examples. However, these studies have so far been rather preliminary and limited in scope; a comprehensive study of the impact of, and strategies for, choosing prompts is yet to be done Liu et al. (2021). Liu et al. (2021) studied the strategy of choosing semantically similar examples as prompts and showed that examples with representations similar to the queries serve as better prompting examples. However, their work demonstrated this finding solely on English tasks. Meanwhile, the sampling strategy for in-context learning on non-English and cross-lingual tasks has not been explored, and existing works are limited to random sampling Winata et al. (2021); Lin et al. (2021); Huang et al. (2022). Thus, understanding how useful each data sampling strategy is can help us apply cross-lingual transfer more effectively. In this paper, we conduct a comprehensive study applying the semantic-based sampling strategy to a wide range of multilingual and cross-lingual tasks and state-of-the-art transformer LMs. From the results, we show the effectiveness of leveraging semantically similar samples as context and evaluate them on downstream natural language understanding (NLU) tasks in four different languages: English, French, German, and Spanish. We explore in-context learning not only in the monolingual setting, but also in the cross-lingual setting, where we retrieve samples from a language different from that of the given text. We find that applying semantically similar few-shot samples to the models consistently outperforms random sampling, and that retrieval with a multilingual LM is able to select semantically similar samples across different languages.
Furthermore, in most tasks, the performance of the model decreases when we use less similar samples as context, showing the importance of sample selection in multilingual and cross-lingual in-context learning. ## 2 Multilingual Language Model Retrieval We define and formalize the LM-based retrieval and then describe how we utilize the retrieved few-shot samples in the in-context learning setting. ### Preliminaries Let us define \(\mathcal{D}\) as the distribution over the dataset. We propose to utilize a pre-trained multilingual LM for retrieving samples from the training samples. Given a set of training samples \(X=\{X_{1},X_{2},...,X_{N}\}\) in source language \(L_{1}\) and a test sample \(Q\) in target language \(L_{2}\), we show the retrieval process in Figure 1. We compute the \(D\)-dimensional sentence-level embeddings of the query, \(E_{Q}\in\mathbb{R}^{D}\), and of the training samples, \(E_{X_{i}}\in\mathbb{R}^{D}\), by aggregating the subword embeddings through average pooling over all subwords. We compute the distance between the query and each training sample using a similarity function \(\mathrm{sim}(E_{Q},E_{X_{i}})\) and retrieve the top-\(k\) nearest samples \(S=\{[S_{1,l_{1}},...,S_{k,l_{1}}],...,[S_{1,l_{|L|}},...,S_{k,l_{|L|}}]\}\), where \(L=\{l_{1},...,l_{|L|}\}\) is the set of all labels and \(S_{i,l_{j}}\) denotes the \(i^{th}\) closest sample to \(Q\) with ground truth label \(l_{j}\). ### In-Context Learning Let us define \(P\) as the prompt and \(\theta\) as the LM. The prompt \(P=[I,S,Q]\) is a concatenation of the task instruction \(I\), the few-shot samples \(S\), and the query \(Q\). We pass the prompt \(P\) as input to the model \(\theta\), and the model computes the probability \(p(l|P)\) of each label \(l\in L\), where \(L\) is the set of all labels. Then, we take the label with the highest probability as the predicted label \(\hat{l}\), formulated as follows: \[\hat{l} =\operatorname*{arg\,max}_{l}p(l|P), \tag{1}\] \[p(l|P) =\prod_{t=1}^{T}p(l_{t}|P,l_{<t}), \tag{2}\] where \(l_{<t}\) denotes the previously predicted tokens and label \(l\) can be tokenized into \(T\) subword tokens \(\{l_{1},...,l_{t},...,l_{T}\}\). **Standardization** We study standardizing our representations to correct for rogue dimensions, following Timkey and van Schijndel (2021). By post-processing the train and test embeddings, we conjecture that we are able to retrieve few-shot examples whose similarity to the query aligns better with human similarity judgements. ## 3 Experiments ### Datasets In this work, we experiment with the proposed strategy on five datasets covering a variety of monolingual and multilingual downstream NLU tasks, **intent detection:** SNIPS (Coucke et al., 2018); Multi-NLU (Schuster et al., 2019); MTOP (Li et al., 2021); **topic classification**: TREC (Li and Roth, 2002); **sentiment analysis:** SST2 (Wang et al., 2018). While examining the datasets we observe overlaps in all five of them, where identical samples are present in both the training and test sets. We filter those samples from the training set to prevent retrieval strategies from exploiting this overlap by selecting prompts identical to the query. The overlap rates for each dataset are shown in Table 7 in the Appendix. We can see that the rate is particularly high in Multi-NLU.
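A minimal sketch of the retrieval step of Sec. 2.1 is shown below, under the assumption that mean pooling over XLM-R subword states and cosine similarity are used; the Setup section names XLM-R\({}_{\text{BASE}}\) as the retriever, but the surrounding code is ours, not the authors'.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
enc = AutoModel.from_pretrained("xlm-roberta-base").eval()

@torch.no_grad()
def embed(texts):
    """Sentence embeddings via average pooling over all subword states."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state              # (B, T, D)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)          # (B, D)

def nearest_per_label(query, train_texts, train_labels, k=2):
    """Top-k most similar training samples per label, i.e. S_{i, l_j}."""
    sims = torch.cosine_similarity(embed([query]), embed(train_texts)).tolist()
    return {lbl: [train_texts[i] for i in
                  sorted((i for i, l in enumerate(train_labels) if l == lbl),
                         key=lambda i: -sims[i])[:k]]
            for lbl in set(train_labels)}
```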
### Setup We perform few-shot learning by prompting an English decoder-only LM, GPT-J NEO (1.3B) (Wang, 2021), and a multilingual LM, XGLM (1.7B) (Lin et al., 2021), and run inference with a single V100 32GB GPU. [Figure 1: LM-based retrieval. In this example, the context is in **English** and the query is in **French**.] For retrieving prompts, we compute the sentence-level representations of the query and training samples with XLM-R\({}_{\text{BASE}}\) (Conneau et al., 2020), and use Euclidean distance and cosine similarity, with and without normalizing the embeddings, to measure semantic similarity. We compare the effectiveness of three retrieval strategies: random, nearest, and farthest. random means we select the few-shot samples randomly; nearest means we select the samples with the highest similarity scores; and farthest means we select the samples with the lowest similarity scores. Experiments are conducted in both monolingual and cross-lingual settings. The former picks prompts in the same language as the query, while the latter picks prompts in a language different from the query. ## 4 Results and Discussion We show the in-context learning results on different datasets in Figure 2. Each plot presents the few-shot learning performance of GPT-J NEO and XGLM with the three retrieval strategies, while varying the value of \(k\). In general, the nearest strategy consistently outperforms random and farthest, except on the SST2 dataset (this observation is discussed further with Figure 5). Interestingly, the multilingual 1.7B XGLM model does not perform as well as the English 1.3B GPT-J NEO model under English, non-English and cross-lingual settings. **Similarity Measures** Figure 4 shows the performance with different similarity measures evaluated on MTOP (en-fr). There is no clear winner among the similarity measures; Euclidean distance can perform similarly to cosine distance, and empirically there is no evidence that applying standardization is useful. **Comparison kNN vs. In-Context Learning** To further understand why the farthest strategy outperforms nearest with both LMs on the SST2 dataset, Figure 5 shows the kNN performance and the performance gap between the nearest and farthest retrieval strategies (\(\Delta\) ACC). The fact that kNN performs relatively worse on SST2 suggests that in the SST2 dataset, semantically similar examples do not necessarily have the same labels, which explains the different performance gap between nearest and farthest observed there. This is either due to a characteristic of the dataset or an indication that the semantic representation produced by XLM-R\({}_{\text{BASE}}\) (Conneau et al., 2020) is not accurate on SST2.
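For completeness, here is a sketch of the label-scoring step of Eqs. (1)-(2) with an off-the-shelf causal LM. The model id is illustrative, and the prompt/label split below assumes the prompt's tokens form a prefix of the tokenized concatenation, which holds for typical whitespace-separated labels but is an approximation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/xglm-1.7B")
lm = AutoModelForCausalLM.from_pretrained("facebook/xglm-1.7B").eval()

@torch.no_grad()
def score_label(prompt: str, label: str) -> float:
    """Sum of log p(l_t | P, l_{<t}) over the label's subword tokens."""
    ids = tok(prompt + " " + label, return_tensors="pt").input_ids
    plen = tok(prompt, return_tensors="pt").input_ids.shape[1]
    logp = lm(ids).logits.log_softmax(-1)
    return sum(logp[0, t - 1, ids[0, t]].item()
               for t in range(plen, ids.shape[1]))

def predict(prompt: str, labels: list[str]) -> str:
    """Eq. (1): pick the label with the highest likelihood under the LM."""
    return max(labels, key=lambda l: score_label(prompt, l))
```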
2302.13742
How ubiquitous is entanglement in quantum field theory?
It is well known that entanglement is widespread in quantum field theory, in the following sense: every Reeh-Schlieder state contains entanglement between any two spatially separated regions. This applies, in particular, to the vacuum of a non-interacting scalar theory in Minkowski spacetime. Discussions on entanglement in field theory have focused mainly on subsystems containing infinitely many degrees of freedom -- typically, the field modes that are supported within a compact region of space. In this article, we study entanglement in subsystems made of finitely many field degrees of freedom, in a free scalar theory in $D+1$-dimensional Minkowski spacetime. The focus on finitely many modes of the field is motivated by the finite capabilities of real experiments. We find that entanglement between finite-dimensional subsystems is {\em not common at all}, and that one needs to carefully select the support of modes for entanglement to show up. We also find that entanglement is increasingly sparser in higher dimensions. We conclude that entanglement in Minkowski spacetime is significantly less ubiquitous than normally thought.
Ivan Agullo, Béatrice Bonga, Patricia Ribes-Metidieri, Dimitrios Kranas, Sergi Nadal-Gisbert
2023-02-27T13:14:21Z
http://arxiv.org/abs/2302.13742v2
# How ubiquitous is entanglement in quantum field theory? ###### Abstract It is well known that entanglement is widespread in quantum field theory, in the following sense: every Reeh-Schlieder state contains entanglement between any two spatially separated regions. This applies, in particular, to the vacuum of a non-interacting scalar theory in Minkowski spacetime. Discussions on entanglement in field theory have focused mainly on subsystems containing infinitely many degrees of freedom --typically, the field modes that are supported within a compact region of space. In this article, we study entanglement in subsystems made of finitely many field degrees of freedom, in a free scalar theory in \(D+1\)-dimensional Minkowski spacetime. The focus on finitely many modes of the field is motivated by the finite capabilities of real experiments. We find that entanglement between finite-dimensional subsystems is _not common at all_, and that one needs to carefully select the support of modes for entanglement to show up. We also find that entanglement is increasingly sparser in higher dimensions. We conclude that entanglement in Minkowski spacetime is significantly less ubiquitous than normally thought. ## I Introduction Quantum field theory has revealed unexpected and non-intuitive lessons about the way nature works. Arguably, one of the most notorious results of this paradigm is the Reeh-Schlieder theorem [1]. It applies to free and interacting theories alike. To discuss its consequences in the simplest possible context, we will restrict to free, real scalar field theories in \(D+1\)-dimensional Minkowski spacetimes. This restriction ensures that the concepts discussed here cannot be attributed to the interactions of the field theory under consideration; they are intrinsic properties of any quantum field theory. Consider operators of the form \(\hat{\Phi}_{F}:=\int dV\,F(x)\,\hat{\Phi}(x)\), where \(F(x)\) is a smooth function and \(dV\) the spacetime volume element. These are called smeared field operators, and \(F(x)\) are smearing functions (the smearing ensures that \(\hat{\Phi}_{F}\) is a well-defined operator in the Hilbert space1). It is well-known that the Hilbert space of the theory can be generated from states of the form Footnote 1: In the sense that it maps states to other states. This is not the case without smearing; for instance, \(\hat{\Phi}(x)\) acting on the vacuum produces a state with infinite norm, \(\langle 0|\hat{\Phi}(x)\hat{\Phi}(x)|0\rangle\to\infty\), which is clearly not part of the Hilbert space. Smeared field operators do not have this problem and are suitable candidates for the elementary observables of the theory, from which one can generate the full algebra of observables. \[|\Psi\rangle=\hat{\Phi}_{F_{1}}\hat{\Phi}_{F_{2}}\cdots\hat{\Phi}_{F_{N}}|0\rangle\,, \tag{1}\] in the sense that any state can be approximated arbitrarily well by such states, for appropriate choices of smearing functions \(F_{1}(x),\cdots,F_{N}(x)\). This is not surprising, and simply tells us that we can create any excitation of the field by acting with an appropriate combination of operators. Intuitively, one can imagine creating an excitation with support in a small laboratory by acting with a suitable set of smeared operators supported within the laboratory.
What is rather surprising --and this is the content of the Reeh-Schlieder theorem-- is that one can generate the entire Hilbert space from states of the form (1) _even if we restrict the smearing functions to be supported within an arbitrarily small open set of Minkowski spacetime_. In simple words, one can excite the field in an arbitrary corner of the universe by acting on the vacuum with operators supported exclusively within our small lab! (One cannot use this fact, however, to produce faster-than-light communication [2; 3; 4].) Although puzzling at first, this is reminiscent of the properties of maximally entangled states in quantum mechanics [2]. Consider two quantum mechanical systems with Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) of the same dimension \(n\), and let \(|\Psi\rangle\) be a pure, maximally entangled state. It is well known that _every_ state in \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) can be obtained by acting on \(|\Psi\rangle\) with an operator _restricted to subsystem \(A\)_: \[\forall|\alpha\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\text{ there exists }\hat{O}_{A}\text{ such that}\] \[|\alpha\rangle=\hat{O}_{A}\otimes\hat{\mathbb{I}}_{B}|\Psi\rangle\,,\] where \(\hat{\mathbb{I}}_{B}\) is the identity operator in \(\mathcal{H}_{B}\) (see, for instance, [5]).
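As a quick finite-dimensional check of this statement (our worked example, not taken from the article), let \(|\Psi\rangle=n^{-1/2}\sum_{k}|k\rangle_{A}|k\rangle_{B}\) and let \(|\alpha\rangle=\sum_{i,j}c_{ij}|i\rangle_{A}|j\rangle_{B}\) be an arbitrary target state. The operator \(\hat{O}_{A}=\sqrt{n}\sum_{i,j}c_{ij}|i\rangle\langle j|\) does the job, since the factor \(\sqrt{n}\) cancels the normalization of \(|\Psi\rangle\): \[\bigl(\hat{O}_{A}\otimes\hat{\mathbb{I}}_{B}\bigr)|\Psi\rangle=\sum_{k}\sum_{i,j}c_{ij}\,|i\rangle\langle j|k\rangle\otimes|k\rangle=\sum_{i,k}c_{ik}\,|i\rangle_{A}|k\rangle_{B}=|\alpha\rangle\,.\] This is the finite-dimensional analogue that the comparison above appeals to.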
2307.01598
Fully general relativistic simulations of rapidly rotating quark stars: Oscillation modes and universal relations
(Abridged) Numerical simulation of strange quark stars (QSs) is challenging due to the strong density discontinuity at the stellar surface. In this paper, we report successful simulations of rapidly rotating QSs and study their oscillation modes in full general relativity. Building on top of the numerical relativity code \texttt{Einstein Toolkit}, we implement a positivity-preserving Riemann solver and a dust-like atmosphere to handle the density discontinuity at the surface. We demonstrate the robustness of our numerical method by performing stable evolutions of rotating QSs close to the Keplerian limit and extracting their oscillation modes. We focus on the quadrupolar $l=|m|=2$ $f$-mode and study whether they can still satisfy the universal relations recently proposed for rotating neutron stars (NSs). We find that two of the three proposed relations can still be satisfied by rotating QSs. For the remaining broken relation, we propose a new relation to unify the NS and QS data by invoking the dimensionless spin parameter $j$. The onsets of secular instabilities for rotating QSs are also studied by analyzing the $f$-mode frequencies. Same as the result found previously for NSs, we find that QSs become unstable to the Chandrasekhar-Friedman-Schutz instability when the angular velocity of the star $\Omega \approx 3.4 \sigma_0$ for sequences of constant central energy density, where $\sigma_0$ is the mode frequency of the corresponding nonrotating configurations. For the viscosity-driven instability, we find that QSs become unstable when $j\approx 0.881$ for both sequences of constant central energy density and constant baryon mass. Such a high value of $j$ cannot be achieved by realistic rotating NSs before reaching the Keplerian limit.
Kenneth Chen, Lap-Ming Lin
2023-07-04T09:38:49Z
http://arxiv.org/abs/2307.01598v2
Fully general relativistic simulations of rapidly rotating quark stars: Oscillation modes and universal relations ###### Abstract Numerical simulation of strange quark stars (QSs) is challenging due to the strong density discontinuity at the stellar surface. In this paper, we report successful simulations of rapidly rotating QSs and study their oscillation modes in full general relativity. Building on top of the numerical relativity code Einstein Toolkit, we implement a positivity-preserving Riemann solver and a dust-like atmosphere to handle the density discontinuity at the surface. The robustness of our numerical method is demonstrated by performing stable evolutions of rotating QSs close to the Keplerian limit and extracting their oscillation modes. We focus on the quadrupolar \(l=|m|=2\)\(f\)-mode and study whether they can still satisfy the universal relations recently proposed for rotating neutron stars (NSs). We find that two of the three proposed relations can still be satisfied by rotating QSs. For the remaining broken relation, we propose a new relation to unify the NS and QS data by invoking the dimensionless spin parameter \(j\). The onsets of secular instabilities for rotating QSs are also studied by analyzing the \(f\)-mode frequencies. Same as the result found previously for NSs, we find that QSs become unstable to the Chandrasekhar-Friedman-Schutz instability when the angular velocity of the star \(\Omega\approx 3.4\sigma_{0}\) for sequences of constant central energy density, where \(\sigma_{0}\) is the mode frequency of the corresponding nonrotating configurations. For the viscosity-driven instability, we find that QSs become unstable when \(j\approx 0.881\) for both sequences of constant central energy density and constant baryon mass. Such a high value of \(j\) cannot be achieved by realistic uniformly rotating NSs before reaching the Keplerian limit. The critical value for the ratio between the rotational kinetic energy and gravitational potential energy of rotating QSs for the onset of the instability, when considering sequences of constant baryon mass, is found to agree with an approximate value obtained for homogeneous incompressible bodies in general relativity to within \(4\%\). ## I Introduction ### Quark stars Do strange quark stars (QSs) exist in nature? The question remains unanswered since the hypothesis that strange quark matter composed of \(u\)-, \(d\)-, and \(s\)-quarks may be the ground state of baryonic matter was proposed as early as fifty years ago [1; 2; 3]. If strange quark matter is only metastable, then hybrid stars consisting of quark matter cores surrounded by nuclear matter in the envelope may also exist (e.g., [4; 5]). More recently, the possibility that quark matter containing only \(u\)- and \(d\)-quarks is the true ground state of baryonic matter for a baryon number larger than \(300\) has also been considered [6]. While there is still no evidence for their existence, QSs have been proposed to explain some compact-object observations in the past [7]. More recently, the low-mass (\(<1M_{\odot}\)) central object of the supernova remnant HESS J1731-347 is suggested to be a QS [8; 9; 10], as it is not possible to form such a low mass neutron star (NS) by conventional core-collapse supernova [11]. In the era of gravitational wave (GW) astronomy, the event GW190814 [12] has also been suggested to be a black hole-QS system [13]. 
Constraints on the equation of state (EOS) of quark matter have also been considered by assuming that the events GW170817 [14] and GW190425 [15] were due to merging QSs instead of NSs [16]. While the observed kilonova signal associated with GW170817 suggests that the event was due to a binary NS merger, this event by itself does not rule out the existence of QSs since NSs and QSs could coexist according to the two-families scenario [17; 18]. Better constraints on the properties of quark matter or even direct evidence for QSs might be possible as more GW events are expected to be observed in the coming decade. Numerical relativity simulations are indispensable tools for studying the GWs emitted from strongly dynamical spacetimes, such as the mergers of binary compact objects. While hydrodynamic simulations of NSs in full general relativity are performed routinely nowadays by different research groups (see [19; 20; 21] for recent reviews), only a few relativistic simulations have been performed for QSs. The first binary QS simulation was done in 2009 [22] using the smoothed particle hydrodynamics method and the conformally-flat approximation in general relativity. Fully general relativistic simulations of single and binary QSs [23; 24; 25] became available only in the past two years. In this paper, we add a contribution to this line of research by demonstrating our ability to evolve rapidly rotating QSs and study their oscillation modes. Our simulations were performed using the publicly available code Einstein Toolkit[26; 27; 28], with our own implementation of a positivity-preserving Riemann solver and a dust-like EOS for the "atmosphere." Apart from the fact that the study of oscillation modes of compact stars is important in its own right (see below), the demonstration of stable evolutions of rapidly rotating QSs would be an important milestone for us to achieve before attempting generic nonlinear dynamical situations, such as the gravitational collapse of a rapidly rotating unstable QS. The challenge in evolving a bare QS (without a thin nuclear-matter crust) described by the standard MIT bag-model EOS in a hydrodynamic simulation stems from its sharp high-density surface where the pressure vanishes. The high-density surface is directly in contact with the numerical atmosphere, which is introduced to fill up the vacuum space with the purpose of stabilizing traditional grid-based hydrodynamic simulations. The low-density atmosphere is considered to have a negligible impact on the dynamics of compact stars when the evolution time is relatively short and comparable to the dynamical timescale, such as in the case of binary inspiral and merger; however, its small effects, if not properly handled, would accumulate and eventually kill a long-time simulation of a single stable star. The large contact discontinuity in density at the QS surface can be regarded as a special case of a shock wave. In the context of shock-capturing hydrodynamic schemes, it is well known that low-order Godunov-type schemes [29; 30] that are strongly dissipative will smear the shock, while high-order schemes will usually introduce spurious oscillations that result in the erroneous reconstruction of the density near the surface. The error so introduced is typically small in NS modeling but serious for QSs, where it can cause significant violations of mass conservation. It is essential to preserve the positivity of the density (and pressure) for a QS near its surface.
A fine balance could be achieved by combining high-resolution shock-capturing methods with the so-called positivity-preserving (PP) Riemann solver, which was first introduced into the numerical relativity community by Radice, Rezzolla, and Galeazzi in [31]. The main idea is that one can always build a finite-volume PP scheme by integrating a high-order solver with a first-order one under a more restrictive Courant-Friedrichs-Lewy (CFL) condition, as shown by Shu _et al._[32; 33; 34; 35], since first-order Godunov-type schemes are known to have the PP property [36]. The PP scheme was originally designed to treat the low-density atmosphere of NSs. Better mass conservation and sharper surface density profiles were obtained. In this study, we applied the idea to QSs and achieved similar improvements, with a dust-like EOS designed for the atmosphere. As required by the continuity conditions, the atmospheric density may no longer be small but can be of the same order as the surface density. When we model the atmosphere by nearly pressureless dust particles, large truncation errors in the densities will not cause noticeable disturbances in the pressure profiles. Our strategy for handling the sharp QS surface is different from those employed in recent simulations of QSs. In [24; 25], Zhou _et al._ modified the primitive-variable recovery procedure together with the addition of a thermal component to the cold MIT bag model EOS. On the other hand, Zhu and Rezzolla [23] introduced a thin crust described by a polytropic EOS at the QS surface. ### Oscillations of compact stars Pulsations of compact stars are potential sources of GWs, and their detection can provide important information about the uncertain properties of the supranuclear EOS inside a traditional NS. The detected signals may even provide evidence for the existence of deconfined quark matter, which could exist in the core of a hybrid star model or in the form of a pure strange QS [3; 5]. The successful detection of GWs from merger events by the Laser Interferometer Gravitational Wave Observatory (LIGO)-Virgo Scientific Collaboration [37; 38; 39] has opened a new era of observational astronomy. Advanced LIGO, Virgo, the Kamioka Gravitational Wave Detector, and next-generation detectors such as the Einstein Telescope [40] would have sufficient sensitivities in the high-frequency band (\(\sim\) kHz) to probe the GWs emitted from pulsating compact stars. The oscillation modes of most interest in GW astronomy are the quadrupolar (\(l=2\)) fundamental \(f\)-mode, the first few overtones of the pressure \(p\)-modes, the rotational \(r\)-mode, and perhaps the first spacetime \(w\)-modes [41; 42]. The \(f\)-mode is particularly relevant to the GW signals emitted from isolated and binary NS systems. On the one hand, the \(f\)-mode is expected to contribute strongly to the GWs emitted from a proto-NS [43; 44]. On the other hand, for a binary NS system, the dynamical tidal effects due to the coupling between the excited \(f\)-mode and tidal fields during the late inspiral phase are important to the dynamics and the emitted GWs of the system [45; 46]. While the oscillation modes of nonrotating compact stars can be formulated and computed as eigenvalue problems (see [41] for a review), the situation for rapidly rotating stars is more complicated, as the effects of rotation cannot be treated perturbatively.
A standard approach to studying the oscillation modes of a rapidly rotating NS in general relativity is to suitably perturb the star and follow its subsequent evolution using a hydrodynamic code; the mode frequencies are then identified from the Fourier spectra of the fluid variables. Due to the complexity of general relativity, such simulations are usually performed under the Cowling approximation [47; 48; 49; 50] or the conformally flat assumption [51; 52]. An exception is the work by Zink _et al._[53] in 2010, which investigated the \(f\)-modes of uniformly rotating polytropic stars using a nonlinear hydrodynamics code in full general relativity. More recently, Krüger and Kokkotas (hereafter KK) in [54; 55] studied the \(f\)-modes of rapidly rotating NSs with realistic EOSs, taking into account the spacetime dynamics in a linearized theory. In this work, we shall study the oscillation modes of rapidly rotating QSs in full general relativity for the first time. Focusing on the quadrupolar (\(l=2\)) \(f\)-mode, the three \(m=0,\pm 2\) modes are degenerate for a nonrotating spherical star, where \(m\) is the azimuthal quantum number. They split when the star rotates, similar to the Zeeman splitting in quantum mechanics, though the splitting does not increase linearly with the rotation rate due to the high nonlinearity of the system. The two nonaxisymmetric (\(m\neq 0\)) modes are usually called bar modes, and they are subject to various instabilities [56; 57]. In this work, we shall determine the onsets of secular instabilities of rapidly rotating QSs driven by GW and viscosity dissipations. For the GW-driven Chandrasekhar-Friedman-Schutz (CFS) instability [58; 59], the onset occurs at a neutral point, where the counterrotating \(m=2\) mode frequency \(\sigma_{i}\) observed in the inertial frame passes through zero (i.e., \(\sigma_{i}=0\)). In [54], KK found that the onset of the CFS instability for a rotating NS occurs when the angular velocity of the star satisfies \(\Omega\approx 3.4\sigma_{0}\) for sequences of constant central energy density, where \(\sigma_{0}\) is the \(f\)-mode frequency of the corresponding nonrotating model. The conclusion is approximately insensitive to the chosen EOS models in their study. We shall see that rapidly rotating QSs satisfy this result as well. The bar modes are also subject to another type of instability, which is driven by viscosity. This instability sets in when the \(m=-2\) corotating mode frequency \(\sigma_{c}\) in the rotating frame passes through zero (i.e., \(\sigma_{c}=0\)). The Newtonian analysis of the onset of this instability was carried out by Chandrasekhar [60] for a sequence of uniformly rotating uniform-density Maclaurin spheroids. It was found that a new sequence of triaxial Jacobi ellipsoids branches off the Maclaurin sequence when the Newtonian ratio between the rotational kinetic energy \(T\) and the gravitational potential energy \(|W|\) reaches the critical value \((T/|W|)_{\rm crit,Newt}=0.1375\). Above this critical value, the Maclaurin spheroids are subject to the viscosity-driven instability and migrate towards the Jacobi sequence by dissipating energy while conserving angular momentum. A Jacobi ellipsoid is particularly relevant to GW astrophysics, as its time-varying mass quadrupole moment will continuously emit GW radiation. In [61], it is found that general relativity weakens the Jacobi-like bar-mode instability.
Furthermore, a stiff EOS with an adiabatic index as large as 2.5 is required for a \(1.4M_{\odot}\) polytropic star to become unstable for \(\Omega\) lower than the Keplerian limit [62]. The onset of the instability is thus expected to be difficult to achieve (if not impossible) for realistic rotating NSs. On the other hand, rotating QSs would be the most promising candidates to achieve the instability, as they can generally support higher rotation rates [63; 64] and are stiff enough to be approximated well by incompressible models [65]. The viscosity-driven instability of rotating QSs was already studied more than twenty years ago [66; 67; 63], and the instability onset was found to occur generally before the Keplerian limit. However, these studies were not based on an analysis of the oscillation modes, but on perturbing the stellar configuration during the iteration steps of the calculation of an axisymmetric equilibrium rotating star. If the perturbation grows during the iteration, then the star is declared to be unstable. In this work, we study for the first time the onset of the viscosity-driven instability by observing how the corotating mode frequency \(\sigma_{c}\) in the rotating frame passes through zero as the rotation rates of sequences of QSs approach the Keplerian limit. It should be noted, however, that there is no physical viscosity in our simulations, as all stars are modeled by perfect fluids. While the oscillation modes were identified in our three-dimensional simulations, the spontaneous breaking of axisymmetry due to the instability was not observed on the dynamical timescale. ### Universal relations In the last decade, the discoveries of various approximate EOS-insensitive universal relations of compact stars (see [68; 69] for reviews) have been not only of theoretical interest but also of importance in astrophysical applications, such as measuring masses and radii with x-ray pulse profile modeling [70], analyzing GW signals to constrain the maximum mass of NSs [71], and reducing the number of parameters in theoretical gravitational waveform models for binary NS inspirals [72; 73; 74; 75]. In contrast to the mass-radius relations of compact stars, universal relations connecting different physical quantities are generally insensitive to EOS models to about the 1% level. Many of the investigations of universal relations focus only on traditional NSs, though it is known that bare QSs also satisfy some of the relations established for NSs [76; 77; 78; 79]. Besides searching for new universal relations, which may provide astrophysical applications, it is also interesting to test existing universal relations against different physics inputs, such as thermal effects relevant to hot newborn NSs [80; 81] or superfluid dynamics for cold NSs [82]. Attempts to find universal relations for the oscillation modes of compact stars date back to the seminal work of Andersson and Kokkotas [83] more than twenty years ago, which was then followed by Benhar _et al._[84] and Tsui and Leung [85; 86]. While these earlier universal relations depend only weakly on the EOS models to a certain accuracy, they are not as robust as those discovered later. The \(f\)-mode is now known to be connected to the moment of inertia [76] and the tidal deformability [79] by robust universal relations which are insensitive to the EOS models to within about the 1% level, when the relevant physical quantities are suitably scaled (see, e.g., [87; 88; 89; 90] for recent work).
However, these studies were based on nonrotating NSs and QSs only. Recently, KK [54] found three universal relations for the bar modes of rapidly rotating NSs using their newly developed code that takes into account spacetime dynamics in a linearized theory [55]. In this paper, we shall study whether their universal relations can also be applied to rapidly rotating QSs. We find that two of their relations can still be satisfied by bare QSs very well, but one of them is broken quite significantly already at moderate rotation rates. In addition to the \(f\)-mode, we also study the first \(p\)-mode of rotating QSs. For the class of QS models studied in this paper, we report fitting relations for the \(p\)-mode frequencies of both nonrotating and rotating stars. The plan of the paper is as follows. In Sec. II, we discuss the formulation and numerical methods employed in this work. Section III presents the numerical results, including tests that were performed to validate our simulations. Finally, we conclude the paper in Sec. IV. Unless otherwise noted, we adopt the unit convention \(c=G=M_{\odot}=1\), where \(c\) is the speed of light, \(G\) is the gravitational constant, and \(M_{\odot}\) is the solar mass. ## II Formulation and Numerical Methods Our simulations were performed using the publicly available code Einstein Toolkit, which is built on top of the CACTUS computational infrastructure [91; 92]. The spacetime is evolved using the standard CCZ4 formulation of the Einstein equations [93; 94] implemented in the thorn code McLachlan [95; 96] of Einstein Toolkit. We choose the parameters of the CCZ4 formulation to be \(\kappa_{2}=0\) and \(\kappa_{3}=0.5\) in our simulations. As for \(\kappa_{1}\), we typically choose it to be \(0.05\). Although its optimal value can vary for different models, physical results are insensitive to these choices as long as the constraint violation does not grow and invalidate the simulations. The general relativistic hydrodynamics equations are solved using the thorn code GRHydro [97; 98]. The mesh-refinement driver CARPET [99; 100] is employed to provide an adaptive mesh refinement approach to increase resolution. The standard gauge conditions, the "1+log" slicing [101] and the Gamma-driver shift condition [102], are adopted, where the damping coefficient introduced to avoid strong oscillations in the shift is chosen to be \(1/M\) (with \(M\) being the gravitational mass). Furthermore, numerical dissipation of the Kreiss-Oliger type [103] is introduced for the spacetime variables and gauge quantities following the suggestion of [104]. The formulation and numerical setup used in our simulations are quite standard choices for general relativistic hydrodynamic modeling, such as in the case of binary neutron star mergers. In order to simulate rapidly rotating QSs for a sufficient duration in our study, we implemented a positivity-preserving Riemann solver in the GRHydro thorn, which will be discussed below. ### Positivity preserving Riemann solver The fluid-vacuum interface at the surface of a star in hydrodynamic modeling is subject to perturbations mainly due to truncation errors. These perturbations could be significant when the surface has a nonzero finite density, as in the case of a bare QS. This is particularly so for simulations using Cartesian coordinates, where the grid points do not match well with the smooth stellar surface.
When a free boundary condition is used, the freely evolved vacuum would quickly encounter numerical problems, as it may lead to nonphysical negative densities. The problem is tackled, in general, by introducing an artificial atmosphere with a floor density \(\rho_{f}\) in hydrodynamic simulations. It is therefore necessary and desirable to preserve the positivity of certain hydrodynamical variables, essentially the density and the pressure, in a free evolution scheme. For the Newtonian Euler equations, it has been shown that both the density and the pressure are guaranteed to be positive by a well-designed limiter when no source terms are present [35]. In [34], four types of source terms were tested for discontinuous Galerkin schemes. However, in relativistic hydrodynamics, a rigorous strategy is still lacking for a generic EOS. Here we discuss the PP Riemann solver introduced in [31; 35], which we implemented and found to provide stable evolutions of rapidly rotating QSs in this study. Let us first give an outline of the conservative form of the relativistic hydrodynamics equations to define the variables for further discussion. The standard \(3+1\) Arnowitt-Deser-Misner form [105] of the spacetime metric is given by \[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=(-\alpha^{2}+\beta_{i}\beta^{i})dt^{2}+2\beta_{i}dtdx^{i}+\gamma_{ij}dx^{i}dx^{j},\] where \(g_{\mu\nu}\), \(\alpha\), \(\beta^{i}\), and \(\gamma_{ij}\) are the spacetime 4-metric, lapse function, shift vector, and spatial 3-metric, respectively. The energy-momentum tensor of the matter inside the star is assumed to take the perfect fluid form [98] \[T_{\mu\nu}=\rho hu_{\mu}u_{\nu}+Pg_{\mu\nu}, \tag{1}\] where \(\rho\) is the rest-mass density, \(u^{\mu}\) is the fluid four-velocity, \(P\) is the pressure, \(h=1+\epsilon+P/\rho\) is the specific enthalpy, and \(\epsilon\) is the specific internal energy. The equations of motion for the fluid are the conservation law of baryon number, i.e., Eq. (2a), and the conservation of energy and momentum, i.e., Eq. (2b), \[\nabla_{\mu}(\rho u^{\mu}) = 0, \tag{2a}\] \[\nabla_{\mu}T^{\mu\nu} = 0, \tag{2b}\] which are solved in the conservative form of the Valencia formulation [106; 107], \[\frac{\partial\mathbf{U}}{\partial t}+\frac{\partial\mathbf{F}^{i}}{\partial x^{i}}=\mathbf{S}, \tag{3}\] with the conserved variables \[\mathbf{U}=[D,S_{j},\tau]=\sqrt{\gamma}\left[\rho W,\rho hW^{2}v_{j},\rho hW^{2}-P-\rho W\right], \tag{4}\] where \(\gamma\) is the determinant of \(\gamma_{ij}\), the three-velocity is \(v^{i}=(u^{i}/u^{t}+\beta^{i})/\alpha\), and \(W=(1-v^{i}v_{i})^{-1/2}\) is the Lorentz factor. For three-vectors like \(v^{i}\) and \(\beta^{i}\), the indices are raised and lowered by the 3-metric, e.g., \(v_{i}=\gamma_{ij}v^{j}\). The fluxes are \[\mathbf{F}^{i}=\alpha[D\tilde{v}^{i},S_{j}\tilde{v}^{i}+\sqrt{\gamma}P\delta^{i}_{j},\tau\tilde{v}^{i}+\sqrt{\gamma}Pv^{i}], \tag{5}\] and the source functions are \[\mathbf{S}=\alpha\sqrt{\gamma}[0,T^{\mu\nu}(\partial_{\mu}g_{\nu j}-\Gamma^{\lambda}_{\mu\nu}g_{\lambda j}),\alpha(T^{\mu 0}\partial_{\mu}\ln\alpha-T^{\mu\nu}\Gamma^{0}_{\mu\nu})], \tag{6}\] where \(\tilde{v}^{i}=v^{i}-\beta^{i}/\alpha\) and \(\Gamma^{\lambda}_{\mu\nu}\) are the 4-Christoffel symbols. To illustrate the idea of the PP scheme, we first consider a source-free scalar conservation law in one dimension [31] \[\frac{\partial u}{\partial t}+\frac{\partial f(u)}{\partial x}=0. \tag{7}\]
A theorem [108; 109] states that a high-order temporal integration scheme, such as a Runge-Kutta scheme that is a convex combination of forward Euler steps, will maintain the total variation diminishing (TVD) property and the positivity of \(u\), provided this is true for the first-order forward Euler method. Methods of this kind are known as strong stability-preserving or TVD methods. Considering a discretization scheme using the forward Euler method, \[\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}=\frac{f_{i-1/2}-f_{i+1/2}}{\Delta x}, \tag{8}\] we can arrange it in the form \[u_{i}^{n+1}=\frac{1}{2}(u_{i}^{+}+u_{i}^{-}), \tag{9}\] where \[u_{i}^{+} = u_{i}^{n}+2\frac{\Delta t}{\Delta x}f_{i-1/2}, \tag{10a}\] \[u_{i}^{-} = u_{i}^{n}-2\frac{\Delta t}{\Delta x}f_{i+1/2}. \tag{10b}\] It is thus sufficient that both \(u_{i}^{+}\) and \(u_{i}^{-}\) are positive, for then \(u_{i}^{n+1}\) is positive as well. It is proven that positivity is guaranteed for the first-order Lax-Friedrichs (LF) flux [32] with a more restrictive CFL-like condition, \(\Delta t/\Delta x\leq 1/2c\), where \(c\) is the largest speed of sound [35]. However, this low-order scheme is too dissipative to capture the features of shocks. The principle of the PP solver is to combine it with a high-order (HO) scheme, \[f_{i+1/2}^{\rm PP}=\alpha f_{i+1/2}^{\rm HO}+(1-\alpha)f_{i+1/2}^{\rm LF}, \tag{11}\] where \(f_{i+1/2}^{\rm HO}\) is the HO flux, \(f_{i+1/2}^{\rm LF}\) is the LF flux, and \(\alpha\in[0,1]\) is a coefficient to be determined. In our simulations, we selected the Marquina solver [110; 111] as our HO flux. When \(\alpha=1\), the PP scheme reduces to the HO flux, which is applied in the bulk of a star. On the other hand, around the fluid-atmosphere interface at the stellar surface, the PP scheme searches for the optimal value of \(\alpha\), trading accuracy for positivity. The first component of Eq. (3), i.e., the continuity equation, is source-free [see Eq. (6)], and the PP scheme is directly applicable. Its conserved variable is \(D=\sqrt{\gamma}W\rho\), where \(\gamma\) and \(W\) are both strictly positive, so ensuring the positivity of \(D\) serves our purpose of ensuring the positivity of \(\rho\). However, the pressure-related term, the conserved energy density \(\tau\), has a complex source term in Eq. (6). While the authors of [31] suggested enforcing a floor value on \(\tau\), empirically we found it adequate to apply the PP limiter to \(\tau\) as well. In our three-dimensional Cartesian-grid case, the CFL-like condition becomes \(\Delta t/\Delta x\leq 1/6c\), and the PP flux \(f_{i+1/2}^{\rm PP}\) is calculated component by component. Strictly speaking, this construction, which requires the contribution at each interface to be non-negative separately, is more restrictive than necessary, since only the positivity of the sum in Eq. (9) is really demanded; it can therefore also tolerate a small negative contribution (if any) from the source term. Let the conserved variable \(u\) represent either \(D\) or \(\tau\). The value of \(\alpha\) is determined as follows. If \(u_{i+1}^{+}(f_{i+1/2}^{\rm HO})\) is positive, then \(\alpha(u_{i+1}^{+})=1\), meaning that the original HO flux is used. Otherwise, \[\alpha(u_{i+1}^{+})=\frac{u_{i+1}^{+}(f_{i+1/2}^{\rm LF})}{u_{i+1}^{+}(f_{i+1/2}^{\rm LF})-u_{i+1}^{+}(f_{i+1/2}^{\rm HO})}, \tag{12}\] and similarly for \(\alpha(u_{i}^{-})\). The PP property of the LF scheme ensures that a solution \(\alpha\geq 0\) always exists. We then have \[\alpha(u)=\min\left(\alpha(u_{i+1}^{+}),\alpha(u_{i}^{-})\right). \tag{13}\] This determines the PP flux \(f_{i+1/2}^{\rm PP}\) of one component.
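To make the limiter concrete, the following minimal Python sketch (our own illustration, not code from GRHydro; the function name and array arguments are hypothetical) applies Eqs. (10)-(13) to a single conserved variable on a one-dimensional grid.

```python
import numpy as np

def pp_flux(u, f_ho, f_lf, dt, dx):
    """Blend HO and LF interface fluxes so that u stays positive.
    f_ho and f_lf hold the fluxes at the interfaces i+1/2 (length len(u)-1)."""
    lam = 2.0 * dt / dx
    f_pp = np.empty_like(f_ho)
    for i in range(len(f_ho)):
        # Half-updates driven by the flux at interface i+1/2 [Eq. (10)]:
        u_plus = lambda f: u[i + 1] + lam * f    # u_{i+1}^{+}
        u_minus = lambda f: u[i] - lam * f       # u_{i}^{-}
        alphas = []
        for half in (u_plus, u_minus):
            if half(f_ho[i]) > 0.0:
                alphas.append(1.0)               # HO flux is already safe
            else:
                # Eq. (12): zero of the blended half-update; well defined
                # because the LF half-update is non-negative under the
                # CFL-like condition.
                alphas.append(half(f_lf[i]) / (half(f_lf[i]) - half(f_ho[i])))
        alpha = min(alphas)                      # Eq. (13)
        f_pp[i] = alpha * f_ho[i] + (1.0 - alpha) * f_lf[i]  # Eq. (11)
    return f_pp
```

Since the half-updates are affine in the flux, the coefficient of Eq. (12) is exactly the root of the blended half-update, which is why a valid \(\alpha\in[0,1]\) always exists whenever the LF half-update is non-negative.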
As there are five components in the flux \(\mathbf{F}^{i}\), different values of \(\alpha\) could be applied to different components. Empirically, we found that using the smaller of \(\alpha(D)\) and \(\alpha(\tau)\) for all five components worked well. In practice, a smaller Courant factor or a more conservative choice of \(\alpha\) can always be chosen if there is an intolerably large violation of mass conservation, which indicates a poor preservation of positivity. The implementation of the PP solver allows us to set the floor density of the atmosphere to \(\rho_{f}=10^{-18}\), which is about \(10^{-15}\) of the typical central density. In principle, the PP scheme allows the atmosphere to evolve freely down to densities at the round-off precision. In our typical simulations, the violation of total mass conservation near \(t=2000\approx 9.85\) ms for a Courant factor of \(0.16\) is of \(\mathcal{O}(0.1\%)\). More numerical tests are presented in Sec. III.

### Equation of state

#### For QSs and NSs

As we are interested in extracting the oscillation modes of QSs excited by small perturbations, thermal effects like shock heating play a negligible role in the simulations. We thus assume the stars are described by zero-temperature EOS models. In order to model bare QSs, we parametrize the linear approximation of the MIT bag model EOS [112] by the square of the speed of sound \(c_{ss}\) and the bag constant \(B\) as \[P=c_{ss}e-(1+c_{ss})B, \tag{14}\] where \(P\) is the pressure for a given energy density \(e\). It is convenient to further parametrize it by the ratio \(\kappa\equiv\rho/\rho_{S}\) between the rest-mass density \(\rho\) and the surface density at zero pressure, \(\rho_{S}=(1+c_{ss})B/c_{ss}\). The full EOS is then given by \[\rho = \rho_{S}\kappa, \tag{15a}\] \[P = B(\kappa^{1+c_{ss}}-1), \tag{15b}\] \[e = B\left(\frac{\kappa^{1+c_{ss}}}{c_{ss}}+1\right), \tag{15c}\] and \(\kappa\) is closely related to the enthalpy \(h\) through the relation \[h=\frac{e+P}{\rho}=\kappa^{c_{ss}}. \tag{16}\] In addition to the conventional choice of \(c_{ss}=1/3\) and \(B=B_{60}\equiv 60\ \mathrm{MeV/fm}^{3}\), hereafter denoted by "MIT1," we also include the following models: "MIT2" for \(c_{ss}=1\) and \(B/B_{60}=3\), "MIT3" for \(c_{ss}=2/3\) and \(B/B_{60}=3/2\), and "MIT4" for \(c_{ss}=1/2\) and \(B/B_{60}=3/2\), to cover a range of the parameter space for QSs, as we shall also explore the robustness of some EOS-insensitive universal relations. We also include a nuclear-matter EOS model for NSs as a benchmark for comparison. Instead of using the original tabular EOS data, we use a piecewise polytropic model [113] to represent analytically the SFHo EOS [114; 115], which was not included in the study of the universal relations of rapidly rotating NSs by KK [54]. In particular, we construct a five-piece model so that the pressure \(P\) and specific internal energy \(\epsilon\) are everywhere continuous and satisfy \[P(\rho) = K_{i}\rho^{\Gamma_{i}}, \tag{17a}\] \[\epsilon(\rho) = a_{i}+\frac{K_{i}}{\Gamma_{i}-1}\rho^{\Gamma_{i}-1}, \tag{17b}\] inside the range of rest-mass density \(\rho_{i-1}\leq\rho<\rho_{i}\) (\(i=1,2,3,4\)). The parameters of our EOS models are summarized in Tables 1 and 2, and the corresponding mass-radius relations for nonrotating QSs and NSs are shown in Fig. 1.
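A short sketch of the parametrized EOS of Eqs. (15)-(16) follows (our own illustration; the function name is hypothetical and \(B\) is left symbolic in code units):

```python
import numpy as np

def mit_bag_eos(kappa, c_ss, B):
    """Evaluate Eqs. (15)-(16) for kappa = rho/rho_S, in units G = c = M_sun = 1."""
    rho_S = (1.0 + c_ss) * B / c_ss               # surface density at P = 0
    rho = rho_S * kappa                           # Eq. (15a)
    P = B * (kappa**(1.0 + c_ss) - 1.0)           # Eq. (15b)
    e = B * (kappa**(1.0 + c_ss) / c_ss + 1.0)    # Eq. (15c)
    h = kappa**c_ss                               # Eq. (16), equals (e + P)/rho
    return rho, P, e, h

# Consistency check with MIT1-like parameters (c_ss = 1/3; any positive B works):
rho, P, e, h = mit_bag_eos(kappa=2.0, c_ss=1.0 / 3.0, B=1.0)
assert abs(h - (e + P) / rho) < 1e-12
```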
The mass-radius relations of the MIT bag models differ qualitatively from that of the SFHo EOS due to the well-known fact that QSs are self-bound objects.

\begin{table} \begin{tabular}{c c c c c} \(i\) & \(\rho_{i}\) & \(K_{i}\) & \(\Gamma_{i}\) & \(a_{i}\) \\ \hline 0 & 0.0 & \(6.0073\times 10^{-2}\) & 1.2524 & 0.0 \\ 1 & \(8.4981\times 10^{-7}\) & \(7.4003\times 10^{-6}\) & 0.6084 & \(1.1491\times 10^{-2}\) \\ 2 & \(4.2591\times 10^{-6}\) & \(5.4052\times 10^{-3}\) & 1.1416 & \(2.4695\times 10^{-3}\) \\ 3 & \(5.3619\times 10^{-5}\) & \(2.3751\times 10^{2}\) & 2.2288 & \(1.0860\times 10^{-2}\) \\ 4 & \(4.2591\times 10^{-4}\) & \(3.6338\times 10^{4}\) & 2.8769 & \(1.5677\times 10^{-2}\) \\ \end{tabular} \end{table} Table 2: Parameters of the piecewise polytropic representation of the nuclear-matter SFHo EOS. The parameters \((\rho_{i},K_{i},\Gamma_{i},a_{i})\) are expressed in the code units where \(G=c=M_{\odot}=1\).

Figure 1: Gravitational mass is plotted against radius for nonrotating NSs modeled by the SFHo EOS and QSs modeled by four MIT bag models. Points labeled by "Seqs" correspond to the nonrotating configurations in the sequences of constant baryon mass or constant central density we used in the study of rotating stars.

#### For the atmosphere of QSs

The surface of a bare QS modeled by the MIT bag model is identified by the vanishing pressure, just like that of an ordinary NS, but it has a finite density \(\rho_{S}\), of the same order as the central density, and this requires novel treatment in dynamical modeling. In traditional hydrodynamic simulations for NSs, a low-density atmosphere is introduced to fill up all the vacuum space outside the stars, such that a fluid element is reset to become part of the atmosphere when its density evolves to become smaller than a prescribed value of the atmospheric density, or even negative. This approach works well in highly dynamical situations, such as inspiraling binary stars, where the stars move across the computational grid on a short timescale and the effects of the low-density atmosphere are relatively unimportant. However, in studying the oscillations of stable stars, whose vacuum-fluid interface may move slowly, excessive oscillations may cause the star to extract (lose) mass from (to) the atmosphere and violate the conservation of mass and momentum. As the effects accumulate and amplify, they ultimately destabilize the evolution [31]. This situation only gets worse for QSs, for which the vacuum-fluid density discontinuities are many (ten) orders of magnitude larger than those in traditional NS cases. Immediately after starting such a simulation, a large violation of mass conservation would be observed, and it soon rises to the order of the total mass, completely destroying the simulation. Furthermore, during a numerical simulation it can happen that fluid elements on the surface of a QS evolve to a density smaller than the surface density \(\rho_{S}\), which is defined by the vanishing pressure of the MIT bag model. As their densities (\(\sim\rho_{S}\)) are typically many orders of magnitude larger than the density of the atmosphere, they cannot simply be treated as part of the atmosphere. This then poses the question of how to evolve such fluid elements, and with what type of EOS, so that the dynamics near the surface of a QS can be modeled correctly.
Instead of arbitrarily modifying some atmospheric elements during the evolution, we want to maintain the balance between inertial and gravitational forces on all fluid elements and enforce the conservation law around the vacuum-fluid interface. In other words, a scheme allowing for a free evolution of the atmosphere is required. To achieve this purpose, we introduce here a dustlike EOS to model fluid elements near the surface of a QS, whose rest-mass density is not necessarily small but whose pressure is always close to zero. Even though the truncation errors from finite differencing may cause a large density dislocation, the disturbance to the pressure profile would be minimal. The gravitational pull of the star tends to bring a dislocated fluid element back to its equilibrium position. In practice, after importing the initial data of a bare QS into the computational domain, an atmosphere with a floor rest-mass density \(\rho_{f}\) is set outside the star. In our simulations, the value of \(\rho_{f}\) is chosen to be \(10^{-18}\), about \(10^{-15}\) times the central rest-mass density of the star. During the evolution, we set a pressure cutoff, or equivalently a density cutoff, slightly larger than the surface density following our parametrization of the MIT bag model EOS: \[\rho_{\rm cutoff} = \rho_{S}(1+\xi), \tag{18a}\] \[P_{\rm cutoff} \approx B(1+c_{ss})\xi\approx c_{ss}\rho_{\rm cutoff}\xi, \tag{18b}\] where \(\xi\) is small. The specific enthalpy and energy are given by \(h=(1+\xi)^{c_{ss}}\approx 1+c_{ss}\xi\) and \(\epsilon\approx c_{ss}\xi^{2}/2\), respectively. It should be noted that the effective adiabatic index diverges near the stellar surface, \[\Gamma=\frac{d\ln P}{d\ln\rho}=\frac{(1+c_{ss})\kappa^{1+c_{ss}}}{\kappa^{1+c_{ss}}-1}\approx\frac{1}{\xi}, \tag{19}\] meaning that the surface is formally infinitely stiff. If the rest-mass density is above the cutoff density \(\rho_{\rm cutoff}\), we use Eq. (15). Otherwise, we switch to the following EOS for the fluid element: \[P(\rho) = c_{ss}\xi\rho, \tag{20a}\] \[e(\rho) = \left(1+\frac{c_{ss}}{2}\xi^{2}\right)\rho. \tag{20b}\] When \(\rho<\rho_{\rm cutoff}\), the transition of the EOS from the MIT bag model to dust does not represent a physical phase transition. The dust atmosphere can instead be motivated by the physical picture that, when self-bound quark matter droplets, each with density \(\approx\rho_{\rm cutoff}\), are ejected from the stellar surface in the dynamical evolution, they form a thin layer of atmosphere. In a finite-volume cell, the number of droplets is not large enough to form a fluid, but the system is well described by pressureless dust. Both the specific energy and the enthalpy are enforced to be continuous across the surface discontinuity. The first law of thermodynamics, which requires \(de/d\rho=h\), is violated at the order of \(\mathcal{O}(\xi)\). By choosing a small \(\xi\), we expect our model to capture the dynamics near the surface of a bare QS, with the error eventually dominated only by the finite-differencing error. We used \(\xi=10^{-12}\) in our simulations. The cutoff pressure is then also about \(10^{-12}\) of the central pressure, and the surface specific internal energy \(\epsilon\propto\xi^{2}\) is below the roundoff precision of double-precision floating-point numbers. As we shall see in the following, the introduction of this dustlike EOS near the surface of a QS enables us to determine the radial oscillation modes of QSs accurately.
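The EOS switch can be summarized by a short sketch (again our own illustration in hypothetical notation): the MIT bag model is used above the cutoff of Eq. (18a) and the dust branch of Eq. (20) below it.

```python
XI = 1e-12  # cutoff parameter used in our simulations

def qs_pressure(rho, c_ss, B):
    """Pressure near the surface of a bare QS with a dustlike atmosphere."""
    rho_S = (1.0 + c_ss) * B / c_ss              # surface density at P = 0
    if rho >= rho_S * (1.0 + XI):                # Eq. (18a)
        kappa = rho / rho_S
        return B * (kappa**(1.0 + c_ss) - 1.0)   # MIT bag model, Eq. (15b)
    # Dust branch, Eq. (20a): a tiny pressure proportional to rho, so that
    # density dislocations barely disturb the pressure profile.
    return c_ss * XI * rho
```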
### Other numerical issues

In addition to the Riemann solver (see Sec. II.1), which determines the numerical flux at the cell interfaces, one also needs a reconstruction method to interpolate the fluid variables. In our simulations, we implemented the classic piecewise parabolic method (PPM) [116] for reconstruction. It should be pointed out that the original PPM scheme applies a steepening procedure to a density discontinuity only if the following condition is satisfied (see Eq. (3.2) in [116]): \[\Gamma K_{0}\frac{|\rho_{j+1}-\rho_{j-1}|}{\min\left(\rho_{j+1},\rho_{j-1}\right)}\geq\frac{|P_{j+1}-P_{j-1}|}{\min\left(P_{j+1},P_{j-1}\right)}, \tag{21}\] where \(K_{0}\) is a constant parameter. This condition determines whether the \(j\)th zone can be treated as being inside a discontinuity. However, this criterion does not work properly for QSs due to the divergence of the effective adiabatic index \(\Gamma\approx 1/\xi\) near the surface. As a result, no pair of constants \(\Gamma\) and \(K_{0}\) in the PPM scheme can properly detect discontinuities near the surface of a QS. In our simulations, we simply turned off this condition and always allowed the steepening procedure for QS models (see the sketch at the end of this subsection). This adjustment can sharpen the surface density profiles and prolong our simulations. We also tested the fifth-order monotonicity preservation scheme (MP5) [117; 118] and found no advantage regarding mass conservation, as we shall discuss in Sec. III.1. In this study, we do not attempt to extract the gravitational wave signals emitted from the oscillating stars using the Newman-Penrose formalism, which is routinely employed in binary neutron star simulations. The outer boundary of the computational domain in our simulations can then be set closer to the stellar surface. Nevertheless, we found that the quality of the hydrodynamic modeling of a rapidly rotating QS, such as the mass conservation, can be strongly affected if the outer boundary is too close to the stellar surface. In our simulations, we employ three refinement levels with a \(2:1\) refinement ratio for successive levels, provided by the mesh refinement driver CARPET, in order to maintain enough grid resolution inside the star while the outer boundary is placed far away from the stellar surface to reduce its effects. The first refinement boundary is at a radius of \(1.2r_{\rm eq}\), the second at \(2.4r_{\rm eq}\), and the outer boundary at \(4.8r_{\rm eq}\), where \(r_{\rm eq}\) is the equatorial coordinate radius. In Sec. III.1, numerical results for three spatial resolutions, \(\Delta x=0.12\) (\(\approx 177\) m), \(0.16\) (\(\approx 236\) m), and \(0.24\) (\(\approx 354\) m), in the case of a nonrotating QS are compared, where \(\Delta x\) is the grid size of the finest level. Since we could already extract the frequencies of the radial oscillation modes of QSs up to the fifth overtone using \(\Delta x=0.24\), we produced our results with the default resolution \(\Delta x=0.16\). For a typical slowly rotating star, the radius then covers about 50 grid points. Fast rotation can cause a large deformation of the star, and the ratio between the polar and equatorial radii can be below 0.5 for some extreme models. For these cases, there are about 30 and 60 cells along the polar and equatorial radii, respectively. To save computational resources, reflection symmetry about the equatorial plane is assumed; the modes of interest (\(l=|m|=2\)) are not affected by this choice. Octant symmetry is applied when the bar modes are not of concern.
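Returning to the PPM steepening criterion of Eq. (21), a minimal sketch reads as follows (our own illustration with hypothetical array inputs, not the GRHydro implementation):

```python
def ppm_steepening_triggered(rho, P, j, Gamma, K0):
    """Contact-discontinuity test of Eq. (21) for the j-th zone."""
    lhs = Gamma * K0 * abs(rho[j + 1] - rho[j - 1]) / min(rho[j + 1], rho[j - 1])
    rhs = abs(P[j + 1] - P[j - 1]) / min(P[j + 1], P[j - 1])
    return lhs >= rhs  # steepen zone j only if this holds

# For QSs, the effective adiabatic index diverges near the surface
# (Gamma ~ 1/xi), so no constant pair (Gamma, K0) makes this test detect
# the surface discontinuity properly; in our runs the test is disabled
# and steepening is always allowed for QS models.
```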
In short, our main numerical results in this work are obtained by using the PP Riemann solver with the PPM reconstruction method. The time update is performed using the standard RK4 integrator.

### Initial data and perturbations

We use the numerical code rotstar in the LORENE [119] library to construct uniformly rotating NS and QS models for our study. The code uses a multidomain spectral method [120; 121] and has been used for calculating rapidly rotating QSs [63; 64; 66; 67]. Sequences of constant baryon mass and constant central energy density are produced, whose corresponding nonrotating configurations are labeled by the "Seqs" data points in Fig. 1. An important dimensionless parameter characterizing a rotating compact star is its spin parameter \(j=J/M^{2}\), where \(J\) and \(M\) are the angular momentum and gravitational mass of the star, respectively. In Fig. 2, the spin parameters of our constant baryon mass sequences are plotted against the angular velocity \(\Omega\), normalized by the corresponding maximal rotation limit \(\Omega_{\rm max}\). There is a gap separating the band of QS models from the two NS sequences in the figure. For the same baryon mass, the spin parameters of QSs are significantly larger than those of their NS counterparts, especially when \(\Omega/\Omega_{\rm max}\) approaches unity. It should be pointed out that, in contrast to the situation for rotating NSs, the maximum angular velocity \(\Omega_{\rm max}\) of a sequence of QSs with a given baryon mass is generally higher than the Keplerian limit \(\Omega_{K}\) by about 2%, a characteristic feature of self-bound objects: a QS can further gain angular momentum by slightly slowing down its rotation while increasing its oblateness (i.e., its moment of inertia) before reaching the Keplerian limit [66]. While the two NS sequences with a 10% difference in baryon mass match each other very well, the QS sequences for a given EOS model depend more sensitively on the mass. In particular, the spin parameter of QSs increases as the baryon mass decreases. As pointed out in [64], the spin parameter of QSs can even be larger than the Kerr bound \(j=1\) for rotating black holes. We study two such models in the MIT1 \(2M_{\odot}\) sequence, which are also degenerate models in terms of the rotational frequency, as shown in the inset of Fig. 2. Clearly, the maximal rotational frequency is a turning point. On the other hand, there is an upper bound of \(j\sim 0.7\) for uniformly rotating NSs, the value of which is relatively insensitive to EOS models [64; 122; 123].

Figure 2: The spin parameter \(j\) is plotted against the angular velocity \(\Omega\) normalized by the corresponding maximum rotation limit \(\Omega_{\rm max}\) for constant baryon mass sequences. In the figure legend, each sequence is labeled by the EOS model and the (fixed) value of the baryon mass of the sequence. For an NS, the maximal rotational frequency is its Keplerian limit, \(\Omega_{K}=\Omega_{\rm max}\); for a QS, \(\Omega_{\rm max}\) is typically larger than \(\Omega_{K}\) by about 2% [66]. The inset enlarges the two pairs of degenerate models of the MIT1 \(2M_{\odot}\) sequence, in the sense that each pair has the same rotational frequency. The data set contains 115 models, including 21 SFHo models and 41 MIT1, 10 MIT2, 18 MIT3, and 25 MIT4 QS models.

Similarly, the sequences of constant central energy density are plotted in Fig. 3.
In the figure, we plot \(j\) against the ratio \(\Omega/\sigma_{0}\), where \(\sigma_{0}\) is the \(f\)-mode frequency of the corresponding nonrotating star for each sequence. In contrast to Fig. 2, there is now no qualitative difference between the NSs and QSs, except that the latter can reach higher values of \(j\) and \(\Omega/\sigma_{0}\). The reason for normalizing \(\Omega\) by \(\sigma_{0}\) is that the \(f\)-mode frequencies of NSs for these sequences establish a universal relation with \(\Omega/\sigma_{0}\) [54]. On the other hand, for the sequences of constant baryon mass, the \(f\)-mode frequencies of NSs observed in the rotating frame are connected to \(\Omega/\Omega_{K}\) by another universal relation. We shall study whether the \(f\)-modes of rotating QSs for these two types of sequences still satisfy the universal relations found for NSs. Every data point in Figs. 2 and 3 represents a rotating star model that we perturbed and evolved dynamically to \(t=2000\) (\(\approx 9.85\) ms); the oscillation modes were then extracted for further analysis. To excite the quadrupolar nonaxisymmetric (\(l=|m|=2\)) oscillation modes of rotating stars, we add initial velocity perturbations following the suggestion of [51]. After importing the initial data of an equilibrium rotating star into the evolution code, we perturb the star by adding the velocity perturbations \[v^{\theta} = v^{r}\sin 2\theta(\cos 2\phi+\sin 2\phi), \tag{22a}\] \[v^{\phi} = -2v^{r}\sin\theta(\sin 2\phi-\cos 2\phi), \tag{22b}\] where the radial component \(v^{r}\) controls the perturbation strength, which we set to \(v_{0}\sin[\pi r/2r_{s}(\theta)]\) for some small value of \(v_{0}\), and \(r_{s}(\theta)\) is the estimated coordinate radius along the \(\theta\) direction. A typical choice of \(v_{0}\) used in our simulations is 0.005, which contributes negligibly to the initial Hamiltonian constraint violation compared to the numerical error from importing and interpolating the LORENE initial data onto the CACTUS Cartesian grids. Although these perturbation functions are not the exact eigenmodes of rapidly rotating stars, they can effectively excite the fundamental \(f\)-modes and also the first pressure \(p\)-modes.
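The angular dependence of Eq. (22) can be coded up directly; the sketch below is our own illustration (the surface-radius function `r_s` and the grid sampling are hypothetical), with the amplitude \(v_{0}=0.005\) quoted above as the default.

```python
import numpy as np

def velocity_perturbation(r, theta, phi, r_s, v0=0.005):
    """Return (v_theta, v_phi) of Eq. (22) at a point (r, theta, phi)."""
    vr = v0 * np.sin(np.pi * r / (2.0 * r_s(theta)))  # strength profile v^r
    v_theta = vr * np.sin(2.0 * theta) * (np.cos(2.0 * phi) + np.sin(2.0 * phi))  # Eq. (22a)
    v_phi = -2.0 * vr * np.sin(theta) * (np.sin(2.0 * phi) - np.cos(2.0 * phi))   # Eq. (22b)
    return v_theta, v_phi
```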
## III Numerical results

### Nonrotating QSs

Before studying the oscillation modes of rotating QSs, let us first present various tests for nonrotating models to demonstrate that our numerical method is capable of providing a stable and accurate evolution of a bare QS. In particular, we focus on a nonrotating QS with a gravitational mass of \(1.71M_{\odot}\) and a radius of \(11.1\) km described by the MIT1 EOS. This star corresponds to the nonrotating configuration of a constant baryon mass (\(2M_{\odot}\)) sequence in our study of rotating QSs. As discussed before, we choose the PP Riemann solver with the PPM reconstruction method (hereafter PP+PPM) as our default hydrodynamic scheme. We first compare the performance of the PP+PPM scheme with that of other standard Riemann solvers, namely the Harten-Lax-van Leer-Einfeldt (HLLE) [30; 124] and Marquina [110; 111] schemes, and of the MP5 [117; 118] reconstruction method. In Fig. 4, we plot the percentage changes of the total baryon mass against time for the simulations using different combinations of the Riemann solvers and reconstruction methods. The simulations were performed with the same grid resolution, \(\Delta x=0.16\), at the finest refinement level. It is seen that the Marquina+PPM scheme loses a large percentage of mass immediately at the beginning of the evolution. Similarly, the HLLE+MP5 scheme also shows a large initial decrease in mass, and the mass loss increases to \(5\%\) by about \(1.3\) ms. Replacing the MP5 method with the lower-order (third-order) PPM method improves the mass conservation, as can be seen by comparing the HLLE+MP5 and HLLE+PPM schemes in the figure. The HLLE+PPM scheme gradually loses \(3\%\) of the total mass by about \(10\) ms. In fact, we noticed that a higher-order reconstruction method actually causes more spurious oscillations near the sharp surface discontinuity. By comparison, it is clearly seen that our implemented PP solver, whether using the MP5 or the PPM reconstruction method, performs much better than the other solvers and conserves the total mass to within \(1\%\) up to \(10\) ms. While the PP+MP5 run still suffers an initial drop in mass, the PP+PPM result is nearly a flat line. The numerical results presented in this paper are hereafter obtained with the PP+PPM scheme.

Figure 3: The spin parameter \(j\) is plotted against the angular velocity \(\Omega\) normalized by the \(f\)-mode frequency \(\sigma_{0}\) of the corresponding nonrotating stars for constant central energy density sequences. In the figure legend, each sequence is labeled by the EOS model and the baryon mass of the nonrotating star in the sequence. The data set contains \(52\) models, including \(6\) SFHo models and \(37\) MIT1 and \(9\) MIT4 QS models.

An important quantity for monitoring the quality of a numerical-relativity simulation is the Hamiltonian constraint. A small constraint violation is required for any trustworthy simulation. In Fig. 5, we plot the \(L^{2}\) norm of the Hamiltonian constraint against time for the evolution of the nonrotating QS using three different resolutions, \(\Delta x=0.24\) (low), \(0.16\) (medium), and \(0.12\) (high). Thanks to the constraint damping and propagation properties of the CCZ4 formulation, the Hamiltonian constraint violation quickly drops to a steady plateau of \(\mathcal{O}(10^{-6})\), 2 orders of magnitude smaller than the initial Hamiltonian constraint violation, even in the low-resolution run. The figure shows that the violation decreases with increasing resolution. The inset of Fig. 5 plots the stable plateau values \(||H||_{s}\) of the constraint violation against \(\Delta x\) and demonstrates linear-order convergence for \(||H||_{s}\). After checking the stability and accuracy of the evolution, we now turn to the oscillations of the nonrotating QS model. While the star is initially a static equilibrium configuration, finite-differencing errors can trigger the radial oscillation modes during the evolution. The frequencies of the oscillation modes can then be obtained by performing Fourier transforms (FT) of physical quantities such as the density and velocity. For nonrotating stars, the oscillation mode frequencies can alternatively be computed using a perturbative eigenmode analysis. For the radial oscillation modes, we followed [125]; for the nonradial quadrupolar modes, we followed [126; 76; 127]. Comparing the mode frequencies obtained from the simulation with the known eigenmode frequencies is an important test of the hydrodynamic simulation. For a general rotating star, we use the LINEOUT module in the open-source visualization and data analysis software VisIt [128] to extract the physical quantities inside the star at data points along the line at polar angle \(\theta=\pi/4\) on the \(x\)-\(z\) plane, where \(z\) is the rotation axis.
The Fourier transforms of the rest-mass density \(\rho\) at the different data points are added up, and the absolute value of their sum defines our final Fourier spectrum of \(\rho\). Similarly, we also consider the Fourier spectra of the velocity components defined by \(v^{r}=(v^{x}+v^{z})/\sqrt{2}\), \(v^{\theta}=(v^{x}-v^{z})/\sqrt{2}\), and \(v^{\phi}=v^{y}\), where \((v^{x},v^{y},v^{z})\) are the velocity components obtained in our Cartesian-grid simulations. In practice, we found that the Fourier spectrum of a physical quantity obtained from the superposition of multiple data points improves the quality of the spectrum and helps with mode identification. Let us first study the radial oscillation modes of the nonrotating QS model discussed above in Fig. 5 as a test of our simulations. In Fig. 6, we show the Fourier spectra of the density, FT(\(\rho\)), obtained from the evolutions using three different grid resolutions. The vertical dashed lines in the figure stand for the frequencies of the radial oscillation modes, ranging from the fundamental mode \(F_{0}\) to the tenth overtone \(F_{10}\), determined by the perturbative method as in [125]. It is seen that our simulation results produce Fourier peaks matching the dashed lines very well. The amplitudes of the spectra are dominated by the fundamental mode \(F_{0}\), as expected. Higher overtones with much smaller amplitudes are still identifiable in the spectra. The Cartesian grid cannot match the stellar surface exactly, and hence many overtones can be excited by numerical perturbations near the surface. Being able to identify the high-frequency overtones is thus a good criterion for a proper simulation of a stable QS.

Figure 5: \(L^{2}\)-norm of the Hamiltonian constraint violation \(||H||\) in the evolutions of a nonrotating QS for three different grid resolutions. The inset plots the stable plateau values \(||H||_{s}\) of the constraint violation and demonstrates linear-order convergence for \(||H||_{s}\). Figure 4: Evolutions of the percentage changes of the total baryon mass of a nonrotating QS for five different combinations of the Riemann solvers and reconstruction methods using a medium grid resolution \(\Delta x=0.16\) (\(\approx 236\) m) at the finest refinement level. The default scheme PP+PPM employed in this study can preserve the mass conservation to high accuracy, about \(0.034\%\) at \(t\approx 9.85\) ms.

In order to show clearly the Fourier peaks of the high overtones, we enlarge the frequency range from the \(F_{3}\) to \(F_{10}\) overtones in the inset of Fig. 6. The high-resolution result (black curve) aligns very well with all overtones up to the ninth overtone with frequency \(F_{9}=43808\) Hz. Near the tenth overtone with frequency \(F_{10}=48215\) Hz, a small bump still exists at the correct position. It is seen that the peaks of the medium-resolution result (red curve) also match up to the ninth overtone. However, the low-resolution result (blue curve) can only recover up to the fifth overtone with frequency \(F_{5}=26138\) Hz, as this run suffers from more numerical dissipation. Let us end this subsection by discussing how we determined the mode frequencies from the Fourier spectra quantitatively. Obtaining accurate mode frequencies from a Fourier spectrum is important for our study of the universal relations of rotating QSs.
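The procedure, superposing Fourier transforms from multiple sample points and refining each peak with the quadratic fit discussed next, can be sketched as follows (illustrative only; the array layout and function names are ours):

```python
import numpy as np

def mode_spectrum(rho_series, dt):
    """rho_series: hypothetical (n_points, n_times) array with uniform step dt."""
    spec = np.abs(np.sum(np.fft.rfft(rho_series, axis=1), axis=0))
    freqs = np.fft.rfftfreq(rho_series.shape[1], d=dt)
    return freqs, spec

def refine_peak(freqs, spec, i_peak, half_width=3):
    """Fit a parabola around a peak; the refined mode frequency is the
    position where the slope of the fit passes through zero."""
    sl = slice(i_peak - half_width, i_peak + half_width + 1)
    a, b, _ = np.polyfit(freqs[sl], spec[sl], 2)
    return -b / (2.0 * a)
```

With an evolution time of about 9.85 ms, the raw frequency resolution is about 101.5 Hz, which is why a refinement of this kind matters for the quoted mode accuracies.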
In Fig. 6, the QS is evolved to \(t\approx 9.85\) ms for each resolution run, and the frequency resolution of the Fourier spectrum is inversely proportional to this evolution time, meaning that the frequency resolution is \(101.5\) Hz. Figure 7 is the same plot of FT(\(\rho\)) as Fig. 6, but focuses on the region around the fundamental \(F_{0}\) mode. It is seen that the width of the peak decreases as the resolution increases, and the medium- (red line) and high- (black line) resolution results agree very well. To extract the mode frequency from the high-resolution result, we fit a quadratic curve around the peak, and the mode frequency is approximated by the position at which the slope of the curve passes through zero, as illustrated by the smooth solid line in Fig. 7. We obtain a fundamental mode frequency of \(2745\) Hz from the high-resolution run using this method, which differs from the known normal-mode value \(F_{0}=2778\) Hz by about \(1.2\%\). In general, the radial oscillation modes are sensitive to the stellar profile, and the capability of our simulations to recover the correct mode frequencies accurately up to high overtones suggests that we have modeled the sharp surface of the QS properly.

### Stability of rotating QSs

To demonstrate that we can perform stable and accurate simulations of rapidly rotating QSs, we first show in Fig. 8 the \(L^{2}\)-norm of the Hamiltonian constraint violations for a sequence of \(2M_{\odot}\) baryon mass MIT1 QSs with rotational frequencies ranging from \(300\) to \(1200\) Hz, where the maximal rotation limit is near \(1228\) Hz. The runs were performed with the same grid resolution \(\Delta x=0.16\), which is the default resolution we used for obtaining the oscillation modes of rotating stars. Similar to what we have seen for nonrotating QSs, the constraint violations quickly drop to stable plateau levels at the beginning of the simulations and remain flat until \(t\approx 9.85\) ms for all models, including the most rapidly rotating one. Although the stable plateau values of the constraint violation get larger for faster rotation, they still maintain relatively small values below \(10^{-5}\), an order of magnitude smaller than the initial Hamiltonian constraint violation, and do not grow noticeably even for the \(1200\) Hz model, which is close to the maximal rotation limit of the sequence.

Figure 6: Fourier spectra of the rest-mass density for the evolutions of a nonrotating QS using three different grid resolutions. The vertical dashed lines indicate the frequencies of the radial oscillation modes, from the fundamental \(F_{0}\) mode to the tenth overtone \(F_{10}\), determined by the perturbative normal-mode analysis. The inset enlarges the region between \(F_{3}\) and \(F_{10}\). Figure 7: Spectra around the first peak in Fig. 6. The smooth black curve (without data points) represents the slope of a quadratic curve fit to the high-resolution result (\(\Delta x=0.12\)). The fundamental mode frequency \(2745\) Hz, obtained from the position at which the slope passes through zero, agrees with the known normal-mode value (indicated by the vertical dashed line at \(F_{0}=2778\) Hz) to about \(1.2\%\).

One important challenge for us is to demonstrate our ability to simulate the sharp surface of rapidly rotating QSs for a long duration. Figure 9 compares the snapshots of the density profiles at \(t\approx 9.78\) ms for the 300 Hz and 1200 Hz rotating QSs considered in Fig. 8.
The top panels in the figure show the rest-mass densities of the stars in the first quadrant of the \(x\)-\(z\) plane, where the \(z\) axis is the rotation axis. The large color contrast from the large density gradient and the imperfect matching of the Cartesian grids to the stellar surfaces result in visible serrated edges at the surfaces. While the slowly rotating 300 Hz model (left panel) still maintains a spherical shape very well, the 1200 Hz model (right panel) is flattened at the poles and develops an oblate shape due to rapid rotation. It can be seen that a tiny amount of matter is ejected from the surface under the influence of the centrifugal force near the equatorial plane. Nevertheless, the baryon mass of this rapidly rotating model remains very well conserved, to within 0.1% error, by the end of the simulation at \(t\approx 9.85\) ms, which is equivalent to about 12 rotation periods. This mass-shedding effect unavoidably occurs for rapidly rotating models close to the Keplerian limit. The effect damps the stellar pulsations, and the amplitudes of the oscillation modes gradually decrease for rapidly rotating stars [48]. It also affects the sharpness of the surface discontinuity near the equator. The dislocation of mass elements from their equilibrium positions caused by truncation errors brings constant disturbances to the stars and excites many oscillation modes. To clearly demonstrate the sharpness of the stellar surface, the middle and bottom panels in Fig. 9 show the density profiles of the two models along the \(x\) axis at \(t=0\) and \(t\approx 9.78\) ms on the equatorial plane in linear (middle panels) and logarithmic (bottom panels) scales, respectively. The MIT1 EOS has a surface density \(\rho_{S}\approx 6.93\times 10^{-4}\) in the code units, which drops to the floor density \(\rho_{f}=10^{-18}\) over one cell in the slowly rotating model, but drops only 3 orders of magnitude for the rapidly rotating model due to the mass-shedding effect. Along other directions, the surface density drops to the floor density over one or two cells. To check the stability of the rotational velocity profile, we plot \(v^{y}\) along the \(\theta=\pi/4\) direction on the \(x\)-\(z\) plane (\(\phi=0\)) in Fig. 10 for the 1200 Hz rapidly rotating model. The profiles at \(t=0\), 4.93 ms, and 9.85 ms agree very well overall, though small oscillations of the stellar surface across four grid cells can be seen. Figures 9 and 10 clearly demonstrate the stability of the density and velocity profiles of rapidly rotating QSs in our simulations. In particular, the sharp density jump at the stellar surface is maintained very well.

### Oscillation modes of rotating QSs

In this section, we focus on a sequence of MIT1 QS models with the same constant baryon mass \(2M_{\odot}\) but different rotational frequencies. The sequence can be considered as a quasiequilibrium evolution of a rapidly rotating QS being slowed down to lower rotational frequencies as angular momentum is effectively transported away. By studying the Fourier spectra of the fluid variables of these stars, such as the rest-mass density \(\rho\) and the three-velocity components \(v^{r}\), \(v^{\theta}\), and \(v^{\phi}\), we can extract their oscillation mode frequencies.

#### Fourier spectra and mode selectivity

In perturbation theory, as in [127], when expanded in spherical harmonics \(Y_{lm}\), each oscillation mode is associated with a pair of indices \((l,m)\).
For a spherical nonrotating star, the different orders of \(m\) are degenerate for a given \(l\), and it is enough to consider the \(m=0\) mode. For the \(l=2\) quadrupolar modes that we focus on in this work, the degeneracy is broken by rotation and the bar modes (\(m=\pm 2\)) split from the axisymmetric (\(m=0\)) mode, similar to the Zeeman effect in quantum mechanics. This phenomenon is clearly observed in Fig. 11, which shows the Fourier spectra of the density, FT(\(\rho\)), and of the velocity components, FT(\(v^{r}\)), FT(\(v^{\theta}\)), and FT(\(v^{\phi}\)), for QS models of the chosen sequence with rotational frequencies 300 Hz (first row), 450 Hz (middle row), and 600 Hz (bottom row). The positions of the \(f\)-mode (\(f_{0}=1897\) Hz), the first pressure mode (\(p_{0}=7868\) Hz), and the fundamental radial mode (\(F_{0}=2778\) Hz) for the nonrotating configuration of the sequence are labeled by the gray dashed lines. In each panel, the \(f_{0}\) and \(p_{0}\) gray lines are each sandwiched by two sharp Fourier peaks labeled by the red and blue dashed lines, respectively.

Figure 8: \(L^{2}\)-norms of the Hamiltonian constraint violations \(||H||\) for a sequence of \(2M_{\odot}\) baryon mass MIT1 QSs are plotted against time. The rotational frequencies of the chosen models span from 300 to 1200 Hz, where the maximal rotation limit of the sequence is near 1228 Hz. The results are obtained using the same resolution \(\Delta x=0.16\).

The separation between the red (blue) lines decreases and converges to the \(f_{0}\) (\(p_{0}\)) gray line as the rotation rate decreases towards zero. These peaks are the nonaxisymmetric \(m=\pm 2\) modes split from the \(f_{0}\) and \(p_{0}\) modes. The peak labeled by the left red line (and similarly for the left blue line) is the counterrotating \(m=2\) mode, while the right red line is the corotating \(m=-2\) mode. Our initial velocity perturbation is chosen to excite the \(m=\pm 2\) modes strongly, but not the axisymmetric \(m=0\) modes. However, as the rotation rate increases, small peaks corresponding to the \(m=0\) modes near the positions of \(f_{0}\) and \(p_{0}\) start to appear. The fundamental quasiradial mode is also strongly excited in our simulations, as can be seen from the peaks near the \(F_{0}\) lines. This is not so surprising, as radial oscillation modes are easily excited by finite-differencing errors, as we have already seen for nonrotating QSs. The frequency of the quasiradial mode increases slightly with the rotation rate, as can be seen from the spectra of \(\rho\) and \(v^{r}\). Nevertheless, it is still well approximated by its nonrotating counterpart \(F_{0}\) even for the model rotating at 600 Hz, as the ratio between the polar and equatorial radii of this star is about 0.92 and the rotation effect is relatively small.

Figure 10: Plot of the velocity profile \(v^{y}\) along the direction \(\theta=\pi/4\) for the 1200 Hz rotating model studied in Fig. 9 at \(t=0\), 4.93 ms, and 9.85 ms on the \(x\)-\(z\) plane. Figure 9: Snapshots of the rest-mass density in the first quadrant of the \(x\)-\(z\) plane for two MIT1 QS models with rotational frequencies 300 Hz (top left) and 1200 Hz (top right). The density profiles of the two models along the \(x\) axis are plotted in linear (middle panels) and logarithmic (bottom panels) scales.

Another interesting feature of the spectra shown in Fig. 11 is a selectivity in the appearance of different modes, and of their amplitudes, in the different spectra.
For instance, the fundamental quasiradial mode establishes strong peaks in the spectra of \(\rho\) and \(v^{r}\), but not in those of \(v^{\theta}\) and \(v^{\phi}\), as may be expected. Similarly, the peaks associated with the \(m=0\) \(p\)-mode can be observed in the spectra of \(v^{\theta}\) for the 450 Hz and 600 Hz models, while the corresponding peaks in the other spectra have much smaller amplitudes.

#### Onsets of secular instabilities

As the rotation rate increases, the peaks of the interesting \(m=\pm 2\) bar modes become less distinct, and their amplitudes can even be smaller than those of the \(m=0\) modes in some of the Fourier spectra. Figure 12 plots the Fourier spectra of the same sequence as in Fig. 11, but for four models with higher rotational frequencies up to 1225 Hz, which is very close to the maximum rotational frequency (1228 Hz) of this sequence. In each panel, the red (blue) lines still track the \(m=\pm 2\) \(f\)-mode (\(p\)-mode), though the gray lines for \(f_{0}\), \(F_{0}\), and \(p_{0}\) are not shown. The green line tracks the position of twice the rotation frequency of the star; its role will be explained below. It is clear that the Fourier spectra in Fig. 12 show some qualitative differences compared to those of the slower rotating models considered in Fig. 11. First of all, starting from the 1100 Hz model, the fundamental quasiradial mode now has large amplitudes not only in the spectra of \(\rho\) and \(v^{r}\) but also in those of \(v^{\theta}\), as can be seen from the large peaks between the red and blue dashed lines in these spectra. As the rotation rate and the oblateness of the star increase, the quasiradial mode couples \(v^{\theta}\) and \(v^{r}\); however, this coupling only becomes strong when the rotation rate is above 1000 Hz, which is close to the maximal rotation rate of 1228 Hz of this sequence. In addition, the axisymmetric \(m=0\) \(f\)- and \(p\)-modes are also excited to relatively large amplitudes compared to the case of the slower rotating models. By tracking the mode positions and comparing the amplitudes in different spectra, the \(m=\pm 2\) modes can still be identified. In contrast to Fig. 11, the \(m=0\) \(p\)-mode, which is identified as the peak between the two blue lines in each panel, establishes larger amplitudes than its \(m=\pm 2\) counterparts (blue lines) in the \(\rho\), \(v^{r}\), and \(v^{\theta}\) spectra when the rotation frequency is above 1000 Hz. However, the frequencies of the \(m=0\) modes are not sensitive to the rotation rate. Let us now focus on the \(m=\pm 2\) \(f\)-modes (red dashed lines) and see how the onsets of the secular instabilities are identified for them. As already seen in Fig. 11, the frequency of the counterrotating \(m=2\) mode, which is tracked by the left red line in each panel, decreases as the rotation rate increases. However, further increasing the rotation rate from 900 Hz, as illustrated in Fig. 12, pushes the mode to cross zero and become negative. Since the Fourier spectrum has even symmetry, the counterrotating mode appears to be "reflected" by the zero point and then shifts towards the right.

Figure 11: Fourier spectra of fluid variables \(\rho\), \(v^{r}\), \(v^{\theta}\), and \(v^{\phi}\) of the sequence of MIT1 QSs with constant baryon mass \(2M_{\odot}\) rotating at \((300,450,600)\) Hz. The gray dashed lines label the quadrupole fundamental mode \(f_{0}=1897\) Hz, the first pressure mode \(p_{0}=7868\) Hz, and the fundamental quasiradial mode \(F_{0}=2778\) Hz of the corresponding nonrotating model. In each panel, two red (blue) dashed lines track the \(m=\pm 2\) \(f\)-modes (\(p\)-modes).
The reflection occurs when the rotation frequency is at about 1000 Hz, which marks the onset of the CFS instability (see Sec. I) for this sequence. For the \(m=-2\) corotating mode, which is tracked by the right red line in each panel, its frequency increases initially along the sequence and then starts to decrease when the rotation rate increases above 900 Hz. We find that this sequence passes the viscosity-driven instability point (see Sec. I) when the rotation rate is about 1200 Hz. This instability sets in when the frequency \(\sigma_{c}\) of the corotating mode in the rotating frame goes through zero. Since \(\sigma_{c}\) is related to the inertial-frame mode frequency \(\sigma_{i}\) and the angular velocity \(\Omega\) of the star by \(\sigma_{c}=\sigma_{i}+m\Omega/2\pi\), the instability sets in when \(\sigma_{i}=2\Omega/(2\pi)\) (for \(m=-2\)). In Fig. 12, the quantity \(2\Omega/(2\pi)\) is tracked by the green line in each panel, and hence the instability sets in when the right red line crosses the green line, as illustrated by the 1200 Hz model in the figure. As pointed out in Sec. I, the viscosity-driven instability of rotating QSs was studied before by perturbing the stellar configuration during the iteration steps in the construction of an axisymmetric equilibrium rotating star [63; 66; 67]. Our study represents the first investigation based on the analysis of the oscillation modes.

### Universal relations of \(f\)-modes

#### Comparison to the universal relations for NSs

KK [54] recently proposed three universal relations for the \(l=|m|=2\) \(f\)-modes of rapidly rotating NSs. Here we shall study whether rapidly rotating QSs also satisfy these relations. We first compare our extracted mode frequencies from a total of 161 rotating NS and QS models with their relation given by Eq. (6) in [54], which relates the scaled mode frequency \(\hat{\sigma}_{i}\equiv\bar{M}\sigma_{i}/\text{kHz}\) in the inertial frame to the scaled angular velocity \(\hat{\Omega}\equiv\bar{M}\Omega/\text{kHz}\) and the effective compactness \(\eta_{45}\equiv\sqrt{M^{3}/I_{45}}\) by \[\hat{\sigma}_{i}=\left(c_{1}+c_{2}\hat{\Omega}+c_{3}\hat{\Omega}^{2}\right)+\left(d_{1}+d_{3}\hat{\Omega}^{2}\right)\eta_{45}, \tag{23}\] where \(\bar{M}\equiv M/M_{\odot}\) and \(I_{45}\equiv I/(10^{45}\ \text{g}\cdot\text{cm}^{2})\) are the star's scaled gravitational mass and moment of inertia.

Figure 12: Fourier spectra of fluid variables \(\rho\), \(v^{r}\), \(v^{\theta}\), and \(v^{\phi}\) of the sequence of MIT1 QSs with constant baryon mass \(2M_{\odot}\) rotating at \((900,1100,1200,1225)\) Hz. Similar to Fig. 11, the red and blue dashed lines track the \(m=\pm 2\) \(f\)- and \(p\)-modes. The green dotted line in each panel tracks the position of twice the rotation frequency of the star. The maximal rotation rate of this sequence is about 1228 Hz. See text for the identification of the onsets of the CFS and viscosity-driven instabilities from the spectra.

The fitting coefficients \(c_{i}\) and \(d_{i}\) are given by \((c_{1},c_{2},c_{3})=(-2.14,-0.201,-7.68\times 10^{-3})\) and \((d_{1},d_{2},d_{3})=(3.42,0,1.75\times 10^{-3})\) for the counterrotating branch. For the corotating branch, \((c_{1},c_{2},c_{3})=(-2.14,0.220,-14.6\times 10^{-3})\) and \((d_{1},d_{2},d_{3})=(3.42,0,6.86\times 10^{-3})\).
As each branch of data lies on a surface in the three-dimensional \(\eta_{45}\)-\(\hat{\sigma}\)-\(\hat{\Omega}\) parameter space, to have a clear visualization of the data, we define \[\hat{\Sigma}_{i}\equiv\hat{\sigma}_{i}-c_{1}-(d_{1}+d_{3}\hat{\Omega}^{2})\eta_{45}, \tag{24}\] and plot it against \(\hat{\Omega}\) in Fig. 13. In the figure, the lower (upper) branch of data consists of the counterrotating (corotating) modes. The predictions from Eq. (23), which is Eq. (6) in [54], are labeled by the gray lines. It is noted that the nuclear-matter SFHo EOS was not used in [54], and hence our NS data serve as an independent check of the universal relation. It is seen that the \(f\)-modes of rapidly rotating QSs are also described by this relation very well. The root-mean-square of the residuals is \(0.111\) for the counterrotating branch and \(0.0897\) for the corotating branch. We next examine another universal relation for the mode frequency \(\sigma_{i}\) observed in the inertial frame for sequences of constant central energy density, given by Eq. (4) in [54], \[\frac{\sigma_{i}}{\sigma_{0}}=1+a_{1}\left(\frac{\Omega}{\sigma_{0}}\right)+a_{2}\left(\frac{\Omega}{\sigma_{0}}\right)^{2}, \tag{25}\] where \((a_{1},a_{2})=(-0.193,-0.0294)\) for the counterrotating branch and \((a_{1},a_{2})=(0.220,-0.0170)\) for the corotating branch, and the angular velocity \(\Omega\) is normalized by the \(f\)-mode frequency \(\sigma_{0}\) of the corresponding nonrotating star. Our extracted mode frequencies also match Eq. (25) closely, as shown in Fig. 14. The root-mean-square of the residuals is \(0.0341\) for the counterrotating branch and \(0.0794\) for the corotating branch. It is noted that the data points for the corotating branch (i.e., the upper branch in Fig. 14) have larger deviations from Eq. (25) for high rotation rates close to the Keplerian limit, the region where Eq. (25) does not fit well even for the NS data, as can be seen from Fig. 1 in [54]. The purple horizontal dashed line represents the zero-frequency line, on which the counterrotating mode becomes unstable to the CFS instability. We find that QSs become unstable when the rotation rate \(\Omega\approx 3.4\sigma_{0}\), which agrees with the finding for NSs [54]. Finally, we consider the universal relation for the \(f\)-mode frequency \(\sigma_{c}\) observed in the rotating frame for sequences of constant baryon mass, given by Eq. (5) in [54], \[\frac{\sigma_{c}}{\sigma_{0}}=1+b_{1}\left(\frac{\Omega}{\Omega_{\rm max}}\right)+b_{2}\left(\frac{\Omega}{\Omega_{\rm max}}\right)^{2}, \tag{26}\] where \((b_{1},b_{2})=(0.517,-0.542)\) for the counterrotating branch and \((b_{1},b_{2})=(-0.235,-0.491)\) for the corotating branch. In contrast to [54], we normalize the angular velocity \(\Omega\) by its maximum rotation limit \(\Omega_{\rm max}\) instead of the Keplerian limit \(\Omega_{K}\), since \(\Omega_{\rm max}\) can be larger than \(\Omega_{K}\) by about \(2\%\) for QSs, as we have discussed. The ambiguity between the two values does not arise in [54], as \(\Omega_{\rm max}=\Omega_{K}\) for NSs. Figure 15 plots \(\sigma_{c}/\sigma_{0}\) against \(\Omega/\Omega_{\rm max}\) for 109 NS and QS models from various sequences of constant baryon mass.

Figure 14: Plot of the scaled mode frequencies \(\sigma_{i}/\sigma_{0}\) observed in the inertial frame for sequences of constant central energy density. Data points contain the same models as in Fig. 3. The predictions from Eq. (25) (see also Eq. (4) in [54]) for the counterrotating and corotating \(f\)-modes are given by the lower and upper gray lines, respectively. The purple horizontal dashed line represents the zero-frequency line, on which the counterrotating mode becomes unstable to the CFS instability. Figure 13: Plot of \(\hat{\Sigma}_{i}\) [see Eq. (24)] against the scaled angular velocity \(\hat{\Omega}\) for a total of 167 star models, including 27 SFHo NSs, and 78 MIT1, 10 MIT2, 18 MIT3, and 34 MIT4 QSs. The predictions from Eq. (23) (see also Eq. (6) in [54]) for the counterrotating and corotating \(f\)-modes are given by the lower and upper gray lines, respectively.
Let us recall that the mode frequencies observed in the rotating and inertial frames are related by \(\sigma_{c}=\sigma_{i}+m\Omega/2\pi\). Contrary to Figs. 13 and 14, the corotating modes are now represented by the lower branch of data in Fig. 15. Our SFHo NS data still satisfy Eq. (26) very well, but the QS data deviate considerably from the fitting relations. It should be pointed out, however, that the spread of the data around the upper gray line at high rotation rates is similar to that of the original NS data used in [54] to produce the fitting curve (see Fig. 2 in [54]). For the corotating modes (lower branch), the QS data deviate significantly from Eq. (26). While realistic NS models generally cannot rotate fast enough to reach the onset of the viscosity-driven instability, marked by the purple horizontal line where \(\sigma_{c}=0\) in the figure, Fig. 15 shows that the QS data cross the purple line shortly before reaching the maximum rotation rate. In retrospect, the deviation between the NS and QS data at high rotation rates may be associated with the fact that there is an upper bound of the spin parameter \(j\sim 0.7\) for realistic NSs when \(\Omega\approx\Omega_{\rm max}\) [64], while there is no such bound for QSs (see also Fig. 2). As Eq. (26) was originally proposed to fit realistic NSs only [54], the equation cannot be expected to cover QS models with \(j\gtrsim 0.7\). We shall show below that a better universal relation for the corotating modes, satisfied by both NSs and QSs, can be obtained by invoking the spin parameter directly.

#### Critical values of the spin parameter, energy ratio, and eccentricity

We now investigate further the onset of the viscosity-driven instability for rotating QSs. As it is expected to be difficult for realistic NSs to rotate fast enough to achieve this instability before reaching the Keplerian limit, the onset of this instability is a special (if not unique) phenomenon for rapidly rotating QSs among stellar objects. To determine the onset of the instability, which lies close to the maximal rotation rate where physical quantities become sensitive to the angular frequency, we first propose a fitting relation that relates the corotating mode frequency \(\sigma_{c}\) in the rotating frame to the spin parameter \(j\) with relatively small variance. We first define (in the code units) a scaled frequency for the corotating mode \[\tilde{\Sigma}_{c}=\frac{2\pi M\sigma_{c}}{-0.0047+0.133\ \eta+0.575\ \eta^{2}}, \tag{27}\] where \(\eta=\sqrt{M^{3}/I}\) is the effective compactness originally introduced in [76] for nonrotating stars, but here generalized to rotating stars. The denominator on the right-hand side of Eq. (27) is motivated by the universal relation between \(\eta\) and the scaled \(f\)-mode angular frequency \(2\pi M\sigma_{0}\) for nonrotating NSs and QSs [76]. Note that we have corrected a typographical error in the coefficient of \(\eta^{2}\) in Eq. (6) of [76].
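For reference, the scaled quantities used in this subsection can be evaluated with the following sketch (our own helper, not code from [54] or [76]; we assume \(\sigma_{i}\) and \(\Omega\) are already expressed in kHz for Eqs. (23)-(24), and code units with \(G=c=M_{\odot}=1\) for Eq. (27)):

```python
import numpy as np

KK_FIT = {  # (c1, c2, c3, d1, d3) of Eq. (23), from [54]
    "counterrotating": (-2.14, -0.201, -7.68e-3, 3.42, 1.75e-3),
    "corotating":      (-2.14,  0.220, -14.6e-3, 3.42, 6.86e-3),
}

def sigma_hat_fit(Mbar, Omega_kHz, I45, branch):
    """Right-hand side of Eq. (23)."""
    c1, c2, c3, d1, d3 = KK_FIT[branch]
    Om = Mbar * Omega_kHz                  # scaled angular velocity
    eta45 = np.sqrt(Mbar**3 / I45)         # effective compactness
    return c1 + c2 * Om + c3 * Om**2 + (d1 + d3 * Om**2) * eta45

def Sigma_hat_i(Mbar, Omega_kHz, I45, sigma_i_kHz, branch):
    """Eq. (24): collapse each branch onto a single curve in Omega_hat."""
    c1, _, _, d1, d3 = KK_FIT[branch]
    Om = Mbar * Omega_kHz
    eta45 = np.sqrt(Mbar**3 / I45)
    return Mbar * sigma_i_kHz - c1 - (d1 + d3 * Om**2) * eta45

def Sigma_tilde_c(M, I, sigma_c):
    """Eq. (27) in code units, with eta = sqrt(M^3/I)."""
    eta = np.sqrt(M**3 / I)
    return 2.0 * np.pi * M * sigma_c / (-0.0047 + 0.133 * eta + 0.575 * eta**2)
```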
Note that we have corrected a typographical error in the coefficient of \(\eta^{2}\) in Eq. (6) of [76]. In Fig. 16, we plot \(\tilde{\Sigma}_{c}\) against the spin parameter \(j\) for both constant central energy density and constant baryon mass sequences. Compared to the corotating modes plotted in Fig. 15, the NS and QS data are now "unified" and can be fitted by \[\tilde{\Sigma}_{c}=-0.477j^{2}-0.714j+1. \tag{28}\] The root-mean-square of the residuals is \(0.0251\). The rapidly rotating QS data with \(j\gtrsim 0.7\) behave as if they are merely an extension of the NS data to higher spin parameters. Our fitting curve crosses the zero-frequency point at \(j\approx 0.881\), which represents the onset of the viscosity-driven instability for both sequences of constant central energy density and constant baryon mass.

Figure 16: Normalized frequency \(\tilde{\Sigma}_{c}\) [Eq. (27)] is plotted against the spin parameter \(j\) in the rotating frame. It contains both constant central density and constant baryon mass sequences, including 167 models as in Fig. 13. The quadratic fitting curve [Eq. (28)] crosses the zero-frequency point at \(j\approx 0.881\).

Figure 15: Plot of the scaled mode frequencies \(\sigma_{c}/\sigma_{0}\) observed in the rotating frame for sequences of constant baryon mass. Data points contain the same models as in Fig. 2. The predictions from Eq. (26) (see also Eq. (5) in [54]) for the counterrotating and corotating \(f\)-modes are given by the upper and lower gray lines, respectively. The purple horizontal dashed line represents the zero-frequency line, on which the corotating mode becomes unstable to the viscosity-driven instability.

Traditionally, the onset of the instability is characterized by the critical value of the ratio between the rotational kinetic energy and gravitational potential energy \(T/|W|\) and the eccentricity \(\zeta=(1-(r_{\rm p}/r_{\rm eq})^{2})^{1/2}\), where \(r_{\rm p}\) and \(r_{\rm eq}\) are the polar and equatorial coordinate radii, respectively. As discussed in Sec. I, the Newtonian limit \((T/|W|)_{\rm crit,Newt}=0.1375\) was obtained for Maclaurin sequences [60], while general relativity weakens the instability by increasing the critical energy ratio [61]. An approximate relation for the critical energy ratio was obtained in [61] for constant baryon mass sequences of homogeneous incompressible bodies in general relativity, \[(T/|W|)_{\rm crit}=(T/|W|)_{\rm crit,Newt}+0.126\ \chi\left(1+\chi\right), \tag{29}\] where \(\chi=M/R\), \(M\), and \(R\) are the compactness, gravitational mass, and radius of the corresponding nonrotating model, respectively. The difference between the relativistic and Newtonian critical values of \(T/|W|\) is about \(20\%\) for compactness \(\chi\approx 0.2\). QSs described by the MIT bag model can be approximated very well by homogeneous incompressible bodies [67]. To check whether our QS data can also be approximated by Eq. (29), we plot the scaled corotating mode frequency \(\sigma_{c}/\sigma_{0}\) in the rotating frame against the normalized energy ratio \(\lambda\equiv(T/|W|)/(T/|W|)_{\rm crit}\) for constant baryon mass sequences in Fig. 17. The trend of the numerical data can be fitted by \[\frac{\sigma_{c}}{\sigma_{0}}=1+0.130(e^{-27.3\lambda}-1)-1.10\lambda+0.256\lambda^{2}. \tag{30}\] The root-mean-square of the residuals is \(0.0233\). In addition to the quadratic terms, an exponential function is included in the fit to capture the fast initial decrease.
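The quoted critical spin parameter is simply the positive root of the quadratic in Eq. (28); a minimal Python check using only the fit coefficients above:

```python
import numpy as np

# Eq. (28): Sigma_c(j) = 1 - 0.714*j - 0.477*j**2 vanishes at the onset
# of the viscosity-driven instability; take the positive quadratic root.
a, b, c = -0.477, -0.714, 1.0
j_crit = (-b - np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
print(f"critical spin parameter: j = {j_crit:.3f}")  # prints 0.881
```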
The fitting curve of Eq. (30) crosses the zero point at \(\lambda\approx 1.04\), meaning that the critical value for our QS models is only \(4\%\) higher than the approximate value for homogeneous incompressible bodies predicted by Eq. (29). On the other hand, the critical value of the eccentricity depends only weakly on the compactness and should thus be close to the Newtonian critical value \(\zeta_{\rm crit,Newt}=0.8127\) [61, 67]. In Fig. 18, we plot the scaled mode frequency \(\sigma_{c}/\sigma_{0}\) against the eccentricity \(\zeta\). Its fitting curve is \[\frac{\sigma_{c}}{\sigma_{0}}=1-0.622\zeta-0.799\zeta^{3}, \tag{31}\] with the root-mean-square of the residuals being \(0.0207\). The fit predicts the onset of the instability at \(\zeta\approx 0.842\), which is about \(3.6\%\) higher than the Newtonian value.

Figure 17: Plot of the scaled corotating mode frequency \(\sigma_{c}/\sigma_{0}\) in the rotating frame against the normalized energy ratio \(\lambda=(T/|W|)/(T/|W|)_{\rm crit}\) for constant baryon mass sequences. The data set contains 94 QS models used in Fig. 15. The fitting curve [Eq. (30)] crosses the zero-frequency line at \(\lambda\approx 1.04\).

Figure 18: Plot of the scaled corotating mode frequency \(\sigma_{c}/\sigma_{0}\) in the rotating frame against the eccentricity \(\zeta\) for constant baryon mass sequences. The data set contains 94 QS models used in Fig. 15. The fitting curve [Eq. (31)] crosses the zero-frequency line at \(\zeta\approx 0.842\).

### Fitting relations of \(p\)-modes

We end this section by also providing fitting relations for the first \(p\)-modes of rotating QSs, which were strongly excited in our simulations. As \(p\)-modes are well known to be more EOS sensitive [129] and to depend strongly on the density and pressure profiles, universal relations are not expected to exist for them. Since our generalized MIT bag model EOS contains only two parameters, namely the bag constant \(B\) and the square of the speed of sound \(c_{ss}\), it is possible to find fitting relations for the \(p\)-modes by invoking these parameters. Furthermore, it is found that dimensionless frequencies like \(Mp\) (with \(p\) being the \(p\)-mode frequency) are independent of the bag constant in the MIT bag model [130], as it is just a scaling factor, and hence only \(c_{ss}\) will be relevant to our fitting relations. This can be illustrated by considering the \(p\)-mode frequency \(p_{0}\) of nonrotating QSs. We found that the scaled frequency \(Mp_{0}\) of our nonrotating QS models can be fitted well by \[Mp_{0}=(a_{1}c_{ss}+a_{2})\chi^{2}+(a_{3}c_{ss}^{2}+a_{4}c_{ss}+a_{5}+a_{6}/c_{ss})\chi+a_{7}c_{ss}^{2}, \tag{32}\] where \(M\) and \(\chi=M/R\) are the gravitational mass and compactness, respectively. The seven fitting parameters are \(a_{1}=-2.700\), \(a_{2}=-0.5845\), \(a_{3}=0.2183\), \(a_{4}=1.202\), \(a_{5}=0.2664\), \(a_{6}=-0.006893\) and \(a_{7}=-0.09141\). This relation is obtained by fitting to nonrotating QS data with \(M\geq 1.4M_{\odot}\) and different values of \(c_{ss}\) ranging from \(1/10\) to \(1\), as shown in Fig. 19. The root-mean-square of the residuals is \(0.000450\).
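For convenience, Eq. (32) is transcribed below as a small Python function with the coefficients quoted above; the example call, at the canonical MIT bag value \(c_{ss}=1/3\) and an illustrative compactness \(\chi=0.2\) (not a value taken from the paper), shows how it would be used.

```python
# Direct transcription of the nonrotating p-mode fit, Eq. (32); the
# coefficients a1..a7 are the values quoted above.
A = (-2.700, -0.5845, 0.2183, 1.202, 0.2664, -0.006893, -0.09141)

def scaled_p_mode_frequency(chi: float, css: float) -> float:
    """Return M*p0 for compactness chi = M/R and squared sound speed css."""
    a1, a2, a3, a4, a5, a6, a7 = A
    return ((a1 * css + a2) * chi**2
            + (a3 * css**2 + a4 * css + a5 + a6 / css) * chi
            + a7 * css**2)

print(scaled_p_mode_frequency(0.2, 1.0 / 3.0))  # M*p0 at chi=0.2, css=1/3
```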
For the \(l=2\)\(p\)-modes of rotating QSs, we use the following ansatz for the \(m=2\) (\(m=-2\)) \(p\)-mode frequencies \(p_{i}^{+}\) (\(p_{i}^{-}\)) observed in the inertial frame for constant baryon mass sequences: \[\hat{p}_{i}^{\pm}=1\mp F(\chi,c_{ss},\bar{\Omega})\bar{\Omega}+G(\chi,c_{ss})\bar{\Omega}^{2}, \tag{33}\] where \(\hat{p}_{i}^{\pm}=p_{i}^{\pm}/p_{0}\), \(\bar{\Omega}=\Omega/(2\pi p_{0})\), and \(p_{0}\) is the \(p\)-mode frequency of the corresponding nonrotating star with gravitational mass \(M\) and compactness \(\chi\). The two functions \(F(\chi,c_{ss},\bar{\Omega})\) and \(G(\chi,c_{ss})\) are given by \[F(\chi,c_{ss},\bar{\Omega})=b_{1}\sqrt{\frac{\chi}{c_{ss}}}+b_{2}\frac{\chi}{c_{ss}}\bar{\Omega}, \tag{34}\] and \[G(\chi,c_{ss})=(b_{3}c_{ss}^{2}+b_{4})\chi+(b_{5}c_{ss}^{2}+b_{6}), \tag{35}\] where the fitting parameters are \(b_{1}=2.22\), \(b_{2}=-2.24\), \(b_{3}=82.1\), \(b_{4}=44.6\), \(b_{5}=-28.8\), and \(b_{6}=-11.4\). To illustrate the fitting relation, we plot \(\hat{p}_{i}^{\pm}-G\bar{\Omega}^{2}\) against \(\sqrt{\chi/c_{ss}}\bar{\Omega}\) in Fig. 20. The numerical data can be fitted well by Eq. (33), with the root-mean-square of the residuals being \(0.00741\) for the upper branch and \(0.00701\) for the lower branch. It should be noted that the above fitting parameters are obtained by excluding those rapidly rotating degenerate models close to the maximum rotation limit illustrated in Fig. 2. In reality, the \(p\)-modes of compact stars are not expected to be detectable from their emitted GW signals anytime soon, even with the next generation of detectors. However, it might still be interesting to consider how one could (in principle) make use of these fitting relations. As an illustration, let us first ignore the rotational effects and assume that the \(f\)-mode frequency (and its damping time) and the \(p\)-mode frequency of a nonrotating compact star are observed. Applying the universal relations for the \(f\)-mode of nonrotating stars in [76], which are valid for both NSs and QSs, the mass \(M\) and radius \(R\), and hence the compactness \(\chi\) of the star, can then be inferred approximately. Equation (32) can then be solved for the single variable \(c_{ss}\), and one can check whether the observational data are consistent with our generalized MIT bag model. For instance, an inferred value of \(c_{ss}=1/3\) would mean that the star is consistent with a QS described by the canonical MIT bag model. On the other hand, an inferred value of \(c_{ss}\) far outside the fitting range of Eq. (32) would serve as strong evidence against our QS models.

Figure 19: Plot of the scaled frequency \(Mp_{0}\) of nonrotating models against compactness \(\chi=M/R\) for the class of MIT bag EOSs with the square of the speed of sound \(c_{ss}\) ranging from \(1/10\) to \(1\). The gray fitting curves are based on Eq. (32).

Similarly, if the angular velocity \(\Omega\) and the two frequencies \(p_{i}^{\pm}\) are observed for a rotating compact star, then Eq. (33) can be used to relate the three parameters \(p_{0}\), \(c_{ss}\), and \(\chi\). If the star is slowly rotating, so that its compactness can be approximated by the value of the nonrotating counterpart, then one can solve for \(p_{0}\) and \(c_{ss}\).
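The inversion just described amounts to a two-equation root-finding problem. A minimal Python sketch of how it could be set up is given below; the numerical values of the "observed" quantities are invented placeholders for illustration only, so the sketch first generates synthetic observations from assumed true values and then recovers them.

```python
import numpy as np
from scipy.optimize import fsolve

# Fit coefficients of Eqs. (34)-(35), as quoted in the text.
b1, b2, b3, b4, b5, b6 = 2.22, -2.24, 82.1, 44.6, -28.8, -11.4

def p_modes(p0, css, chi, omega):
    """Inertial-frame m = +2 / -2 p-mode frequencies from Eq. (33)."""
    w = omega / (2.0 * np.pi * p0)          # scaled angular velocity
    F = b1 * np.sqrt(chi / css) + b2 * (chi / css) * w
    G = (b3 * css**2 + b4) * chi + (b5 * css**2 + b6)
    return p0 * (1.0 - F * w + G * w**2), p0 * (1.0 + F * w + G * w**2)

# Synthetic "observations" (placeholder numbers, slow rotation assumed).
chi, omega = 0.2, 300.0
p_plus, p_minus = p_modes(7000.0, 1.0 / 3.0, chi, omega)

def residuals(params):
    p0, css = params
    f_plus, f_minus = p_modes(p0, css, chi, omega)
    return [f_plus - p_plus, f_minus - p_minus]

p0_fit, css_fit = fsolve(residuals, x0=[6000.0, 0.5])
print(p0_fit, css_fit)  # recovers p0 = 7000.0 and css = 1/3
```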
We can then compare the inferred value of \(p_{0}\) to the observed frequency of the \(m=0\) axisymmetric \(p\)-mode (if it is available), which is well approximated by \(p_{0}\) for a slowly rotating star, and determine whether the observed compact star is consistent with our QS models.

## IV Conclusion

The sharp high-density surface of a bare QS presents a great challenge for grid-based hydrodynamical modeling of the star. In this paper, building on top of the numerical relativity code Einstein Toolkit, we have implemented a numerical method based on a positivity-preserving Riemann solver and a dustlike EOS for the atmosphere to perform stable evolutions of rapidly rotating QSs in general relativity. Our work represents a new addition to the list of just a few fully general relativistic simulations of QSs available to date [23, 24, 25]. The fidelity of our method has been tested and confirmed by comparing the oscillation mode frequencies of nonrotating QSs extracted from simulations with the results obtained from perturbative calculations. The \(f\)-modes of rapidly rotating QSs are investigated in detail. In particular, we find that two of the universal relations for the \(l=|m|=2\) nonaxisymmetric modes proposed originally for rotating NSs [54] are still valid for QSs (see Figs. 13 and 14). However, the QS data deviate significantly from another universal relation for the corotating modes observed in the rotating frame (see Fig. 15). In addition to the \(f\)-modes, we have also studied the first \(p\)-modes of rotating QSs. For QSs described by our generalized MIT bag model, we report fitting relations for the \(p\)-mode frequencies of both nonrotating and rotating stars. We also find that, when considering sequences of constant central energy density, the onset of the CFS instability for QSs occurs when the angular velocity \(\Omega\approx 3.4\sigma_{0}\), which agrees with the finding for NSs [54]. In addition to the CFS instability, we have also studied the viscosity-driven instability of QSs. We find that the onset of the instability for rotating QSs occurs when the spin parameter \(j\approx 0.881\) for both sequences of constant central energy density and constant baryon mass. For QS sequences of constant baryon mass, we also find that the critical value of the ratio between the rotational kinetic energy and gravitational potential energy \(T/|W|\) for the onset of the instability agrees with the value predicted for homogeneous incompressible bodies in general relativity to within 4%, and the critical value of the eccentricity \(\zeta\) is only 3.6% larger than the Newtonian value [61]. Realistic NSs are generally not expected to be able to rotate fast enough to trigger this instability before reaching the Keplerian limit. This can be seen from Fig. 15, where the NS data for the frequencies of the corotating modes \(\sigma_{c}\) observed in the rotating frame do not cross zero before the Keplerian limit. The universal relation between the spin parameter and \(\tilde{\Sigma}_{c}\) (a rescaled \(\sigma_{c}\)) that we propose in Eq. (28) unifies the NS and QS data and predicts the onset of the instability at \(j\approx 0.881\), as shown in Fig. 16. The fact that realistic NSs cannot trigger the instability can be associated with the existence of an upper bound \(j\sim 0.7\) for uniformly rotating NSs [64].

###### Acknowledgements.
We thank Hoi-Ka Hui for useful discussions and Shu-Yan Lau for sharing his oscillation code for us to compute the mode frequencies of nonrotating stars for benchmarking. This work is partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region (Project No. 14304322). We also acknowledge the support of the CUHK Central High Performance Computing Cluster, on which our simulations were carried out.
2305.17583
On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models
Deep neural networks (DNNs) lack the precise semantics and definitive probabilistic interpretation of probabilistic graphical models (PGMs). In this paper, we propose an innovative solution by constructing infinite tree-structured PGMs that correspond exactly to neural networks. Our research reveals that DNNs, during forward propagation, indeed perform approximations of PGM inference that are precise in this alternative PGM structure. Not only does our research complement existing studies that describe neural networks as kernel machines or infinite-sized Gaussian processes, it also elucidates a more direct approximation that DNNs make to exact inference in PGMs. Potential benefits include improved pedagogy and interpretation of DNNs, and algorithms that can merge the strengths of PGMs and DNNs.
Boyao Li, Alexandar J. Thomson, Matthew M. Engelhard, David Page
2023-05-27T21:32:28Z
http://arxiv.org/abs/2305.17583v3
# On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models

###### Abstract

Deep neural networks (DNNs) lack the precise semantics and definitive probabilistic interpretation of probabilistic graphical models (PGMs). In this paper, we propose an innovative solution by constructing infinite tree-structured PGMs that correspond exactly to neural networks. Our research reveals that DNNs, during forward propagation, indeed perform approximations of PGM inference that are precise in this alternative PGM structure. Not only does our research complement existing studies that describe neural networks as kernel machines or infinite-sized Gaussian processes, it also elucidates a more direct approximation that DNNs make to exact inference in PGMs. Potential benefits include improved pedagogy and interpretation of DNNs, and algorithms that can merge the strengths of PGMs and DNNs.

## 1 Introduction

Deep neural networks (DNNs), including large language models, offer state-of-the-art prediction performance, but they are difficult to interpret due to their complex multilayer structure, large number of latent variables, and the presence of nonlinear activation functions (Buhrmester et al., 2021). To gain a precise statistical interpretation for DNNs, much progress has been made in linking them to probabilistic graphical models (PGMs). Variational autoencoders (VAEs) (Kingma and Welling, 2014) are an early example; more recent examples relate recurrent neural networks (RNNs) with hidden Markov models (HMMs) (Choe et al., 2017) and convolutional neural networks (CNNs) with Gaussian processes (GPs) (Garriga-Alonso et al., 2018). When such a connection is possible, benefits include:

* Clear statistical semantics for a trained DNN model beyond providing the conditional distribution over output variables given input variables. Instead, PGMs provide a joint distribution over all variables including latent variables.
* Ability to make inferences about how evidence of some nodes influences probabilities at others, including how later nodes influence earlier ones, as in Bayes nets or Markov nets.
* Ability to understand weight initializations of DNNs as representing prior distributions and trained DNNs as representing posterior distributions, or ensembles of models.
* Proposal of new algorithms by importing algorithmic approaches from PGMs into DNNs.

In this paper, we establish a correspondence between DNNs and PGMs. Given an arbitrary DNN, we first construct an infinite-width tree-structured PGM. We then demonstrate that during training, the DNN executes approximations of precise inference in the PGM during the forward propagation. We prove our result in the case of sigmoid activations and then indicate how the proof can be expanded to other activation functions, provided that some form of normalization is employed. These findings provide immediate benefits such as those listed above. This work stands apart from most theoretical analyses of DNNs, which typically view DNNs purely as _function approximators_ and prove theorems about the quality of function approximation. Here we instead show that DNNs may be viewed as statistical models, specifically PGMs. This work is also different from the field of _Bayesian neural networks_, where the goal is to seek and model a probability distribution over neural network parameters. In our work, the neural network itself defines a joint probability distribution over its variables (nodes).
Our work is therefore synergistic with Bayesian neural networks but more closely related to older work to learn stochastic neural networks via expectation maximization (EM) (Amari, 1995) or approximate EM (Song et al., 2016). Although the approach is different, our motivation is similar to that of Dutordoir et al. (2021) and Sun et al. (2020) in their work to link DNNs to deep Gaussian processes (GPs) (Damianou and Lawrence, 2013). By identifying the forward pass of a DNN with the mean of a deep GP layer, they aim to augment DNNs with advantages of GPs, notably the ability to quantify uncertainty over both output and latent nodes. What distinguishes our work is that we make the DNN-PGM approximation explicit and include _all_ sigmoid DNNs, not just unsupervised belief networks or other specific cases.

## 2 Background: Comparison to Bayesian Networks and Markov Networks

Syntactically, a Bayesian network (BN) is a directed acyclic graph, like a neural network, whose nodes are random variables. Semantically, a BN represents a full joint probability distribution over its variables as \(P(\vec{v})=\prod_{i}P(v_{i}|pa(v_{i}))\), where \(\vec{v}\) is a complete setting of the variables, and \(pa(v_{i})\) denotes the parents of variable \(v_{i}\). If the conditional probability distributions (CPDs) \(P(v_{i}|pa(v_{i}))\) are all logistic regression models, we refer to the network as a sigmoid BN. It is well known that given sigmoid activation and a cross-entropy error, training a single neuron by gradient descent is identical to training a logistic regression model. Hence, a neural network under such conditions can be viewed as a "stacked logistic regression model", and also as a Bayesian network with logistic regression CPDs at the nodes. Technically, the sigmoid BN has a distribution over the input variables (variables without parents), whereas the neural network does not, and all nodes are treated as random variables. These distributions are easily added, and the distributions of the input variables can be viewed as represented by the joint sample over them in our training set. A Markov network (MN) syntactically is an undirected graph with potentials \(\phi_{i}\) on its cliques, where each potential gives the relative probabilities of the various settings for its variables (the variables in the clique). Semantically, it defines the full joint distribution on the variables as \(P(\vec{v})=\frac{1}{Z}\prod_{i}\phi_{i}(\vec{v})\), where the partition function \(Z\) is defined as \(\sum_{\vec{v}}\prod_{i}\phi_{i}(\vec{v})\). It is common to use a loglinear form of the same MN, which can be obtained by treating a setting of the variables in a clique as a binary feature \(f_{i}\), and the natural log of the corresponding entry for that setting in the potential for that clique as a weight \(w_{i}\) on that feature; the equivalent definition of the full joint is then \(P(\vec{v})=\frac{1}{Z}e^{\sum_{i}w_{i}f_{i}(\vec{v})}\). For training and prediction, at this point the original graph itself is superfluous. The potentials of an MN may be on subsets of cliques; in that case we simply multiply all potentials on subsets of a clique to derive the potential on the clique itself. If the MN can be expressed entirely as potentials on edges or individual nodes, we call it a "pairwise" MN. An MN whose variables are all binary is a binary MN. A DNN of any architecture is, like a Bayesian network, a directed acyclic graph.
A sigmoid activation can be understood as a logistic model, thus giving a conditional probability distribution for a binary variable given its parents. There is therefore a natural interpretation of a DNN with sigmoid activations as a Bayesian network (e.g., a Bayesian belief network). As reviewed in Theorem 1, this Bayes net in turn is equivalent to (represents the same probability distribution as) a Markov network where every edge of weight \(w\) from variable \(A\) to variable \(B\) has a potential of the following form: \begin{tabular}{|c|c|c|} \hline & \(B\) & \(\neg B\) \\ \hline \(A\) & \(e^{w}\) & \(1\) \\ \hline \(\neg A\) & \(1\) & \(1\) \\ \hline \end{tabular}

**Theorem 1**.: _Let \(N\) be a Bayesian belief network whose underlying undirected graph has treewidth 1, and let \(w_{AB}\) denote the coefficient of variable \(A\) in the logistic CPD for its child \(B\). Let \(M\) be a binary pairwise Markov random field with the same nodes and edges (now undirected) as \(N\). Let \(M\)'s potentials all have the value \(e^{w_{AB}}\) if the nodes \(A\) and \(B\) on either side of edge \(AB\) are true, and the value \(1\) otherwise. \(M\) and \(N\) represent the same joint probability distribution over their nodes._

We don't claim Theorem 1 is new, but we provide a proof in Appendix A because it captures several components of common knowledge to which we couldn't find a single reference. For space reasons, we assume the reader is already familiar with the Variable Elimination (VE) algorithm for computing the probability distribution over any query variable(s) given evidence (known values) at other variables in the network. This algorithm is identical for Bayes nets and Markov nets. It repeatedly multiplies together all the potentials (in a Bayes net, conditional probability distributions) involving the variable to be eliminated, and then sums that variable out of the resulting table, until only the query variable(s) remain. Normalization of the resulting table yields the final answer. VE is an exact inference algorithm, meaning its answers are exactly correct.

## 3 The Construction of Tree-structured PGMs

Although both a binary pairwise Markov network (MN) and a Bayesian network (BN) share the same sigmoid functional structure as a DNN with sigmoid activations, it can be shown that the DNN does not in general define the same probability for the output variables given the input variables: forward propagation in the DNN is very fast but yields a different result than VE in the MN or BN, which can be much slower because the inference task is NP-complete. Therefore, if we take the distribution \(\mathcal{D}\) defined by the BN or MN to be the correct meaning of the DNN, the DNN must be using an approximation \(\mathcal{D}^{\prime}\) to \(\mathcal{D}\). Procedurally, the approximation can be shown to be exactly the following: the DNN repeatedly treats the _expectation_ of a variable \(V\), given the values of \(V\)'s parents, as if it were the actual _value_ of \(V\). Thus previously binary variables in the Bayesian network view and binary features in the Markov network view become continuous.
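To make the gap between the two computations concrete, the following minimal Python sketch (a toy network with two latent nodes and invented weights) compares exact enumeration over the latent variables with standard forward propagation; the two probabilities differ, illustrating the approximation \(\mathcal{D}^{\prime}\).

```python
import itertools
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sigmoid Bayes net with invented weights: observed x, latents h1, h2,
# output y with CPD P(y=1 | h1, h2) = sigmoid(w1*h1 + w2*h2).
x = 1.0
a1, a2 = 2.0, -1.0      # weights x -> h1, x -> h2
w1, w2 = 3.0, 2.5       # weights h1 -> y, h2 -> y

# Exact inference: enumerate the binary latent states.
p_h = (sigmoid(a1 * x), sigmoid(a2 * x))
exact = 0.0
for h1, h2 in itertools.product([0, 1], repeat=2):
    weight = (p_h[0] if h1 else 1 - p_h[0]) * (p_h[1] if h2 else 1 - p_h[1])
    exact += weight * sigmoid(w1 * h1 + w2 * h2)

# DNN forward propagation: replace each latent value by its expectation.
forward = sigmoid(w1 * sigmoid(a1 * x) + w2 * sigmoid(a2 * x))

print(f"exact P(y=1|x) = {exact:.4f}, forward pass = {forward:.4f}")
```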
While this procedural characterization of the approximation of \(\mathcal{D}^{\prime}\) to \(\mathcal{D}\) is precise, we prefer in the PGM literature to characterize approximate distributions such as \(\mathcal{D}^{\prime}\) with an alternative PGM that precisely corresponds to \(\mathcal{D}^{\prime}\); for example, in some variational methods we may remove edges from a PGM to obtain a simpler PGM in which inference is more efficient. Treewidth-1 (tree-structured or forest-structured) PGMs are among the most desirable because in those exact inference by VE or other algorithms becomes efficient. We seek to so characterize the DNN approximation here. To begin, we consider the Bayesian network view of the DNN. Our first step in this construction is to copy the shared parents in the network into separate nodes whose values are not tied. The algorithm for this step is as follows:

1. Consider the observed nodes in the Bayesian network that correspond to the input of the neural network and their outgoing edges.
2. At each node, for each outgoing edge, create a copy of the current node that is only connected to one of the original node's children with that edge. Since these nodes are observed at this step, these copies do all share the same values. The weights on these edges remain the same.
3. Consider then the children of these nodes. Again, for each outgoing edge, make a copy of this node that is only connected to one child with that edge. In this step, for each copied node, we then also copy the entire subgraph formed by all ancestor nodes of the current node. Note that while weights across copies are tied, the values of the copies of any node are not tied. However, since we also copy the subtree of all input and intermediary hidden nodes relevant in the calculation of this node for each copy, the probability of any of these copied nodes being true remains the same across copies.
4. We repeat this process until we have separate trees for each output node in the original deep neural network graph.

This process ultimately creates a graph whose undirected structure is a tree or forest. In the directed structure, trees converge at the output nodes. The probability of any copy of a latent node given the observed input is the same across all the copies, but when sampling, their values may not be the same. The preceding step alone is still not sufficient to accurately express the deep neural network as a PGM. Recall that in the Markov network view, we have seen that the neural network makes a mean-field approximation where it uses the expected value of a node in place of its actual value. The following additional step in the construction yields this same behavior. This next step of the construction creates \(L\) copies of every non-output node in the network while also copying the entire subtrees of each of these nodes, as was done in step 1. The weight of a copied edge is then set to its original value divided by \(L\). As \(L\) approaches infinity, we show that the gradient in this PGM construction matches the gradient in the neural network exactly. This second step in the construction can be thought of intuitively by considering the behavior of sampling in the Bayesian network view. Since we make \(L\) copies of each node while also copying the subgraph of its ancestors, these copied nodes all share the same probabilities.
As \(L\) grows large, even if we sampled every copied node only once, we would expect the average value across these \(L\) copies to match the probability of an individual copied node being true. Given that we set the new weights between these copies and their parents as the original weights divided by \(L\), the sum of products (new weights times parent values) yields the average parent value multiplied by the original weight. As \(L\) goes to infinity, we remove sampling bias and the result exactly matches the value of the sigmoid activation function of the neural network, where this expectation in the PGM view is passed repeatedly to the subsequent neurons.

Figure 1: The first step of the PGM construction where shared latent parents are separated into copies along with the subtree of their ancestors. Copies of nodes H1 and H2 are made in this example.

The formal proof of this result, based on variable elimination, is found below. There, we show the following:

**Theorem 2**.: _In the PGM construction, as \(L\rightarrow\infty\), \(P(H=1|\vec{x})\rightarrow\sigma(\sum_{j=1}^{M}w_{j}g_{j}+\sum_{i}^{N}\theta_{i}\sigma(p_{i}))\), for an arbitrary latent node \(H\) in the DNN that has observed parents \(g_{1},...,g_{M}\) and latent parents \(h_{1},...,h_{N}\) that are true with probabilities \(\sigma(p_{1}),...,\sigma(p_{N})\). \(w_{1},...,w_{M}\) and \(\theta_{1},...,\theta_{N}\) are the weights on edges between these nodes and \(H\)._

In order to prove that as \(L\) goes to infinity, this PGM construction does indeed match the neural network's forward propagation, we consider an arbitrary latent node \(H\) with \(N\) unobserved parents \(h_{1},...,h_{N}\) and \(M\) observed parents \(g_{1},...,g_{M}\). The edges between these parents and \(H\) then each have a weight \(\theta_{i}\), \(1\leq i\leq N\), for the unobserved nodes, and \(w_{j}\), \(1\leq j\leq M\), for the observed nodes. The network as a whole has observed evidence \(\vec{x}\). For the rest of this problem we use a Markov network view of the neural network. The potentials for these nodes in the network are as follows: \begin{tabular}{|c|c|} \hline \(h_{i}\) & \(\neg h_{i}\) \\ \hline \(e^{p_{i}}\) & 1 \\ \hline \end{tabular} Since \(g_{j}\) are observed, their values are found in \(\vec{x}\). \begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(h_{i}\) & \(e^{\theta_{i}}\) & 1 \\ \hline \(\neg h_{i}\) & 1 & 1 \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(g_{j}\) & \(e^{w_{j}}\) & 1 \\ \hline \(\neg g_{j}\) & 1 & 1 \\ \hline \end{tabular} Suppose, then, using the second step of our construction, we make \(L\) copies of all the nodes that were parents of \(H\) in the Bayesian network view of the DNN, \(h_{1}^{1},...,h_{1}^{L},...,h_{N}^{1},...,h_{N}^{L}\) and \(g_{1}^{1},...,g_{1}^{L},...,g_{M}^{1},...,g_{M}^{L}\), with weights \(\theta_{1}/L,...,\theta_{N}/L\) and \(w_{1}/L,...,w_{M}/L\), respectively. The potential on \(H\) and these copied nodes is then: \begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(h_{i}^{k}\) & \(e^{\theta_{i}/L}\) & 1 \\ \hline \(\neg h_{i}^{k}\) & 1 & 1 \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(g_{j}^{k}\) & \(e^{w_{j}/L}\) & 1 \\ \hline \(\neg g_{j}^{k}\) & 1 & 1 \\ \hline \end{tabular} where \(1\leq i\leq N\), \(1\leq j\leq M\), and \(1\leq k\leq L\). The potentials for each of the copied nodes are the same as the nodes they were originally copied from.
We then have that, \[P(H,h_{1}^{1},...,h_{1}^{L},...,h_{N}^{1},...,h_{N}^{L},g_{1}^{1},...,g_{1}^{L},...,g_{M}^{1},...,g_{M}^{L}|\vec{x})\] \[=\frac{1}{Z}\times\prod_{j=1}^{M}\prod_{k=1}^{L}e^{(w_{j}/L)H\times g_{j}^{k}}\times\prod_{i=1}^{N}\prod_{k=1}^{L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\] \[=\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times e^{H(\frac{\theta_{1}}{L}\sum_{k=1}^{L}h_{1}^{k}+...+\frac{\theta_{N}}{L}\sum_{k=1}^{L}h_{N}^{k})}\;.\] Summing out an arbitrary copied latent node \(h_{\alpha}^{\beta}\): \[\sum_{h_{\alpha}^{\beta},\neg h_{\alpha}^{\beta}}P(H,h_{1}^{1},...,h_{1}^{L},...,h_{N}^{1},...,h_{N}^{L}|\vec{x})\] \[=\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times\sum_{h_{\alpha}^{\beta},\neg h_{\alpha}^{\beta}}\prod_{i=1}^{N}\prod_{k=1}^{L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\] \[=\left(\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times e^{p_{\alpha}}e^{(\theta_{\alpha}/L)H}\prod_{\begin{subarray}{c}i=1,..,N\\ (i,k)\neq(\alpha,\beta)\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\right.\] \[\left.+\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times\prod_{\begin{subarray}{c}i=1,..,N\\ (i,k)\neq(\alpha,\beta)\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\right)\] \[=\left(\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times(e^{p_{\alpha}}e^{(\theta_{\alpha}/L)H}+1)\times\prod_{\begin{subarray}{c}i=1,..,N\\ (i,k)\neq(\alpha,\beta)\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\right)\;.\] Summing out all \(L\) copies of \(h_{\alpha}\): \[\left(\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times(e^{p_{\alpha}}e^{(\theta_{\alpha}/L)H}+1)^{L}\times\prod_{\begin{subarray}{c}i=1,..,N\\ i\neq\alpha\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\right)\;.\] Summing out the \(L\) copies of each latent parent would then yield: \[\frac{1}{Z}e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)H}+1)^{L}\;,\] which, in turn, gives us: \[P(H=1|\vec{x})=\frac{e^{\sum_{j=1}^{M}w_{j}g_{j}\times 1}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)\times 1}+1)^{L}}{e^{\sum_{j=1}^{M}w_{j}g_{j}\times 1}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)\times 1}+1)^{L}+e^{\sum_{j=1}^{M}w_{j}g_{j}\times 0}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)\times 0}+1)^{L}}\] \[=\frac{e^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1)^{L}}{e^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1)^{L}+\prod_{i}^{N}(e^{p_{i}}+1)^{L}}\] \[=\left(\frac{1}{1+\frac{\prod_{i}^{N}(e^{p_{i}}+1)^{L}}{e^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1)^{L}}}\right)\;.\] We then consider: \[\lim_{L\rightarrow\infty}\frac{\prod_{i}^{N}(e^{p_{i}}+1)^{L}}{e^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1)^{L}}\] \[=\lim_{L\rightarrow\infty}e^{-\sum_{j=1}^{M}w_{j}g_{j}+\sum_{i=1}^{N}L\times\log(e^{p_{i}}+1)-\sum_{i=1}^{N}L\times\log(e^{p_{i}}e^{(\theta_{i}/L)}+1)}\;,\] \[\lim_{L\rightarrow\infty}-\sum_{j=1}^{M}w_{j}g_{j}+\sum_{i=1}^{N}L\times\log(e^{p_{i}}+1)-\sum_{i=1}^{N}L\times\log(e^{p_{i}}e^{(\theta_{i}/L)}+1)\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{L\rightarrow\infty}\frac{\sum_{i=1}^{N}\left(\log(e^{p_{i}}+1)-\log(e^{p_{i}}e^{(\theta_{i}/L)}+1)\right)}{1/L}\] This limit clearly has the indeterminate form of \(\frac{0}{0}\). Consider, then, the following change of variables, \(S=1/L\), and the subsequent use of l'Hospital's rule.
\[-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\frac{\sum_{i=1}^{N}\left(\log(e^{p_{i}}+1)-\log(e^{p_{i}}e^{(\theta_{i}S)}+1)\right)}{S}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\frac{\frac{\partial}{\partial S}\sum_{i=1}^{N}\left(\log(e^{p_{i}}+1)-\log(e^{p_{i}}e^{(\theta_{i}S)}+1)\right)}{\frac{\partial}{\partial S}S}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\frac{\sum_{i=1}^{N}\frac{-1}{e^{p_{i}}e^{(\theta_{i}S)}+1}\times e^{p_{i}}e^{\theta_{i}S}\times\theta_{i}}{1}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\sum_{i=1}^{N}\frac{-e^{p_{i}}e^{\theta_{i}S}\times\theta_{i}}{e^{p_{i}}e^{\theta_{i}S}+1}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}-\sum_{i}^{N}\frac{e^{p_{i}}}{e^{p_{i}}+1}\times\theta_{i}=-\sum_{j=1}^{M}w_{j}g_{j}-\sum_{i}^{N}\sigma(p_{i})\theta_{i}\;.\] Therefore, \[\lim_{L\rightarrow\infty}\frac{\prod_{i}^{N}(e^{p_{i}}+1)^{L}}{e^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1)^{L}}=e^{-\sum_{j=1}^{M}w_{j}g_{j}-\sum_{i}^{N}\sigma(p_{i})\theta_{i}}\;,\] and, \[\lim_{L\rightarrow\infty}P(H=1|\vec{x})=\frac{1}{1+e^{-\sum_{j=1}^{M}w_{j}g_{j}-\sum_{i}^{N}\sigma(p_{i})\theta_{i}}}=\sigma(\sum_{j=1}^{M}w_{j}g_{j}+\sum_{i}^{N}\sigma(p_{i})\theta_{i})\;.\] This is exactly the result of the deep neural network. Suppose then that \(z\) is a hidden node whose parents in the Bayesian network view are all observed. By our PGM construction, we have that the potential for \(z\) true is \(e^{\sum_{x\in\vec{x}}w_{zx}x}\), where \(w_{zx}\) is the weight between nodes \(z\) and \(x\), and, for \(z\) false, 1. This clearly matches the deep neural network's sigmoid activation in this 'first layer'. Consider, then, the nodes whose parents in the Bayesian network view are either one of these first layer hidden nodes, or an observed node. By our PGM construction, we have shown that so long as nodes in the previous layer are either observed or have sigmoid conditional probabilities, as is the case here, the conditional probability of any nodes that immediately follow will also have a sigmoid conditional probability. Repeating this argument up to the output nodes gives us that the conditional probability in this PGM construction and the activation values of the DNN match for any layer in the DNN. Consider then the view of the DNN where each layer is defined such that the values of the activation functions for all neurons in a given layer can be calculated using neurons of the preceding layers. The DNN is structured then such that the first layer depends only on observed evidence and all layers can be calculated sequentially from that starting point. We have already established that nodes with only observed parents have this sigmoid conditional probability. Given the structure of the DNN and Theorem 2, we then have that the corresponding layers in our PGM construction of the DNN can be similarly computed sequentially from that first layer and have conditional probabilities that exactly match the DNN's activations.

## 4 Implications and Extensions

We are not claiming that one should actually carry out the PGM construction used in the preceding section, since that PGM is infinite. Rather, its contribution is to let us understand precisely the approximation that SGD in a DNN is making; although a DNN itself can be understood as a BN or MN, SGD is not using that BN or MN but rather the infinite tree-structured one. While that PGM is infinite, it is built using the original as a template in a straightforward fashion, and hence is easy to understand.
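As a numeric sanity check on Theorem 2, the closed form derived above can be evaluated for increasing \(L\) and compared with the DNN forward-pass value. A minimal Python sketch with invented toy weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented toy parameters: two observed and two latent parents of H.
w, g = np.array([1.0, -0.5]), np.array([1.0, 1.0])   # observed part
theta = np.array([2.0, -1.5])                        # latent-parent weights
p = np.array([0.3, -0.8])                            # latent logits

target = sigmoid(w @ g + theta @ sigmoid(p))          # DNN forward value

for L in [1, 10, 100, 10000]:
    # Closed form of P(H=1|x) for the L-copy construction (derived above).
    A = w @ g + L * np.sum(np.log(np.exp(p) * np.exp(theta / L) + 1.0)
                           - np.log(np.exp(p) + 1.0))
    print(f"L = {L:>6}: P(H=1|x) = {sigmoid(A):.6f}")
print(f"DNN forward pass:        {target:.6f}")
```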
Beyond this contribution to comprehensibility and pedagogy, are there other applications? One application is an ability to use standard PGM algorithms such as Markov chain Monte Carlo (MCMC) to sample latent variables given observed values of input and output variables, such as for producing confidence intervals or understanding relationships among variables. One could already do so using Gibbs sampling in the BN or MN directly represented by the DNN itself (which we will call the "direct PGM"), but then one wouldn't be using the BN or MN that SGD in the DNN actually used during training. For that, our result has shown that one instead needs to use Gibbs sampling in the infinite tree-structured PGM, which is impractical. Nevertheless, for any variable \(V\) in the original DNN, on each iteration a Gibbs sampler takes infinitely many samples of \(V\) given infinitely many samples of each of the members of \(V\)'s Markov blanket in the original DNN. By treating the variables of the original DNN as continuous, with their values approximating their sampled probabilities in the Gibbs sampler, we can instead apply Hamiltonian Monte Carlo or other MCMC methods for continuous variables in the much smaller DNN structure. We explore this approach empirically rather than theoretically in the next section. Another, related application of our result is that one could further fine-tune the trained DNN using other PGM algorithms, such as contrastive divergence. We also explore this use in the next section. One might object that most results in this paper use sigmoid activation functions. Nair and Hinton showed that rectified linear units (ReLU) can be thought of as a combination of infinitely many sigmoid units with varying biases (Nair and Hinton, 2010). Hence our result in the previous section can be extended to ReLU activations by the same argument. More generally, with any non-negative activation function that can yield values greater than one, while our BN argument no longer holds, the MN version of the argument can be extended. An MN already requires normalization to represent a probability distribution. While Batch Normalization and Layer Normalization typically are motivated procedurally, to keep nodes from "saturating," and consequently to keep gradients from "exploding" or "vanishing," as the names suggest, they also can be used to bring variables into the range \([0,1]\) and hence to being considered as probabilities. Consider an idealized variant of these that begins by normalizing all the values coming from a node \(h\) of a neural network, over a given minibatch, to sum to \(1.0\); the argument can be extended to a set of \(h\) and all its siblings in a layer (or other portion of the network structure) assumed to share their properties. It is easily shown that if the parents of any node \(h\) in the neural network provide to \(h\) approximate probabilities that those parent variables are true in the distribution defined by the Markov network given the inputs, then \(h\) in turn provides to its children an approximate probability that \(h\) is true in the distribution defined by the Markov network given the inputs. Use of Batch or Layer Normalization is only approximate and hence adds an additional source of approximation to the result of the preceding section. Detailed consideration of other activation functions is left for further work; in the next section we return to the sigmoid case. 
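To make the first application above concrete, here is a minimal Gibbs sampler over the direct PGM of a small sigmoid network (Python, invented toy weights, the same toy network shape as the earlier sketch). It samples the two latent variables given evidence at both the input and the output; a plain forward pass, by contrast, never conditions on the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented toy network: x -> (h1, h2) -> y, with y observed.
x, y_obs = 1.0, 1.0
a = np.array([2.0, -1.0])     # weights x -> h_i
w = np.array([3.0, 2.5])      # weights h_i -> y

def bernoulli_lik(y, p):
    return p if y == 1 else 1.0 - p

h = np.array([0.0, 0.0])
samples = []
for step in range(5000):
    for i in (0, 1):
        # P(h_i = v | rest) is proportional to
        # P(h_i = v | x) * P(y_obs | h1, h2), the Markov-blanket terms.
        scores = []
        for v in (0.0, 1.0):
            h[i] = v
            prior = bernoulli_lik(v, sigmoid(a[i] * x))
            lik = bernoulli_lik(y_obs, sigmoid(w @ h))
            scores.append(prior * lik)
        p1 = scores[1] / (scores[0] + scores[1])
        h[i] = float(rng.random() < p1)
    samples.append(h.copy())

print("posterior mean of (h1, h2):", np.mean(samples[500:], axis=0))
```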
## 5 Alternative Training Algorithms: The Sigmoid Case

To illustrate the potential utility of the infinite tree-structured PGM view of a DNN, in this section we pursue one of its potential implications in depth. We have already noted we can view forward propagation in an all-sigmoid DNN as exact inference in a tree-structured BN, such that the CPD of each hidden variable is a logistic regression. In other words, each hidden node is a Bernoulli random variable, with parameter \(\lambda\) being a sigmoid activation (i.e., logistic function) applied to a linear function of the parent nodes. This view suggests alternative learning algorithms such as contrastive divergence (CD) that use sampling methods for high-dimensional binary random variables, such as Gibbs sampling. Doing so has a natural advantage over SGD, which samples the values of the hidden variables using only the evidence at the input values. Instead, Gibbs sampling uses _all_ the available evidence, both at input and output variables. MCMC has now advanced beyond Gibbs sampling with methods such as Hamiltonian Monte Carlo (HMC), but HMC requires continuous variables, whereas the hidden variables here take values in \(\{0,1\}\) rather than in \([0,1]\). To use HMC as proposed, we define hidden variables using the recently developed _continuous_ Bernoulli distribution (Loaiza-Ganem and Cunningham, 2019), where the single parameter \(\lambda\) of the distribution is defined as in the Bernoulli case. Whereas the Bernoulli density is unnormalized when viewed (improperly) as a density over \((0,1)\), the continuous Bernoulli distribution is a proper density. Somewhat counter-intuitively, the expected value of this distribution is _not_ equal to the parameter \(\lambda\), which has important implications for inference. This option leads to learning algorithms that are able to take advantage of sampling methods effective for high-dimensional continuous random variables, including HMC. With respect to this BN, whose variables correspond exactly with those of the DNN, we can see that our previous CD-1 algorithm, which is standard SGD for DNNs, samples settings of the input variables, computes expectations on all the remaining variables (latent and output variables), and adjusts the weights (CPDs) toward maximizing the conditional log likelihood, or minimizing cross-entropy. If instead of one gradient step we continued to convergence, the resulting algorithm would _almost_ be the well-known Expectation Maximization (EM) algorithm for training any BN's parameters from a data set with missing or hidden variables, given the BN structure. We say _almost_ because this analysis reveals one important "error" or shortcoming in the SGD algorithm: in the E step where we compute the expected values of the latent variables, it _entirely ignores_ the values of the output variables. In other words, we can view SGD as an approximation to EM in which evidence from output variables is ignored during the E step, and therefore gradients must be backpropagated across _all_ layers to account for this evidence when updating the weights in the M step. This strategy is effective in later layers, which are closer to the output evidence, but highly ineffective in earlier layers. This limitation has been recognized as the vanishing gradient problem and addressed through ad hoc initialization and pre-training strategies as well as Nesterov momentum.
Nesterov momentum can be seen as "looking ahead" one step of SGD when computing the gradient, which partially incorporates evidence at output variables into the expectations at the hidden nodes. More precisely and formally correcting this shortcoming is not easy: computing the correct expectation is NP-complete, and the most obvious algorithm always requires time exponential in the number of latent variables. In such situations in PGMs, researchers have replaced the E step with MCMC sampling (e.g., Gibbs) (Hinton et al., 2006; Song et al., 2016). However, running these MCMC chains to convergence is impractical; therefore, it is common in practice to take a small number \(k\) of steps in the chain between gradient steps, which again gives rise to the CD-1 or CD-\(k\) algorithm. However, a different training algorithm for DNNs, motivated by the natural correspondence between DNNs and BNs described above - and which correctly accounts for evidence from the output variables through proper EM updates - converges to the correct answer in fewer training epochs compared to SGD. The Continuous Bernoulli Bayes net (CBBN) is similar to the sigmoid BN (_i.e._, "stacked logistic regression"), but with logistic regression CPDs replaced by their continuous Bernoulli analogues. Equivalently, it is a feedforward, stochastic neural network in which hidden variables are continuous Bernoulli distributed. Consider a Bayesian network composed of input variables \(\mathbf{x}=\mathbf{h}_{0}\), a sequence of layers of hidden variables \(\mathbf{h}_{1},...,\mathbf{h}_{L}\), and output variables \(\mathbf{y}\). Each pair of consecutive layers forms a bipartite subgraph of the network as a whole, and the variables \(\mathbf{h}_{i}=(h_{i1},...,h_{iM_{i}})\) follow a multivariate continuous Bernoulli distribution with parameters \(\mathbf{\lambda}_{i}=(\lambda_{i1},...,\lambda_{iM_{i}})\) that depend on variables in the previous layer \(\mathbf{h}_{i-1}\) as follows: \[h_{ij}\sim\mathcal{CB}(\lambda_{ij}),\,\text{where}\,\,\mathbf{\lambda}_{i}=\sigma(\mathbf{W}_{i-1}\mathbf{h}_{i-1}+\mathbf{b}_{i-1}). \tag{1}\] \(\sigma:\mathbb{R}\rightarrow(0,1)\) is a non-linearity - here the logistic function - that is applied element-wise, and \(\mathbf{\theta}_{i}=(\mathbf{W}_{i},\mathbf{b}_{i})\) are parameters to be learned. For a complete setting of the variables \(\{\mathbf{x},\mathbf{h},\mathbf{y}\}\), where \(\mathbf{h}=\{\mathbf{h}_{1},...,\mathbf{h}_{L}\}\), and parameters \(\mathbf{\theta}=\{\mathbf{\theta}_{i}\}_{i=0}^{L}\), the likelihood \(p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{\theta})\) may be decomposed as: \[p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{\theta})=p(\mathbf{y}|\mathbf{h}_{L};\mathbf{\theta}_{L})\cdot\prod_{i=1}^{L}\prod_{j=1}^{M_{i}}p_{\mathcal{CB}}(h_{ij}|\lambda_{ij}(\mathbf{h}_{i-1};\mathbf{\theta}_{i-1})), \tag{2}\] where \(p_{\mathcal{CB}}(\cdot|\cdot)\) denotes the continuous Bernoulli density, and a specific form for \(p(\mathbf{y}|\mathbf{h}_{L},\mathbf{\theta}_{L})\) has been omitted to allow variability in the output variables. In our experiments, \(\mathbf{y}\) is a Bernoulli or categorical random variable parameterized via the logistic or softmax function, respectively.

### Learning via Contrastive Divergence with Hamiltonian Monte Carlo Sampling

Let \(\mathbf{h}^{(0)},\mathbf{h}^{(1)},\mathbf{h}^{(2)},...\) denote a chain of MCMC samples of the complete setting of hidden variables in our CBBN.
As previously noted, we allow hidden variables \(h_{ij}\in(0,1)\) for \(i\in\{1,...,L\}\) and \(j\in\{1,...,M_{i}\}\), and use Hamiltonian Monte Carlo (HMC) to generate the next state due to its fast convergence. Since HMC samples are unbounded, we sample the _logit_ associated with \(h_{ij}\in(0,1)\), i.e., \(\sigma^{-1}(h_{ij})\in(-\infty,\infty)\), rather than sampling the \(h_{ij}\) directly. The HMC trajectories are defined by Hamilton's equations: \[\frac{d\rho_{i}}{dt}=\frac{\partial H}{\partial\mu_{i}} \frac{d\mu_{i}}{dt}=-\frac{\partial H}{\partial\rho_{i}} \tag{3}\] where \(\rho_{i},\mu_{i}\) are the \(i\)th components of the position and momentum vectors. The Hamiltonian \(H\) is \[H=H(\mathbf{\rho},\mathbf{\mu})=U(\mathbf{\rho})+\frac{1}{2}\mathbf{\mu}^{T}M^{-1}\mathbf{\mu} \tag{4}\] where \(M\) is a positive definite and symmetric mass matrix, and \(M^{-1}\) could represent a diagonal estimate of the covariance. Defining the position \(\mathbf{\rho}=\mathbf{h}\), the complete set of hidden variables of our CBBN, we have that the potential energy \(U\) is the negative log-likelihood associated with equation (2): \[U(\mathbf{h})=-\log p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{\theta})=-\log p(\mathbf{y}|\mathbf{h}_{L};\mathbf{\theta}_{L})-\sum_{i=1}^{L}\sum_{j=1}^{M_{i}}\log p_{\mathcal{CB}}(h_{ij}|\lambda_{ij}(\mathbf{h}_{i-1};\mathbf{\theta}_{i-1})). \tag{5}\] We set the leapfrog size \(L>0\) and step size \(\Delta t>0\). A description of the HMC trajectories (_i.e._, evolution of \(\mathbf{h}\)) is provided in the supplementary material. The initial state of the chain \(\mathbf{h}^{(0)}\) is drawn with a simple forward pass through the network, ignoring the output variables; in other words, we have \(h_{ij}^{(0)}\sim\mathcal{CB}(\sigma(\mathbf{W}_{i-1}^{(0)}\mathbf{h}_{i-1}^{(0)}+\mathbf{b}_{i-1}^{(0)})_{j})\) for \(i\in\{1,...,L\}\), where \(\mathbf{h}_{0}=\mathbf{x}\) are the input variables, and the values of \(\mathbf{W}_{i}^{(0)}\) and \(\mathbf{b}_{i}^{(0)}\) are manually set or drawn from a standard normal or uniform distribution. We update \(\mathbf{h}\) through a number of burn-in steps before beginning to update our parameters to ensure that \(\mathbf{h}\) is first consistent with evidence from the output variables. After \(k\) steps, corresponding to CD-\(k\), we define the loss based on equation (2): \[\mathcal{L}(\mathbf{\theta}^{(n)})=-\log p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{\theta}^{(n)}). \tag{6}\] We then apply the following gradients to update the parameters \(\{\mathbf{W}_{i}^{(n)}\}_{i=0}^{L}\) and \(\{\mathbf{b}_{i}^{(n)}\}_{i=0}^{L}\): \[\mathbf{W}_{i}^{(n+1)}=\mathbf{W}_{i}^{(n)}-\eta\frac{\partial\mathcal{L}}{\partial\mathbf{W}_{i}^{(n)}} \mathbf{b}_{i}^{(n+1)}=\mathbf{b}_{i}^{(n)}-\eta\frac{\partial\mathcal{L}}{\partial\mathbf{b}_{i}^{(n)}} \tag{7}\] where \(\eta\) is the learning rate. Algorithm 1 (see supplementary material) summarizes this procedure.

### Experimental Results

The preceding algorithm shows the potential of the DNN-as-PGM view to generate new algorithms and approaches, but does it work? Taking the view of the neural net as a BN, we start with the hard problem of learning the exclusive-or function, but with the "right" prior (up to magnitude) on the BN parameters. This algorithm is also CD-\(k\) - here we use CD-1 - but under this alternative correspondence between PGM and neural network. We therefore call it CD-HMC to distinguish it from the earlier CD-\(k\) algorithm that is identical to SGD.
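Before turning to the results, note that the counter-intuitive mean of the continuous Bernoulli mentioned above is easy to verify numerically. A short sketch using straightforward numerical integration of the unnormalized density:

```python
import numpy as np

# Numerical check of a property quoted above: the mean of the continuous
# Bernoulli CB(lam), whose density on [0, 1] is proportional to
# lam**x * (1 - lam)**(1 - x), is NOT equal to lam (except at lam = 0.5).
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
for lam in (0.2, 0.5, 0.8):
    unnorm = lam**x * (1.0 - lam)**(1.0 - x)   # unnormalized density
    Z = np.sum(unnorm) * dx                    # normalizing constant
    mean = np.sum(x * unnorm) * dx / Z
    print(f"lam = {lam}: E[X] = {mean:.4f}")   # e.g. lam=0.2 gives ~0.39
```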
As shown in Table 1, CD-HMC converges in half as many training epochs as SGD using the same minibatch size, and each epoch takes only a little longer than for SGD. But what if we did not know the correct prior? Using a hidden layer of thirty variables makes it highly probable that there exists a pair of latent variables that, together with the inputs and output, have their weights randomly initialized to the correct prior. The empirical results below bear this out: results are similar to experiments using the correct prior (Table 1). If more than one combination of two such latent variables exists, the resulting trained model becomes an ensemble of accurate posteriors. The argument scales to more complex target functions by using more hidden layers in the neural network. These empirical results support the following simple view of why overparameterization works: more parameters, from more latent units and layers, provide a higher probability of having the correct prior embedded somewhere. And if it is embedded more than once, all the better, since the final logistic regression layer simply learns a weighted vote among the various possible ensemble components. Further empirical support for this view can be found elsewhere (Frankle and Carbin, 2019), but for the first time here we view initialization as encoding multiple priors. This in turn suggests viewing the trained DNN as encoding a posterior distribution, from which we can make inferences about uncertainty. In addition to the experiments on learning the exclusive-or function, we also explore our method on two other datasets. One is a synthetic dataset generated using _Make Moons_ from _sklearn_ (1k data points, 30% noise). The other is MNIST, where we randomly choose 2k images of the digits 0 and 1. Using networks with one hidden layer of 32 or 128 hidden variables and sigmoid activation, we test CD-HMC on both datasets and compare it to SGD and CD-Gibbs, in which we return to logistic regression CPDs and use Gibbs sampling. The training and test datasets are split (80:20 ratio), and each model is trained for 400 epochs with weights updated by gradient descent (learning rate=0.01). For CD-HMC and CD-Gibbs, we draw 500 "burn-in" samples for each data point before the first weight update. The results below illustrate that CD-HMC has similar accuracy to SGD and CD-Gibbs on the test set and converges in fewer epochs on the _Make Moons_ dataset (Figure 2). Table 2 also shows that CD-HMC has a higher test loss than SGD under all the settings for networks and datasets. This is likely due to variability in sampling, in contrast to SGD, which is deterministic. As the size of the dataset increases, the training time of CD-HMC and CD-Gibbs becomes considerably longer than that of SGD due to the high cost of drawing high-dimensional samples. However, HMC samples all hidden nodes together, whereas CD-Gibbs samples them one by one, and as a result, CD-HMC is much faster.
As these results suggest, and consistent with prior work on learning in sigmoid BNs, it is difficult (or perhaps impossible) to match the computational efficiency of SGD while \begin{table} \begin{tabular}{c|c|c|c|c} Network & Algorithm & Accuracy & Training Steps (Epochs) & Training time (s) \\ \hline \hline \multirow{2}{*}{A} & SGD (BP) & 100\% & 1693 & 19.8 \\ & CD-HMC & 100\% & 263 & 14.0 \\ \hline \multirow{2}{*}{B} & SGD (BP) & 100\% & 4000 & 40 \\ & CD-HMC & 100\% & 445 & 22.8 \\ \end{tabular} \end{table} Table 1: Comparing SGD (CD-1) and new algorithm CD-HMC on learning exclusive-or using 30 hidden variables and a random initialization, given correct initialization (A), or all possible priors (B). \begin{table} \begin{tabular}{c|c|c|c|c|c} Network & Algorithm & \multicolumn{2}{c|}{_Make Moons_} & \multicolumn{2}{c}{MNIST} \\ \cline{3-6} & & Test Accuracy & Test Loss & Test Accuracy & Test Loss \\ \hline \hline \multirow{3}{*}{32 nodes} & SGD (BP) & 80\% & 0.3753 & 100\% & 0.0021 \\ & CD-HMC & 80.5\% & 0.5131 & 99.25\% & 0.2606 \\ & CD-Gibbs & 80.5\% & 0.6425 & 100\% & 0.4663 \\ \hline \multirow{3}{*}{128 nodes} & SGD (BP) & 80.5\% & 0.3751 & 100\% & 0.0017 \\ & CD-HMC & 80.5\% & 0.4822 & 99.75\% & 0.1815 \\ \cline{1-1} & CD-Gibbs & 81\% & 0.6408 & 99.75\% & 0.4733 \\ \end{tabular} \end{table} Table 2: SGD, CD-HMC and CD-Gibbs performance on synthetic data (_Make Moons_, 1k samples, noise=0.3) and MNIST (2k images, \(\{0,1\}\) only) with 32 or 128 hidden units and random initialization. maintaining the view of DNNs as PGMs. However, a hybrid approach - for example, using SGD (CD-1) as pre-training and then applying CD-HMC - preserves the benefits of understanding the DNN as a PGM, or even as a distribution (ensemble) of PGMs, while also allowing exact inference and quantifying uncertainty. ## 6 Limitations and Future Work Limitations of the present work and directions for future work include establishing formal results about how closely batch- and layer-normalization approximate Markov network normalization when using non-sigmoid activations, establishing theoretical results relating HMC in the neural network to Gibbs sampling in the large treewidth-1 Markov network, and obtaining empirical results for HMC with non-sigmoid activations. Also of great interest is comparing HMC and other PGM algorithms to Shapley values, Integrated Gradients, and other approaches for assessing the relationship of some latent variables to each other Figure 2: Test accuracy of SGD, CD-HMC and CD-Gibbs on synthetic data (_Make Moons_, 1k, noise=0.3) and MNIST (2k images, \(\{0,1\}\) only). or to inputs and/or outputs in a neural network. Finally, the large treewidth-1 PGM is a substantial approximation to the direct PGM of a DNN. After training the DNN and hence the large treewidth-1 model, can we fine-tune with a less-approximate approach, perhaps based on loopy belief propagation or other approximate algorithms often used in PGMs? ## Acknowledgements The authors would like to thank Sayan Mukherjee, Samuel I. Berchuck, Youngsoo Baek, Andrew Allen, William H. Majoros, David Carlson, Juan Restrepo and Houssam Nassif for their helpful discussion about the theoretical work. We are also grateful to Mengyue Han and Jinyi Zhou for their technical support. This project is in part supported by Impact of Genomic Variation on Function (IGVF) Consortium of the National Institutes of Health via grant U01HG011967.
2308.09741
Yang-Mills as a Liouville Theory
We propose a description of the gluon scattering amplitudes as the inverse Mellin transforms of the conformal correlators of light operators in two-dimensional Liouville theory tensored with WZW-like chiral currents on the celestial sphere. The dimensions of operators are Mellin dual to gluon light cone energies while their positions are determined by the gluon momentum directions. The tree-level approximation in Yang-Mills theory corresponds to the semiclassical limit of Liouville theory. By comparing subleading corrections, we find $b^2= (8\pi^2)^{-1}\beta_0 \,g^2(M)$, where $b$ is the Liouville coupling constant, $g(M)$ is the Yang-Mills coupling at the renormalization scale $M$ and $\beta_0$ is the one-loop coefficient of the Yang-Mills beta function.
Stephan Stieberger, Tomasz R. Taylor, Bin Zhu
2023-08-18T18:00:01Z
http://arxiv.org/abs/2308.09741v1
# Yang-Mills as a Liouville Theory ###### Abstract We propose a description of the gluon scattering amplitudes as the inverse Mellin transforms of the conformal correlators of light operators in two-dimensional Liouville theory tensored with WZW-like chiral currents on the celestial sphere. The dimensions of operators are Mellin dual to gluon light cone energies while their positions are determined by the gluon momentum directions. The tree-level approximation in Yang-Mills theory corresponds to the semiclassical limit of Liouville theory. By comparing subleading corrections, we find \(b^{2}=(8\pi^{2})^{-1}\beta_{0}\,g^{2}(M)\), where \(b\) is the Liouville coupling constant, \(g(M)\) is the Yang-Mills coupling at the renormalization scale \(M\) and \(\beta_{0}\) is the one-loop coefficient of the Yang-Mills beta function. In a recent paper [1], we established an intriguing connection between the tree-level gluon scattering amplitudes and the correlators of two-dimensional Liouville theory on the celestial sphere. The gluon amplitudes were evaluated in the presence of a dilaton source and transformed into "celestial" amplitudes [2; 3] by taking Mellin transforms with respect to the light cone energies of scattered gluons. The dimensions of Liouville operators were Mellin duals of such energies. Their positions were determined by the celestial map between the directions of light-like momenta and points on the two-dimensional celestial sphere.1 The celestial amplitudes matched the Liouville correlators evaluated in the limit of small Liouville coupling, \(b\to 0\), which corresponds to the infinite central charge limit. This construction has been recently generalized in Ref.[21] to celestial amplitudes in \(\mathcal{N}=1\) supersymmetric Yang-Mills theory coupled to dilatons. Footnote 1: See reviews of celestial holography in Refs.[4; 5; 6; 7]. Most of the recent work has focused on extracting CFT data of the putative celestial CFT from scattering amplitudes in four dimensions, e.g., celestial OPEs [8; 9], infinite-dimensional algebras [10; 11; 12], differential equations [13; 14; 15], and connections to twistor theory [16; 17; 18; 19; 20]. In the present work we proceed in the opposite direction. We start from the operators associated with gluons, constructed as the products of holomorphic Wess-Zumino-Witten (WZW) currents times the so-called light Liouville operators. The current part carries the information about gluon gauge charges and spins. The Liouville part determines their dimensions. The three-point correlation functions of such operators factorize into relatively simple, exactly known WZW correlators times the three-point correlators of light Liouville operators. The latter ones are known exactly from the DOZZ formula [22; 23] and can be expressed in terms of Zamolodchikovs' \(\Upsilon\) function [23]. We perform inverse Mellin transformations on the two-dimensional correlators. By using the celestial map, we construct the corresponding gluon scattering amplitudes. We can recover the gluon amplitudes, at the tree level and beyond, without the dilaton background, by taking the limit of inverse Mellin transforms in which the dilatons decouple. This procedure can be performed exactly at the leading order in the Liouville coupling (\(b\to 0\)), corresponding to the tree-level approximation in Yang-Mills theory. We also go beyond the leading order and identify some corrections pointing towards a direct relation between the Liouville and Yang-Mills couplings.
The Lagrangian density of two-dimensional Liouville theory is given by \[\mathcal{L}=\frac{1}{\pi}\frac{\partial\phi}{\partial z}\frac{\partial\phi}{ \partial\bar{z}}+\mu e^{2b\phi}\, \tag{1}\] where \(z\) and \(\bar{z}\) are the complexified (Euclidean) spacetime coordinates, \(b\) is the dimensionless Liouville coupling constant and \(\mu\) is the "cosmological constant" scale parameter. The theory has a "background charge at infinity," \[Q=b+\frac{1}{b}, \tag{2}\] which is related to the central charge by \[c=1+6Q^{2}. \tag{3}\] The "light" primary field operators have the form: \[V_{\sigma}(z,\bar{z})=e^{2\sigma b\phi(z,\bar{z})}, \tag{4}\] with the exponents parametrized by \(b\)-independent parameters \(\sigma\). Their conformal dimensions are given by \[d(\sigma)=2\sigma+2b^{2}\sigma(1-\sigma). \tag{5}\] We introduce spin and gauge charges into the two-dimensional system by including a WZW-like holomorphic sector. The WZW current \(J^{a}(z)\), with \(a\) labeling the adjoint representation of the Lie group, has chiral weights \((h,\bar{h})=(1,0)\). We also include another operator in the adjoint representation, \(\widehat{J}^{a}(z)\), with \((h,\bar{h})=(-1,0)\). The only property of this chiral system2 relevant to our discussion is the form of the three-point correlator Footnote 2: For a more detailed discussion of this chiral system, see Ref.[19]. \[\left\langle\widehat{J}^{a_{1}}(z_{1})\widehat{J}^{a_{2}}(z_{2})J^{a_{3}}(z_ {3})\right\rangle=f^{a_{1}a_{2}a_{3}}\frac{z_{12}^{3}}{z_{23}z_{31}}, \tag{6}\] where \(z_{ij}=z_{i}-z_{j}\) and \(f^{a_{1}a_{2}a_{3}}\) are the structure constants. We construct the operators associated with the positive helicity gluons in the following way: \[O_{\Delta}^{+a}(z,\bar{z})=F_{+}(\Delta,\mu,b)\,J^{a}(z)e^{2\sigma(\Delta-1)b \phi(z,\bar{z})}\,, \tag{7}\] where \(F_{+}(\Delta,\mu,b)\) is a normalization factor and \(2\sigma(\Delta-1)\) ensures dimension \(\Delta-1\) of the Liouville operator. At the leading order \(\mathcal{O}(b^{0})\), \(2\sigma(\Delta-1)=\Delta-1\). Similarly, for the negative helicity gluon, \[O_{\Delta}^{-a}(z,\bar{z})=F_{-}(\Delta,\mu,b)\,\widehat{J}^{a}(z)e^{2\sigma(\Delta+1)b \phi(z,\bar{z})}\,. \tag{8}\] Note that the normalization factors \(F_{\pm}(\Delta,\mu,b)\) depend on the dimensions \(\Delta\), therefore they contribute to inverse Mellin transforms in a nontrivial way. We are interested in the "MHV" correlator \[\left\langle O_{\Delta_{1}}^{-a_{1}}(z_{1},\bar{z}_{1})O_{\Delta _{2}}^{-a_{2}}(z_{2},\bar{z}_{2})O_{\Delta_{3}}^{+a_{3}}(z_{3},\bar{z}_{3}) \right\rangle=f^{a_{1}a_{2}a_{3}}\frac{z_{12}^{3}}{z_{23}z_{31}}F_{1-}F_{2-}F_ {3+}\times \tag{9}\] \[\times(z_{12}\bar{z}_{12})^{\frac{\Delta_{3}-\Delta_{1}-\Delta_ {2}-3}{2}}(z_{23}\bar{z}_{23})^{\frac{\Delta_{1}-\Delta_{2}-\Delta_{3}+1}{2} }(z_{13}\bar{z}_{13})^{\frac{\Delta_{2}-\Delta_{1}-\Delta_{3}+1}{2}}\times C (\alpha_{1},\alpha_{2},\alpha_{3})\,\] where the three-point Liouville coefficient is given by the famous DOZZ formula [22; 23]: \[C(\alpha_{1},\alpha_{2},\alpha_{3})= \Big{[}\pi\mu\gamma(b^{2})b^{2-2b^{2}}\Big{]}^{(Q-\sum\alpha_{i} )/b}\times \tag{10}\] \[\frac{\Upsilon_{0}\Upsilon(2\alpha_{1})\Upsilon(2\alpha_{2}) \Upsilon(2\alpha_{3})}{\Upsilon(\alpha_{1}+\alpha_{2}+\alpha_{3}-Q)\Upsilon( \alpha_{1}+\alpha_{2}-\alpha_{3})\Upsilon(\alpha_{2}+\alpha_{3}-\alpha_{1}) \Upsilon(\alpha_{3}+\alpha_{1}-\alpha_{2})}\,,\] specialized in our case to light operators with \(\alpha_{i}=\sigma_{i}b\). Note that the negative helicity operators (8) carry the current \(\widehat{J}^{a}\), so that the correlator (9) involves the three-point function (6).
Here, \(\Upsilon\) is the function defined in Zamolodchikovs' Ref.[23]. The semiclassical (\(b\to 0\)) limit of the three-point correlator of light Liouville fields has been studied before by Harlow, Maltz and Witten [24]. We use the following formulas from Ref.[24]: \[\Upsilon(x-1/b) =\gamma(x/b-1/b^{2})^{-1}b^{1+2/b^{2}-2x/b}\,\Upsilon(x)\,, \tag{11}\] \[\Upsilon_{0} =\frac{\mathcal{C}}{b^{1/2}}\exp\left(-\frac{1}{4b^{2}}\log b+ \dots\right)\,,\] (12) \[\Upsilon_{b}(\sigma b) =\frac{\mathcal{C}b^{1/2-\sigma}}{\Gamma(\sigma)}\exp\left(- \frac{1}{4b^{2}}\log b+\dots\right)\,, \tag{13}\] where \(\mathcal{C}\) is a constant and \(\gamma(x)=\Gamma(x)/\Gamma(1-x)\). In this way, we find \[C(\sigma_{1}b,\sigma_{2}b,\sigma_{3}b)= \frac{\pi\tilde{\mu}\gamma(1/b^{2})\,\gamma(\sum\sigma_{i}-1-1/b ^{2})\,[\pi\mu\gamma(b^{2})b^{-2b^{2}}]^{1-\sum\sigma_{i}}}{b^{5}} \tag{14}\] \[\times\frac{\Gamma(\sigma_{1}+\sigma_{2}+\sigma_{3}-1)\Gamma( \sigma_{1}+\sigma_{2}-\sigma_{3})\Gamma(\sigma_{2}+\sigma_{3}-\sigma_{1}) \Gamma(\sigma_{3}+\sigma_{1}-\sigma_{2})}{\Gamma(2\sigma_{1})\Gamma(2\sigma_ {2})\Gamma(2\sigma_{3})}\,,\] where the "dual" cosmological constant \(\tilde{\mu}\) is related to \(\mu\) as follows \[\pi\tilde{\mu}\gamma(1/b^{2})=(\pi\mu\gamma(b^{2}))^{1/b^{2}}\,. \tag{15}\] Our goal is to apply the celestial map to the inverse Mellin transform, \[\mathcal{A}_{3G}(\omega_{i},z_{i},\bar{z}_{i})=M^{\Delta_{1}+ \Delta_{2}+\Delta_{3}-3}\left(\frac{1}{2\pi i}\right)^{3} \int_{c-i\infty}^{c+i\infty}d\Delta_{1}d\Delta_{2}d\Delta_{3}\, \omega_{1}^{-\Delta_{1}}\,\omega_{2}^{-\Delta_{2}}\,\omega_{3}^{-\Delta_{3}} \tag{16}\] \[\times\left\langle O_{\Delta_{1}}^{-a_{1}}(z_{1},\bar{z}_{1})O_{ \Delta_{2}}^{-a_{2}}(z_{2},\bar{z}_{2})O_{\Delta_{3}}^{+a_{3}}(z_{3},\bar{z}_ {3})\right\rangle\,,\] where the integrations are performed on the complex plane along the lines of real constant \(c>0\); at the end, we will take the limit of \(c\to 0^{+}\). Note that connecting two to four dimensions necessitates introducing a "renormalization" scale \(M\) in order to ensure the correct mass dimension \(-3\) of the three-gluon amplitude. As mentioned before, the integrands depend on the normalization constants \(F_{\pm}\). We will see below that the following choice leads to the desired result in the semiclassical limit: \[F_{+}(\Delta,\mu,b)= [\pi\mu\gamma(b^{2})b^{-2b^{2}}]^{\sigma(\Delta-1)}\ \Gamma[2\sigma(\Delta-1)]\, \tag{17}\] \[F_{-}(\Delta,\mu,b)= [\pi\mu\gamma(b^{2})b^{-2b^{2}}]^{\sigma(\Delta+1)-1/2}\ \Gamma[2\sigma(\Delta+1)]. 
\tag{18}\] Then as \(b\to 0\), when \(2\sigma=\Delta-1\) for a positive helicity gluon and \(2\sigma=\Delta+1\) for a negative helicity gluon, the leading term becomes \[\mathcal{A}_{3G}^{(0)}(\omega_{i},z_{i},\bar{z}_{i})=\frac{\pi\,\tilde{\mu}}{b \,M^{2}}f^{a_{1}a_{2}a_{3}}\frac{z_{12}^{3}}{z_{23}z_{31}}I^{(0)}(\omega_{1}, \omega_{2},\omega_{3})\, \tag{19}\] where \[I^{(0)}(\omega_{1},\omega_{2},\omega_{3})= \left(\frac{1}{2\pi i}\right)^{3}\,\int_{c-i\infty}^{c+i\infty}d \Delta_{1}d\Delta_{2}d\Delta_{3}\,M^{\Delta_{1}+\Delta_{2}+\Delta_{3}-1} \omega_{1}^{-\Delta_{1}}\,\omega_{2}^{-\Delta_{2}}\,\omega_{3}^{-\Delta_{3}} \tag{20}\] \[\times\Gamma\left(\tfrac{\Delta_{1}+\Delta_{2}+\Delta_{3}-1}{2} \right)\Gamma\left(\tfrac{\Delta_{1}+\Delta_{3}-\Delta_{2}-1}{2}\right)\Gamma \left(\tfrac{\Delta_{2}+\Delta_{3}-\Delta_{1}-1}{2}\right)\Gamma\left(\tfrac {\Delta_{1}+\Delta_{2}-\Delta_{3}+3}{2}\right)\] \[\times(z_{12}\bar{z}_{12})^{\tfrac{\Delta_{3}-\Delta_{1}-\Delta_{ 2}-3}{2}}(z_{23}\bar{z}_{23})^{\tfrac{\Delta_{1}-\Delta_{2}-\Delta_{3}+1}{2}}( z_{13}\bar{z}_{13})^{\tfrac{\Delta_{2}-\Delta_{1}-\Delta_{3}+1}{2}}\] It is convenient to use the integral representation \[\Gamma(z)=\int_{0}^{+\infty}dt\,e^{-t}\,t^{z-1} \tag{21}\] to rewrite the inverse Mellin transform as \[I^{(0)}(\omega_{1},\omega_{2},\omega_{3})= \frac{1}{M}\,\left(\frac{1}{2\pi i}\right)^{3}\,\int_{c-i\infty}^{ c+i\infty}d\Delta_{1}d\Delta_{2}d\Delta_{3}\,\int_{0}^{+\infty}dt_{0}dt_{1} dt_{2}dt_{3}\,e^{\Delta_{1}x_{1}}e^{\Delta_{2}x_{2}}e^{\Delta_{3}x_{3}} \tag{22}\] \[\times\,e^{-t_{0}-t_{1}-t_{2}-t_{3}}\,\frac{t_{0}^{-\frac{1}{2}}\, t_{1}^{-\frac{1}{2}}\,t_{2}^{-\frac{1}{2}}\,t_{3}^{\frac{3}{2}}}{t_{0}\,t_{1}\,t_{2} \,t_{3}}(z_{12}\bar{z}_{12})^{-\frac{3}{2}}(z_{23}\bar{z}_{23})^{\frac{1}{2}}( z_{13}\bar{z}_{13})^{\frac{1}{2}}\] where \[x_{1} =\frac{1}{2}\ln\left(\frac{M^{2}t_{0}\,t_{1}\,t_{3}\,z_{23}\bar{z }_{23}}{\omega_{1}^{2}\,t_{2}\,z_{12}\bar{z}_{12}\,z_{13}\bar{z}_{13}}\right) \tag{23}\] \[x_{2} =\frac{1}{2}\ln\left(\frac{M^{2}t_{0}\,t_{2}\,t_{3}\,z_{13}\bar{ z}_{13}}{\omega_{2}^{2}\,t_{1}\,z_{12}\bar{z}_{12}\,z_{23}\bar{z}_{23}}\right)\] (24) \[x_{3} =\frac{1}{2}\ln\left(\frac{M^{2}t_{0}\,t_{1}\,t_{2}\,z_{12}\bar{ z}_{12}}{\omega_{3}^{2}\,t_{3}\,z_{23}\bar{z}_{23}\,z_{13}\bar{z}_{13}}\right)\,. \tag{25}\] In terms of these variables, \[t_{1}=\frac{\omega_{1}\,\omega_{3}\,e^{x_{1}+x_{3}}\,|z_{13}|^{2}}{M^{2}t_{0}} \,\ t_{2}=\frac{\omega_{2}\,\omega_{3}\,e^{x_{2}+x_{3}}\,|z_{23}|^{2}}{M^{2}t_{0 }}\,\ t_{3}=\frac{\omega_{1}\,\omega_{2}\,e^{x_{1}+x_{2}}\,|z_{12}|^{2}}{M^{2}t_{0 }}. \tag{26}\] After changing the integration variables from \(t_{1},t_{2},t_{3}\) to \(x_{1},x_{2},x_{3}\) and performing the inverse Mellin transforms, we obtain \[I^{(0)}(\omega_{1},\omega_{2},\omega_{3})=\frac{2\,\omega_{1}\omega_{2}}{ \omega_{3}M^{2}}\int_{0}^{+\infty}\,dt_{0}\,e^{-t_{0}-\frac{Q^{2}}{M^{2}t_{0} }}\,t_{0}^{-2}, \tag{27}\] where \[Q^{2}=\omega_{1}\omega_{2}|z_{12}|^{2}+\omega_{1}\omega_{3}|z_{13}|^{2}+ \omega_{2}\omega_{3}|z_{23}|^{2}\,. \tag{28}\] According to the celestial map, \[\sqrt{\omega_{i}\omega_{j}}z_{ij}=\langle ij\rangle\,\qquad\omega_{i}\omega_{j} |z_{ij}|^{2}=2p_{i}\cdot p_{j}\, \tag{29}\] therefore \[Q^{2}=(p_{1}+p_{2}+p_{3})^{2}\, \tag{30}\] and \(Q=p_{1}+p_{2}+p_{3}\) can be identified as the total momentum of the gluon system.
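The remaining \(t_{0}\) integral has a closed form in terms of a modified Bessel function, which is used in the next step. As a sanity check, the identity can be verified numerically; the sketch below assumes scipy and an arbitrary test value of \(Q^{2}/M^{2}\).

```python
# Numerical check of the Bessel identity used below (Eq. 31):
#   int_0^inf dt t^{-2} exp(-t - q/t) = 2 q^{-1/2} K_1(2 sqrt(q)),
# with q = Q^2 / M^2.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

q = 0.37  # arbitrary positive test value of Q^2 / M^2
lhs, _ = quad(lambda t: t**-2 * np.exp(-t - q / t), 0.0, np.inf)
rhs = 2.0 / np.sqrt(q) * kv(1, 2.0 * np.sqrt(q))
print(lhs, rhs)  # the two values agree to quadrature accuracy
```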
After inserting the result (27) into Eq.(19) and using \[\int_{0}^{+\infty}\,dt_{0}\,e^{-t_{0}-\frac{Q^{2}}{M^{2}t_{0}}}\,t_{0}^{-2}=2 \sqrt{\frac{M^{2}}{Q^{2}}}\,K_{1}\left(2\sqrt{\frac{Q^{2}}{M^{2}}}\right)\,, \tag{31}\] where \(K_{1}\) is a modified Bessel function, we obtain \[\mathcal{A}^{(0)}_{3G}(\omega_{i},z_{i},\bar{z}_{i})=\frac{4\pi\,\tilde{\mu}}{b\,M ^{4}}f^{a_{1}a_{2}a_{3}}\frac{\langle 12\rangle^{3}}{\langle 23\rangle\langle 31 \rangle}\sqrt{\frac{M^{2}}{Q^{2}}}\,K_{1}\left(2\sqrt{\frac{Q^{2}}{M^{2}}} \right). \tag{32}\] Note that Bessel integrals like (31) had already appeared in AdS amplitudes [25]. Here they appear in the inverse Mellin transform of the WZW-Liouville correlator (32), which at this point seems to be different from the three-gluon amplitude of Ref.[1] evaluated in Minkowski space. In the latter case, the amplitude was evaluated in the presence of a dilaton background, which was taken into account by one insertion of the dilaton source. It contained the pole \((Q^{2})^{-1}\) originating from the massless dilaton propagator connecting the source to the gluon system. The single-source approximation, however, can be justified only in the limit of small \(Q^{2}\). In this limit, the Bessel function can be expanded as \[2\sqrt{\frac{M^{2}}{Q^{2}}}\,K_{1}\left(2\sqrt{\frac{Q^{2}}{M^{2}}}\right)= \frac{M^{2}}{Q^{2}}+\ldots\,, \tag{33}\] therefore \[\mathcal{A}^{(0)}_{3G}(\omega_{i},z_{i},\bar{z}_{i})=\frac{2\pi\,\tilde{\mu}} {bM^{2}Q^{2}}f^{a_{1}a_{2}a_{3}}\frac{\langle 12\rangle^{3}}{\langle 23 \rangle\langle 31\rangle}+\ldots \tag{34}\] We want to match this correlator with the tree-level amplitude \[\mathcal{A}^{(0^{\prime})}_{3G}(\omega_{i},z_{i},\bar{z}_{i})=\frac{g}{\Lambda \Lambda^{\prime}}f^{a_{1}a_{2}a_{3}}\frac{1}{Q^{2}}\frac{\langle 12\rangle^{3}}{ \langle 23\rangle\langle 31\rangle}+\ldots\, \tag{35}\] where \(g\) is the Yang-Mills coupling constant, \(\Lambda^{-1}\) is the canonical coupling of the dilaton to the gauge field strength and \(\Lambda^{\prime}\) determines the strength of the point-like dilaton source, \(\mathcal{J}(x)=\delta^{(4)}(x)/\Lambda^{\prime}\). The semiclassical limit of the Liouville correlator is equal to the tree-level amplitude provided that the Yang-Mills and dilaton parameters are related to the Liouville parameters and the renormalization scale in the following way: \[\frac{gM^{2}}{\Lambda\Lambda^{\prime}}=\frac{2\pi\,\tilde{\mu}}{b}. \tag{36}\] The relation between Liouville correlators and Yang-Mills amplitudes can be extended beyond the semiclassical limit. The limit of \(Q^{2}\to 0\) singles out gluon amplitudes with one insertion of the dilaton source. These amplitudes contain the dilaton propagator and the coupling of the off-shell dilaton to the gluon system. It is well known, however, that the dilaton decouples in the zero-momentum limit [26; 27; 28]. Namely, the Feynman matrix element with one zero-momentum dilaton is given by the Feynman matrix element evaluated in the absence of dilatons - in our case in pure Yang-Mills theory.
This observation leads to _Proposition:_ \[\mathcal{M}_{3G}(\omega_{i},z_{i},\bar{z}_{i})=\lim_{Q\to 0}\, \frac{Q^{2}}{(2\pi i)^{3}}\!\!\int_{c-i\infty}^{c+i\infty}d\Delta_{1}d\Delta_{2 }d\Delta_{3}\,M^{\Delta_{1}+\Delta_{2}+\Delta_{3}-1}\omega_{1}^{-\Delta_{1}} \,\omega_{2}^{-\Delta_{2}}\,\omega_{3}^{-\Delta_{3}}\\ \times\left\langle O_{\Delta_{1}}^{-a_{1}}(z_{1},\bar{z}_{1})O_{ \Delta_{2}}^{-a_{2}}(z_{2},\bar{z}_{2})O_{\Delta_{3}}^{+a_{3}}(z_{3},\bar{z}_ {3})\right\rangle\,, \tag{37}\] where \({\cal M}_{3G}\) is the _exact_ three-gluon MHV Feynman matrix element (of mass dimension 1) in Yang-Mills theory. The equation should be supplemented with a prescription for how to replace two-dimensional Liouville parameters on the r.h.s. by four-dimensional Yang-Mills parameters on the l.h.s. All that we can extract at the leading perturbative order is written in Eq.(36). We need an exact and more direct relation, however, between the Liouville and Yang-Mills couplings. It can be extracted by going beyond the leading order on the Yang-Mills and Liouville sides of Eq.(37). In Yang-Mills theory, next-to-leading corrections originate from one-loop diagrams and are of order \({\cal O}(g^{2})\) as compared to the tree level. In Liouville theory, they are of order \({\cal O}(b^{2})\) and originate from various sources. First of all, the \(\Upsilon\) function has been expanded in Ref.[24] to the order \({\cal O}(b^{2}\ln b^{2})\) only, and more work is needed to reach higher precision. Furthermore, there is a similar uncertainty in the normalization factors \(F_{\pm}\). In addition, the DOZZ formula is written in terms of the exponents \(\sigma_{i}\) while the inverse Mellin transforms involve integrations over the dimensions \(\Delta_{i}\). Eq.(5) implies that at the subleading order \[\sigma_{1} =\frac{\Delta_{1}+1}{2}+\frac{b^{2}}{4}(\Delta_{1}+1)(\Delta_{1} -1)\] \[\sigma_{2} =\frac{\Delta_{2}+1}{2}+\frac{b^{2}}{4}(\Delta_{2}+1)(\Delta_{2} -1)\] \[\sigma_{3} =\frac{\Delta_{3}-1}{2}+\frac{b^{2}}{4}(\Delta_{3}-1)(\Delta_{3} -3) \tag{38}\] We leave a full analysis of subleading Liouville corrections to future work; nevertheless, already at this point we can get a preliminary insight by discussing some consequences of Eq.(38). After repeating the steps leading to Eq.(22), but now with the exponents related to dimensions by Eq.(38), we obtain \[I^{(1)}(\omega_{1},\omega_{2},\omega_{3})= \frac{1}{M}\,\left(\frac{1}{2\pi i}\right)^{3}\,\int_{c-i\infty}^ {c+i\infty}d\Delta_{1}d\Delta_{2}d\Delta_{3}\,\int_{0}^{+\infty}dt_{0}dt_{1} dt_{2}dt_{3}\,e^{\Delta_{1}x_{1}}e^{\Delta_{2}x_{2}}e^{\Delta_{3}x_{3}}\] \[\times\,e^{-t_{0}-t_{1}-t_{2}-t_{3}}\,\frac{t_{0}^{-\frac{1}{2}} \,t_{1}^{-\frac{1}{2}}\,t_{2}^{-\frac{1}{2}}\,t_{3}^{\frac{3}{2}}}{t_{0}\,t_{ 1}\,t_{2}\,t_{3}}(z_{12}\bar{z}_{12})^{-\frac{3}{2}}(z_{23}\bar{z}_{23})^{ \frac{1}{2}}(z_{13}\bar{z}_{13})^{\frac{1}{2}} \tag{39}\] \[\times\left(\frac{t_{0}\,t_{1}\,t_{3}}{t_{2}}\right)^{\frac{b^{2 }}{4}(\Delta_{1}+1)(\Delta_{1}-1)}\left(\frac{t_{0}\,t_{2}\,t_{3}}{t_{1}} \right)^{\frac{b^{2}}{4}(\Delta_{2}+1)(\Delta_{2}-1)}\left(\frac{t_{0}\,t_{1} \,t_{2}}{t_{3}}\right)^{\frac{b^{2}}{4}(\Delta_{3}-1)(\Delta_{3}-3)}\] The difference between the present case and Eq.(22) is that the integrals over dimensions \(\Delta_{i}\) become Gaussian instead of delta functions.
After performing these integrals and changing the variables from \(t_{1},t_{2},t_{3}\) to \(x_{1},x_{2},x_{3}\), we obtain \[I^{(1)}(\omega_{1},\omega_{2},\omega_{3})=\frac{2\,\omega_{1} \omega_{2}}{M^{2}\,\omega_{3}}\,e^{\left[-\frac{b^{2}}{4}\ln(\frac{2\,p_{1}\cdot p _{2}}{M^{2}})-\frac{b^{2}}{4}\ln(\frac{2\,p_{2}\cdot p_{3}}{M^{2}})-\frac{b^{2 }}{4}\ln(\frac{2\,p_{1}\cdot p_{3}}{M^{2}})\right]}\] \[\times\int_{-\infty}^{+\infty}dx_{1}dx_{2}dx_{3}\,\prod_{i=1}^{3} \frac{1}{\sqrt{\epsilon_{i}(x_{i})}}e^{\frac{-x_{i}^{2}}{\epsilon_{i}(x_{i})} +x_{i}(1-\frac{b^{2}}{2})} \tag{40}\] \[\times\int_{0}^{+\infty}dt_{0}\,t_{0}^{-2}\,e^{-t_{0}-\frac{e^{x_{ 1}+x_{3}}\,\omega_{1}\omega_{3}\,|z_{13}|^{2}}{M^{2}\,t_{0}}-\frac{e^{x_{2}+x_{3 }}\,\omega_{2}\omega_{3}\,|z_{23}|^{2}}{M^{2}\,t_{0}}-\frac{e^{x_{1}+x_{2}}\,\omega_{1 }\omega_{2}\,|z_{12}|^{2}}{M^{2}\,t_{0}}}\, \tag{41}\] where \[\epsilon_{1}(x_{1})= \pi b^{2}\Big{[}\ln\Big{(}\frac{2\,p_{1}\cdot p_{3}\,p_{1}\cdot p_{2 }}{M^{2}\,p_{2}\cdot p_{3}}\Big{)}+2x_{1}\Big{]}, \tag{42}\] \[\epsilon_{2}(x_{2})= \pi b^{2}\Big{[}\ln\Big{(}\frac{2\,p_{2}\cdot p_{3}\,p_{1}\cdot p_{ 2}}{M^{2}\,p_{1}\cdot p_{3}}\Big{)}+2x_{2}\Big{]},\] \[\epsilon_{3}(x_{3})= \pi b^{2}\Big{[}\ln\Big{(}\frac{2\,p_{1}\cdot p_{3}\,p_{2}\cdot p _{3}}{M^{2}\,p_{1}\cdot p_{2}}\Big{)}+2x_{3}\Big{]}.\] Since \(\epsilon_{i}(x_{i})\sim b^{2}\), we can use the expansion \[\frac{1}{\sqrt{4\pi\epsilon}}e^{\frac{-x^{2}}{4\epsilon}}=e^{\epsilon\partial _{x}^{2}}\,\delta(x) \tag{43}\] which yields the delta functions fixing \(x_{i}=0\) at the leading order. After expanding the remaining factors, we obtain \[I^{(1)}(\omega_{1},\omega_{2},\omega_{3}) =I^{(0)}(\omega_{1},\omega_{2},\omega_{3}) \tag{44}\] \[\times\Big{[}1-\frac{b^{2}}{4}\ln\Big{(}\frac{2\,p_{1}\cdot p_{2 }}{M^{2}}\Big{)}-\frac{b^{2}}{4}\ln\Big{(}\frac{2\,p_{2}\cdot p_{3}}{M^{2}} \Big{)}-\frac{b^{2}}{4}\ln\Big{(}\frac{2\,p_{1}\cdot p_{3}}{M^{2}}\Big{)} \Big{]}+\ldots\.\] The presence of logarithmic corrections in Liouville theory indicates that the arbitrary mass scale \(M\), introduced as a parameter linking Liouville and Yang-Mills theories, plays the role of the renormalization scale in four dimensions. Assuming that this is indeed the case, we can extract a more precise relation between the Liouville and Yang-Mills couplings by comparing Eq.(44) with the one-loop correction to the scattering amplitude of one dilaton with three gluons. The one-loop corrections to the dilaton-gluon amplitudes have been studied before in Refs.[29; 30; 31]. We are interested in the ultraviolet divergent part only, which after renormalization leads to the logarithmic running of the gauge coupling \(g(Q^{2})\) and of the dilaton coupling \(1/\Lambda\). For three gluons [29; 30; 31]: \[\frac{g}{\Lambda}(Q^{2})=\frac{g}{\Lambda}(M^{2})\left[1-\frac{3g^{2}(M^{2})} {2(4\pi)^{2}}\beta_{0}\,\ln\Big{(}\frac{Q^{2}}{M^{2}}\Big{)}\right]\,, \tag{45}\] where \(\beta_{0}=11c_{A}/3\) (\(c_{A}\) is the Casimir operator in the adjoint representation of the gauge group) is the one-loop coefficient of the Yang-Mills beta function. By comparing the renormalization scale dependence of Eqs. (44) and (45), we find \[b^{2}=\frac{\beta_{0}\,g^{2}(M)}{8\pi^{2}}. \tag{46}\] This relation should be taken with a grain of salt though, because it is based on only a partial analysis of the subleading Liouville corrections. We admit that the Proposition (37), together with the relation (46), contains very strong statements. Does it make sense to talk about _exact_ gluon scattering amplitudes at all?
Evidently, Yang-Mills theory confines gluons and has a mass gap. Nevertheless, gluon-like states (jets) are physically observable and, according to our proposal, they are described by light Liouville operators. Massive glueballs are probably described by some other type of operators and their amplitudes have a more string-like character. ### Acknowledgements TRT is supported by the National Science Foundation under Grants Number PHY-1913328 and PHY-2209903, by the NAWA Grant "Celestial Holography of Fundamental Interactions" and by the Simons Foundation Grant MP-SCMPS-00001550-05. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. BZ is supported by the Royal Society.
2302.03551
Tension Estimation and Localization for a Tethered Micro Aerial Robot
This work focuses on the study of tethered flights of a micro quadcopter, with the aim of supplying continuous power to a small-sized aerial robot. Multiple features for facilitating the interaction between a tethered micro quadcopter and a ground base are described in this paper. Firstly, a tether model based on the catenary curve is presented that describes a quadcopter tethered to a point in space. Furthermore, a method capable of estimating the tension applied to the quadcopter, based only on the inertial information from the IMU sensors and the motor thrusts, is presented. Finally, a novel method for localizing the quadcopter by exploiting the tension imposed by the tether and the shape of the tether is described. The proposed methods are evaluated both in simulation and on a real-world prototype.
Ricardo Martins, Meysam Basiri
2023-02-07T16:01:15Z
http://arxiv.org/abs/2302.03551v1
# Tension Estimation and Localization for a Tethered Micro Aerial Robot ###### Abstract **This work focuses on the study of tethered flights of a micro quadcopter, with the aim of supplying continuous power to a small-sized aerial robot. Multiple features for facilitating the interaction between a tethered micro quadcopter and a ground base are described in this paper. Firstly, a tether model based on the catenary curve is presented that describes a quadcopter tethered to a point in space. Furthermore, a method capable of estimating the tension applied to the quadcopter, based only on the inertial information from the IMU sensors and the motor thrusts, is presented. Finally, a novel method for localizing the quadcopter by exploiting the tension imposed by the tether and the shape of the tether is described. The proposed methods are evaluated both in simulation and on a real-world prototype.** ## 1 Introduction Micro aerial robots are known for their versatility and ability to gather information from higher altitudes, allowing them to have many important applications [1, 2, 3, 4]. However, such robots have a low payload capacity, and the small on-board battery limits them to short flight times. In some applications where the robot is only required to operate inside a limited air space, such as to inspect a solar panel installation [3], an industrial structure [5] or a vessel [6], or to operate inside a house [7], continuous power can be supplied through a tether that is attached to a ground station, allowing beyond-battery missions [8]. The ground station can also be mobile, such as a mobile service robot [9] or an unmanned ground vehicle [10], to further extend the operation area of the tethered aerial robot and to perform missions cooperatively with the aerial/ground multi-robot system [11]. The integration of a tethered micro aerial robot can also enhance the capabilities of mobile ground robots by providing an aerial perspective of the surrounding environment, facilitating tasks such as path planning and obstacle avoidance [12, 13]. To implement a tethered solution, the cable characteristics for the tether must be carefully considered, taking into account the resistance and the varying power drop across the cable [14, 15]. Although it is possible to power an aerial robot through the tether alone, a small on-board battery could also be used to provide robustness against power blackouts or damage to the tether [16]. Ground robots, with their ability to carry large payloads, including batteries, sensors, and computers, offer a unique opportunity for collaboration with aerial robots [10]. By combining their strengths, the two types of robots can undertake tasks that would otherwise be impossible individually. Aerial robots, for example, can provide valuable aerial sensing and enhance the ground robots' localization capabilities [17]. Conversely, ground robots can serve as mobile charging stations or power sources, supplying energy to their aerial counterparts via an attached tether [17, 18, 19]. Despite the benefits mentioned above, there are multiple challenges that must be considered to facilitate autonomous operation of tethered aerial robots. The constraints in the motion of the robot imposed by the tether and the varying tension that the tether applies to the robot must be considered by the flight controller. For this purpose, it is important to have an accurate estimation of this varying tension.
In this paper, we describe a method to estimate the tension on a tethered aerial robot based only on the IMU sensor readings. As the IMU measurements have high noise levels, which prevent them from being used directly for accurate tension estimation, a filtering approach is used to assist with the estimation. Furthermore, we show that the robot position estimation can be improved by exploiting the properties of the tether and using the tension applied to the UAV. The tension on the end-points of the tether is related to its shape, which means that the relation between the tether model and the tension applied to the quadcopter can be expressed mathematically. ## 2 Proposed Methodologies ### Characterization of the shape of the tether #### 2.1.1 Catenary Curve In the case of a non-rigid tether connecting the quadcopter to a ground base, the shape of the tether approximately outlines a catenary curve. The catenary model consists of a hanging cable, with no stiffness, sagging under its own weight and supported only by its ends (see figure 1). The point \((x_{1},\,y_{1})\) corresponds to the origin and the point (\(x_{2}\), \(y_{2}\)) to the position of the quadcopter. The shape of the catenary can be defined according to a mathematical model, in which expression 1 presents the **equation of the catenary**. \[y=a.cosh\Big{(}\frac{x-x_{0}}{a}\Big{)}+C \tag{1}\] Parameter \(\langle x_{0}\rangle\) is the abscissa of the lowest point. Parameter \(\langle a\rangle\) corresponds to the \(y\) coordinate of the lowest point of the curve (\(x=x_{0}\)) with respect to the catenary frame \(\{C\}\), and it must always be positive.1 Parameter \(\langle C\rangle\) is an offset between the world frame \(\{W\}\) and the catenary frame \(\{C\}\), which depends on the tether's parameter \(\langle a\rangle\) and the \(y\) coordinate of the lowest point with respect to the world frame (\(y_{0}\)). Footnote 1: A negative value of parameter \(\langle a\rangle\) would only have a physical meaning if the shape of the catenary was concave instead of convex. \[C=y_{0}-a \tag{2}\] Expressions 3 and 4 introduce the tether parameters \(\langle s_{1}\rangle\) and \(\langle s_{2}\rangle\), which represent the arc-length from the tether's lowest point to the origin and to the UAV, respectively. \[s_{1}=a.sinh\Big{(}\frac{|x_{1}-x_{0}|}{a}\Big{)} \tag{3}\] \[s_{2}=a.sinh\Big{(}\frac{|x_{2}-x_{0}|}{a}\Big{)} \tag{4}\] Equations 5 and 6 respectively present the horizontal and vertical tension on the end-points of the catenary curve. Both the horizontal and vertical tension depend on the tether's parameters - \(\langle a\rangle\) or \(\langle s\rangle\) - and on the weight of the tether, where \(\omega\) is the weight per length unit [20]. \[H=\omega.a \tag{5}\] \[T_{V}=\omega.s \tag{6}\] The absolute value of the tension results from the Euclidean norm of the horizontal and vertical tensions. \[|T|=\sqrt{T_{V}^{2}+H^{2}} \tag{7}\] Since the quadcopter flies in an \(R^{3}\) space, the horizontal tension (\(H\)) can be decomposed into a component along the \(x\) direction and another along the \(y\) direction, as shown in figure 2. The components of the tension along the \(x\) and \(y\) directions are given by equations 8 and 9, respectively. \[Tx=cos(\beta).|H| \tag{8}\] \[Ty=sin(\beta).|H| \tag{9}\] The parameters of the catenary cannot be mathematically computed by only knowing the coordinates of the two tether end-points.
Thus, these parameters - \(\langle a\rangle\), \(\langle x_{0}\rangle\), \(\langle C\rangle\), \(\langle s_{1}\rangle\), and \(\langle s_{2}\rangle\) - are obtained by gathering additional information from knowing the full length of the tether. This assumption leads to a possible and determined system of equations 10-14, which can be numerically solved to generate the curve parameters. \[y_{1}=a.cosh(\frac{x_{1}-x_{0}}{a})+C \tag{10}\] \[y_{2}=a.cosh(\frac{x_{2}-x_{0}}{a})+C \tag{11}\] \[s_{total}=s_{2}+s_{1} \tag{12}\] \[s_{1}=a.sinh\Big{(}\frac{|x_{1}-x_{0}|}{a}\Big{)} \tag{13}\] \[s_{2}=a.sinh\Big{(}\frac{|x_{2}-x_{0}|}{a}\Big{)} \tag{14}\] Subtracting equation 10 from equation 11, and making use of the hyperbolic cosine properties, it follows that \[\Delta Y=2a.sinh(\frac{\Delta x}{a}).sinh(\frac{x_{average}-x_{0}}{a}), \tag{15}\] where \(\Delta x=\frac{x2-x1}{2}\), \(x_{average}=\frac{x2+x1}{2}\) and \(\Delta Y=y_{2}-y_{1}\). The expression for the length of the tether \(\langle s_{total}\rangle\) is re-written by substituting equations 13 and 14 into expression 12, and using the hyperbolic sine properties: \[s_{total}=2a.sinh(\frac{\Delta x}{a}).cosh(\frac{x_{average}-x_{0}}{a}). \tag{16}\] Equation 17 presents a useful relation between \(\langle x_{0}\rangle\) and \(\langle a\rangle\), which results from the division of \(\Delta Y\) by \(\langle s_{total}\rangle\). \[x_{0}=x_{average}-a.tanh^{-1}(\frac{\Delta Y}{s_{total}}) \tag{17}\] The insertion of equation 17 into equation 15 yields equation 18, and using the Newton-Raphson method on the latter produces the value of parameter \(\langle a\rangle\). However, when \(\Delta Y=0\) it is not possible to compute parameter \(\langle a\rangle\) this way, since equation 18 then holds for any \(\langle a\rangle\). Nevertheless, from the knowledge that \(\Delta Y=0\) it follows that \(x_{0}\) is known and corresponds to \(x_{average}\), which means that equation 16 can be used to compute the parameter \(\langle a\rangle\). An alternative approach is to force the value of \(\Delta Y\) to be non-zero by adding a small offset. \[\Delta Y-2.a.sinh(\frac{\Delta x}{a}).sinh(tanh^{-1}(\frac{\Delta Y}{s_{total }}))=0 \tag{18}\]

Figure 1: Catenary curve and related parameters; world \(\{W\}\) and catenary \(\{C\}\) frames.

Figure 2: Horizontal tension decomposition.

A Taylor series expansion provides the initial estimate of parameter \(\langle a\rangle\): \[sinh(x)=\frac{e^{x}-e^{-x}}{2}=\sum_{n=0}^{\infty}\frac{x^{2n+1}}{(2n+1)!}=x+ \frac{x^{3}}{3!}+\frac{x^{5}}{5!}+... \tag{19}\] By applying this approximation to expression 18, one can re-write this last equation as a result of a \(5^{th}\) order approximation for the hyperbolic sine, according to equation 20. \[\bigg{(}\frac{\Delta Y}{2.sinh\big{(}tanh^{-1}(\frac{\Delta Y}{s_{tot}}) \big{)}}-\Delta x\bigg{)}a^{4}-\frac{\Delta x^{3}}{3!}a^{2}-\frac{\Delta x^{5 }}{5!}=0 \tag{20}\] The substitution \(\alpha=a^{2}\) reduces equation 20 to the \(2^{nd}\) order. Furthermore, the quadratic formula presented in equation 21 produces the solution of this \(2^{nd}\) order expression, \[\alpha=\frac{-b\pm\sqrt{b^{2}-4.a.c}}{2.a}, \tag{21}\] where \[a=\frac{\Delta Y}{2.sinh(tanh^{-1}(\frac{\Delta Y}{s_{total}}))}-\Delta x,\ b= \frac{-\Delta x^{3}}{3!},\ c=\frac{-\Delta x^{5}}{5!}. \tag{22}\] (Here \(a\), \(b\) and \(c\) denote the quadratic coefficients of equation 20, not the catenary parameter.) Reverting the variable substitution produces the desired value of \(\langle a\rangle\), according to equation 23.
\[a=\sqrt{\alpha}, \tag{23}\] Since equation 20 is a \(4^{th}\) order equation it produces 4 roots - in the relevant domain, two of them are complex and two are real, one positive and one negative. Only the positive real root has a physical meaning, and so it is the only one to be taken into account. Substituting \(\langle x_{0}\rangle\) and \(\langle a\rangle\) into equation 10 or into equation 11 allows the \(\langle C\rangle\) parameter to be computed.

#### 2.1.2 Validation of the catenary model

The malleability of the tether is one of the most important features to ensure that the tether outlines a catenary curve. The validation of the catenary model was done by overlaying the real silicon tether with the theoretical model of the catenary curve, as shown in figure 3.

Figure 3: Catenary model validation.

### _Tension Estimation_

The tension estimate applied to the quadcopter is computed using the quadcopter's thrust and the inertial information from its IMU sensors. However, these sensor readings present a high level of noise, which means that the tension applied to the quadcopter cannot be determined directly with the desired accuracy. To filter the undesired noise, and assuming that the noise is white Gaussian, a Kalman filter was implemented. Equations 24 and 25 describe a linear system, in which \(w_{k}\) and \(v_{k}\) are the process and the observation noise, respectively; \(x_{k}\) is the system's state vector; \(y_{k}\) is the observation vector of the system's states; \(u_{k}\) is the system's input. \[x_{k}=Ax_{k-1}+Bu_{k}+w_{k} \tag{24}\] \[y_{k}=Cx_{k}+D+v_{k} \tag{25}\] The Kalman filter starts by propagating the process model, where \(\hat{x}_{k}^{-}\) is the state estimate. \[\hat{x}_{k}^{-}=A\hat{x}_{k-1}+Bu_{k} \tag{26}\] Afterwards, it uses the information from the sensor measurements to improve the estimate obtained from the model propagation. The final state estimate is given by: \[\hat{x}_{k}=\hat{x}_{k}^{-}+K_{k}(y_{k}-C\hat{x}_{k}^{-}) \tag{27}\] The variable \(K_{k}\) is the Kalman gain, which adjusts the relation between the process and observation estimates. The tension applied to the quadcopter may be produced through human interaction, which implies infinite possibilities for the way that the wire is pulled. The implemented solution considers a model where the tension remains constant, which can be extended to situations where the wire is not abruptly pulled and does not have considerable oscillations. The state estimate (\(\hat{x}_{k}\)) is a \(\Re^{3}\) vector, including the tension along the \(x\), \(y\) and \(z\) directions, and \(y_{k}\) is the observation vector, which includes the tension measurements. \[\hat{x}_{k}=\begin{bmatrix}Tx_{k}\\ Ty_{k}\\ Tz_{k}\end{bmatrix}\qquad y_{k}=\begin{bmatrix}Tx_{k}^{obs}\\ Ty_{k}^{obs}\\ Tz_{k}^{obs}\end{bmatrix} \tag{28}\] The tension measurements \(T^{obs}\) are obtained through equation 29, where \(\vec{a}\) corresponds to the acceleration vector, \(\vec{g}\) to the gravity vector, \(R(\eta)\) to the rotation matrix between the world and the quadcopter frames (see figure 4), and \(F_{p}\) to the total thrust of the quadcopter. The superscript \({}^{obs}\) refers to the on-board sensor readings of the quadcopter. \[T^{obs}=m(\vec{a}^{obs}+\vec{g})-R(\eta)^{obs}F_{p}^{obs}+F_{ext} \tag{29}\]

### Position Estimation

This section2 uses the relation between the tension that the catenary model exerts on the quadcopter and its parameters to compute the position estimate of the quadcopter.
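Before turning to the position estimate, which reuses the catenary relations of Section 2.1.1, a minimal numerical sketch of the endpoint-plus-length fit (equations 10-23) may be helpful. It is an illustration only: a bracketed root find stands in for the Newton-Raphson iteration with the quartic initial guess described above, and the function name and example values are our own.

```python
import numpy as np
from scipy.optimize import brentq

def fit_catenary(p1, p2, s_total):
    """Fit catenary parameters (a, x0, C) from the two end points and the
    known tether length, following equations (10)-(18) of the text."""
    (x1, y1), (x2, y2) = p1, p2
    dx = 0.5 * (x2 - x1)            # half the horizontal span
    dY = y2 - y1
    x_avg = 0.5 * (x1 + x2)
    # Equation (18) is equivalent to: 2 a sinh(dx/a) = sqrt(s_total^2 - dY^2)
    chord = np.sqrt(s_total**2 - dY**2)
    assert chord > 2.0 * dx, "tether must be longer than the straight chord"
    g = lambda a: 2.0 * a * np.sinh(dx / a) - chord   # monotone in a
    a = brentq(g, 1e-3, 1e6)                          # bracketed root
    x0 = x_avg - a * np.arctanh(dY / s_total)         # equation (17)
    C = y1 - a * np.cosh((x1 - x0) / a)               # equation (10)
    return a, x0, C

# Example: a 1.6 m tether anchored at the origin, quadcopter at (1.0, 0.75)
a, x0, C = fit_catenary((0.0, 0.0), (1.0, 0.75), s_total=1.6)
print(a, x0, C)
```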
The curve parameters \(\langle a\rangle\) and \(\langle s_{2}\rangle\) are computed according to equations 30 and 31, where the horizontal \(H\) and vertical \(T_{v}\) tensions are obtained using the methods presented in section 2.2. Footnote 2: In section 2.1 the catenary model was evaluated for a \(\Re^{2}\) space. In this section, the quadcopter is moving in an \(\Re^{3}\) space, and so the \(y\) and \(x\) coordinates in section 2.1 are replaced by \(z\) and \(r\) coordinates, where \(r=\sqrt{x^{2}+y^{2}}\) (see figure 4). \[a=\frac{H}{\omega} \tag{30}\] \[s_{2}=\frac{T_{v}}{\omega} \tag{31}\] The curve parameters \(\langle a\rangle\) and \(\langle s_{2}\rangle\) are then used to compute the spatial coordinates of the end of the tether attached to the UAV, along with the knowledge of the tether's full length \(\langle s_{tot}\rangle\). \[s_{tot}=s_{2}+s_{1} \tag{32}\] This way, replacing \(x_{0}\) by \(r_{0}\) and \(x_{1}\) by \(r_{i}\) in equation 13, and using the relation presented in equation 32, one can derive equation 33. \[r_{0}=r_{i}+a.sinh^{-1}(\frac{s_{tot}-s_{2}}{a}) \tag{33}\] Furthermore, replacing \(x_{0}\) by \(r_{0}\) and \(x_{2}\) by \(r\) in equation 14, the radial distance \(r\) comes as: \[r=r_{0}+a.sinh^{-1}(\frac{s_{2}}{a}) \tag{34}\] Finally, equation 35 uses the catenary expression to compute the quadcopter's altitude, \[z=a.cosh(\frac{r-r_{0}}{a})+C \tag{35}\] where equation 36 computes parameter \(\langle C\rangle\). \[C=z_{i}-a.cosh(\frac{r_{i}-r_{0}}{a}) \tag{36}\] If the horizontal tension \(H\) is null, equations 34 and 35 have a mathematical indetermination of type \(0\times\infty\). The limits of those equations are presented in equations 37 and 38. \[\lim_{a\to 0}z=\lim_{a\to 0}a.cosh(\frac{r}{a})+C=z_{i}+|s_{2}|-|s_{1}| \tag{37}\] \[\lim_{a\to 0}r=\lim_{a\to 0}a.sinh^{-1}(\frac{s_{1}}{a})+a.sinh^{-1}(\frac{s_{2}}{a})=0 \tag{38}\]

## 3 Experimental results

### Estimation of the vertical tension

The validation of the vertical tension estimate is done by performing a vertical tethered takeoff, which represents the simplest experiment for computing its ground-truth value \(Tv_{gt}\). The latter is calculated through the height of the quadcopter \(z\) and the weight per length unit of the tether \(\omega\). \[Tv_{gt}=\omega.z \tag{39}\] Figure 6 illustrates the vertical tension estimates. The tension estimation procedure does not take into account the force that the ground exerts on the quadcopter while the propellers are not spinning, i.e., while they are not counterbalancing its weight. Thus, in the initial instants, the vertical tension estimate does not correspond to its ground-truth values, and the 0.3 N value approximately corresponds to the weight of the quadcopter.

Figure 4: World frame \(W\), body frame \(B\), radial vector \(\vec{R}_{r}\), and its horizontal projection \(\vec{r}\).

Figure 5: Illustration of the vertical tension estimation experiment.

### Estimation of the horizontal tension

To validate the horizontal tension estimate, its ground-truth value is computed using a small mass (coin) attached to a wire. Figure 7 illustrates the scheme of the test-bench used to compute the ground-truth values for the horizontal tension. Equation 40 computes the ground-truth value of the horizontal tension, in which \(T\) corresponds to the weight of the mass and \(\gamma\) is computed according to equation 41 - \(z_{q}\) and \(z_{a}\) are the height of the quadcopter and the height of the vertical arm, respectively, and \(r_{q}\) is the quadcopter's radial coordinate, assuming the inertial frame presented in figure 7.
\[H=cos(\gamma).T \tag{40}\] \[\gamma=tan^{-1}(\frac{z_{q}-z_{a}}{r_{q}}) \tag{41}\] Figure 8(a) displays the tension estimate and the ground-truth value for an attached mass of 4.1 g, and figure 8(b) shows the result for a tethered vertical takeoff, where the tether direction is mainly vertical. As figure 8(b) displays, the horizontal tension estimate is nearly null - less than 0.01 N.

### Tension following

The tension applied to the quadcopter is estimated in real time, according to section 2.2. The goal position of the quadcopter changes if the estimated tension is greater than a pre-defined threshold. When this occurs, the goal position of the quadcopter is successively updated to its current position, making it follow the pull's direction. When the tension applied to the quadcopter is no longer greater than the pre-defined threshold, the goal position stops being updated and the quadcopter remains hovering at its last position. To observe the behaviour of the _tension following_ feature a few videos were taken3, where figures 9 and 10 correspond to screenshots of those videos. Footnote 3: The full videos are presented on Youtube here. Moreover, the implementation of the _tension following_ feature can also be used for the landing process. Given that, after the first tug, a flag is activated indicating that the _tension following_ feature is on. Thus, if the quadcopter flies under a certain height, the motors are turned off. Without using an external motion system, the same principle can be applied using a distance sensor, which deactivates the motors if the distance to the landing platform is smaller than a threshold. Figure 10 illustrates the mentioned landing process.

### Position estimation

To avoid the modeling of the tether's oscillations, the results presented throughout this section concern hovering flights. Additionally, the angle \(\beta\), which relates the radial distance with the \(x\) and \(y\) coordinates, was also a source of inaccuracy. It was initially assumed that the \(\beta\) angle was known and was computed using the Mocap system. In practice, the angle \(\beta\) could be computed using a visual or mechanical system on the ground controller that could indicate the direction of the tether.

Figure 6: Ground-truth values and estimates of the vertical tension applied to the UAV.

Figure 7: Test-bench for computing the horizontal tension.

Figure 9: Screenshots of the quadcopter following the tension's direction.

Figure 10: Screenshots of the landing of the quadcopter using the _tension following_ feature.

Figure 11 presents a pair of experiments in which the height corresponds to 1 m and the radial distances are similar between them. In a second set of experiments (figure 12) the radial distances are also similar between them, but the estimation of the quadcopter's height is evaluated for two different heights - 0.5 m and 1.2 m. A third set of experiments (figure 13) is performed with a wider range for the value of the \(y\) coordinate. In the initial instants the Kalman filter assumes that the tensions \(T_{x}\) and \(T_{y}\) are null, which means that the length \(\langle s_{2}\rangle\) and the tether parameter \(\langle a\rangle\) also start at zero (see equations 42 and 43). \[s_{2}=\frac{T_{v}}{\omega} \tag{42}\] \[a=\frac{H}{\omega} \tag{43}\] According to the expression deduced in equation 38, the initial estimated position of the radial coordinate is null, implying that the \(x\) and \(y\) coordinates are also null.
On the one hand, equation 37 allows us to infer that the initial estimate of the altitude corresponds to \(z_{i}-|s_{1}|\), since \(s_{2}\) is zero. On the other hand, the tether's total length is given by equation 32, which means that \(s_{1}=s_{tot}\) for a null length \(s_{2}\). Equation 44 presents the initial estimate of the altitude over this set of experiments. \[z=z_{i}-|s_{1}|=0.754-1.6=-0.846 \tag{44}\]

## 4 Conclusions

This work presented a method to estimate the tension applied to a quadrotor by using measurements from the IMU sensors. Due to the high level of noise, two Kalman-based filtering processes were introduced. Furthermore, the tension estimate was used to present alternative ways of controlling the quadcopter and to improve the position estimate of a tethered quadcopter. The first flight control strategy used the tension estimate to update the quadcopter's position. Nevertheless, the position of the quadcopter must be known through an external motion system. Aiming to develop a control strategy that does not need to know the position of the UAV, a novel methodology that uses the tether's shape and the tension estimate was introduced. Moreover, this study presented a method to estimate the position of the quadcopter based on the tension that the tether applies to the UAV and its shape.
2309.01034
Discrete-to-continuum models of pre-stressed cytoskeletal filament networks
We introduce a mathematical model for the mechanical behaviour of the eukaryotic cell cytoskeleton. This discrete model involves a regular array of pre-stressed protein filaments that exhibit resistance to enthalpic stretching, joined at crosslinks to form a network. Assuming that the inter-crosslink distance is much shorter than the lengthscale of the cell, we upscale the discrete force balance to form a continuum system of governing equations and deduce the corresponding macroscopic stress tensor. We use these discrete and continuum models to analyse the imposed displacement of a bead placed in the domain, characterising the cell rheology through the force-displacement curve. We further derive an analytical approximation to the stress and strain fields in the limit of small bead radius, predicting the net force required to generate a given deformation and elucidating the dependency on the microscale properties of the filaments. We apply these models to networks of the intermediate filament vimentin and demonstrate good agreement between predictions of the discrete, continuum and analytical approaches. In particular, our model predicts that the network stiffness increases sublinearly with the filament pre-stress and scales logarithmically with the bead size.
J. Köry, N. A. Hill, X. Y. Luo, P. S. Stewart
2023-09-02T22:42:59Z
http://arxiv.org/abs/2309.01034v1
# Discrete-to-continuum models of pre-stressed cytoskeletal filament networks ###### Abstract We introduce a mathematical model for the mechanical behaviour of the eukaryotic cell cytoskeleton. This discrete model involves a regular array of pre-stressed protein filaments that exhibit resistance to enthalpic stretching, joined at crosslinks to form a network. Assuming that the inter-crosslink distance is much shorter than the lengthscale of the cell, we upscale the discrete force balance to form a continuum system of governing equations and deduce the corresponding macroscopic stress tensor. We use these discrete and continuum models to analyse the imposed displacement of a bead placed in the domain, characterising the cell rheology through the force-displacement curve. We further derive an analytical approximation to the stress and strain fields in the limit of small bead radius, predicting the net force required to generate a given deformation and elucidating the dependency on the microscale properties of the filaments. We apply these models to networks of the intermediate filament vimentin and demonstrate good agreement between predictions of the discrete, continuum and analytical approaches. In particular, our model predicts that the network stiffness increases sublinearly with the filament pre-stress and scales logarithmically with the bead size. _Keywords:_ multiscale modelling, discrete-to-continuum asymptotics, intracellular transport, pre-stress, semi-flexible filaments, vimentin ## 1 Introduction Eukaryotic cells exhibit a complicated rheology in response to mechanical stimuli, arising primarily through deformation of their cytoskeleton, a complex network of crosslinked filamentous proteins including actin filaments, microtubules and intermediate filaments (e.g. vimentin). In addition to stretching of the filaments themselves, the system also dissipates energy both through transport of viscous fluid through this network and through transient crosslink (CL) dynamics [38, 2, 60]. Depending on the rate at which the deformation is applied, cells have been shown to behave as visco-elastic, soft-glassy, or poro-elastic materials [47, 16, 37, 26]. This complex rheology underpins a wide variety of cellular behaviour including migration and growth. In particular, epithelial cells can undergo an epithelial-mesenchymal transition, where these cells disassemble their cytoskeleton to become migratory [2]. Such transitions underpin healthy growth and development during embryogenesis and tissue repair [1, 48], but also accompany progression of tumour cells towards more aggressive (i.e. invasive) phenotypes. Hence, a thorough knowledge of cell rheology (and in particular its mechanical properties) is a likely prerequisite for successful anti-cancer treatments [2, 53]. Tensegrity models of the cell cytoskeleton postulate that certain elements are pre-stretched, which must be balanced by other elements under compression [28]. It is now well established that both actin and vimentin filaments _in vivo_ are pre-stretched (i.e. under tension) [38, 14]. On the other hand, microtubules have been shown to bear significant compressive loads [62, 8] due to their large bending stiffness. Although actin and microtubules have generally attracted more attention in the literature, the intermediate filament vimentin also greatly impacts cell mechanics due to its capacity to withstand very large strains (especially in comparison with actin and microtubules) [27, 44].
Most models describing the mechanical behaviour of individual cytoskeletal filaments have been derived using the theory of semi-flexible polymer chains [35], incorporating not only their elastic stretching and bending, but also uncoiling of their undulations under an applied stress [46]. As a result, the distance between the two ends of the filament differs from its stress-free contour length, so models relate the axial force applied to one end of the filament to the end-to-end distance normalized with respect to the contour length [35]. Similar relationships have also been derived based on the theory of Cosserat rods [23, 24]. Due to the complexity of cell cytoplasm _in vivo_, _in silico_ approaches are useful to elucidate the mechanisms underlying the mechanical behaviour on the network scale. Existing mathematical models typically fall into two categories. Discrete models of cell mechanics (including molecular dynamics simulations) enable the inclusion of detailed biophysics on the microscale derived from first principles, but also contain a large number of discrete elements and interactions, which makes them computationally expensive [30, 31, 41, 40, 29]. On the other hand, continuum models of cell mechanics are typically much less computationally demanding, allowing fast parameter sweeps, but, because they are proposed to match macroscopic (i.e. cell-scale) phenomena, the manner in which microscale (molecular-scale) parameters and processes influence the macroscale response is often unclear [60]. The mechanical response of crosslinked networks of semiflexible filaments (e.g. actin or collagen) subject to various loading configurations has been studied using discrete network models, elucidating key length and energy scales [20, 21, 18, 5]. Under bulk deformations (uniaxial or shear strain), the dominant modes of deformation - material stretching, entropic stretching and bending - have been linked to the regions of affine and non-affine deformations in the parameter space consisting of the filament length and the crosslink density [20]. A similar approach has been subsequently used to mimic localized perturbations in cytoskeletal networks via point forces applied at a single crosslink [21]. Local deformations were further explored in recent years, modelling the stress stiffening of extracellular matrices induced by contractile cells pulling on the adjacent fibers [18, 5]. However, at high filament densities encountered _in vivo_, the discrete simulations become computationally expensive [20] and, as the networks are typically highly disordered, there is no simple and reliable way to derive the corresponding continuum (computationally faster) model. Furthermore, scaling arguments do not account for the _in vivo_ network pre-stress discussed above, which makes direct utilization of the deduced power laws impossible. The vast majority of macroscale continuum models are inferred by ensemble averaging based on polymer physics [56, 35]. The models stemming from rubber elasticity form the oldest and largest group, including chain, full-network and microsphere models [17, 61, 3, 54, 36]. The latter have been applied to actin networks, resulting in hyper-elastic and visco-elastic constitutive models [57, 58, 25]. Other approaches utilized Doi-Edwards theory [51] or the effective medium approach [12]. Discrete lattice models have also been employed, but to the best of our knowledge, rigorous upscaling techniques have not been used to derive a macroscale model.
It is also worth noting that these discrete lattices often have unrealistic topologies - models using triangular lattices with coordination number 6 are not representative of crosslinked cytoskeletal networks and further care is needed to achieve a biologically realistic node connectivity [9, 11]. Efforts involving more rational and rigorous mathematical methods (such as discrete-to-continuum upscaling or homogenization) to systematically bridge between these two approaches are still largely missing. This problem also pertains to collagen networks, where predictions of discrete and continuum models often disagree [13, 52]. Rational mathematical modelling has been successfully applied to study dynamic aspects of cytoskeletal reorganization during cell motility, including the dynamics of actin, myosin and other crosslinking proteins at the leading edge. This approach leads to mathematical formulations that are often amenable to analytical study and can provide explicit solutions, e.g. predicting the dependency of cell velocity on properties of the substrate [39, 43]. However, such rational techniques have seldom been applied to study the mechanics of crosslinked networks. Recent research has focused on the effective transport properties of cytoplasm as a porous medium [19, 42]; as a result, the forces generated within the cytoskeleton as it is deformed by the transported object remain incompletely understood. The force required to move a spherical object (bead) inside a living cell was recently measured using optical tweezers, elucidating the dependence on key parameters such as bead size and pulling velocity [26, 27]. The primary goal of the current study is to formalize these dependencies using a theoretical model built from first principles. To this end, we develop a multiscale framework for the mechanical response during the prescribed motion of an internal organelle or bead, which rationally encodes a state-of-the-art microscale constitutive law for the axial stretching of individual semi-flexible filaments. The paper is organized as follows. First, in Section 2 we introduce a discrete model of the cell cytoskeleton consisting of a two-dimensional crosslinked network with prescribed displacement of a set of CLs. In Section 3 we upscale this discrete force balance using discrete-to-continuum asymptotics, arrive at a macroscale continuum model equipped with appropriate boundary conditions and infer the corresponding stress tensor and strain-energy density. In Section 4 we compare simulations of the discrete and continuum models and numerically explore how the net force exerted on the transported bead depends on key model parameters. In Section 5 we consider the limit of small deformations in the continuum problem and compute an asymptotic approximation to the net force as a function of bead displacement, valid whenever the bead size is much smaller than the macroscopic length scale. ## 2 Discrete model and nondimensionalization ### Initial network #### 2.1.1 Geometry We consider a planar square region within a eukaryotic cell of fixed side length \(\tilde{D}\), well away from the nucleus and the cell membrane (Figure 1a). This region is parameterized by coordinates \(\tilde{X}\) and \(\tilde{Y}\), along the two edges of the square with origin at the centre. Focusing on mesh-forming crosslinking proteins (e.g. filamin), we propose a simple model assuming that the cytoskeleton can be modelled as a square grid of semi-flexible filaments.
Although this arrangement is highly idealized, it facilitates a formal upscaling. Initially the filaments are assumed to be equally spaced and are oriented (after averaging out microscale fluctuations) parallel to either the \(\tilde{X}\) or \(\tilde{Y}\) axes (blue lines in Figure 1a), with crosslinks (CLs) at their intersections, forming a regular two-dimensional grid. These CLs divide each filament into \(N\) filament segments (FSs). Initially these crosslinks are a distance \(\tilde{R}=\tilde{D}/N\) apart, so crosslink \((i,j)\) is located at \[\tilde{\mathbf{X}}_{i,j}=\left(\tilde{X}_{i},\tilde{Y}_{j}\right)=(i,j)\,\tilde{R},\quad\text{ where }i,j=-\tfrac{1}{2}N,-\tfrac{1}{2}N+1,...,\tfrac{1}{2}N-1,\tfrac{1}{2}N; \tag{1}\] we assume that \(N\) is even for simplicity. Note that throughout this work, tildes denote dimensional variables and parameters.

At subcellular scales, thermal effects play an important role, causing undulations in cytoskeletal filaments even in the absence of external force. As a result, an FS connecting two arbitrary neighbouring CLs need not be straight, and its end-to-end distance need not be equal to its contour length (or arclength). For simplicity, we assume that all filaments have the same stress-free contour length \(\tilde{L}\), with the stress-free contour length of FSs being \(\tilde{\Lambda}=\tilde{L}/N\), noting that these two quantities are typically distinct from the domain size \(\tilde{D}\) and the inter-CL distance \(\tilde{R}\). Our model contains a relatively large number of parameters; for convenience, our notation is summarized in Section S1 of Supplementary Material.

#### 2.1.2 Pre-stretch

In later sections, we specialize our modelling framework to actin and vimentin networks; tensegrity models of the cytoskeleton postulate that these elements are typically pre-stretched [28]. For a fixed \(\tilde{R}\), the filament pre-stretch is controlled by the normalized end-to-end distance \[\xi=\frac{\tilde{R}}{\tilde{\Lambda}}=\frac{\tilde{D}}{\tilde{L}}, \tag{2}\] which generates an axial force due to pre-stress denoted \(\tilde{f}_{p}\). Although the macroscale pre-stress has been measured experimentally [62, 63, 50], the complexity of the cytoskeleton _in vivo_ (the number of different filaments and crosslinks and their interactions) makes it difficult to estimate \(\tilde{f}_{p}\), and so this will be considered a free parameter (similar to previous studies, e.g. [15]). The corresponding values of \(\tilde{\Lambda}\) and \(\xi\) then follow from the microscale constitutive law for the axial force discussed in the next section.

In experiments, the macroscale pre-stress is usually estimated by measuring the total force exerted on a particular surface within the cell and then normalizing by the cross-sectional area of that surface [62]. Applying an analogous method to the boundary of our square domain, we estimate the macroscale pre-stress of our filament networks by summing the force exerted by each of the adjoining filaments on that boundary and dividing by the boundary length. In this way, we estimate the total macroscale pre-stress as \[\tilde{\sigma}_{p}=\frac{(N-1)\tilde{f}_{p}}{\tilde{D}}.\]
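To make the initial configuration concrete, the following is a minimal Python sketch (our illustration, not the authors' released code) that builds the regular grid of CLs in (1) at the baseline resolution and evaluates the pre-stress estimate above; since \(\tilde{f}_{p}\) is a free parameter of the model, the value used below is purely a placeholder.

```python
import numpy as np

# Baseline geometry (Section 2.5): domain side 5 um, mesh spacing 0.05 um
D = 5.0e-6                # domain side length (m)
N = 100                   # FSs per filament at baseline, so R = D / N
R = D / N                 # initial inter-CL distance (m)

# Crosslink coordinates X_{i,j} = (i, j) R for i, j = -N/2, ..., N/2 (eq. (1))
idx = np.arange(-N // 2, N // 2 + 1)
X, Y = np.meshgrid(idx * R, idx * R, indexing="ij")   # shapes (N+1, N+1)

# Filament force due to pre-stress: a free parameter; placeholder value only
f_p = 1.0e-10             # axial force per filament (N)

# Macroscale pre-stress estimate: boundary force / boundary length (2D, so N/m)
sigma_p = (N - 1) * f_p / D
print(f"{X.size} crosslinks; estimated macroscale pre-stress {sigma_p:.3e} N/m")
```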
### 2.2 Deformed network

#### 2.2.1 Applied deformation

As a model for optical tweezers experiments [26, 27], we consider the motion of a circular bead of radius \(\tilde{a}\) initially placed at the origin of the domain (Figure 1b). In this paper we restrict attention to quasi-static deformations, neglecting inertia and assuming zero net force on every CL for all time. In this simple model, we assume that CLs are free to rotate with no unfolding, unbinding, breakage or slippage. Thus, the energy supplied by the prescribed motion of the bead is stored as elastic energy in the filament network. The deformed coordinates of CL \((i,j)\) are denoted as \(\tilde{\mathbf{x}}_{i,j}=(\tilde{x}_{i,j},\tilde{y}_{i,j})\).

Figure 1: Panel (a) shows a cell schematic with a small inserted bead (red). Zooming onto the bead, we idealize the undeformed cytoskeleton as a regular grid of curved filaments. Displacing the bead by a distance \(\tilde{R}_{b}\) at an angle \(\varphi_{*}\), we compute the locations of all crosslinks (black dots) in the perturbed network, as shown in panel (b). The calculation is based on a realistic microscale constitutive law for the axial response of individual FSs (panel c) and assumes local force balance at CL \((i,j)\) (panel d) with contributing forces drawn as black arrows. Panel (c) also documents that equation (9) provides an excellent approximation to model (3) for forces below the tensile strength, using default parameters for vimentin as estimated in Supplementary Section S2.

#### 2.2.2 Implicit microscale constitutive law for axial force in a filament segment

We denote by \(\tilde{r}\) the distance between CLs after deformation. In this study we follow models for semi-flexible filaments and let the axial force \(\tilde{f}\) in each FS be a function of the end-to-end (straight-line) distance between its two end points normalized by its stress-free contour length, \(r=\tilde{r}/\tilde{\Lambda}\) (Figure 1c) [24, 35]. Thus, the tortuosity of individual FSs is accounted for implicitly. We use a well-established constitutive law for a single semi-flexible filament under tension, which includes the interplay between thermal undulations, bending stiffness and material extensibility [7, 35], in the form \[\frac{\tilde{r}}{\tilde{\Lambda}}=r(\tilde{f},\tilde{\Lambda})=\left(1+\frac{\tilde{f}}{\pi\tilde{Y}\tilde{b}^{2}}\right)\left(1-\sqrt{\frac{\tilde{k}_{B}\tilde{T}}{\pi\tilde{\Lambda}_{p}\left(\tilde{f}+\left(\pi^{2}\tilde{k}_{B}\tilde{T}\tilde{\Lambda}_{p}/\tilde{\Lambda}^{2}\right)\right)}}\right), \tag{3}\] where \(\pi^{2}\tilde{k}_{B}\tilde{T}\tilde{\Lambda}_{p}/\tilde{\Lambda}^{2}\) is the Euler buckling threshold force, \(\tilde{k}_{B}\approx 1.38\times 10^{-23}\)m\({}^{2}\)kg s\({}^{-2}\)K\({}^{-1}\) is the Boltzmann constant, \(\tilde{T}=300\) K is the absolute temperature, \(\tilde{Y}\) is the Young's modulus, \(\tilde{\Lambda}_{p}\) is the persistence length and \(\tilde{b}\) is the radius of the filament under consideration. The constitutive law (3) for an individual filament assumes that the stress-free contour length \(\tilde{\Lambda}\) and the end-to-end distance \(\tilde{r}\) are comparable (i.e. the normalized end-to-end distance \(r\) is close to 1, [46]). The first factor in (3) accounts for the extensibility of the material, while the second factor constitutes a model for an inextensible filament balancing thermal effects with its bending stiffness. Fixing all material parameters and substituting the initial values \(\tilde{r}=\tilde{R}\) and \(\tilde{f}=\tilde{f}_{p}\) provides an implicit relationship between \(\tilde{f}_{p}\) and \(\tilde{\Lambda}\). Note that for extensible filaments, direct inversion to obtain \(\tilde{f}\) as a function of \(r\) is cumbersome [24, 35]. For a detailed description of the energy stored in individual FSs, see Supplementary Section S3.1.
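In practice we therefore recover \(\tilde{f}\) from \(r\) numerically. A minimal Python sketch of this inversion is given below (ours, not the authors' code); the vimentin-like material values are illustrative stand-ins chosen so that the dimensionless ratios \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) quoted in Section 2.7 are approximately reproduced.

```python
import numpy as np
from scipy.optimize import brentq

kB, T = 1.38e-23, 300.0    # Boltzmann constant (J/K) and temperature (K)
Y = 9.0e8                  # Young's modulus (Pa); illustrative placeholder
b = 5.0e-9                 # filament radius (m); illustrative placeholder
Lp = 1.0e-6                # persistence length (m); illustrative placeholder
Lam = 5.0e-8               # stress-free FS contour length (m)

def r_of_f(f):
    """Normalized end-to-end distance as a function of axial force, eq. (3)."""
    extensibility = 1.0 + f / (np.pi * Y * b**2)
    f_euler = np.pi**2 * kB * T * Lp / Lam**2     # Euler buckling threshold
    undulations = 1.0 - np.sqrt(kB * T / (np.pi * Lp * (f + f_euler)))
    return extensibility * undulations

def f_of_r(r, f_max=1.0e-8):
    """Numerically invert eq. (3): tensile force at normalized distance r."""
    return brentq(lambda f: r_of_f(f) - r, 0.0, f_max)

print(f_of_r(1.001))       # force (N) stretching an FS just past r = 1
```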
### 2.3 Force balance at a crosslink

The local force balance at each CL requires that the net force (Figure 1d) must be zero [22, 6]. As the forces equilibrate at every CL, it follows that the total moment of forces about any CL is also zero. Note that, apart from the axial forces, one would typically also need to introduce restoring forces due to the resistance of filaments to bending [6]. However, due to the combination of high filament density and the imposed pre-stretch of actin and vimentin in our model, the response will be dominated by elastic stretching and the bending can be neglected [20, 46, 9, 10].

### 2.4 Boundary conditions

All CLs on the outer boundary of the domain are assumed to be pinned, mimicking attachment to the membrane, nucleus or some other organelle. The bead is assumed to be at least as large as the mesh size (typically much larger, in line with the optical tweezers experiments [26]), and therefore a hole of appropriate shape and size must be removed from the discrete network. To mimic a rigid-body translation, we model the bead motion via an imposed displacement of all CLs within the initial outline of the bead by a distance \(\tilde{R}_{b}\) at a pulling angle \(\varphi_{*}\) measured anti-clockwise from the \(\tilde{X}\) axis.

### 2.5 Baseline parameter values

We identify baseline parameter values representative of the cytoskeleton and denote these with the subscript \(c\). For instance, we choose a baseline filament spacing \(\tilde{R}_{c}=0.05\)\(\mu\)m which, fixing the domain size as \(\tilde{D}=5\)\(\mu\)m, means that every filament is divided into \(N_{c}=100\) FSs [26]. All other model parameters are listed, and the corresponding values representative of the cytoskeleton estimated, in Supplementary Material (Section S2). To ensure consistency as we vary the number of filaments, in simulations we hold the domain size and the total volume of filaments fixed at the baseline values by adjusting the mesh spacing and the filament radius according to \[\tilde{R}=\frac{N_{c}}{N}\tilde{R}_{c},\qquad\tilde{b}=\sqrt{\frac{N_{c}}{N}}\tilde{b}_{c}.\] Similarly, we hold the macroscale pre-stress fixed by adjusting the filament pre-stress, and analogously rescale the axial force at arbitrary \(r\), according to \[\tilde{f}_{p}=\frac{N_{c}}{N}\tilde{\mathcal{F}}_{p},\qquad\tilde{f}(r)=\frac{N_{c}}{N}\tilde{\mathcal{F}}(r),\] where \(\tilde{\mathcal{F}}(\xi)=\tilde{\mathcal{F}}_{p}\).

### 2.6 Nondimensionalization

We nondimensionalize all lengths based on the domain side length \(\tilde{D}\), and forces (including \(\tilde{\mathcal{F}}_{p}\)) with respect to the enthalpic (elastic) force \(\pi\tilde{Y}\tilde{b}_{c}^{2}\). We denote by \(l_{i\pm 1/2,j}\) and \(l_{i,j\pm 1/2}\) the deformed lengths of the FSs connecting CL \((i,j)\) to CLs \((i\pm 1,j)\) and \((i,j\pm 1)\), respectively.
At CL \((i,j)\), we define unit vectors pointing in the directions of the four adjacent FSs as \[\hat{\boldsymbol{r}}_{i\pm\frac{1}{2},j}=\frac{(x_{i\pm 1,j}-x_{i,j},y_{i\pm 1,j}-y_{i,j})}{l_{i\pm\frac{1}{2},j}},\qquad\hat{\boldsymbol{r}}_{i,j\pm\frac{1}{2}}=\frac{(x_{i,j\pm 1}-x_{i,j},y_{i,j\pm 1}-y_{i,j})}{l_{i,j\pm\frac{1}{2}}},\] and upon multiplying by \(\varepsilon_{c}N\), where \(\varepsilon_{c}=N_{c}^{-1}\), the dimensionless force balance takes the form \[\boldsymbol{0}= \mathcal{F}\left(\xi Nl_{i-\frac{1}{2},j}\right)\hat{\boldsymbol{r}}_{i-\frac{1}{2},j}+\mathcal{F}\left(\xi Nl_{i+\frac{1}{2},j}\right)\hat{\boldsymbol{r}}_{i+\frac{1}{2},j}+\mathcal{F}\left(\xi Nl_{i,j-\frac{1}{2}}\right)\hat{\boldsymbol{r}}_{i,j-\frac{1}{2}}+\mathcal{F}\left(\xi Nl_{i,j+\frac{1}{2}}\right)\hat{\boldsymbol{r}}_{i,j+\frac{1}{2}}. \tag{4}\] The dimensionless magnitude of the bead displacement is denoted \(R_{b}:=\tilde{R}_{b}/\tilde{D}\).

### 2.7 Analysis of dimensionless microscale constitutive law

The dimensionless constitutive law for an individual filament (3) becomes \[r(\mathcal{F};\mathcal{T}_{1},\mathcal{T}_{2},\xi,\varepsilon_{c},N)=(1+\mathcal{F})\left(1-\sqrt{\frac{\mathcal{T}_{1}}{\mathcal{F}/(\varepsilon_{c}N)+4\pi^{3}\left(\varepsilon_{c}\xi N\mathcal{T}_{2}\right)^{2}\mathcal{T}_{1}}}\right), \tag{5}\] where \[\mathcal{T}_{1}=\frac{\tilde{\mathcal{F}}_{\mathrm{entropic}}}{\tilde{\mathcal{F}}_{\mathrm{enthalpic}}}=\frac{\tilde{k}_{B}\tilde{T}}{\pi^{2}\tilde{Y}\tilde{b}_{c}^{2}\tilde{\Lambda}_{p}}\qquad\text{and}\qquad\mathcal{T}_{2}=\frac{\tilde{\Lambda}_{p}}{2\tilde{R}_{c}} \tag{6}\] are the dimensionless ratios of the entropic force (\(\tilde{\mathcal{F}}_{\mathrm{entropic}}=\tilde{k}_{B}\tilde{T}/(\pi\tilde{\Lambda}_{p})\)) to the enthalpic force (\(\tilde{\mathcal{F}}_{\mathrm{enthalpic}}=\pi\tilde{Y}\tilde{b}_{c}^{2}\)) and one half of the ratio of the persistence length to the end-to-end distance, respectively1. Note that all dimensionless parameters featuring in (5) are independent of the force due to pre-stress \(\mathcal{F}_{p}\) and of \(N\), with the exception of \(\xi\). Given that \(\mathcal{F}(\xi)=\mathcal{F}_{p}\), we obtain

Footnote 1: The factor of \(1/2\) was chosen in line with previous studies, so that our \(\mathcal{T}_{2}\) is a direct analogue of the so-called normalized filament stiffness [35]. Note further that the \(r\) introduced in (3) should be regarded as a distance normalized with respect to the stress-free contour length; even though it is dimensionless, this quantity is distinct from the nondimensionalized end-to-end distance (with respect to the macroscale).

\[\xi=\{1+\mathcal{F}_{p}\}\left\{1-\left(\varepsilon_{c}N\right)^{-1}\left(\mathcal{F}_{p}\left(\varepsilon_{c}N\right)^{-3}\mathcal{T}_{1}^{-1}+4\pi^{3}\mathcal{T}_{2}^{2}\xi^{2}\right)^{-1/2}\right\}, \tag{7}\] which provides a quartic polynomial for the pre-stretch \(\xi\) as a function of \(\mathcal{F}_{p}\), which cannot easily be inverted analytically. However, for vimentin filaments we compute \(\mathcal{T}_{1}\approx 1.9\times 10^{-8}\) and \(\mathcal{T}_{2}\approx 10\) (based on parameters listed in Table S1 in Supplementary Material), and so provided \(\varepsilon_{c}N=O(1)\) and \(\mathcal{F}_{p}\gg\mathcal{T}_{1}\) (i.e. the force due to pre-stress is much greater than the entropic force; for \(\varepsilon_{c}N\gg 1\) we do not need any additional conditions) we approximate \[\xi=1+\mathcal{F}_{p}. \tag{8}\]
The approximation (8) is not sufficiently accurate for actin, since \(\mathcal{T}_{1}\approx 10^{-9}\) but \(\mathcal{T}_{2}\approx 170\) (i.e. the persistence length of actin is much larger than the representative cytoskeletal mesh size), and an expansion in powers of \(\mathcal{T}_{2}^{-1}\) is required (see Supplementary Material, Section S3.3). In the main text we focus attention on networks of vimentin filaments. We further consider networks of actin filaments in Supplementary Material (Section S3.4), although there the critical stretch for filament breakage is typically very low, and so the networks quickly disassemble. In summary, the dimensionless problem is governed by eight dimensionless parameters (\(\mathcal{T}_{1}\), \(\mathcal{T}_{2}\), \(\mathcal{F}_{p}\), \(\varepsilon_{c}\), \(N\), \(R_{b}\), \(\varphi_{*}\), \(a\)) and the microscale constitutive law (5), where \(\xi\) is given by (8). Model parameters and their default values are listed in Supplementary Material Section S2.

### 2.8 Negligible response to compression and simplified microscale constitutive law

Neither actin nor vimentin filaments can sustain large compressive stresses due to their low bending stiffness [35]. We therefore assume that the response to compression is negligible, similar to previous studies for actin networks [57, 25] (vimentin filaments have even lower bending stiffness). Given the weak response of filaments to compression, and also the smallness of \(\mathcal{T}_{1}\) discussed in the previous section, we can neglect the square root term in (5) and, using (8), derive a simplified microscale constitutive law (5) in the form \[\mathcal{F}=\left\{\begin{array}{ll}\mathcal{F}_{p}+(r-\xi)=r-1,&\text{ if }r>1\\ 0,&\text{ if }r<1,\end{array}\right. \tag{9}\] which is continuous at \(r=1\), i.e. when the filament is straightened out to its full contour length (\(\tilde{r}=\tilde{\Lambda}\)), as (8) holds. Note that in the case of vimentin, this linearized expression was not obtained via Taylor expansion of the full model (5) about \(r=\xi\), but was instead derived rationally based on the smallness of \(\mathcal{T}_{1}\); it is analogous to previous models studying the mechanics of pre-stressed filament networks [15]. Equation (9) provides a very good approximation to (3) using parameters pertinent to the intermediate filament vimentin (Figure 1c) across all values of \(r\). The model (9) will be used in the sections of this paper where we present discrete and continuum simulations for vimentin. Note that it is possible, in principle, to simulate networks where filaments are modelled using (3) in its full form, but numerical simulations take significantly longer due to its implicit form. The multiscale continuum framework developed in Section 3 below can account for an arbitrary microscale constitutive law relating the axial force to the end-to-end distance. In the following section we will assume that axial stretching on the microscale is governed by a general constitutive law \(\mathcal{F}(r)\).
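Before upscaling, it is worth viewing the discrete problem (4) with the simplified law (9) as a concrete root-finding exercise. The sketch below is a small Python analogue of the MATLAB procedure described in Section 4 (ours, not the authors' code): it pins the outer boundary, prescribes the displacement of the central CL as a minimal stand-in for the bead, and solves the force balance at all other interior CLs; the coarse grid and the value of \(\mathcal{F}_{p}\) are illustrative.

```python
import numpy as np
from scipy.optimize import fsolve

N = 16                        # coarse grid for illustration (N_c = 100 in the paper)
Fp = 0.05                     # illustrative dimensionless pre-stress force
xi = 1.0 + Fp                 # pre-stretch, eq. (8)
Rb, phi = 0.5 / N, np.pi / 6  # bead displacement (half a mesh size) and pulling angle
ic = N // 2                   # grid index of the central CL (the displaced "bead")

s = np.linspace(-0.5, 0.5, N + 1)
X, Y = np.meshgrid(s, s, indexing="ij")   # undeformed CL positions

def F_axial(r):
    """Simplified microscale law (9): tension only, F = r - 1 for r > 1."""
    return np.maximum(r - 1.0, 0.0)

def net_forces(u):
    """Dimensionless net force (4) at every interior CL (zero at equilibrium)."""
    m = (N - 1) ** 2
    x, y = X.copy(), Y.copy()
    x[1:-1, 1:-1] += u[:m].reshape(N - 1, N - 1)
    y[1:-1, 1:-1] += u[m:].reshape(N - 1, N - 1)
    # enforce the prescribed displacement of the central CL
    x[ic, ic] = X[ic, ic] + Rb * np.cos(phi)
    y[ic, ic] = Y[ic, ic] + Rb * np.sin(phi)
    fx = np.zeros((N - 1, N - 1))
    fy = np.zeros((N - 1, N - 1))
    c = (slice(1, -1), slice(1, -1))
    for nb in [(slice(2, None), slice(1, -1)), (slice(0, -2), slice(1, -1)),
               (slice(1, -1), slice(2, None)), (slice(1, -1), slice(0, -2))]:
        dx, dy = x[nb] - x[c], y[nb] - y[c]   # vectors to the four neighbours
        l = np.hypot(dx, dy)                  # deformed FS lengths
        t = F_axial(xi * N * l) / l           # tension divided by length
        fx += t * dx
        fy += t * dy
    return fx, fy

def residual(u):
    fx, fy = net_forces(u)
    m, k = (N - 1) ** 2, (ic - 1) * (N - 1) + (ic - 1)
    r = np.concatenate([fx.ravel(), fy.ravel()])
    r[k], r[m + k] = u[k], u[m + k]   # central CL is prescribed, not balanced
    return r

u = fsolve(residual, np.zeros(2 * (N - 1) ** 2))
fx, fy = net_forces(u)
print("net force on displaced CL:", np.hypot(fx[ic - 1, ic - 1], fy[ic - 1, ic - 1]))
```

Because the pre-stress keeps every FS taut for such small displacements, the non-smoothness of (9) at \(r=1\) is not activated and the Newton-type iteration converges readily.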
## 3 Upscaling and continuum model

### 3.1 Upscaling the force balance

We now define a small parameter \(\varepsilon\equiv N^{-1}\ll 1\), the (dimensionless) undeformed CL-to-CL distance, and upscale the discrete model (4) in the limit \(\varepsilon\to 0\) to form a continuum model. We assume that there exist smooth functions \(x(X,Y)\) and \(y(X,Y)\) defined on the square domain \(-\frac{1}{2}<X,Y<\frac{1}{2}\) such that for all \(i,j\) we have \(g(X_{i},Y_{j})=g_{i,j}\), where \(g\) is either \(x\) or \(y\). Assuming \(x\), \(y\) and \(\mathcal{F}\) are sufficiently smooth, we Taylor expand the discrete equations (4) (centering about \((X_{i},Y_{j})\)) and rationally derive a continuum model [4]. Further details of the derivation can be found in Supplementary Material (Section S4.1). The first non-trivial balance in the momentum equations gives \[\left(\mathcal{F}\left(\xi\sqrt{x_{X}^{2}+y_{X}^{2}}\right)\frac{(x_{X},y_{X})}{\sqrt{x_{X}^{2}+y_{X}^{2}}}\right)_{X}+\left(\mathcal{F}\left(\xi\sqrt{x_{Y}^{2}+y_{Y}^{2}}\right)\frac{(x_{Y},y_{Y})}{\sqrt{x_{Y}^{2}+y_{Y}^{2}}}\right)_{Y}=\mathbf{0}, \tag{10}\] where subscripts denote partial derivatives. This system of two coupled nonlinear equations in divergence form constitutes the upscaled problem. Notice that in the (continuum) limit \(N\to\infty\), the constitutive law (5) converges to \(\mathcal{F}=r-1\), which is identical to (9); the resulting equations under the linearized microscale constitutive law are deduced in Supplementary Section S4.2. The momentum balance equations (10) are consistent with other classical results in continuum mechanics (see Supplementary Material, Section S4.3, for details).

Since these equations are expressed in divergence form, we can define \(F_{kl}=\partial x_{k}/\partial X_{l}\) to be the components of the corresponding deformation gradient tensor and immediately deduce the nominal stress tensor in the form \[\tilde{\mathbf{S}}=\frac{1}{\tilde{R}_{c}}\begin{pmatrix}\tilde{\mathcal{F}}\left(\xi\sqrt{F_{11}^{2}+F_{21}^{2}}\right)\frac{F_{11}}{\sqrt{F_{11}^{2}+F_{21}^{2}}}&\tilde{\mathcal{F}}\left(\xi\sqrt{F_{11}^{2}+F_{21}^{2}}\right)\frac{F_{21}}{\sqrt{F_{11}^{2}+F_{21}^{2}}}\\ \tilde{\mathcal{F}}\left(\xi\sqrt{F_{12}^{2}+F_{22}^{2}}\right)\frac{F_{12}}{\sqrt{F_{12}^{2}+F_{22}^{2}}}&\tilde{\mathcal{F}}\left(\xi\sqrt{F_{12}^{2}+F_{22}^{2}}\right)\frac{F_{22}}{\sqrt{F_{12}^{2}+F_{22}^{2}}}\end{pmatrix}. \tag{11}\] This formulation is a special case (reflecting the particular geometry of the undeformed configuration) of the stress tensor derived for an arbitrary distribution of filament directions using the Doi-Edwards construction [51]. In the initial configuration, \(\mathbf{F}=\mathbf{I}\) and therefore \(\tilde{\mathbf{S}}=(\tilde{\mathcal{F}}_{p}/\tilde{R}_{c})\mathbf{I}\), consistent with our prediction of the macroscale pre-stress in Section 2.1. Similarly, we conclude that the dimensional strain energy density in the deformed configuration is \[\tilde{W}(\mathbf{C})=\frac{\tilde{\mathcal{E}}\left(\xi\sqrt{I_{4}(\mathbf{C})}\right)+\tilde{\mathcal{E}}\left(\xi\sqrt{I_{6}(\mathbf{C})}\right)}{\tilde{R}_{c}^{2}}, \tag{12}\] where \(\tilde{\mathcal{E}}\) denotes the energy stored in elastic stretching of the filaments (see Supplementary Material, Section S3.1), \(\mathbf{C}\) is the right Cauchy-Green deformation tensor, and \(\sqrt{I_{4}(\mathbf{C})}\) and \(\sqrt{I_{6}(\mathbf{C})}\) represent local stretches in the \(X\) and \(Y\) directions, reflecting the underlying square-grid geometry of the cytoskeleton with two preferred filament directions. Such anisotropic contributions to the strain energy are often proposed in phenomenological models for fiber-reinforced materials (e.g. [49]).
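For a given deformation gradient, (11) is straightforward to evaluate. A short dimensionless sketch (under the linearized law (9), so the prefactor \(\pi\tilde{Y}\tilde{b}_{c}^{2}/\tilde{R}_{c}\) is omitted; the numerical values are illustrative) might read:

```python
import numpy as np

def F_axial(r):
    """Linearized microscale law (9): tension only."""
    return np.maximum(r - 1.0, 0.0)

def nominal_stress(F, xi):
    """Dimensionless nominal stress of eq. (11) for a 2x2 deformation gradient.

    Column k of F is the deformed direction vector of the filament family
    initially parallel to the k-th axis; row k of S is its uniaxial tension
    resolved along that direction."""
    S = np.zeros((2, 2))
    for k in range(2):
        g = F[:, k]                     # (F_{1k}, F_{2k})
        lam = np.linalg.norm(g)         # local stretch of family k
        S[k, :] = F_axial(xi * lam) * g / lam
    return S

xi = 1.05                               # pre-stretch for F_p = 0.05, eq. (8)
print(nominal_stress(np.eye(2), xi))    # recovers F_p * I in the reference state
print(nominal_stress(np.array([[1.02, 0.0], [0.01, 1.0]]), xi))
```

In the reference state the sketch returns \(\mathcal{F}_{p}\mathbf{I}\), the dimensionless counterpart of the pre-stress computed in Section 2.1.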
### 3.2 Boundary conditions

The pinning of the outer layer of CLs in the discrete model gives, in the continuum limit, \[x(X,Y)=X,\quad y(X,Y)=Y \tag{13}\] along all boundaries characterized by \(X=\pm\frac{1}{2}\) or \(Y=\pm\frac{1}{2}\). In the continuum model, the bead is represented by a disc of radius \(\tilde{a}\) cut out from the domain, initially centred at \((\tilde{X},\tilde{Y})=(0,0)\) and displaced by \(\tilde{R}_{b}\) at a pulling angle \(\varphi_{*}\). Note that in the dimensionless setting, we must have \(a=\tilde{a}/\tilde{D}=O(1)\). The bead boundary condition is written for \(-\pi<\varphi\leq\pi\) as \[x(a\cos\left(\varphi\right),a\sin\left(\varphi\right))=a\cos\left(\varphi\right)+R_{b}\cos\left(\varphi_{*}\right),\qquad y(a\cos\left(\varphi\right),a\sin\left(\varphi\right))=a\sin\left(\varphi\right)+R_{b}\sin\left(\varphi_{*}\right). \tag{14}\]

## 4 Discrete and continuum simulations

To facilitate direct comparison between the discrete and continuum predictions, we return to the dimensional variables and introduce the continuum displacement fields \[\tilde{u}(\tilde{X},\tilde{Y})=\tilde{x}(\tilde{X},\tilde{Y})-\tilde{X},\qquad\tilde{v}(\tilde{X},\tilde{Y})=\tilde{y}(\tilde{X},\tilde{Y})-\tilde{Y}, \tag{15}\] as well as their discrete counterparts \[\tilde{u}_{i,j}=\tilde{x}_{i,j}-\tilde{X}_{i},\qquad\tilde{v}_{i,j}=\tilde{y}_{i,j}-\tilde{Y}_{j} \tag{16}\] for all \(i\) and \(j\). Unless stated otherwise, all lengths (including those indicated in colorbars) are given in microns and all forces in nanonewtons.

In quasi-static simulations of the discrete model (4), we use numerical continuation from the initial configuration to find steady-state solutions for a variety of bead displacements. In order to avoid pulling along the initial direction of one of the filaments or exactly along the diagonal, we choose a default pulling angle \(\varphi_{*}=\pi/6\). To avoid FSs crossing each other, we only displace the bead up to a maximal distance equal to the undeformed mesh size, i.e. \(0\leq\tilde{R}_{b}\leq\tilde{R}\). For every \(\tilde{R}_{b}\), we solve for the locations of the CLs outside the bead using MATLAB's fsolve routine (based on Newton's method) and then calculate the resultant force acting on the bead by summing the tensile forces from all attached FSs.

The continuum problem (10) is solved in FEniCS using a Newton solver, and we employed Lagrange finite elements of degree 1 [34]. Using default model parameters and the maximum displacement, the difference in the predicted force on the bead between domain resolutions (the minimum number of elements across the square in both the \(\tilde{X}\) and \(\tilde{Y}\) directions) of 200 and 400 was less than 0.3% of the value at 200; we therefore conclude that a resolution of 400 provides a sufficiently fine mesh giving trustworthy force estimates, and we use this value as the default from now on. In the continuum model, the net force acting on the bead is found by numerically integrating the traction (\(\tilde{\mathbf{S}}^{T}\mathbf{N}\), where \(\mathbf{N}\) is the unit normal to the bead) over the bead boundary.
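The released scripts are linked in the Data accessibility statement; purely for orientation, a stripped-down legacy-FEniCS (dolfin/mshr) sketch of this continuum solve, using the linearized law (9) in dimensionless variables and a much coarser mesh than the production resolution, might look as follows (our simplification, not the authors' code):

```python
import math
from fenics import *
from mshr import Circle, Rectangle, generate_mesh

a, Rb, phi, Fp = 0.05, 0.005, math.pi / 6, 0.05   # bead radius, displacement, angle, pre-stress
xi = 1.0 + Fp                                     # pre-stretch, eq. (8)

# Unit square with the bead cut out (dimensionless variables)
geometry = Rectangle(Point(-0.5, -0.5), Point(0.5, 0.5)) - Circle(Point(0.0, 0.0), a)
mesh = generate_mesh(geometry, 60)                # coarse; the paper uses resolution 400

V = VectorFunctionSpace(mesh, "Lagrange", 1)      # degree-1 Lagrange elements [34]
u, v = Function(V), TestFunction(V)               # displacement and test function
Fdef = Identity(2) + grad(u)                      # deformation gradient, x = X + u

# Nominal stress (11): each filament family contributes a uniaxial tension
rows = []
for k in range(2):
    g = as_vector([Fdef[0, k], Fdef[1, k]])       # stretch vector of family k
    lam = sqrt(dot(g, g))
    T = conditional(gt(xi * lam, 1.0), xi * lam - 1.0, 0.0)   # law (9)
    rows.append((T / lam) * g)
S = as_matrix([[rows[0][0], rows[0][1]], [rows[1][0], rows[1][1]]])
Res = inner(S, grad(v).T) * dx                    # weak form of (10)

def on_outer(x, on_boundary):
    return on_boundary and (near(abs(x[0]), 0.5, 1e-3) or near(abs(x[1]), 0.5, 1e-3))

def on_bead(x, on_boundary):
    return on_boundary and x[0] ** 2 + x[1] ** 2 < (1.5 * a) ** 2

bcs = [DirichletBC(V, Constant((0.0, 0.0)), on_outer),
       DirichletBC(V, Constant((Rb * math.cos(phi), Rb * math.sin(phi))), on_bead)]

solve(Res == 0, u, bcs)                           # Newton iteration

# Net (dimensionless) force: integrate the traction S^T N over the bead boundary
# (sign set by the outward facet normal of the mesh)
facets = MeshFunction("size_t", mesh, mesh.topology().dim() - 1, 0)
AutoSubDomain(on_bead).mark(facets, 1)
dsb = Measure("ds", domain=mesh, subdomain_data=facets)(1)
n = FacetNormal(mesh)
print([assemble(dot(S.T, n)[i] * dsb) for i in range(2)])
```

The production runs use resolution 400 and numerical continuation in \(\tilde{R}_{b}\); neither is reproduced here.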
### 4.1 Simulation with default parameters for vimentin

In order to assess the convergence of the discrete simulations to the continuum limit as \(N\to\infty\), in Figure 2 we plot the force-displacement graphs for various \(N\), fixing all other parameters at their default values (including \(\varepsilon_{c}=1/100\)). In each case the graph of the magnitude of the force as a function of bead displacement (termed the force-displacement curve) is almost perfectly linear, because we restrict attention to (small) deformations up to a single mesh size. For every given displacement, the discrete and continuum predictions of the force approach one another as \(N\) becomes large (Figure 2a), and the results are almost indistinguishable for \(N=1/R_{c}=100\). Note that the convergence is not monotonic for small \(N\), but this is an artifact caused by the relatively small number of FSs attached to the bead in such cases. In order to elucidate how the steady-state force distribution changes with increasing bead displacement, in the insets of Figure 2(a) we show the accumulation of tension in the wake of the moving bead. The solution profiles for such a dense network (\(N=100\)) are not easy to visualize, and throughout this work we will therefore zoom onto a small region in the vicinity of the bead where the perturbation is localized. The magnitude of the continuum displacement field \(||(\tilde{u},\tilde{v})||\) (Figure 2b) shows good agreement with its discrete counterpart (Figure 2c). Note that the near-perfect symmetries of these fields with respect to the \(\tilde{X}\) and \(\tilde{Y}\) axes can be explained by the smallness of the deformation: while the nonlinear system (S29) does not suggest any symmetry, the structure of the small-deformations limit (20)-(23) derived in Section 5 (together with the symmetries of the domain under consideration) indicates that both components of the displacement field must be even functions of \(\tilde{X}\) (\(\tilde{Y}\)) for a fixed \(\tilde{Y}\) (\(\tilde{X}\)). In summary, this figure shows that the discrete and continuum predictions are in excellent agreement with one another as the mesh spacing reduces.

### 4.2 Effect of model parameters

In this section we explore the dependency on model parameters, namely the pulling angle \(\varphi_{*}\) (Section 4.2.1) and the force due to pre-stress \(\tilde{\mathcal{F}}_{p}\) (Section 4.2.2).

#### 4.2.1 Pulling angle \(\varphi_{*}\)

In order to assess the anisotropy of the force-displacement curves induced by our assumption of a regular array of filaments, in Figure 3 we examine the dependency on the pulling angle across its entire range. Amongst both the discrete and continuum simulations, the force-displacement curves remain within 1% of one another for the full range of pulling angles (Figure 3a,b). Furthermore, this difference remains small across the entire range of bead sizes considered (data not shown), consistent with the predictions of the continuum model in the limit of small deformations (see Section 5 below). As before, we observe good agreement between the continuum and discrete model predictions. However, despite the force exerted on the bead being almost independent of the pulling angle, we note that the overall stress profile is qualitatively different for different pulling angles (Figure 3c-f): the more aligned the direction of movement is with the initial direction of the filament, the greater the increase (decrease) in tension in the wake (at the front) of the moving bead. In summary, this figure shows that while the force-displacement curve is approximately independent of the direction of bead movement, the stress profile within the material is sensitive to the direction of pulling.

#### 4.2.2 Force due to pre-stress \(\tilde{\mathcal{F}}_{p}\)

In order to assess the importance of the filament pre-stress (since this is not known experimentally), in Figure 4(a) we study the force response for increasing \(\tilde{\mathcal{F}}_{p}\), with default parameters otherwise.
As might be expected, with increasing (tensile) pre-stress in the filaments the response becomes stiffer, i.e. the gradient of the force-displacement curve increases. Due to the smallness of the deformations, the deviations from linear behaviour of the force-displacement curves are negligible in all studied cases, which allows us to introduce a scalar measure of the network stiffness \(\tilde{\mathcal{K}}=\mathrm{d}\tilde{F}_{b}/\mathrm{d}\tilde{R}_{b}\), which we approximate as \(\max(\tilde{F}_{b})/\max(\tilde{R}_{b})\) evaluated at the largest bead displacement. The network stiffness increases with the pre-stress in a slightly sublinear manner (see the inset in Figure 4a). As expected, the overall force distribution within the network scales with the amount of pre-stress (Figure 4b,c).

## 5 Small-deformation and small-bead analysis

To provide further insight into the force-displacement relationship, and in particular the dependency on the model parameters, we investigate the limit \(R_{b}\ll 1\), i.e. the limit of small macroscale deformations. Since the bead displacement is small, it is natural to assume that all components of the deformation gradient tensor are small everywhere in the macroscopic domain. Note that the small-deformations assumption is consistent with our restriction to bead displacements up to one inter-CL distance in the discrete model. We analyze small deformations by substituting \[x(X,Y)=X+R_{b}\hat{x}(X,Y)+O(R_{b}^{2}),\qquad y(X,Y)=Y+R_{b}\hat{y}(X,Y)+O(R_{b}^{2}), \tag{17}\] with \(R_{b}\ll 1\), into the continuum problem. Following Section S5.1 of the Supplementary Material, we arrive at the macroscale equations at \(O(R_{b})\) \[\left(\xi\mathcal{F}^{\prime}(\xi)\hat{x}_{X}\right)_{X}+\left(\mathcal{F}(\xi)\hat{x}_{Y}\right)_{Y}=0, \tag{18}\] \[\left(\mathcal{F}(\xi)\hat{y}_{X}\right)_{X}+\left(\xi\mathcal{F}^{\prime}(\xi)\hat{y}_{Y}\right)_{Y}=0. \tag{19}\]

Figure 2: (a) The force-displacement graphs for increasing \(N\) in the discrete model (symbols) with default parameters converge to that for the continuum limit (solid blue). The insets depict solution profiles for \(\tilde{R}_{b}=0.025\mu\)m and \(0.05\mu\)m. Panels on the right show the magnitude of the displacement in the undeformed configuration in the continuum (b) and discrete (c) model (the latter visualized as a scatter plot).

Note that equations (18) and (19) are decoupled. Since the constitutive law for the force in the FS is always monotonically increasing as a function of end-to-end distance (i.e. \(\mathcal{F}^{\prime}(\xi)>0\)), we can divide both equations by \(\xi\mathcal{F}^{\prime}(\xi)\) to obtain \[\hat{x}_{XX}+\omega\hat{x}_{YY}=0, \tag{20}\] \[\omega\hat{y}_{XX}+\hat{y}_{YY}=0, \tag{21}\] where \(\omega:=\mathcal{F}(\xi)/(\xi\mathcal{F}^{\prime}(\xi))>0\). For our particular FS constitutive law, \(\mathcal{F}=r-1\). These equations are subject to the boundary conditions \[\hat{x}=\hat{y}=0, \tag{22}\] evaluated on the outer boundary of the domain.

Figure 3: In panel (a), discrete (symbols) and continuum (lines) force-displacement graphs are plotted for default model parameters and for pulling angles 0 (green), \(\pi/12\) (black), \(\pi/6\) (blue) and \(\pi/4\) (red) radians. As the resulting curves lie very close to one another for both models (and any fixed \(\varphi_{*}\)), to make the differences between various pulling angles visible, we zoom onto the maximum bead displacement in panel (b).
Note that the force-displacement graphs for \(\varphi_{**}\in(\pi/4,\pi/2)\) will mirror those for \(\varphi_{*}=\pi/2-\varphi_{**}\) due to the square shape of the macroscopic domain; in other words, due to the symmetry upon swapping \(X\) and \(Y\). Panels (c) and (e) show the discrete solution profiles (zoomed in onto the bead) at the largest displacement \(\tilde{R}_{b}=\tilde{R}=0.05\mu\)m for two extreme values of the pulling angle, \(\varphi_{*}=0\) (c) and \(\pi/4\) (e) radians - the response is stiffest when one pulls in the direction of one of the two filament families and softest when pulling along the diagonal. The corresponding principal stresses and directions of the continuum stress tensor (\(\mathbf{S}^{T}\)) are plotted using ellipses at selected points near the bead in panels (d) and (f). Note that the continuum results are plotted using the undeformed variables with the corresponding pre-stress shown via red crossheads inside circles located at the top, that the green arrows indicate the direction of the bead’s motion, and that the principal stresses were all normalized with respect to the same value, chosen so that the ellipses do not overlap yet are large enough to be clearly seen.

Similarly, on the boundary of the bead (a circle of radius \(a\)) we impose, for any \(-\pi<\varphi\leq\pi\), that \[\hat{x}(a\cos\left(\varphi\right),a\sin\left(\varphi\right))=\cos\left(\varphi_{*}\right),\qquad\hat{y}(a\cos\left(\varphi\right),a\sin\left(\varphi\right))=\sin\left(\varphi_{*}\right). \tag{23}\] For our choice of FS constitutive law we deduce \(\omega=1-1/\xi<1\), which will be used in the elliptical transformation below (Figure 5). To the best of our knowledge it is not possible to solve (20)-(23) exactly. However, under the assumption \(a\ll 1\), it is possible to find an asymptotic approximation valid in the inner region (i.e. close to \(X^{2}+Y^{2}=a^{2}\)). This assumption can easily be justified, as the beads used in optical tweezers experiments are typically small compared to the cell size [26].

### 5.1 Solution in the limit \(a\ll 1\)

As the two equations are decoupled, we solve them separately. The technical details are presented in Supplementary Material (Section S5.2). The solution strategy for \(\hat{x}\) (the \(\hat{y}\) problem is dealt with analogously) is summarized in Figure 5: we study the outer problem (20) subject to the outer boundary conditions, together with the inner problem obtained by rescaling \((\bar{X},\bar{Y})=(X,Y)/a\), which localizes the problem to the neighbourhood of the bead (Figure 5a,b). In the inner region, we then need to transform the \(\bar{Y}\) coordinate to \(\bar{Z}=\bar{Y}/\sqrt{\omega}\), which transforms the governing equation into Laplace's equation on a (stretched) domain with an elliptical (inner) boundary (Figure 5c). Elliptical coordinates (S37) then allow us to transform this problem onto a semi-infinite strip while keeping the same governing equation, so that an analytical solution can be found easily (Figure 5d). Undetermined constants in the inner solution are obtained by transforming back to Cartesian coordinates, writing in outer variables and matching with the outer \(\hat{x}\).
Eventually, we conclude that the inner approximation (denoted with superscript \(I\)) is \[\hat{x}^{I}=\cos(\varphi_{*})\left\{1+\frac{2\cosh^{-1}((1-\omega)^{-\frac{1}{2}})-\ln\left(1-2q+2\sqrt{q^{2}-q}\right)}{2\ln\left(1/a\right)+\ln\left(4\omega/(1-\omega)\right)-2\cosh^{-1}((1-\omega)^{-\frac{1}{2}})}\right\}+O\left(a^{2}\right), \tag{24}\] where \[q(\bar{X},\bar{Y})=\frac{-\omega\bar{X}^{2}-\bar{Y}^{2}+(1-\omega)-\sqrt{(\omega\bar{X}^{2}+\bar{Y}^{2}-(1-\omega))^{2}+4(1-\omega)\omega\bar{X}^{2}}}{2(1-\omega)}. \tag{25}\]

Figure 5: Demonstration of key steps in the solution process using \(a=0.02\) and \(\omega=0.2\). Starting from the macroscale variables \((X,Y)\) (a) and assuming \(a\ll 1\), we rescale to the inner layer (b). Then, we stretch the \(\bar{Y}\) coordinate, by means of which we transform the governing equation into Laplace’s equation, which is to be solved subject to Dirichlet boundary conditions at an elliptical inner boundary in \((\bar{X},\bar{Z})\) (c). Using elliptical coordinates - note in panel (c) that the blue curves correspond to \(\mu=\) constant while the yellow ones to \(\nu=\) constant, with \(\mu=\cosh^{-1}\left((1-\omega)^{-1/2}\right)\approx 0.5\) representing the inner boundary - we can finally transform this non-trivial geometry into a rectangular one in \((\mu,\nu)\) while keeping the governing equation the same (d).

Similarly, to find an inner approximation for \(\hat{y}\), we first transform \(\bar{X}\) to \(\bar{W}=\bar{X}/\sqrt{\omega}\) and then use elliptical coordinates (S44), whereby we derive \[\hat{y}^{I}=\sin(\varphi_{*})\left\{1+\frac{2\cosh^{-1}((1-\omega)^{-\frac{1}{2}})-\ln\left(1-2q_{2}+2\sqrt{q_{2}^{2}-q_{2}}\right)}{2\ln\left(1/a\right)+\ln\left(4\omega/(1-\omega)\right)-2\cosh^{-1}((1-\omega)^{-\frac{1}{2}})}\right\}+O\left(a^{2}\right), \tag{26}\] where \[q_{2}(\bar{X},\bar{Y})=\frac{-\bar{X}^{2}-\omega\bar{Y}^{2}+(1-\omega)-\sqrt{(\bar{X}^{2}+\omega\bar{Y}^{2}-(1-\omega))^{2}+4(1-\omega)\omega\bar{Y}^{2}}}{2(1-\omega)}. \tag{27}\] Differentiating (24) and (26) with respect to \(X\) and \(Y\), we deduce leading-order approximations for the strain fields away from the bead (see Supplementary Material, Section S5.3).

### 5.2 Stress field and net force exerted on the bead

Substituting the small-deformations ansatz (17) into the stress tensor (11), we further expand using \(R_{b}\ll 1\) to obtain in the inner layer \[\tilde{\mathbf{S}}=\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\varepsilon_{c}\tilde{D}}\left\{\begin{pmatrix}\mathcal{F}(\xi)&0\\ 0&\mathcal{F}(\xi)\end{pmatrix}+R_{b}\begin{pmatrix}\xi\mathcal{F}^{\prime}(\xi)\hat{x}_{X}^{I}&\mathcal{F}(\xi)\hat{y}_{X}^{I}\\ \mathcal{F}(\xi)\hat{x}_{Y}^{I}&\xi\mathcal{F}^{\prime}(\xi)\hat{y}_{Y}^{I}\end{pmatrix}+O(R_{b}^{2})\right\}. \tag{28}\] As in the full continuum problem, the net force exerted on the bead is calculated by integrating \(\tilde{\mathbf{S}}^{T}\mathbf{N}\) over the boundary of the bead \(\bar{X}^{2}+\bar{Y}^{2}=1\), using the displacement profiles (24) and (26) in the inner layer.
Performing these calculations and using the constitutive law (9) (details are included in Supplementary Material, Section S5.4), we derive an analytical expression for the force response of the material to the bead being pulled through it, valid asymptotically (accurate up to \(O(a)\) error), of the form \[\tilde{\mathbf{F}}_{b}\approx-(\cos\left(\varphi_{*}\right),\sin\left(\varphi_{*}\right))\tilde{F}_{b}^{0}, \tag{29}\] where \[\tilde{F}_{b}^{0}=\frac{2\pi R_{b}/\varepsilon_{c}\sqrt{\mathcal{F}_{p}\left(1+\mathcal{F}_{p}\right)}}{\ln\left(2\left(\sqrt{\mathcal{F}_{p}(1+\mathcal{F}_{p})}-\mathcal{F}_{p}\right)/a\right)}\pi\tilde{Y}\tilde{b}_{c}^{2}. \tag{30}\] Note that this force on the bead is in the direction opposite to that of the pulling, as expected. Equation (30) elucidates how the force-displacement curve depends on the key model parameters, namely the filament pre-stress \(\mathcal{F}_{p}\), Young's modulus \(\tilde{Y}\) and radius \(\tilde{b}_{c}\), the mesh spacing \(\varepsilon_{c}\) and the bead radius \(a\). Finally, we deduce an analytical formula for the (dimensional) effective network stiffness \[\tilde{\mathcal{K}}=\frac{\mathrm{d}\tilde{F}_{b}}{\mathrm{d}\tilde{R}_{b}}\approx\frac{\tilde{F}_{b}^{0}}{\tilde{R}_{b}}=\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\tilde{R}_{c}}\frac{2\pi\sqrt{\mathcal{F}_{p}\left(1+\mathcal{F}_{p}\right)}}{\ln\left(2\left(\sqrt{\mathcal{F}_{p}(1+\mathcal{F}_{p})}-\mathcal{F}_{p}\right)/a\right)}. \tag{31}\] Note from the inset in panel (a) of Figure 4 that, for the default bead size, the analytical result is already in good qualitative agreement with the discrete and continuum models.
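Formulae (30) and (31) are inexpensive to evaluate. A short sketch (with the material values as illustrative placeholders for the defaults in Supplementary Section S2) might read:

```python
import numpy as np

def force_on_bead(Rb, a, Fp, eps_c=0.01, Y=9.0e8, b_c=5.0e-9):
    """Asymptotic net force magnitude (30); Rb, a, Fp dimensionless, output in N."""
    s = np.sqrt(Fp * (1.0 + Fp))
    return (2.0 * np.pi * Rb / eps_c) * s / np.log(2.0 * (s - Fp) / a) \
        * np.pi * Y * b_c ** 2

def network_stiffness(a, Fp, R_c=5.0e-8, Y=9.0e8, b_c=5.0e-9):
    """Effective network stiffness (31), in N per metre of bead displacement."""
    s = np.sqrt(Fp * (1.0 + Fp))
    return (np.pi * Y * b_c ** 2 / R_c) * 2.0 * np.pi * s / np.log(2.0 * (s - Fp) / a)

for a in (0.1, 0.05, 0.025):   # logarithmic, not power-law, dependence on a
    print(f"a = {a:5.3f}:  K = {network_stiffness(a, Fp=0.05):.3e} N/m")
```

Consistent with Figure 6(b), halving \(a\) changes the prediction only through the logarithm in the denominator.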
### 5.3 Dependence of force response on the bead size

In order to compare the discrete, continuum and analytical approaches, in Figure 6(a) we plot force-displacement curves for a number of values of \(a\). The discrete and continuum model results agree well for all considered values and, as expected, the larger the bead, the greater the force required for its transport. Moreover, as \(a\) decreases, our asymptotic result (30) approaches the simulation results of the continuum model. More specifically, as \(a\) is reduced from 0.1 to 0.025, the absolute (relative) approximation error at the maximum displacement (\(\tilde{R}_{b}=0.05\mu\)m) decreases from roughly 28 nN to 8 nN. Figure 6(b) confirms the increasing agreement between the direct numerical simulations of the continuum model and our analytical approximation as \(a\) is further reduced. When plotted using logarithmic scales on both axes, the continuum model predictions do not collapse onto a straight line, indicating that the net force does not scale with \(a\) according to a power law but behaves in a logarithmic manner instead (c.f. Equation (30)). Note that the continuum and analytical results almost overlap for \(a=1/400\). The discrete solution profiles at the maximum displacement \(\tilde{R}_{b}=0.05\mu\)m for varying bead radius \(a\) are presented in Figures 6(c,d,e). With decreasing \(a\), the number of FSs exerting force on the bead decreases linearly, but their individual stretches (and hence forces) are larger. In summary, this figure confirms that the discrete and continuum predictions converge to the analytical formula (30), thus establishing it as a useful predictor of the net force exerted on a small bead.

Figure 6: (a) Force-displacement curves for default model parameters and the bead radius equal to twice (\(0.5\mu\)m; green) and half (\(0.125\mu\)m; red) the default value (\(0.25\mu\)m; blue), with panels (c), (e) and (d) showing the corresponding solution profiles at the maximum displacement, \(\tilde{R}_{b}=0.05\mu\)m. Panel (b) demonstrates the convergence of the continuum simulations onto the prediction of the \(a\ll 1\) asymptotics.

## 6 Discussion

In this paper we have developed a multiscale framework for modelling the mechanical response of the eukaryotic cell cytoskeleton to the internal motion of a small bead or organelle, mimicking recent rheological tests using optical tweezers [26]. In particular, we have developed a discrete model of the cell cytoskeleton by assuming a planar regular square grid of cytoskeletal filaments, using a microscale constitutive law for the mechanical response of each filament segment (Figure 1) [35]. This model is highly idealized, ignoring the complex irregular geometry of the cytoskeletal network, including three-dimensional effects, and representing the structure by just one type of filament (in this case the intermediate filament vimentin). However, the simplicity of our framework allows a rational upscaling of the discrete model, from which we can construct a macroscale continuum model which encodes the microscale properties of the individual filaments. This continuum model provides an excellent match to discrete simulations across the parameter space at a fraction of the computational cost. Furthermore, in the limit of small bead displacements the continuum model can be solved asymptotically for small bead size by stretching the geometry of the (bead) boundary region, transforming to elliptical coordinates and matching with the outer region (Figure 5), from which it is possible to construct a closed-form expression (30) for the net force acting on the bead as a function of its size, the Young's modulus and radius of the filaments, the angle at which the bead is pulled through the network relative to the filaments, the network pre-stress and its spacing. In future, expression (30) could in principle be used to infer an estimate of the microscale filament pre-stress from macroscale force-displacement data.

The option of having both discrete and continuum formulations allows us to consider a variety of sizes of transported objects. For example, cell organelles are often much larger than the mesh size, so that the continuum description of the cytoskeleton is justified and computationally inexpensive (and one can make use of our analytical result (30)). Conversely, discrete simulations without a hole would form an appropriate model for the transport of small cytoplasmic molecules, which are usually smaller than the mesh spacing [26]. A unified picture emerges from solving these discrete, continuum and analytical models: the system predicts an approximately linear relationship between the force on the bead and its displacement, and the gradient of this curve provides an estimate of the network stiffness (Figure 2). In particular, we show that this network stiffness is approximately independent of the angle at which the bead is pulled through the structure (Figure 3), consistent with the optical tweezers experiments [26].
The net force \(\tilde{F}_{b}\) increases sublinearly with the filament pre-stress across the studied range, although the deviation from linear behaviour is small (Figure 4), and decreases in a logarithmic manner (\(F_{b}\propto\left(\ln\left(1/a\right)+\text{const}\right)^{-1}\)) as the radius of the bead (\(a\)) is reduced (Figure 6). We note that a linear increase in network stiffness with increasing pre-stress is found in tensegrity studies of cell mechanics, even though such linearity is typically established under bulk (shearing) deformations as opposed to the local perturbations studied here [38]. Numerical simulations in the absence of pre-stress take significantly longer than their pre-stretched counterparts. Initially stress-free networks thus appear to be the borderline case beyond which (for pre-compressed filaments) neither the discrete (MATLAB) solver nor the continuum (FEniCS) solver converges. By analogy with the literature on central-force networks, we therefore expect that the problem with an initially stress-free network suffers from ill-posedness issues associated with so-called stiffness percolation (positive elastic modulus at zero strain) [46].

The present work has also restricted attention to quasi-static deformations. However, eukaryotic cells are known to exhibit a complicated rheology involving additional dissipative factors such as transient crosslink binding/unbinding, sliding and unfolding, giving rise to viscoelastic behaviour at the macroscale [32, 33, 59, 31]. Furthermore, in its current form the model neglects the mechanical role of the cytosol fluid in which the filament network is immersed [37]. Future work will extend this formulation to include these additional features, modelling the cell as a poro-visco-elastic continuum, allowing exploration of how these different mechanical responses manifest in different cell types. This discrete-to-continuum modelling approach is not restricted to cytoskeletal networks and could similarly be applied to other crosslinked networks of semi-flexible filaments such as collagen [52]. Furthermore, our approach could be modified to model cells migrating through (and interacting with) the extra-cellular matrix [45, 29, 55].

_Data accessibility:_ This article has no experimental data. Numerical scripts for the discrete model were written in MATLAB version R2021a, those solving the continuum model were written in Python using FEniCS version 2019.2.0.dev0, and can be accessed at [http://dx.doi.org/10.5525/gla.researchdata.1443](http://dx.doi.org/10.5525/gla.researchdata.1443)

_Funding:_ J.K., N.A.H., X.Y.L. and P.S.S. acknowledge funding from EPSRC grant no. EP/S030875/1.

_Acknowledgements:_ We thank Mr. Gordon McNicol, Drs. Namshad Thekkethil and Yangkun Du (University of Glasgow) and Profs. Ming Guo and Roger Kamm (MIT) for valuable discussions.

## References

* [1] Wylie W Ahmed and Timo Betz. Dynamic cross-links tune the solid-fluid behavior of living cells. _Proc Natl Acad Sci U S A_, 112(21):6527-6528, 2015.
* [2] Bruce Alberts. _Molecular biology of the cell_. WW Norton & Company, 2017.
* [3] Ellen M Arruda and Mary C Boyce. A three-dimensional constitutive model for the large stretch behavior of rubber elastic materials. _J Mech Phys Solids_, 41(2):389-412, 1993.
* [4] Roxanna G Barry, Nicholas A Hill, and Peter S Stewart. Continuum soft tissue models from upscaling of arrays of hyperelastic cells. _Proc R Soc Lond A Math Phys Sci_, 478(2266):20220065, 2022.
* [5] Estelle Berthier, Haiqian Yang, Ming Guo, Pierre Ronceray, and Chase P Broedersz. Nonlinear mechanosensation in fiber networks. _arXiv preprint arXiv:2208.06328_, 2022.
* [6] Jamie R Blundell and Eugene M Terentjev. The influence of disorder on deformations in semiflexible networks. _Proc R Soc Lond A Math Phys Sci_, 467(2132):2330-2349, 2011.
* [7] JR Blundell and EM Terentjev. Stretching semiflexible filaments and their networks. _Macromolecules_, 42(14):5388-5394, 2009.
* [8] Clifford P Brangwynne, Frederick C MacKintosh, Sanjay Kumar, Nicholas A Geisse, Jennifer Talbot, L Mahadevan, Kevin K Parker, Donald E Ingber, and David A Weitz. Microtubules can bear enhanced compressive loads in living cells because of lateral reinforcement. _J Cell Biol_, 173(5):733-741, 2006.
* [9] Chase P Broedersz, Xiaoming Mao, Tom C Lubensky, and Frederick C MacKintosh. Criticality and isostaticity in fibre networks. _Nat Phys_, 7(12):983-988, 2011.
* [10] CP Broedersz and FC MacKintosh. Molecular motors stiffen non-affine semiflexible polymer networks. _Soft Matter_, 7(7):3186-3191, 2011.
* [11] CP Broedersz, M Sheinman, and FC MacKintosh. Filament-length-controlled elasticity in 3D fiber networks. _Phys Rev Lett_, 108(7):078102, 2012.
* [12] CP Broedersz, C Storm, and FC MacKintosh. Nonlinear elasticity of composite networks of stiff biopolymers with flexible linkers. _Phys Rev Lett_, 101(11):118103, 2008.
* [13] Preethi L Chandran and Victor H Barocas. Affine versus non-affine fibril kinematics in collagen networks: theoretical studies of network behavior. _J Biomech Eng_, 2006.
* [14] Guillaume T Charras and Mike A Horton. Single cell mechanotransduction and its modulation analyzed by atomic force microscope indentation. _Biophys J_, 82(6):2970-2981, 2002.
* [15] Mark F Coughlin and Dimitrije Stamenovic. A prestressed cable network model of the adherent cell cytoskeleton. _Biophys J_, 84(2):1328-1336, 2003.
* [16] Ben Fabry, Geoffrey N Maksym, James P Butler, Michael Glogauer, Daniel Navajas, and Jeffrey J Fredberg. Scaling the microrheology of living cells. _Phys Rev Lett_, 87(14):148102, 2001.
* [17] Paul J Flory and John Rehner Jr. Statistical mechanics of cross-linked polymer networks I. Rubberlike elasticity. _J Chem Phys_, 11(11):512-520, 1943.
* [18] Yu Long Han, Pierre Ronceray, Guoqiang Xu, Andrea Malandrino, Roger D Kamm, Martin Lenz, Chase P Broedersz, and Ming Guo. Cell contraction induces long-ranged stress stiffening in the extracellular matrix. _Proc Natl Acad Sci U S A_, 115(16):4075-4080, 2018.
* [19] Daniel Ch Haspinger, Sandra Klinge, and Gerhard A Holzapfel. Numerical analysis of the impact of cytoskeletal actin filament density alterations onto the diffusive vesicle-mediated cell transport. _PLoS Comput Biol_, 17(5):e1008784, 2021.
* [20] DA Head, AJ Levine, and FC MacKintosh. Distinct regimes of elastic response and deformation modes of cross-linked cytoskeletal and semiflexible polymer networks. _Phys Rev E_, 68(6):061907, 2003.
* [21] DA Head, AJ Levine, and FC MacKintosh. Mechanical response of semiflexible networks to localized perturbations. _Phys Rev E_, 72(6):061914, 2005.
* [22] Claus Heussinger, Boris Schaefer, and Erwin Frey. Nonaffine rubber elasticity for stiff polymer networks. _Phys Rev E_, 76(3):031906, 2007.
* [23] Gerhard A Holzapfel and Ray W Ogden. On the bending and stretching elasticity of biopolymer filaments. _J Elast_, 104(1-2):319-342, 2011.
* [24] Gerhard A Holzapfel and Ray W Ogden. Elasticity of biopolymer filaments. _Acta Biomater_, 9(7):7320-7325, 2013.
* [25] Gerhard A Holzapfel, Michael J Unterberger, and Ray W Ogden. An affine continuum mechanical model for cross-linked f-actin networks with compliant linker proteins. _J Mech Behav Biomed Mater_, 38:78-90, 2014.
* [26] Jiliang Hu, Somaye Jafari, Yulong Han, Alan J Grodzinsky, Shengqiang Cai, and Ming Guo. Size- and speed-dependent mechanical behavior in living mammalian cytoplasm. _Proc Natl Acad Sci U S A_, 114(36):9529-9534, 2017.
* [27] Jiliang Hu, Yiwei Li, Yukun Hao, Tianqi Zheng, Satish K Gupta, German Alberto Parada, Huayin Wu, Shaoting Lin, Shida Wang, Xuanhe Zhao, et al. High stretchability, strength, and toughness of living cells enabled by hyperelastic vimentin intermediate filaments. _Proc Natl Acad Sci U S A_, 116(35):17175-17180, 2019.
* [28] Donald E Ingber. Tensegrity I. Cell structure and hierarchical systems biology. _J Cell Sci_, 116(7):1157-1173, 2003.
* [29] Min-Cheol Kim, Yaron R Silberberg, Rohan Abeyaratne, Roger D Kamm, and H Harry Asada. Computational modeling of three-dimensional ECM-rigidity sensing to guide directed cell migration. _Proc Natl Acad Sci U S A_, 115(3):E390-E399, 2018.
* [30] Taeyoon Kim, W Hwang, and RD Kamm. Computational analysis of a cross-linked actin-like network. _Exp Mech_, 49(1):91-104, 2009.
* [31] Hyungsuk Lee, Benjamin Pelz, Jorge M Ferrer, Taeyoon Kim, Matthew J Lang, and Roger D Kamm. Cytoskeletal deformation at high strains and the role of cross-link unfolding or unbinding. _Cell Mol Bioeng_, 2(1):28-38, 2009.
* [32] O Lieleg, KM Schmoller, Mireille Maria Anna Elisabeth Claessens, and Andreas R Bausch. Cytoskeletal polymer networks: viscoelastic properties are determined by the microscopic interaction potential of cross-links. _Biophys J_, 96(11):4725-4732, 2009.
* [33] Oliver Lieleg, Mireille MAE Claessens, and Andreas R Bausch. Structure and dynamics of cross-linked actin networks. _Soft Matter_, 6(2):218-225, 2010.
* [34] Anders Logg, Kent-Andre Mardal, and Garth Wells. _Automated solution of differential equations by the finite element method: The FEniCS book_, volume 84. Springer Science & Business Media, 2012.
* [35] Fanlong Meng and Eugene M Terentjev. Theory of semiflexible filaments and networks. _Polymers_, 9(2):52, 2017.
* [36] C Miehe, Serdar Goktepe, and F Lulei. A micro-macro approach to rubber-like materials - Part I: the non-affine micro-sphere model of rubber elasticity. _J Mech Phys Solids_, 52(11):2617-2660, 2004.
* [37] Emad Moeendarbary, Leo Valon, Marco Fritzsche, Andrew R Harris, Dale A Moulding, Adrian J Thrasher, Eleanor Stride, L Mahadevan, and Guillaume T Charras. The cytoplasm of living cells behaves as a poroelastic material. _Nat Mater_, 12(3):253-261, 2013.
* [38] Mohammad RK Mofrad and Roger D Kamm. _Cytoskeletal mechanics: models and measurements in cell mechanics_. Cambridge University Press, 2006.
* [39] Alex Mogilner and Leah Edelstein-Keshet. Regulation of actin dynamics in rapidly moving cells: a quantitative analysis. _Biophys J_, 83(3):1237-1258, 2002.
* [40] Kei W Muller, Anna M Birzle, and Wolfgang A Wall. Beam finite-element model of a molecular motor for the simulation of active fibre networks. _Proc R Soc Lond A Math Phys Sci_, 472(2185):20150555, 2016.
* [41] Kei W Muller, Christian J Cyron, and Wolfgang A Wall. Computational analysis of morphologies and phase transitions of cross-linked, semi-flexible polymer networks. _Proc R Soc Lond A Math Phys Sci_, 471(2182):20150332, 2015.
* [42] Dietmar B Oelz. Quasi-steady-state reduction of a model for cytoplasmic transport of secretory vesicles in stimulated chromaffin cells. _J Math Biol_, 82(4):1-25, 2021.
* [43] HG Othmer. Eukaryotic cell dynamics from crawlers to swimmers. _Wiley Interdiscip Rev Comput Mol Sci_, 9(1):e1376, 2019.
* [44] Alison E Patteson, Robert J Carroll, Daniel V Iwamoto, and Paul A Janmey. The vimentin cytoskeleton: when polymer physics meets cell biology. _Phys Biol_, 18(1):011001, 2020.
* [45] L Preziosi and M Scianna. Mathematical models of the interaction of cells and cell aggregates with the extracellular matrix. In _Mathematical models and methods for living systems_, pages 131-210. Springer, 2016.
* [46] Robyn H Pritchard, Yan Yan Sherry Huang, and Eugene M Terentjev. Mechanics of biological networks: from the cell cytoskeleton to connective tissue. _Soft Matter_, 10(12):1864-1884, 2014.
* [47] Masaaki Sato, N Ohshima, and RM Nerem. Viscoelastic properties of cultured porcine aortic endothelial cells exposed to shear stress. _J Biomech_, 29(4):461-467, 1996.
* [48] Jean Carlos Serrano, Satish Kumar Gupta, Roger D Kamm, and Ming Guo. In pursuit of designing multicellular engineered living systems: A fluid mechanical perspective. _Annu Rev Fluid Mech_, 53(1):411-437, 2021.
* [49] Anthony James Merrill Spencer. _Continuum theory of the mechanics of fibre-reinforced composites_, volume 282. Springer, 2014.
* [50] Dimitrije Stamenovic, Bela Suki, Ben Fabry, Ning Wang, Jeffrey J Fredberg, and Julie E Buy. Rheology of airway smooth muscle cells is associated with cytoskeletal contractile stress. _J Appl Physiol_, 96(5):1600-1605, 2004.
* [51] Cornelis Storm, Jennifer J Pastore, Fred C MacKintosh, Tom C Lubensky, and Paul A Janmey. Nonlinear elasticity in biological gels. _Nature_, 435(7039):191-194, 2005.
* [52] Alberto Stracuzzi, Ben R Britt, Edoardo Mazza, and Alexander E Ehret. Risky interpretations across the length scales: continuum vs. discrete models for soft tissue mechanobiology. _Biomech Model Mechanobiol_, 21(2):433-454, 2022.
* [53] Jean Paul Thiery, Herve Acloque, Ruby YJ Huang, and M Angela Nieto. Epithelial-mesenchymal transitions in development and disease. _Cell_, 139(5):871-890, 2009.
* [54] LRG Treloar and G Riding. A non-Gaussian theory for rubber in biaxial strain. I. Mechanical properties. _Proc R Soc Lond A Math Phys Sci_, 369(1737):261-280, 1979.
* [55] Erika Tsingos, Bente Hilde Bakker, Koen AE Keijzer, Hermen Jan Hupkes, and Roeland MH Merks. Hybrid cellular Potts and bead-spring modeling of cells in fibrous extracellular matrix. _Biophys J_, 2023.
* [56] Michael J Unterberger and Gerhard A Holzapfel. Advances in the mechanical modeling of filamentous actin and its cross-linked networks on multiple scales. _Biomech Model Mechanobiol_, 13(6):1155-1174, 2014.
* [57] Michael J Unterberger, Kurt M Schmoller, Andreas R Bausch, and Gerhard A Holzapfel. A new approach to model cross-linked actin networks: multi-scale continuum formulation and computational analysis. _J Mech Behav Biomed Mater_, 22:95-114, 2013.
* [58] Michael J Unterberger, Kurt M Schmoller, Christine Wurm, Andreas R Bausch, and Gerhard A Holzapfel. Viscoelasticity of cross-linked actin networks: Experimental tests, mechanical modeling and finite-element analysis. _Acta Biomater_, 9(7):7343-7353, 2013.
* [59] Hans Van Oosterwyck, Jose Felix Rodriguez, Manuel Doblare, and Jose Manuel Garcia Aznar. An affine micro-sphere-based constitutive model, accounting for junctional sliding, can capture f-actin network mechanics. _Comput Methods Biomech Biomed Engin_, 16(9):1002-1012, 2013.
# Discrete-to-continuum models of pre-stressed cytoskeletal filament networks

J. Kory, N. A. Hill, X. Luo, P. S. Stewart

School of Mathematics and Statistics, University of Glasgow, Mathematics and Statistics Building, University Place, Glasgow G12 8QQ, UK

###### Abstract

This Supplementary Material is organized as follows. Section S1 summarizes the notation adopted throughout this article. Model parameters, together with their default values, are listed in Section S2. Section S3 contains detailed derivations pertaining to the discrete model, and Section S4 to the upscaling and the resulting continuum model. Finally, Section S5 presents calculations relating to the small-deformations and small-bead limits, including that of the net force acting on the bead.

## S1 Summary of notation

Below we list the notation adopted in this work, stating the symbols and their definitions. We note that this list is not exhaustive, but with its help one can easily deduce all notation adopted in this work. For example, the mesh spacing representative of the cytoskeleton \(\varepsilon_{c}\) is obtained by adding the subscript \(c\) to the mesh spacing \(\varepsilon\).
**General**

* \(FS\): Abbreviation for filament segment
* \(CL\): Abbreviation for crosslink
* \(\sim\): Dimensional variable (above the symbol)
* \(i,j\): Indices of the discrete network (as subscript)
* \(kl\): Indices of tensors attaining value 1 or 2 for the two spatial dimensions (as subscript)
* \(c\): Value representative of the cytoskeleton (as subscript)
* \(I/O\): Pertaining to inner/outer region (as superscript)

**Variables and Functions**

* \(\mathbf{X}=(X,Y)\): Initial configuration variables
* \(\mathbf{x}=(x,y)\): Deformed configuration variables
* \((u,v)\): Components of the displacement field
* \((\hat{x},\hat{y})\): Small-deformations variables
* \((\bar{X},\bar{Y})\): Initial configuration variables rescaled to the bead boundary region
* \(\bar{Z}\): Stretched \(\bar{Y}\) coordinate
* \((\mu,\nu)\): Elliptical coordinates
* \(\tilde{r}\): End-to-end distance (straight-line distance between two ends of a filament segment)
* \(r\): End-to-end distance normalized with respect to the stress-free contour length
* \(f\): Axial force in a filament segment (scales with \(N\))
* \(\mathcal{F}\): Axial force in a filament segment (\(N=N_{c}\))
* \(e\): Energy stored in a filament segment (scales with \(N\))
* \(\mathcal{E}\): Energy stored in a filament segment (\(N=N_{c}\))
* \(\varphi\): Polar angle

**Parameters of the initial configuration**

* \(\tilde{D}\): Domain length
* \(N\): Number of filament segments belonging to one filament
* \(\tilde{R}\): Initial mesh spacing
* \(\varepsilon\): Dimensionless initial mesh spacing
* \(\tilde{L}\): Stress-free contour length of a filament
* \(\tilde{\Lambda}\): Stress-free contour length of a filament segment
* \(\xi\): Initial mesh spacing normalized with respect to the stress-free contour length
* \(f_{p}\): Force in a filament segment due to pre-stress (scales with \(N\))
* \(\mathcal{F}_{p}\): Force in a filament segment due to pre-stress (\(N=N_{c}\))
* \(\omega\): Dimensionless parameter of the small-deformations problem
* \(\tilde{\sigma}_{p}\): Macroscale pre-stress
* \(a\): Bead radius

**Material properties of the filaments**

* \(\tilde{Y}\): Young's modulus of a filament
* \(\tilde{b}\): Radius of a filament
* \(\tilde{k}_{B}\): Boltzmann constant
* \(\tilde{T}\): Absolute temperature
* \(\tilde{\Lambda}_{p}\): Persistence length of a filament
* \(\tilde{\mathcal{F}}_{\text{entropic}}\): Entropic force
* \(\tilde{\mathcal{F}}_{\text{enthalpic}}\): Enthalpic force
* \(\mathcal{T}_{1}\): Ratio of the entropic force to the enthalpic force
* \(\mathcal{T}_{2}\): One half of the ratio of the persistence length to the initial end-to-end distance

**Deformation**

* \(\varphi_{*}\): Pulling angle
* \(R_{b}\): Magnitude of the bead displacement
* \(F_{b}\): Magnitude of the net force acting on the bead
* \(\mathcal{K}\): Scalar measure of network stiffness
* \(\hat{\boldsymbol{r}}_{i\pm\frac{1}{2},j}/\hat{\boldsymbol{r}}_{i,j\pm\frac{1}{2}}\): Unit vectors pointing in the directions of filament segments adjacent to node \((i,j)\)
* \(l_{i\pm\frac{1}{2},j}/l_{i,j\pm\frac{1}{2}}\): Deformed lengths of the filament segments
* \(\boldsymbol{I}\): Identity tensor
* \(\boldsymbol{F}\): Deformation gradient tensor
* \(\mathbf{C}\): Right Cauchy-Green deformation tensor
* \(I_{4/6}\left(\mathbf{C}\right)\): Invariants of the right Cauchy-Green deformation tensor
* \(\mathbf{S}\): Nominal stress tensor
* \(W\): Strain energy density

## S2 Model parameters

### Summary of the dimensional and dimensionless models

The discrete model is only representative of a cytoskeletal mesh with spacing \(\tilde{R}_{c}\) provided one takes \(N=N_{c}=1/\varepsilon_{c}=\tilde{D}/\tilde{R}_{c}\); increasing \(N\)
only facilitates convergence to the continuum model. Subject to the microscale constitutive law (3), the dimensional model is then governed by ten parameters: three parameters describing the initial geometry with the bead (\(\tilde{D}\), \(\tilde{R}_{c}\) and \(\tilde{a}\)), four parameters describing the mechanical properties of the filaments under consideration (\(\tilde{Y}\), \(\tilde{b}_{c}\), \(\tilde{\Lambda}_{p}\) and \(\tilde{\mathcal{F}}_{p}\)), two parameters governing the bead displacement (\(\tilde{R}_{b}\) and \(\varphi_{*}\)), and the temperature \(\tilde{T}\). The remaining parameters can be deduced from these. Note in particular that the stress-free contour length \(\tilde{\Lambda}_{c}(\tilde{\mathcal{F}}_{p})\) can be found numerically upon substituting \(\tilde{b}=\tilde{b}_{c}\), \(\tilde{r}=\tilde{R}_{c}\) and \(\tilde{f}=\tilde{\mathcal{F}}_{p}\) (provided \(N=N_{c}\)) into (3); alternatively, one can use the explicit approximation (S15) derived in Section S3.3. Subject to the microscale constitutive law (5), the dimensionless model (for \(N=1/\varepsilon_{c}\)) is governed by seven parameters, \(\varepsilon_{c}\), \(a\), \(\mathcal{T}_{1}\), \(\mathcal{T}_{2}\), \(\mathcal{F}_{p}\), \(R_{b}\) and \(\varphi_{*}\), and \(\xi\) must be found by numerically solving (7). Alternatively, one can impose the explicit approximations (9) (vimentin) or (S13) (actin; \(\xi\) is then given by (S12)) for the microscale constitutive law, in which case the dimensionless model (\(N=1/\varepsilon_{c}\)) needs five parameters for both actin and vimentin, namely \(\varepsilon_{c}\), \(a\), \(\mathcal{F}_{p}\), \(R_{b}\) and \(\varphi_{*}\), and actin requires one more parameter (\(\mathcal{T}_{2}\)) for full specification. Recall that in each of the above cases, it is assumed that filaments cannot withstand any compressive loads.

### Default dimensional parameters

Recall the default values \(\tilde{D}=5\,\mu\)m, \(\tilde{R}_{c}=0.05\,\mu\)m, \(N=(N_{c}=)\,100\) (estimated in Section 2.1), \(\varphi_{*}=\pi/6\) and \(0\leq\tilde{R}_{b}\leq\tilde{R}_{c}\) (Section 4). We further use as default values \(\tilde{T}=300\) K for the absolute temperature and \(\tilde{a}=0.25\,\mu\)m for the bead radius [7]. The standard value of the Boltzmann constant is \(\tilde{k}_{B}\approx 1.38\times 10^{-23}\,\)m\({}^{2}\) kg s\({}^{-2}\) K\({}^{-1}\). The mechanical behaviour of actin filaments subject to tension has been widely studied experimentally. The microscale constitutive law (3) has been shown to be equivalent to another constitutive model which in turn reproduced the experimental data well [2, 6]. The following estimates for the material parameters in (3) pertain to actin filaments _in vivo_: \(\tilde{Y}=2\) GPa, \(\tilde{\Lambda}_{p}=17\,\mu\)m and \(\tilde{b}=3.5\) nm [8]. Even though vimentin has only gained significant attention from the scientific community relatively recently, much is already known about its tensile behaviour. Using atomistic simulations, three distinct regimes in the force-extension diagram of a single vimentin dimer under tension were uncovered and the underlying changes in its molecular structure identified [10]. The tensile behaviour of single vimentin filaments was measured [1], confirming the three distinct regimes reported previously [10] and showing good agreement between optical-trap and atomic-force-microscopy experiments. Unfortunately, it is very unclear how these results could be translated to vimentin FSs of various contour lengths.
For simplicity, we thus assume that the tensile response of a single vimentin FS can be modelled using Equation (3) with an appropriate choice of model parameters. Vimentin filaments are about 10 nm in diameter (\(\tilde{b}_{c}=5\) nm) and their Young's modulus was measured to be about \(\tilde{Y}=0.9\) GPa [5]. The persistence length of vimentin is approximately \(\tilde{\Lambda}_{p}=1\,\mu\)m [8]. To allow as large deformations as possible without breaking the actin filaments, we take the tensile strength \(\tilde{\mathcal{F}}_{\max}\) of actin to be the upper bound of the values found in the literature, i.e. 600 pN [13], and the value 8 nN provides us with a lower bound for the tensile strength of vimentin [1]. The only unknown parameter is then the value of the force due to pre-stress \(\tilde{\mathcal{F}}_{p}\). As discussed in the main body of this paper, we have not managed to find any estimate for the microscale force due to pre-stress (and, equivalently, for the pre-stretch \(\xi\) or the stress-free contour length \(\tilde{\Lambda}\)) representative of cells _in vivo_. Therefore, \(\tilde{\mathcal{F}}_{p}\) will here be considered a free parameter, with the default value equal to one half of the tensile-strength estimates from above. For completeness we also list the default values for the mechanical parameters of the dimensionless model,

\[\mathcal{T}_{1}^{\mathrm{actin}}\approx 1\times 10^{-9}\qquad\mathcal{T}_{2}^{\mathrm{actin}}\approx 170\qquad\mathcal{T}_{1}^{\mathrm{vimentin}}\approx 1.9\times 10^{-8}\qquad\mathcal{T}_{2}^{\mathrm{vimentin}}\approx 10.\]

Table S1 lists key model parameters together with their default values.

## S3 Discrete model

### Stored elastic energy in undeformed and deformed configurations

The total (elastic) energy of the discrete network, denoted \(\tilde{e}_{T}\) (introducing the subscript \(T\) for total), is the sum of contributions due to axial filament stretching. First let us note that the microscale constitutive law (3) is parameterized by the stress-free contour length \(\tilde{\Lambda}\), which in turn scales as \(O(1/N)\) in the \(N\to\infty\) limit. Therefore, we simply write \(\tilde{f}=\tilde{f}(\tilde{r}/\tilde{\Lambda};N)\). We then express the elastic energy stored in an individual FS for general \(N\) as

\[\tilde{e}(r;N)=\int\limits_{\tilde{R}_{sf}}^{r\tilde{\Lambda}}\tilde{f}\left(\frac{\tilde{s}}{\tilde{\Lambda}};N\right)d\tilde{s},\] (S1)

where \(\tilde{R}_{sf}\) denotes the stress-free end-to-end distance. Defining the deformed lengths of the FSs between neighbouring CLs as

\[\tilde{l}_{i\pm\frac{1}{2},j}=\sqrt{\left(\tilde{x}_{i\pm 1,j}-\tilde{x}_{i,j}\right)^{2}+\left(\tilde{y}_{i\pm 1,j}-\tilde{y}_{i,j}\right)^{2}}\qquad\tilde{l}_{i,j\pm\frac{1}{2}}=\sqrt{\left(\tilde{x}_{i,j\pm 1}-\tilde{x}_{i,j}\right)^{2}+\left(\tilde{y}_{i,j\pm 1}-\tilde{y}_{i,j}\right)^{2}},\] (S2)

where the usage of the index \(\pm\frac{1}{2}\) arises naturally between any two neighbouring CLs, the total energy \(\tilde{e}_{T}\) can be obtained by summing up these energies for all FSs, i.e.

\[\begin{split}\tilde{e}_{T}=&\sum_{i=-N/2+1}^{N/2-1}\sum_{j=-N/2+1}^{N/2-1}\left[\tilde{e}\left(\frac{\tilde{l}_{i-\frac{1}{2},j}}{\tilde{\Lambda}};N\right)+\tilde{e}\left(\frac{\tilde{l}_{i,j-\frac{1}{2}}}{\tilde{\Lambda}};N\right)\right]+\\ &\sum_{j=-N/2+1}^{N/2-1}\tilde{e}\left(\frac{\tilde{l}_{\frac{N}{2}-\frac{1}{2},j}}{\tilde{\Lambda}};N\right)+\sum_{i=-N/2+1}^{N/2-1}\tilde{e}\left(\frac{\tilde{l}_{i,\frac{N}{2}-\frac{1}{2}}}{\tilde{\Lambda}};N\right).
\end{split}\] (S3)

We assume no slippage of CLs along the filaments, so that the stress-free contour lengths of every FS in the undeformed and the deformed configurations are equal to \(\tilde{\Lambda}\). Following Sections S3.2.1 and S3.2.2, we can write the axial forces as \(\tilde{f}(r;N)=N_{c}\tilde{\mathcal{F}}(r)/N\) and express the elastic energy stored in a single FS at the undeformed end-to-end distance \(\tilde{R}\) as

\[\tilde{e}(\xi;N)=\int\limits_{\tilde{R}_{sf}}^{\tilde{R}}\frac{N_{c}}{N}\tilde{\mathcal{F}}\left(\frac{\tilde{s}}{\tilde{\Lambda}}\right)d\tilde{s},\] (S4)

where \(\xi\) is defined in (2), and we have

\[\tilde{e}\left(\frac{\tilde{r}}{\tilde{\Lambda}};N\right)=\int\limits_{\tilde{R}_{sf}}^{\tilde{r}}\frac{N_{c}}{N}\tilde{\mathcal{F}}\left(\frac{\tilde{s}}{\tilde{\Lambda}}\right)d\tilde{s}=\tilde{e}(\xi;N)+\int\limits_{\tilde{R}}^{\tilde{r}}\frac{N_{c}}{N}\tilde{\mathcal{F}}\left(\frac{\tilde{s}}{\tilde{\Lambda}}\right)d\tilde{s}.\] (S5)

To derive the strain energy density in Section S4, it is also useful to introduce the elastically stored energy at an arbitrary CL \((i,j)\) as

\[\tilde{e}_{i,j}(N)=\frac{1}{2}\left(\tilde{e}\left(\frac{\tilde{l}_{i-\frac{1}{2},j}}{\tilde{\Lambda}};N\right)+\tilde{e}\left(\frac{\tilde{l}_{i+\frac{1}{2},j}}{\tilde{\Lambda}};N\right)+\tilde{e}\left(\frac{\tilde{l}_{i,j-\frac{1}{2}}}{\tilde{\Lambda}};N\right)+\tilde{e}\left(\frac{\tilde{l}_{i,j+\frac{1}{2}}}{\tilde{\Lambda}};N\right)\right),\] (S6)

where the factor \(1/2\) accounts for the fact that the tensile energy stored in any FS corresponds to two CLs rather than just one. We further define \(\tilde{D}_{sf}=N\tilde{R}_{sf}\). For any \(r=\tilde{r}/\tilde{\Lambda}\) we can then, using integration by substitution \((\tilde{s}/\tilde{\Lambda}=N\tilde{s}/\tilde{L}=t)\), write

\[\tilde{e}(r;N)=\frac{N_{c}}{N}\int\limits_{\tilde{R}_{sf}}^{\tilde{\Lambda}r}\tilde{\mathcal{F}}\left(\frac{\tilde{s}}{\tilde{\Lambda}}\right)d\tilde{s}=\frac{\tilde{L}N_{c}}{N^{2}}\int\limits_{\tilde{D}_{sf}/\tilde{L}}^{r}\tilde{\mathcal{F}}(t)dt=\left(\frac{N_{c}}{N}\right)^{2}\tilde{\mathcal{E}}(r),\] (S7)

which defines the energy \(\tilde{\mathcal{E}}(r)\) stored in a FS in a situation representative of the cytoskeleton (\(N=N_{c}\)). Defining \(\xi_{sf}=\tilde{D}_{sf}/\tilde{L}\), this can be split into the energy due to pre-stress \(\tilde{\mathcal{E}}_{P}\) and that supplied with the deformation \(\tilde{\mathcal{E}}_{D}\) as

\[\tilde{\mathcal{E}}(r)=\frac{\tilde{D}}{\xi N_{c}}\int\limits_{\xi_{sf}}^{\xi}\tilde{\mathcal{F}}(t)dt+\frac{\tilde{D}}{\xi N_{c}}\int\limits_{\xi}^{r}\tilde{\mathcal{F}}(t)dt=\tilde{\mathcal{E}}_{P}+\tilde{\mathcal{E}}_{D}(r).\] (S8)

### Decreasing the mesh spacing

#### S3.2.1 Scaling geometric parameters with \(N\)

In Section 3 we upscale the discrete force balance into a continuum problem as \(N\to\infty\) (\(\tilde{\Lambda}/\tilde{L}\to 0\)). For the continuum limit to be a good approximation of the discrete model at the baseline setup representative of the cytoskeleton (i.e. using \(N=N_{c}=100\); for default values of all parameters see Section S2), we need to ensure that all model parameters are appropriately scaled as \(N\to\infty\). As \(N\) increases we have \(\tilde{R}=N_{c}/N\times\tilde{R}_{c}\) and \(\tilde{\Lambda}=N_{c}/N\times\tilde{\Lambda}_{c}\), where \(\tilde{R}_{c}\) and \(\tilde{\Lambda}_{c}\) are representative of the cytoskeleton. Note that the total length of the network increases without bounds as \(N\to\infty\).
Assuming constant density for the material of the filament, the total mass is a constant multiple of the total volume \(2(N-1)\tilde{L}\pi\tilde{b}^{2}\), and to keep this \(O(1)\) as \(N\to\infty\) we assume that \(\tilde{b}=\tilde{b}_{c}\sqrt{N_{c}/N}\), where \(\tilde{b}_{c}\) is a representative radius of the filament.

#### S3.2.2 Scaling forces (including those due to pre-stress) with \(N\)

Next we need to ensure that we get \(O(1)\) tensile pre-stress in the \(N\to\infty\) limit. Substituting the above scalings into (3) and switching to \(\varepsilon_{c}=1/N_{c}\), we get in the undeformed configuration

\[\frac{\tilde{D}}{\tilde{L}}=\left(1+\frac{N\varepsilon_{c}\tilde{f}_{p}}{\pi\tilde{Y}\tilde{b}_{c}^{2}}\right)\left(1-\sqrt{\frac{\tilde{k}_{B}\tilde{T}}{\pi\tilde{\Lambda}_{p}\left(\tilde{f}_{p}+\frac{\pi^{2}\tilde{k}_{B}\tilde{T}\tilde{\Lambda}_{p}N^{2}}{\tilde{L}^{2}}\right)}}\right).\] (S9)

Using the dimensionless force due to pre-stress \(f_{p}=\tilde{f}_{p}/(\pi\tilde{Y}\tilde{b}_{c}^{2})\) (as per Section 2.6), equation (S9) becomes

\[\frac{\tilde{D}}{\tilde{L}}=(1+\varepsilon_{c}Nf_{p})\left(1-\sqrt{\frac{\tilde{k}_{B}\tilde{T}}{\pi\tilde{\Lambda}_{p}\left(\pi\tilde{Y}\tilde{b}_{c}^{2}f_{p}+\frac{\pi^{2}\tilde{k}_{B}\tilde{T}\tilde{\Lambda}_{p}N^{2}}{\tilde{L}^{2}}\right)}}\right).\] (S10)

Deriving an explicit relationship for \(f_{p}(N)\) would be cumbersome, and the situation is further complicated by the fact that \(\tilde{L}\) depends on \(f_{p}\). However, in order to keep the right-hand side of (S10) \(O(1)\) in the \(N\to\infty\) limit, \(f_{p}\) must scale as \(1/N\) for large \(N\). We thus get \(f_{p}(N)=\mathcal{F}_{p}/(\varepsilon_{c}N)\) with \(\mathcal{F}_{p}=O(1)\), which ensures that the total elastic energy stored in the pre-stressed domain stays \(O(1)\) as \(N\to\infty\), and we arrive at a finite (and non-zero) pre-stress in the continuum limit. Similarly, we must have \(f(r;N)=\mathcal{F}(r)/(\varepsilon_{c}N)\). To demonstrate the central idea behind this scaling, the resulting force distributions for \(N=10\) and \(20\) (and otherwise default parameters for vimentin, as described in Section S2) are shown in Figure S1. Notice how the colorbar ranges vary with increasing \(N\), which reflects the force scaling. In other words, for the discrete simulations to converge onto an \(O(1)\) force response in the continuum (\(N\to\infty\)) limit, forces must scale as \(1/N\), and these forces are thus in the physiologically realistic range only for \(N=N_{c}\).

### Implicit and explicit microscale constitutive laws for vimentin and actin

#### S3.3.1 Explicit microscale constitutive laws for fixed \(N\)

We observe that \(\mathcal{T}_{1}\) is very small for the considered filaments: \(\mathcal{T}_{1}\approx 1.0\times 10^{-9}\) for actin and \(\mathcal{T}_{1}\approx 1.9\times 10^{-8}\) for vimentin. Substituting \(\mathcal{T}_{1}\ll 1\) into (7) we get (8), which provides an explicit relationship between the force due to pre-stress and the pre-stretch. While this formula provides a good approximation to the implicit formula (7) for vimentin (provided the pre-stress is not too small), a non-negligible gap exists in the approximation for actin; see Figure S2. Similarly, when we plot the microscale constitutive law (5) (using \(\xi\) corresponding to the default pre-stress as found numerically from (7)) and compare it with the \(\mathcal{T}_{1}=0\) approximation (9), we again observe good agreement for vimentin but a clear gap for actin (see Figure S3).
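The quoted magnitudes of \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) follow directly from the dimensional defaults of Section S2.2. A minimal Python sketch, assuming the groupings \(\mathcal{T}_{1}=\tilde{k}_{B}\tilde{T}/(\pi^{2}\tilde{Y}\tilde{\Lambda}_{p}\tilde{b}_{c}^{2})\) (consistent with the structure of (S9)-(S10)) and \(\mathcal{T}_{2}=\tilde{\Lambda}_{p}/(2\tilde{R}_{c})\) (one half of the persistence length over the mesh spacing, as in Section S1):

```python
import math

kB, T = 1.380649e-23, 300.0            # Boltzmann constant [J/K], temperature [K]
Rc = 0.05e-6                           # representative mesh spacing R_c [m]

# (Young's modulus [Pa], persistence length [m], filament radius [m])
filaments = {"actin":    (2.0e9, 17e-6, 3.5e-9),
             "vimentin": (0.9e9,  1e-6, 5.0e-9)}

for name, (Y, Lp, b) in filaments.items():
    T1 = kB * T / (math.pi**2 * Y * Lp * b**2)   # entropic/enthalpic force ratio
    T2 = Lp / (2.0 * Rc)                          # half ratio of persistence length to R_c
    print(f"{name:9s} T1 = {T1:.2e}, T2 = {T2:.0f}")
# reproduces the quoted orders: actin ~1e-9 and 170, vimentin ~1.9e-8 and 10
```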
The gaps for actin can be explained by a combination of the considered force range (the maximum considered value of \(\mathcal{F}\) is small for actin compared to vimentin, due to the small tensile strength of the former) and the size of \(\mathcal{T}_{2}\) (\(\approx 170\) for actin, which means that the spacing between neighbouring CLs is much shorter than the persistence length of actin). Having noticed the largeness of \(\mathcal{T}_{2}\), we (assuming \(\mathcal{T}_{1}=O(1)\)) substitute the ansatz

\[\xi(\mathcal{F}_{p};\mathcal{T}_{1},\mathcal{T}_{2},\varepsilon_{c},N)=\xi_{0}(\mathcal{F}_{p};\mathcal{T}_{1},\varepsilon_{c},N)+\frac{1}{\mathcal{T}_{2}}\xi_{1}(\mathcal{F}_{p};\mathcal{T}_{1},\varepsilon_{c},N)+O(\mathcal{T}_{2}^{-2})\] (S11)

into (7) and get

\[\xi_{0}+\frac{1}{\mathcal{T}_{2}}\xi_{1}+O(\mathcal{T}_{2}^{-2})=(1+\mathcal{F}_{p})\left(1-\sqrt{\frac{\mathcal{T}_{1}}{\mathcal{F}_{p}/(\varepsilon_{c}N)+4\pi^{3}\left(\varepsilon_{c}N\mathcal{T}_{2}\right)^{2}\mathcal{T}_{1}\left(\xi_{0}^{2}+O(\mathcal{T}_{2}^{-1})\right)}}\right).\]

At the leading order in \(\mathcal{T}_{2}\) (\(O(1)\)) we again get

\[\xi_{0}=1+\mathcal{F}_{p}\]

and at \(O(\mathcal{T}_{2}^{-1})\) we conclude

\[\xi_{1}=-\frac{1+\mathcal{F}_{p}}{2\pi^{3/2}\varepsilon_{c}N\xi_{0}^{2}}=-\frac{1}{2\pi^{3/2}\varepsilon_{c}N\left(1+\mathcal{F}_{p}\right)}.\]

Retaining the first two terms, the approximation reads

\[\xi=1+\mathcal{F}_{p}-\frac{1}{\mathcal{T}_{2}}\frac{1}{2\left(1+\mathcal{F}_{p}\right)\pi^{3/2}\varepsilon_{c}N}.\] (S12)

For actin, this provides a good approximation (without a significant gap) to (7), as shown in Figure S2a. Similarly, we can expand (5) for \(\mathcal{T}_{2}\gg 1\) and, retaining the \(O(\mathcal{T}_{2}^{-1})\) terms, we get

\[r=(1+\mathcal{F})\left(1-\frac{1}{2\pi^{3/2}\varepsilon_{c}N\xi\mathcal{T}_{2}}\right).\] (S13)

Using the approximation for \(\xi\) from (S12) in (S13), we again recover an excellent approximation without a gap, see Figure S3a.

**Redimensionalized microscale constitutive laws.** For the sake of completeness, we also state the approximate microscale constitutive laws in their dimensional forms (dimensionalizing both the force and the end-to-end distance). For vimentin, the dimensionless microscale constitutive law (9) can be redimensionalized using (8) to give an expression in terms of the dimensional parameters of the approximate model (and \(N\)) which reads

\[\tilde{f}=\max\left\{0,\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\tilde{R}_{c}}\left[\left(1+\frac{\tilde{\mathcal{F}}_{p}}{\pi\tilde{Y}\tilde{b}_{c}^{2}}\right)\tilde{r}-\frac{\tilde{D}}{N}\right]\right\}.\] (S14)

From (S12), we can deduce for actin

\[\tilde{\Lambda}=\frac{\tilde{R}}{1+\mathcal{F}_{p}-\frac{1}{\mathcal{T}_{2}}\frac{1}{2\left(1+\mathcal{F}_{p}\right)\pi^{3/2}\varepsilon_{c}N}}\] (S15)

and eventually conclude from (S13) the redimensionalized microscale constitutive law in the form

\[\tilde{f}=\max\left\{0,\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\tilde{R}_{c}}\left[\frac{\xi}{1-\frac{\tilde{D}}{\pi^{3/2}\xi N\tilde{\Lambda}_{p}}}\tilde{r}-\frac{\tilde{D}}{N}\right]\right\},\] (S16)

where \(\xi\) is given in (S12).

#### S3.3.2 Scaling of \(\xi\) with \(N\) for fixed \(\mathcal{F}_{p}\)

We observed in Section S3.2.2 that the force due to pre-stress must scale as \(f_{p}(N)=\mathcal{F}_{p}/(\varepsilon_{c}N)\), with \(\mathcal{F}_{p}=O(1)\).
In order to study how \(\xi\) scales with \(N\), and to determine whether this scaling is consistent for both the implicit model and the explicit approximations, we fix all parameters at their default values for both actin and vimentin (including \(\mathcal{F}_{p}\); see Section S2) and find the root \(\xi\) of (7) as a function of \(N\) numerically. The log-plots in Figure S4 show that while \(\xi\) has not yet converged to its \(N\to\infty\) limit \(1+\mathcal{F}_{p}\) for \(N=1/\varepsilon_{c}=100\) (used as default throughout this work and indicated by vertical black lines in the figure), the dependence is very weak for both actin and vimentin. Moreover, (S12) provides a good approximation for \(\xi(N)\) near \(N=1/\varepsilon_{c}=100\) for actin (see the dotted green curve in Figure S4a).

### Discrete model simulations for actin

We simulated the discrete model for actin using the microscale constitutive law (S16) with \(\xi\) given by (S12). As in the main body, we plot the steady-state solutions in the vicinity of the bead. For bead displacements \(\tilde{R}_{b}\) as small as (roughly) one tenth of the mesh size (\(\tilde{R}_{c}=0.05\,\mu\)m), actin FSs start experiencing tensile forces beyond the upper limit of their tensile strength, 600 pN (see Figure S5). In particular, FSs in the wake of the bead motion whose undeformed orientation has a significant component in its direction would typically be broken for very small bead displacements, by being stretched beyond the tensile strength. Moreover, the best-studied actin CLs, like filamin or alpha-actinin, typically unbind at even lower rupture forces of \(40-80\) pN [4], and such unbinding and rebinding is thought to play a crucial role in the viscoelastic response of cytoskeletal networks. At present, our modelling framework is stationary and does not account for dynamic CLs. We thus acknowledge that the model as it stands is not yet suitable for a realistic description of crosslinked actin networks and postpone such considerations for future work.

## S4 Upscaling and continuum model

### Details of discrete-to-continuum upscaling

#### S4.1.1 Upscaling the force balance

Denoting partial derivatives with subscripts, we can express the relevant finite differences using Taylor expansions as

\[x_{i+1,j}-x_{i,j}=x(X_{i+1},Y_{j})-x(X_{i},Y_{j})=(X_{i+1}-X_{i})x_{X}(X_{i},Y_{j})+\frac{(X_{i+1}-X_{i})^{2}}{2}x_{XX}(X_{i},Y_{j})+O((X_{i+1}-X_{i})^{3})=\varepsilon x_{X}(X_{i},Y_{j})+\frac{\varepsilon^{2}}{2}x_{XX}(X_{i},Y_{j})+O(\varepsilon^{3}).\] (S17)

For the other differences involving the deformed coordinate \(x\) occurring in our equations, we get

\[\begin{split}x_{i-1,j}-x_{i,j}&=-\varepsilon x_{X}(X_{i},Y_{j})+\frac{\varepsilon^{2}}{2}x_{XX}(X_{i},Y_{j})+O(\varepsilon^{3})\\ x_{i,j+1}-x_{i,j}&=\varepsilon x_{Y}(X_{i},Y_{j})+\frac{\varepsilon^{2}}{2}x_{YY}(X_{i},Y_{j})+O(\varepsilon^{3})\\ x_{i,j-1}-x_{i,j}&=-\varepsilon x_{Y}(X_{i},Y_{j})+\frac{\varepsilon^{2}}{2}x_{YY}(X_{i},Y_{j})+O(\varepsilon^{3}),\end{split}\] (S18)

and analogous equations hold for \(y\). For convenience, we omit the point at which the derivatives are evaluated from now on. In what follows we assume that all relevant partial derivatives of \(x\) and \(y\) are \(O(1)\).
Assuming \(\varepsilon\ll 1\), we apply (S17)-(S18) to the dimensionless force balance (4) and, bringing back the \(1/(\varepsilon_{c}N)=\varepsilon/\varepsilon_{c}\) factor, we get

\[\begin{split}&\left\{\mathcal{F}\left(\xi\sqrt{\left(-x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2})\right)^{2}+\left(-y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)^{2}}\right)\frac{\left(-x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2}),-y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)}{\sqrt{\left(-x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2})\right)^{2}+\left(-y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)^{2}}}+\right.\\ &\mathcal{F}\left(\xi\sqrt{\left(x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2})\right)^{2}+\left(y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)^{2}}\right)\frac{\left(x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2}),y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)}{\sqrt{\left(x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2})\right)^{2}+\left(y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)^{2}}}+\\ &\mathcal{F}\left(\xi\sqrt{\left(-x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2})\right)^{2}+\left(-y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)^{2}}\right)\frac{\left(-x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2}),-y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)}{\sqrt{\left(-x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2})\right)^{2}+\left(-y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)^{2}}}+\\ &\left.\mathcal{F}\left(\xi\sqrt{\left(x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2})\right)^{2}+\left(y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)^{2}}\right)\frac{\left(x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2}),y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)}{\sqrt{\left(x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2})\right)^{2}+\left(y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)^{2}}}\right\}\frac{\varepsilon}{\varepsilon_{c}}=\mathbf{0}.\end{split}\] (S19)

Assuming \(\varepsilon\ll 1\), we Taylor expand the denominators according to

\[\frac{1}{\sqrt{A+B\varepsilon+C\varepsilon^{2}+O(\varepsilon^{3})}}=\frac{1}{\sqrt{A}}-\frac{B\varepsilon}{2A^{3/2}}+\varepsilon^{2}\frac{3B^{2}-4AC}{8A^{5/2}}+O(\varepsilon^{3})\] (S20)

and those in the discrete bending terms (with \(\varepsilon^{2}\) cancelling out as a common term in both the numerator and denominator) according to

\[\frac{1}{A+B\varepsilon+O(\varepsilon^{2})}=\frac{1}{A}-\frac{B\varepsilon}{A^{2}}+O(\varepsilon^{2}).\] (S21)

The nonlinear force terms are expanded as

\[\mathcal{F}(\psi(\varepsilon))=\mathcal{F}(\psi(0))+\varepsilon\mathcal{F}^{\prime}(\psi(0))\psi^{\prime}(0)+O(\varepsilon^{2}),\] (S22)

which holds for sufficiently smooth functions \(\mathcal{F}\) and \(\psi\).
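The expansion (S20) can be checked symbolically; a short sympy sketch (illustrative, not part of the derivation):

```python
import sympy as sp

A, B, C, eps = sp.symbols("A B C epsilon", positive=True)

lhs = 1 / sp.sqrt(A + B * eps + C * eps**2)
series = sp.series(lhs, eps, 0, 3).removeO()

claimed = (1 / sp.sqrt(A) - B * eps / (2 * A**sp.Rational(3, 2))
           + eps**2 * (3 * B**2 - 4 * A * C) / (8 * A**sp.Rational(5, 2)))

print(sp.simplify(series - claimed))  # 0, confirming (S20)
```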
We denote

\[\lambda^{X}(X,Y)=\xi\sqrt{x_{X}^{2}+y_{X}^{2}}\qquad\lambda^{Y}(X,Y)=\xi\sqrt{x_{Y}^{2}+y_{Y}^{2}},\]

using which the \(X\)-component of the force balance can be simplified to

\[\begin{split}&\left\{\left(-x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{X}^{2}+y_{X}^{2}}}+\frac{\varepsilon(x_{X}x_{XX}+y_{X}y_{XX})}{2(x_{X}^{2}+y_{X}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{X}\right)-\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{X}\right)\frac{x_{X}x_{XX}+y_{X}y_{XX}}{2\sqrt{x_{X}^{2}+y_{X}^{2}}}+O(\varepsilon^{2})\right)+\right.\\ &\left(x_{X}+\frac{\varepsilon}{2}x_{XX}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{X}^{2}+y_{X}^{2}}}-\frac{\varepsilon(x_{X}x_{XX}+y_{X}y_{XX})}{2(x_{X}^{2}+y_{X}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{X}\right)+\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{X}\right)\frac{x_{X}x_{XX}+y_{X}y_{XX}}{2\sqrt{x_{X}^{2}+y_{X}^{2}}}+O(\varepsilon^{2})\right)+\\ &\left(-x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{Y}^{2}+y_{Y}^{2}}}+\frac{\varepsilon(x_{Y}x_{YY}+y_{Y}y_{YY})}{2(x_{Y}^{2}+y_{Y}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{Y}\right)-\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{Y}\right)\frac{x_{Y}x_{YY}+y_{Y}y_{YY}}{2\sqrt{x_{Y}^{2}+y_{Y}^{2}}}+O(\varepsilon^{2})\right)+\\ &\left.\left(x_{Y}+\frac{\varepsilon}{2}x_{YY}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{Y}^{2}+y_{Y}^{2}}}-\frac{\varepsilon(x_{Y}x_{YY}+y_{Y}y_{YY})}{2(x_{Y}^{2}+y_{Y}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{Y}\right)+\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{Y}\right)\frac{x_{Y}x_{YY}+y_{Y}y_{YY}}{2\sqrt{x_{Y}^{2}+y_{Y}^{2}}}+O(\varepsilon^{2})\right)\right\}\frac{\varepsilon}{\varepsilon_{c}}=0\end{split}\] (S23)

and the \(Y\)-component to

\[\begin{split}&\left\{\left(-y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{X}^{2}+y_{X}^{2}}}+\frac{\varepsilon(x_{X}x_{XX}+y_{X}y_{XX})}{2(x_{X}^{2}+y_{X}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{X}\right)-\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{X}\right)\frac{x_{X}x_{XX}+y_{X}y_{XX}}{2\sqrt{x_{X}^{2}+y_{X}^{2}}}+O(\varepsilon^{2})\right)+\right.\\ &\left(y_{X}+\frac{\varepsilon}{2}y_{XX}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{X}^{2}+y_{X}^{2}}}-\frac{\varepsilon(x_{X}x_{XX}+y_{X}y_{XX})}{2(x_{X}^{2}+y_{X}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{X}\right)+\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{X}\right)\frac{x_{X}x_{XX}+y_{X}y_{XX}}{2\sqrt{x_{X}^{2}+y_{X}^{2}}}+O(\varepsilon^{2})\right)+\\ &\left(-y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{Y}^{2}+y_{Y}^{2}}}+\frac{\varepsilon(x_{Y}x_{YY}+y_{Y}y_{YY})}{2(x_{Y}^{2}+y_{Y}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{Y}\right)-\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{Y}\right)\frac{x_{Y}x_{YY}+y_{Y}y_{YY}}{2\sqrt{x_{Y}^{2}+y_{Y}^{2}}}+O(\varepsilon^{2})\right)+\\ &\left.\left(y_{Y}+\frac{\varepsilon}{2}y_{YY}+O(\varepsilon^{2})\right)\left(\frac{1}{\sqrt{x_{Y}^{2}+y_{Y}^{2}}}-\frac{\varepsilon(x_{Y}x_{YY}+y_{Y}y_{YY})}{2(x_{Y}^{2}+y_{Y}^{2})^{3/2}}+O(\varepsilon^{2})\right)\left(\mathcal{F}\left(\lambda^{Y}\right)+\varepsilon\xi\mathcal{F}^{\prime}\left(\lambda^{Y}\right)\frac{x_{Y}x_{YY}+y_{Y}y_{YY}}{2\sqrt{x_{Y}^{2}+y_{Y}^{2}}}+O(\varepsilon^{2})\right)\right\}\frac{\varepsilon}{\varepsilon_{c}}=0.\end{split}\] (S24)

At \(O(1)\) and \(O(\varepsilon)\), the balance is automatically satisfied. Returning to vector form, at \(O(\varepsilon^{2})\) we get

\[\mathbf{0}=\frac{1}{\varepsilon_{c}}\left\{\mathcal{F}(\lambda^{X})\left(\frac{(x_{X},y_{X})}{\sqrt{x_{X}^{2}+y_{X}^{2}}}\right)_{X}+\xi\frac{x_{X}x_{XX}+y_{X}y_{XX}}{x_{X}^{2}+y_{X}^{2}}\mathcal{F}^{\prime}(\lambda^{X})(x_{X},y_{X})+\mathcal{F}(\lambda^{Y})\left(\frac{(x_{Y},y_{Y})}{\sqrt{x_{Y}^{2}+y_{Y}^{2}}}\right)_{Y}+\xi\frac{x_{Y}x_{YY}+y_{Y}y_{YY}}{x_{Y}^{2}+y_{Y}^{2}}\mathcal{F}^{\prime}(\lambda^{Y})(x_{Y},y_{Y})\right\}.\]

Recalling the definitions of \(\lambda^{X}\) and \(\lambda^{Y}\) and multiplying by \(\varepsilon_{c}\), we conclude the macroscale force balance (10).

#### S4.1.2 Deriving the strain energy density

We further deduce the strain energy function for the derived continuum problem. As before, we nondimensionalize the lengths with respect to \(\tilde{D}\) and the forces with \(\pi\tilde{Y}\tilde{b}_{c}^{2}\) (so that the energies are nondimensionalized with \(\pi\tilde{D}\tilde{Y}\tilde{b}_{c}^{2}\)), Taylor expand the dimensional energy stored at CL \((i,j)\) (S6) and, upon further simplifications, get

\[e_{i,j}(\varepsilon)=\frac{\varepsilon^{2}}{\varepsilon_{c}^{2}}\left(\mathcal{E}\left(\xi\sqrt{x_{X}^{2}+y_{X}^{2}}\right)+\mathcal{E}\left(\xi\sqrt{x_{Y}^{2}+y_{Y}^{2}}\right)+O(\varepsilon)\right),\]

where \(\mathcal{E}\) is the dimensionless counterpart of the dimensional \(\tilde{\mathcal{E}}\) defined in (S7). To arrive at the macroscale strain energy density \(W\), we divide by the area corresponding to one CL in the undeformed configuration, \(\varepsilon^{2}\), and, sending \(\varepsilon\to 0\) (\(N\to\infty\)), obtain

\[W=\frac{1}{\varepsilon_{c}^{2}}\left(\mathcal{E}\left(\xi\sqrt{x_{X}^{2}+y_{X}^{2}}\right)+\mathcal{E}\left(\xi\sqrt{x_{Y}^{2}+y_{Y}^{2}}\right)\right).\] (S25)

### Deducing the continuum problem under the linearized constitutive law

Under the linearized microscale constitutive law for vimentin (9), the continuum problem (10) takes (upon dividing by \(\xi\)) the form

\[\left((x_{X},y_{X})\max\left\{0,1-\frac{1}{\xi\sqrt{x_{X}^{2}+y_{X}^{2}}}\right\}\right)_{X}+\left((x_{Y},y_{Y})\max\left\{0,1-\frac{1}{\xi\sqrt{x_{Y}^{2}+y_{Y}^{2}}}\right\}\right)_{Y}=\mathbf{0}.\] (S26)

We further state the dimensional stress tensor (11) under (9) to be

\[\tilde{\mathbf{S}}=\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\tilde{\Lambda}_{c}}\begin{pmatrix}\tilde{x}_{\tilde{X}}\max\left\{0,1-\frac{1}{\xi\sqrt{\tilde{x}_{\tilde{X}}^{2}+\tilde{y}_{\tilde{X}}^{2}}}\right\}&\tilde{y}_{\tilde{X}}\max\left\{0,1-\frac{1}{\xi\sqrt{\tilde{x}_{\tilde{X}}^{2}+\tilde{y}_{\tilde{X}}^{2}}}\right\}\\ \tilde{x}_{\tilde{Y}}\max\left\{0,1-\frac{1}{\xi\sqrt{\tilde{x}_{\tilde{Y}}^{2}+\tilde{y}_{\tilde{Y}}^{2}}}\right\}&\tilde{y}_{\tilde{Y}}\max\left\{0,1-\frac{1}{\xi\sqrt{\tilde{x}_{\tilde{Y}}^{2}+\tilde{y}_{\tilde{Y}}^{2}}}\right\}\end{pmatrix},\] (S27)

and the dimensional counterpart of (S26) then reads

\[\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\tilde{\Lambda}_{c}}\left\{\left((\tilde{x}_{\tilde{X}},\tilde{y}_{\tilde{X}})\max\left\{0,1-\frac{1}{\xi\sqrt{\tilde{x}_{\tilde{X}}^{2}+\tilde{y}_{\tilde{X}}^{2}}}\right\}\right)_{\tilde{X}}+\left((\tilde{x}_{\tilde{Y}},\tilde{y}_{\tilde{Y}})\max\left\{0,1-\frac{1}{\xi\sqrt{\tilde{x}_{\tilde{Y}}^{2}+\tilde{y}_{\tilde{Y}}^{2}}}\right\}\right)_{\tilde{Y}}\right\}=\mathbf{0},\] (S28)

which is used in the continuum simulations. Using the displacement field

\[\tilde{u}(\tilde{X},\tilde{Y})=\tilde{x}(\tilde{X},\tilde{Y})-\tilde{X}\qquad\tilde{v}(\tilde{X},\tilde{Y})=\tilde{y}(\tilde{X},\tilde{Y})-\tilde{Y},\]

this can be rewritten as

\[\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\tilde{\Lambda}_{c}}\left\{\left((1+\tilde{u}_{\tilde{X}},\tilde{v}_{\tilde{X}})\max\left\{0,1-\frac{1}{\xi\sqrt{\left(1+\tilde{u}_{\tilde{X}}\right)^{2}+\tilde{v}_{\tilde{X}}^{2}}}\right\}\right)_{\tilde{X}}+\left((\tilde{u}_{\tilde{Y}},1+\tilde{v}_{\tilde{Y}})\max\left\{0,1-\frac{1}{\xi\sqrt{\tilde{u}_{\tilde{Y}}^{2}+\left(1+\tilde{v}_{\tilde{Y}}\right)^{2}}}\right\}\right)_{\tilde{Y}}\right\}=\mathbf{0}.\] (S29)

### Strain energy density and nominal stress tensor

#### S4.3.1 Relation to nonlinear elasticity models for fiber-reinforced materials

Redimensionalizing (S25) (in order to facilitate comparison with the standard results on fiber-reinforced materials), the strain energy density can be rewritten in terms of the deformation gradient tensor \(\mathbf{F}\) with components \(F_{kl}=\partial\tilde{x}_{k}/\partial\tilde{X}_{l}\), or in terms of the right Cauchy-Green deformation tensor \(\mathbf{C}=\mathbf{F}^{T}\mathbf{F}\), as

\[\tilde{W}=\frac{\tilde{\mathcal{E}}\left(\xi\sqrt{F_{11}^{2}+F_{21}^{2}}\right)+\tilde{\mathcal{E}}\left(\xi\sqrt{F_{12}^{2}+F_{22}^{2}}\right)}{\tilde{R}_{c}^{2}}=\frac{\tilde{\mathcal{E}}\left(\xi\sqrt{C_{11}}\right)+\tilde{\mathcal{E}}\left(\xi\sqrt{C_{22}}\right)}{\tilde{R}_{c}^{2}}.\] (S30)

Introducing \(\mathbf{M}=(1,0)\) and \(\mathbf{M^{\prime}}=(0,1)\) as the two directions of filaments in the undeformed configuration and employing the theory of fiber-reinforced materials, the two invariants corresponding to these directions take the forms \(I_{4}=\mathbf{M}\cdot(\mathbf{C}\mathbf{M})=C_{11}\) and \(I_{6}=\mathbf{M^{\prime}}\cdot(\mathbf{C}\mathbf{M^{\prime}})=C_{22}\). We can therefore express \(\tilde{W}\) also in terms of the invariants of \(\mathbf{C}\) as (12), thus establishing a connection to the rich literature on constitutive modelling of fiber-reinforced materials. Using (S8), we can express this strain energy density as

\[\tilde{W}(\mathbf{C})=\frac{2\int\limits_{\xi_{sf}}^{\xi}\tilde{\mathcal{F}}(t)dt+\int\limits_{\xi}^{\xi\sqrt{I_{4}(\mathbf{C})}}\tilde{\mathcal{F}}(t)dt+\int\limits_{\xi}^{\xi\sqrt{I_{6}(\mathbf{C})}}\tilde{\mathcal{F}}(t)dt}{\xi\tilde{R}_{c}},\] (S31)

where \(\xi_{sf}\) denotes the normalized stress-free end-to-end distance, so that the first integral represents the strain energy stored in the undeformed domain due to pre-stress (noting that in the undeformed configuration one has \(\mathbf{F}=\mathbf{I}\) and \(I_{4}(\mathbf{C})=I_{6}(\mathbf{C})=1\)) and the last two integrals represent the elastically-stored energy supplied with the deformation. Finally, assuming the approximation (9), the strain energy density (12) can be written as

\[\tilde{W}(\mathbf{C})=\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}\xi}{2\tilde{R}_{c}}\left[\max\left(0,\sqrt{I_{4}(\mathbf{C})}-\frac{1}{\xi}\right)^{2}+\max\left(0,\sqrt{I_{6}(\mathbf{C})}-\frac{1}{\xi}\right)^{2}\right],\] (S32)

where \(\xi\) is approximated using (8). The problem (S29) can thus be re-formulated in the framework of nonlinear elasticity as minimization of the strain energy (S32).
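It is instructive to evaluate (S32) for a homogeneous deformation \(\mathbf{F}=\mathrm{diag}(\lambda_{1},\lambda_{2})\), for which \(I_{4}=\lambda_{1}^{2}\) and \(I_{6}=\lambda_{2}^{2}\): a fibre family with \(\lambda<1/\xi\) is slack and stores no energy. A minimal sketch (energies in units of \(\pi\tilde{Y}\tilde{b}_{c}^{2}\xi/(2\tilde{R}_{c})\); the value of \(\xi\) is illustrative, taken as \(1+\mathcal{F}_{p}\) per (8)):

```python
def W_hat(lam1, lam2, xi):
    """Strain energy (S32), scaled by pi*Y*b_c^2*xi/(2*R_c), for
    F = diag(lam1, lam2), so that I4 = lam1**2 and I6 = lam2**2."""
    g = lambda lam: max(0.0, lam - 1.0 / xi) ** 2   # tension-only contribution
    return g(lam1) + g(lam2)

xi = 1.05   # illustrative pre-stretch
for lam1 in (0.9, 1.0, 1.1):
    print(lam1, W_hat(lam1, 1.0, xi))
# For lam1 = 0.9 < 1/xi the x-family is slack; only the pre-stressed
# y-family (lam2 = 1 > 1/xi) stores energy.
```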
We further propose a smooth approximation to the microscale constitutive law: using

\[\max\left(0,x\right)\approx\frac{x}{1+e^{-\kappa x}},\] (S33)

which provides a good approximation to the maximum function for large \(\kappa\), we get a smooth approximation to the strain energy,

\[\tilde{W}(\mathbf{C})=\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}\xi}{2\tilde{R}_{c}}\left[\left(\frac{\sqrt{I_{4}(\mathbf{C})}-1/\xi}{1+e^{-\kappa\left(\sqrt{I_{4}(\mathbf{C})}-1/\xi\right)}}\right)^{2}+\left(\frac{\sqrt{I_{6}(\mathbf{C})}-1/\xi}{1+e^{-\kappa\left(\sqrt{I_{6}(\mathbf{C})}-1/\xi\right)}}\right)^{2}\right],\] (S34)

the minimization of which is implemented in our FEniCS code. Figure S6 documents that the approximation (S33) with \(\kappa=200\) leads to only negligible changes in the microscale constitutive law for vimentin.

#### S4.3.2 Stress tensor

Next, we deduce the components of the nominal stress tensor \(\tilde{\mathbf{S}}\) from \(\tilde{S}_{kl}=\partial\tilde{W}/\partial F_{lk}\), getting (11). Storm et al. [11] applied the Doi-Edwards construction [3] to a crosslinked network with an arbitrary distribution \(\tilde{\Psi}\) of end-to-end separation vectors \(\tilde{\mathbf{r}}\) and arrived at an averaged Cauchy stress tensor \(\tilde{\mathbf{\sigma}}\) of the form

\[\tilde{\sigma}_{kl}^{T}=\frac{\tilde{\varrho}}{\det(\mathbf{F})}\left\langle\tilde{f}(|\mathbf{F}\tilde{\mathbf{r}}|)\frac{F_{ki}\tilde{r}_{i}F_{lj}\tilde{r}_{j}}{|\mathbf{F}\tilde{\mathbf{r}}|}\right\rangle_{\tilde{\Psi}(\tilde{\mathbf{r}})},\]

where \(\tilde{\varrho}\) denotes the number of FSs per unit volume (otherwise their notation coincides with ours). Note that the microscale force was given as a function of the dimensional end-to-end distance \(\tilde{r}\), as opposed to the dimensionless distance \(r\). Letting \(\tilde{\delta}(\tilde{\mathbf{r}})\) denote the Dirac delta function centered at \(\tilde{\mathbf{r}}=\mathbf{0}\) and applying this formula to our two-dimensional case with

\[\tilde{\Psi}(\tilde{\mathbf{r}})=\frac{1}{4}\left\{\tilde{\delta}(\tilde{\mathbf{r}}-(\tilde{R},0))+\tilde{\delta}(\tilde{\mathbf{r}}-(-\tilde{R},0))+\tilde{\delta}(\tilde{\mathbf{r}}-(0,\tilde{R}))+\tilde{\delta}(\tilde{\mathbf{r}}-(0,-\tilde{R}))\right\}\]

reflecting the undeformed orientations of filaments in our geometry, and with \(\tilde{\varrho}=2N^{2}/\tilde{D}^{2}\), we arrive at a result identical to the one obtained when the connection \(\tilde{\mathbf{\sigma}}^{T}=\det(\mathbf{F})^{-1}\tilde{\mathbf{S}}^{T}\mathbf{F}^{T}\) from nonlinear elasticity [9] is applied to (11), which further certifies the correctness of our results.
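Our FEniCS implementation is not reproduced here, but the structure of the minimization of (S34) can be sketched in legacy FEniCS (dolfin). The mesh construction via mshr, the boundary markers and the parameter values below are illustrative assumptions rather than the original code:

```python
import math
from dolfin import *
from mshr import Rectangle, Circle, generate_mesh

# Illustrative geometry: unit square with a circular "bead" hole of radius a.
a, kappa, xi = 0.05, 200.0, 1.05
domain = Rectangle(Point(-0.5, -0.5), Point(0.5, 0.5)) - Circle(Point(0.0, 0.0), a)
mesh = generate_mesh(domain, 64)

V = VectorFunctionSpace(mesh, "CG", 1)
u = Function(V)                        # displacement field (x, y) - (X, Y)

F = Identity(2) + grad(u)              # deformation gradient
C = F.T * F                            # right Cauchy-Green tensor
I4, I6 = C[0, 0], C[1, 1]              # invariants for fibre directions (1,0), (0,1)

def smax(s):                           # smooth max(0, s), cf. (S33)
    return s / (1 + exp(-kappa * s))

# Energy density (S34), dropping the constant dimensional prefactor.
W = (xi / 2) * (smax(sqrt(I4) - 1 / xi) ** 2 + smax(sqrt(I6) - 1 / xi) ** 2)
Pi = W * dx

# Clamped outer boundary; rigid bead displacement R_b at angle phi_* = pi/6.
Rb, phi = 0.01, math.pi / 6
outer = CompiledSubDomain("on_boundary && (near(fabs(x[0]), 0.5) || near(fabs(x[1]), 0.5))")
bead = CompiledSubDomain("on_boundary && x[0]*x[0] + x[1]*x[1] < r2", r2=(2 * a) ** 2)
bcs = [DirichletBC(V, Constant((0.0, 0.0)), outer),
       DirichletBC(V, Constant((Rb * math.cos(phi), Rb * math.sin(phi))), bead)]

solve(derivative(Pi, u) == 0, u, bcs)  # stationary point of the total energy
```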
## S5 Details of the small-deformations and small-bead analysis

### Deriving the small-deformations limit

Substituting (17) into (10) we get

\[\begin{split}&\mathbf{0}=\left(\mathcal{F}\left(\xi\sqrt{(1+R_{b}\hat{x}_{X})^{2}+(R_{b}\hat{y}_{X})^{2}+O(R_{b}^{2})}\right)\frac{(1+R_{b}\hat{x}_{X}+O(R_{b}^{2}),R_{b}\hat{y}_{X}+O(R_{b}^{2}))}{\sqrt{(1+R_{b}\hat{x}_{X})^{2}+(R_{b}\hat{y}_{X})^{2}+O(R_{b}^{2})}}\right)_{X}+\\ &\left(\mathcal{F}\left(\xi\sqrt{(R_{b}\hat{x}_{Y})^{2}+(1+R_{b}\hat{y}_{Y})^{2}+O(R_{b}^{2})}\right)\frac{(R_{b}\hat{x}_{Y}+O(R_{b}^{2}),1+R_{b}\hat{y}_{Y}+O(R_{b}^{2}))}{\sqrt{(R_{b}\hat{x}_{Y})^{2}+(1+R_{b}\hat{y}_{Y})^{2}+O(R_{b}^{2})}}\right)_{Y}.\end{split}\] (S35)

Taylor expanding \(\mathcal{F}\) as well as the denominators for \(R_{b}\ll 1\), we get

\[\begin{split}&\mathbf{0}=\left(\left(\mathcal{F}(\xi)+R_{b}\xi\mathcal{F}^{\prime}(\xi)\hat{x}_{X}+O(R_{b}^{2})\right)(1+R_{b}\hat{x}_{X}+O(R_{b}^{2}),R_{b}\hat{y}_{X}+O(R_{b}^{2}))\left(1-R_{b}\hat{x}_{X}+O(R_{b}^{2})\right)\right)_{X}+\\ &\left(\left(\mathcal{F}(\xi)+R_{b}\xi\mathcal{F}^{\prime}(\xi)\hat{y}_{Y}+O(R_{b}^{2})\right)(R_{b}\hat{x}_{Y}+O(R_{b}^{2}),1+R_{b}\hat{y}_{Y}+O(R_{b}^{2}))\left(1-R_{b}\hat{y}_{Y}+O(R_{b}^{2})\right)\right)_{Y}.\end{split}\] (S36)

The balance at \(O(1)\) is trivially satisfied. At \(O(R_{b})\), we get (18)-(19) in the main text.

### Details of small-bead asymptotics

#### S5.2.1 The \(\hat{x}\) problem

We split the analysis into the inner (boundary) region characterized by

\[\frac{X}{a}=\bar{X}=O(1)\quad\frac{Y}{a}=\bar{Y}=O(1),\]

where we use the ansatz for the inner solution

\[\hat{x}^{I}(\bar{X},\bar{Y},a)=\hat{x}_{0}^{I}(\bar{X},\bar{Y})+\frac{1}{\ln\left(1/a\right)}\hat{x}_{1}^{I}(\bar{X},\bar{Y})+\frac{1}{\ln^{2}\left(1/a\right)}\hat{x}_{2}^{I}(\bar{X},\bar{Y})+O\left(\frac{1}{\ln^{3}\left(1/a\right)}\right)\]

satisfying the boundary condition at \(\bar{X}^{2}+\bar{Y}^{2}=1\), and the outer region \(X=O(1)=Y\) with the outer solution

\[\hat{x}^{O}(X,Y,a)=\hat{x}_{0}^{O}(X,Y)+\frac{1}{\ln\left(1/a\right)}\hat{x}_{1}^{O}(X,Y)+\frac{1}{\ln^{2}\left(1/a\right)}\hat{x}_{2}^{O}(X,Y)+O\left(\frac{1}{\ln^{3}\left(1/a\right)}\right)\]

satisfying the Dirichlet condition \(\hat{x}^{O}=0\) at the outer boundary. The rationale behind the logarithmic terms in the expansions will become apparent in the course of the analysis. In the inner region, we transform the \(\bar{Y}\) coordinate according to \(\bar{Y}=\sqrt{\omega}\bar{Z}\) so that we get Laplace's equation for the leading-order inner solution,

\[\hat{x}_{0,\bar{X}\bar{X}}^{I}+\hat{x}_{0,\bar{Z}\bar{Z}}^{I}=0,\]

on a domain with an ellipse cut out of it, as depicted in Figure 5. We introduce elliptical coordinates

\[\bar{X}=c\sinh(\mu)\sin(\nu)\qquad\bar{Z}=c\cosh(\mu)\cos(\nu),\] (S37)

where \(c=\sqrt{(1-\omega)/\omega}\) is the (linear) eccentricity of the inner ellipse and \((\mu,\nu)\in(\mu_{1},\infty)\times[0,2\pi]\), so that \(\mu=\mu_{1}=\cosh^{-1}\left((1-\omega)^{-1/2}\right)\) represents the elliptical (inner) boundary. Note that even if our outer domain boundary is a circle in \((X,Y)\) coordinates (a square domain being even less amenable to analysis), in \((\bar{X},\bar{Z})\) coordinates it transforms into an ellipse with the same eccentricity as the inner (bead) elliptic boundary, and therefore it cannot be simply characterized by \(\mu=\mu_{2}\) for some \(\mu_{2}>\mu_{1}\) (because the eccentricity of ellipses given by \(\mu=\) constant in elliptical coordinates strictly decreases with \(\mu\)).
This causes the full problem (with \(a>0\)) to be analytically intractable and forces us to study only the \(a\ll 1\) limit. Writing \(\Phi_{0}^{I}(\mu,\nu)=\hat{x}_{0}^{I}(\bar{X},\bar{Z})\), we then have \(\Phi_{0}^{I}(\mu_{1},\nu)=\cos\left(\varphi_{*}\right)\) for all \(\nu\) and

\[\frac{1}{c^{2}(\cosh^{2}(\mu)+\sin^{2}(\nu))}\left(\Phi_{0,\mu\mu}^{I}+\Phi_{0,\nu\nu}^{I}\right)=0,\]

and thus

\[\Phi_{0,\mu\mu}^{I}+\Phi_{0,\nu\nu}^{I}=0.\]

We assume \(\Phi_{0}^{I}\) to be \(2\pi\)-periodic in \(\nu\). As nothing drives the variation in \(\nu\) in the inner layer, we search for a solution in the form \(\Phi_{0}^{I}(\mu,\nu)=\Phi_{0}^{I}(\mu)\) solving \(\Phi_{0,\mu\mu}^{I}=0\). The solution reads \(\Phi_{0}^{I}=A_{0}\mu+B_{0}\). Following the same line of reasoning, we get that the higher-order terms \(\Phi_{i}^{I}(\mu,\nu)=\hat{x}_{i}^{I}(\bar{X},\bar{Z})\) are of the same form, \(\Phi_{i}^{I}=A_{i}\mu+B_{i}\). We transform back to Cartesian coordinates [12] so that

\[\mu=\frac{1}{2}\ln\left(1-2q(\bar{X},\bar{Z})+2\sqrt{q^{2}(\bar{X},\bar{Z})-q(\bar{X},\bar{Z})}\right)\] (S38)

with

\[q(\bar{X},\bar{Z})=\frac{-(\bar{X}^{2}+\bar{Z}^{2}-c^{2})-\sqrt{(\bar{X}^{2}+\bar{Z}^{2}-c^{2})^{2}+4c^{2}\bar{X}^{2}}}{2c^{2}}.\]

In \((\bar{X},\bar{Y})\) variables and expressed using \(\omega\), \(q\) reads (25). Next we wish to write \(\hat{x}_{0}^{I}\) in the outer coordinates \((X,Z)\) for the purposes of matching with the outer layer. To do this, we first rewrite \(q\) in these variables and expand in \(a\) as

\[q=\frac{-(X/a)^{2}-(Z/a)^{2}+c^{2}-\sqrt{\left((X/a)^{2}+(Z/a)^{2}-c^{2}\right)^{2}+4c^{2}(X/a)^{2}}}{2c^{2}}=\frac{-(X/a)^{2}-(Z/a)^{2}+c^{2}-1/a^{2}\times\sqrt{(X^{2}+Z^{2})^{2}+a^{2}c^{2}(2X^{2}-2Z^{2})+a^{4}c^{4}}}{2c^{2}}=\frac{1}{a^{2}}\left(-\frac{X^{2}+Z^{2}}{c^{2}}\right)+O(1)\]

and then substitute it into (S38) to get

\[\mu=\frac{1}{2}\ln\left(1+\frac{2(X^{2}+Z^{2})}{a^{2}c^{2}}+O(1)+2\sqrt{\left(\frac{X^{2}+Z^{2}}{a^{2}c^{2}}\right)^{2}+O\left(\frac{1}{a^{2}}\right)+\frac{X^{2}+Z^{2}}{a^{2}c^{2}}+O(1)}\right)=\frac{1}{2}\ln\left(\frac{4(X^{2}+Z^{2})}{a^{2}c^{2}}+O(1)\right)=\ln\left(\frac{1}{a}\right)+\ln\left(\frac{2}{c}\sqrt{X^{2}+Z^{2}}\right)+O(a^{2}).\]

Thus we see that \(A_{0}\) must equal \(0\), because otherwise the matching would require a contribution of order \(\ln\left(1/a\right)\gg 1\) to exist in the outer solution. The inner boundary condition at the leading order then enforces \(B_{0}=\cos(\varphi_{*})\). Our inner approximation thus far reads

\[\hat{x}^{I}=\cos(\varphi_{*})+\frac{1}{\ln\left(1/a\right)}\left(A_{1}\mu+B_{1}\right)+\frac{1}{\ln^{2}\left(1/a\right)}\left(A_{2}\mu+B_{2}\right)+O\left(\frac{1}{\ln^{3}\left(1/a\right)}\right),\] (S39)

and writing this in outer variables we get

\[\cos(\varphi_{*})+A_{1}+O\left(\frac{1}{\ln\left(1/a\right)}\right).\] (S40)

The leading-order outer solution satisfies Laplace's equation (in \((X,Z)\) variables) and the Dirichlet boundary condition \(\hat{x}_{0}^{O}=0\) at the outer boundary. Irrespective of whether we assume this outer boundary to be a circle or a square in the original, i.e. \((X,Y)\), variables, the only admissible constant solution to this problem is \(\hat{x}_{0}^{O}\equiv 0\). Comparing this with (S40), we conclude that the matching requires \(A_{1}=-\cos(\varphi_{*})\). Finally, to satisfy the inner boundary condition at \(O\left(\frac{1}{\ln\left(1/a\right)}\right)\), we must have \(B_{1}=-A_{1}\mu_{1}=\cos\left(\varphi_{*}\right)\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)\).
Substituting \(A_{1}\) and \(B_{1}\) back into (S39) we get

\[\hat{x}^{I}=\cos(\varphi_{*})+\frac{\cos(\varphi_{*})\left(-\mu+\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)\right)}{\ln\left(1/a\right)}+\frac{1}{\ln^{2}\left(1/a\right)}\left(A_{2}\mu+B_{2}\right)+O\left(\frac{1}{\ln^{3}\left(1/a\right)}\right).\] (S41)

Writing this in outer variables we get

\[0+\frac{1}{\ln\left(1/a\right)}\left(\cos(\varphi_{*})\left(-\ln\left(2/c\right)-\ln\left(\sqrt{X^{2}+Z^{2}}\right)+\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)\right)+A_{2}\right).\] (S42)

Note that to match the \(\ln\left(\sqrt{X^{2}+Z^{2}}\right)\) behaviour, the first-order correction in the outer solution must satisfy

\[\hat{x}_{1XX}^{O}+\hat{x}_{1ZZ}^{O}=-2\pi\cos\left(\varphi_{*}\right)\delta_{(0,0)},\]

where \(\delta_{(0,0)}\) denotes the Dirac delta function (centered at the origin), and must vanish at the outer boundary. Matching further requires \(A_{2}=\cos\left(\varphi_{*}\right)\left(\ln\left(2/c\right)-\cosh^{-1}\left((1-\omega)^{-1/2}\right)\right)\). The inner boundary condition at this order implies \(B_{2}=-\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)\cos\left(\varphi_{*}\right)\left(\ln\left(2/c\right)-\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)\right)\). By induction, we can deduce the form of general \(A_{i}\) and \(B_{i}\), getting \(A_{0}=0\), \(B_{0}=\cos\left(\varphi_{*}\right)\) and for \(i\geq 1\)

\[A_{i}=-\cos\left(\varphi_{*}\right)\left(\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)-\ln\left(2/c\right)\right)^{i-1}\qquad B_{i}=-\cosh^{-1}\Big{(}(1-\omega)^{-1/2}\Big{)}A_{i}.\] (S43)

The inner expansion thus reads

\[\hat{x}^{I}=\cos\left(\varphi_{*}\right)+\sum_{i=1}^{\infty}\frac{A_{i}\mu+B_{i}}{\ln^{i}\left(1/a\right)}+O(a),\]

which using (S43) and the formula for the sum of an infinite geometric series gives

\[\hat{x}^{I}=\cos\left(\varphi_{*}\right)\left(1+\frac{\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)-\mu}{\ln\left(1/a\right)}\sum_{i=1}^{\infty}\left(\frac{\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)-\ln\left(2/c\right)}{\ln\left(1/a\right)}\right)^{i-1}\right)+O(a)=\cos\left(\varphi_{*}\right)\left(1+\frac{\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)-\mu}{\ln\left(1/a\right)+\ln\left(2/c\right)-\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)}\right)+O(a).\]

Substituting \(\mu\) from (S38) and expressing \(c\) in terms of \(\omega\), we arrive at (24).

#### S5.2.2 The \(\hat{y}\) problem

The equation for \(\hat{y}\) becomes Laplace's equation after transforming the \(\bar{X}\) coordinate according to \(\bar{X}=\sqrt{\omega}\bar{W}\) (keeping \(\bar{Y}\)), and we have the boundary condition \(\hat{y}=\sin\left(\varphi_{*}\right)\) at the inner ellipse. We then must transform to elliptical coordinates (the ellipses now being oriented along the \(\bar{W}\) axis rather than the \(\bar{Z}\) axis) as

\[\bar{Y}=c\sinh(\mu)\sin(\nu)\qquad\bar{W}=c\cosh(\mu)\cos(\nu),\] (S44)

where again \(c=\sqrt{(1-\omega)/\omega}\) is the (linear) eccentricity of the inner ellipse and \(\mu_{1}=\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)\) denotes the (inner) elliptical boundary.
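As a quick sanity check, the coordinate line \(\mu=\mu_{1}\) in (S44) indeed traces the unit-circle bead once the stretch \(\bar{X}=\sqrt{\omega}\bar{W}\) is undone; a short numerical sketch (the value of \(\omega\) is illustrative):

```python
import numpy as np

omega = 0.3                                   # illustrative value
c = np.sqrt((1 - omega) / omega)              # linear eccentricity
mu1 = np.arccosh((1 - omega) ** -0.5)         # inner elliptical boundary

nu = np.linspace(0.0, 2 * np.pi, 7)
Ybar = c * np.sinh(mu1) * np.sin(nu)          # (S44) evaluated on mu = mu1
Wbar = c * np.cosh(mu1) * np.cos(nu)
Xbar = np.sqrt(omega) * Wbar                  # undo Xbar = sqrt(omega) * Wbar

print(np.allclose(Xbar**2 + Ybar**2, 1.0))    # True: the bead is the unit circle
```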
The solutions to the resulting inner problems again read \(C_{i}\mu+D_{i}\), and we again have

\[\mu=\frac{1}{2}\ln\left(1-2q_{2}(\bar{W},\bar{Y})+2\sqrt{q_{2}^{2}(\bar{W},\bar{Y})-q_{2}(\bar{W},\bar{Y})}\right),\] (S45)

where we now have

\[q_{2}(\bar{W},\bar{Y})=\frac{-(\bar{W}^{2}+\bar{Y}^{2}-c^{2})-\sqrt{(\bar{W}^{2}+\bar{Y}^{2}-c^{2})^{2}+4c^{2}\bar{Y}^{2}}}{2c^{2}}.\]

In \((\bar{X},\bar{Y})\) variables and expressed using \(\omega\), \(q_{2}\) reads (27). As before, we write the inner solutions in terms of the outer variables and conclude that in order to match we must have \(C_{0}=0\), and the inner boundary condition at the leading order gives \(D_{0}=\sin\left(\varphi_{*}\right)\). At the higher orders, we conclude (analogously to the \(\hat{x}\) case)

\[C_{i}=-\sin\left(\varphi_{*}\right)\left(\cosh^{-1}\left(\left(1-\omega\right)^{-1/2}\right)-\ln\left(2/c\right)\right)^{i-1}\qquad D_{i}=-\cosh^{-1}\Big{(}(1-\omega)^{-1/2}\Big{)}C_{i},\] (S46)

which leads to the inner expansion (26) for \(\hat{y}\).

### Leading-order approximations to the strain fields for small beads

We get

\[\hat{x}^{I}_{X/Y}=-\frac{\cos\left(\varphi_{*}\right)}{2\ln\left(1/a\right)+\ln\left(4\omega/(1-\omega)\right)-2\cosh^{-1}\left((1-\omega)^{-1/2}\right)}\frac{1}{1-2q+2\sqrt{q^{2}-q}}\left(-2+\frac{2q-1}{\sqrt{q^{2}-q}}\right)\frac{1}{a}\frac{\partial q}{\partial\bar{X}/\bar{Y}}+O(1)=\frac{1}{a}\frac{\cos\left(\varphi_{*}\right)}{2\ln\left(1/a\right)+\ln\left(4\omega/(1-\omega)\right)-2\cosh^{-1}\left((1-\omega)^{-1/2}\right)}\frac{1}{\sqrt{q^{2}-q}}\frac{\partial q}{\partial\bar{X}/\bar{Y}}+O(1)\] (S47)

\[\hat{y}^{I}_{X/Y}=\frac{1}{a}\frac{\sin\left(\varphi_{*}\right)}{2\ln\left(1/a\right)+\ln\left(4\omega/(1-\omega)\right)-2\cosh^{-1}\left((1-\omega)^{-1/2}\right)}\frac{1}{\sqrt{q_{2}^{2}-q_{2}}}\frac{\partial q_{2}}{\partial\bar{X}/\bar{Y}}+O(1),\] (S48)

where \(q(\bar{X},\bar{Y})\) and \(q_{2}(\bar{X},\bar{Y})\) are given by (25) and (27).
We have

\[\begin{split}\frac{\partial q}{\partial\bar{X}}&=-\frac{\omega\bar{X}}{1-\omega}\left(1+\frac{\omega\bar{X}^{2}+\bar{Y}^{2}+(1-\omega)}{\sqrt{(\omega\bar{X}^{2}+\bar{Y}^{2}-(1-\omega))^{2}+4(1-\omega)\omega\bar{X}^{2}}}\right),\\ \frac{\partial q}{\partial\bar{Y}}&=-\frac{\bar{Y}}{1-\omega}\left(1+\frac{\omega\bar{X}^{2}+\bar{Y}^{2}-(1-\omega)}{\sqrt{(\omega\bar{X}^{2}+\bar{Y}^{2}-(1-\omega))^{2}+4(1-\omega)\omega\bar{X}^{2}}}\right),\\ \frac{\partial q_{2}}{\partial\bar{X}}&=-\frac{\bar{X}}{1-\omega}\left(1+\frac{\bar{X}^{2}+\omega\bar{Y}^{2}-(1-\omega)}{\sqrt{(\bar{X}^{2}+\omega\bar{Y}^{2}-(1-\omega))^{2}+4(1-\omega)\omega\bar{Y}^{2}}}\right),\\ \frac{\partial q_{2}}{\partial\bar{Y}}&=-\frac{\omega\bar{Y}}{1-\omega}\left(1+\frac{\bar{X}^{2}+\omega\bar{Y}^{2}+(1-\omega)}{\sqrt{(\bar{X}^{2}+\omega\bar{Y}^{2}-(1-\omega))^{2}+4(1-\omega)\omega\bar{Y}^{2}}}\right).\end{split}\] (S49)

### Calculating the net force exerted on a small bead

Parameterizing the circle as \((\bar{X},\bar{Y})=(\cos{(\varphi)},\sin{(\varphi)})\), we get the unit normal vector \(\mathbf{N}=(\cos{(\varphi)},\sin{(\varphi)})\) (pointing into the material) in the undeformed configuration, and we calculate the total force exerted on the bead by the material as a line integral over the circle of dimensional radius \(a\tilde{D}\), and thus

\[\begin{split}\tilde{\mathbf{F}}_{b}&=\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}}{\tilde{R}_{c}}\int\limits_{0}^{2\pi}\begin{pmatrix}\mathcal{F}(\xi)+R_{b}\xi\mathcal{F}^{\prime}(\xi)\hat{x}_{X}^{I}+O(R_{b}^{2})&R_{b}\mathcal{F}(\xi)\hat{x}_{Y}^{I}+O(R_{b}^{2})\\ R_{b}\mathcal{F}(\xi)\hat{y}_{X}^{I}+O(R_{b}^{2})&\mathcal{F}(\xi)+R_{b}\xi\mathcal{F}^{\prime}(\xi)\hat{y}_{Y}^{I}+O(R_{b}^{2})\end{pmatrix}\begin{pmatrix}\cos{(\varphi)}\\ \sin{(\varphi)}\end{pmatrix}a\tilde{D}\,d\varphi\\ &=\frac{\pi\tilde{Y}\tilde{b}_{c}^{2}aR_{b}\xi\mathcal{F}^{\prime}(\xi)}{\varepsilon_{c}}\int\limits_{0}^{2\pi}\begin{pmatrix}\hat{x}_{X}^{I}&\omega\hat{x}_{Y}^{I}\\ \omega\hat{y}_{X}^{I}&\hat{y}_{Y}^{I}\end{pmatrix}\begin{pmatrix}\cos{(\varphi)}\\ \sin{(\varphi)}\end{pmatrix}d\varphi+O(R_{b}^{2}).\end{split}\] (S50)

Notice that the leading-order contributions (i.e. \(O(1)\) in \(R_{b}\)) cancel out.
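Before these integrals are evaluated analytically below, the derivatives (S49) can be confirmed symbolically. Since (25) is stated in the main text rather than here, the sketch below reconstructs it by substituting \(\bar{Z}=\bar{Y}/\sqrt{\omega}\) into \(q(\bar{X},\bar{Z})\) from Section S5.2.1 (a reconstruction, consistent with the evaluation in (S52)):

```python
import sympy as sp

Xb, Yb, w = sp.symbols("Xbar Ybar omega", positive=True)

# Reconstruction of (25): substitute Zbar = Ybar/sqrt(omega) into q(Xbar, Zbar).
S = w * Xb**2 + Yb**2 - (1 - w)
root = sp.sqrt(S**2 + 4 * (1 - w) * w * Xb**2)
q = (-S - root) / (2 * (1 - w))

# First line of (S49):
dqdX = -(w * Xb / (1 - w)) * (1 + (w * Xb**2 + Yb**2 + (1 - w)) / root)

print(sp.simplify(sp.diff(q, Xb) - dqdX))  # 0, matching (S49)
```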
We calculate the leading-order approximation for the \(X-\) and \(Y-\)components of the net force using (S47)-(S48) as \[\tilde{F}_{b}^{X}= \frac{\cos(\varphi_{*})\pi\bar{Y}\tilde{b}_{c}^{2}R_{b}\xi \mathcal{F}^{\prime}(\xi)}{\varepsilon_{c}\left(2\ln{(1/a)}+\ln{(4\omega/(1- \omega))}-2\cosh^{-1}{(1-\omega)}\right)}\int\limits_{0}^{2\pi}\frac{1}{\sqrt {q^{2}-q}}\left(\frac{\partial q}{\partial\bar{X}}\cos{(\varphi)}+\omega \frac{\partial q}{\partial Y}\sin{(\varphi)}\right)d\varphi+O(a)\] \[\tilde{F}_{b}^{Y}= \frac{\sin(\varphi_{*})\pi\bar{Y}\tilde{b}_{c}^{2}R_{b}\xi \mathcal{F}^{\prime}(\xi)}{\varepsilon_{c}\left(2\ln{(1/a)}+\ln{(4\omega/(1- \omega))}-2\cosh^{-1}{(1-\omega)}\right)}\int\limits_{0}^{2\pi}\frac{1}{\sqrt {q_{2}^{2}-q_{2}}}\left(\omega\frac{\partial q_{2}}{\partial\bar{X}}\cos{( \varphi)}+\frac{\partial q_{2}}{\partial\bar{Y}}\sin{(\varphi)}\right)d \varphi+O(a)\] (S51) #### s5.4.1 Evaluating the integrands at the bead Evaluating (25) at the bead \((\bar{X},\bar{Y})=(\cos{(\varphi)},\sin{(\varphi)})\) and using \(\sin^{2}{(\varphi)}=1-\cos^{2}{(\varphi)}\), we get \[q(\varphi)= \frac{-\omega\cos^{2}{(\varphi)}-\sin^{2}{(\varphi)}+(1-\omega) -\sqrt{(\omega\cos^{2}{(\varphi)}+\sin^{2}{(\varphi)}-(1-\omega))^{2}+4(1- \omega)\omega\cos^{2}{(\varphi)}}}{2(1-\omega)}\] (S52) \[=\frac{(1-\omega)\cos^{2}{(\varphi)}-\omega-\sqrt{(\omega-(1- \omega)\cos^{2}{(\varphi)})^{2}+4(1-\omega)\omega\cos^{2}{(\varphi)}}}{2(1- \omega)}=-\frac{\omega}{1-\omega}\] and conclude \[\frac{1}{\sqrt{q^{2}-q}}=\frac{1-\omega}{\sqrt{\omega}}.\] Analogously, it is easy to show that \(q_{2}=q\) at the bead and thus \[\frac{1}{\sqrt{q_{2}^{2}-q_{2}}}=\frac{1-\omega}{\sqrt{\omega}}\] Similarly, using (S49), the same trigonometric identity and our knowledge on what the square root term in (S52) simplifies into, we get at the bead \[\frac{\partial q}{\partial\bar{X}}\cos{(\varphi)}+\omega\frac{ \partial q}{\partial Y}\sin{(\varphi)}=\] \[-\frac{\omega}{1-\omega}\left\{\cos^{2}{(\varphi)}\left(1+\frac{ \omega\cos^{2}{(\varphi)}+\sin^{2}{(\varphi)}+(1-\omega)}{\sqrt{(\omega\cos^{2 }{(\varphi)}+\sin^{2}{(\varphi)}-(1-\omega))^{2}+4\omega(1-\omega)\cos^{2}{( \varphi)}}}\right)+\right.\] \[\left.\sin^{2}{(\varphi)}\left(1+\frac{\omega\cos^{2}{(\varphi)}+ \sin^{2}{(\varphi)}-(1-\omega)}{\sqrt{(\omega\cos^{2}{(\varphi)}+\sin^{2}{( \varphi)}-(1-\omega))^{2}+4\omega(1-\omega)\cos^{2}{(\varphi)}}}\right)\right\}=\] \[-\frac{\omega}{1-\omega}\left\{1+\frac{\cos^{2}{(\varphi)}(2- \omega+(\omega-1)\cos^{2}{(\varphi)})+(1-\cos^{2}{(\varphi)})(\omega+(\omega-1) \cos^{2}{(\varphi)})}{\omega+(1-\omega)\cos^{2}{(\varphi)}}\right\}=-\frac{2 \omega}{1-\omega}\] and following the same steps also \[\omega\frac{\partial q_{2}}{\partial\bar{X}}\cos{(\varphi)}+\frac{\partial q_{ 2}}{\partial Y}\sin{(\varphi)}=-\frac{2\omega}{1-\omega}.\] Substituting back into (S51) we get \[\tilde{\mathbf{F}}_{b}=-(\cos{(\varphi_{*})},\sin{(\varphi_{*})})\tilde{F}_{b}^{0}+O( a),\] (S53) where \[\tilde{F}_{b}^{0}=\frac{2\pi R_{b}/\varepsilon_{c}\sqrt{\xi\mathcal{F}(\xi) \mathcal{F}^{\prime}(\xi)}}{\ln{(1/a)}+\ln{(2\sqrt{\omega/(1-\omega)})}-\cosh^{ -1}((1-\omega)^{-\frac{1}{2}})}\pi\tilde{Y}\tilde{b}_{c}^{2}.\] (S54) Using the constitutive law (9) and simplifying, the leading-order dimensionless net force can be written as \[\begin{split}& F_{b}^{0}=\frac{2\pi R_{b}/\varepsilon_{c}\sqrt{ \mathcal{F}_{p}\left(1+\mathcal{F}_{p}\right)}}{\ln{(2\sqrt{\mathcal{F}_{p}}/a )}-\cosh^{-1}(\sqrt{1+\mathcal{F}_{p}})}=\frac{2\pi R_{b}/\varepsilon_{c}\sqrt 
{\mathcal{F}_{p}\left(1+\mathcal{F}_{p}\right)}}{\ln{(2\sqrt{\mathcal{F}_{p}}/a)}-\ln{(\sqrt{1+\mathcal{F}_{p}}+\sqrt{\mathcal{F}_{p}})}}\\ &=\frac{2\pi R_{b}/\varepsilon_{c}\sqrt{\mathcal{F}_{p}\left(1+\mathcal{F}_{p}\right)}}{\ln{\left(\frac{2\sqrt{\mathcal{F}_{p}}}{a\left(\sqrt{1+\mathcal{F}_{p}}+\sqrt{\mathcal{F}_{p}}\right)}\right)}}=\frac{2\pi R_{b}/\varepsilon_{c}\sqrt{\mathcal{F}_{p}\left(1+\mathcal{F}_{p}\right)}}{\ln{\left(\frac{2\left(\sqrt{\mathcal{F}_{p}(1+\mathcal{F}_{p})}-\mathcal{F}_{p}\right)}{a}\right)}},\end{split}\] (S55) which gives (29)-(30).
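Two of the simplifications above are easy to verify numerically (again a check of ours): the constancy of \(q\) on the bead used in (S52), and the equality of the three logarithmic forms of the denominator in (S55), which relies on \(\cosh^{-1}(x)=\ln(x+\sqrt{x^{2}-1})\):

```python
# Check that q from (S52) is constant on the unit circle, and that the three
# denominators written in (S55) are the same number.
import numpy as np

w = 0.3
phi = np.linspace(0.0, 2.0 * np.pi, 9)
s = w * np.cos(phi)**2 + np.sin(phi)**2
q = (-s + (1 - w) - np.sqrt((s - (1 - w))**2
     + 4 * (1 - w) * w * np.cos(phi)**2)) / (2 * (1 - w))
assert np.allclose(q, -w / (1 - w))                        # (S52)
assert np.allclose(1 / np.sqrt(q**2 - q), (1 - w) / np.sqrt(w))

Fp, a = 2.0, 1e-3                                          # sample F_p > 0, a << 1
d1 = np.log(2 * np.sqrt(Fp) / a) - np.arccosh(np.sqrt(1 + Fp))
d2 = np.log(2 * np.sqrt(Fp) / a) - np.log(np.sqrt(1 + Fp) + np.sqrt(Fp))
d3 = np.log(2 * (np.sqrt(Fp * (1 + Fp)) - Fp) / a)
assert np.isclose(d1, d2) and np.isclose(d2, d3)           # denominators in (S55)
```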
2310.12895
Physical properties of circumnuclear ionising clusters. II. NGC 7469
Circumnuclear star forming regions (CNSFRs) are massive clusters found close to galactic nuclei. These entities give us an excellent opportunity to study star formation in environments with high metallicity and to relate it to active galactic nuclei. Our principal aim is to derive the physical properties and dynamical masses of the CNSFRs in the two rings of the spiral NGC 7469, categorized as a luminous infrared galaxy (LIRG) and hosting a Seyfert 1 nucleus. We have used archival data obtained with the MUSE spectrograph attached to one of the ESO VLT telescopes and we have applied the techniques shown in the first paper of the series. Regions in the studied galaxy show large sizes, which can be explained by the stellar winds produced by WR stars. The inner ring regions seem to be more compact than the outer ones, showing higher electron densities and filling factors. The young stellar population of the clusters has contributions from ionising populations with ages around 5 Ma, and its mass constitutes less than 1\% of the total mass of each cluster. The inner ring regions, which are close to the active galactic nucleus, are probably the only ones that have enough mass to survive the action of the AGN. They constitute $\sim$ 90\% of the total inner ring mass.
S. Zamora, Ángeles I. Díaz
2023-10-19T16:45:13Z
http://arxiv.org/abs/2310.12895v1
# Physical properties of circumnuclear ionising clusters. II. NGC 7469 ###### Abstract Context:Circumnuclear star forming regions (CNSFRs) are massive clusters found close to galactic nuclei. These entities give us an excellent opportunity to study star formation in environments with high metallicity and to relate it to active galactic nuclei. Aims:Our principal aim is to derive the physical properties and dynamical masses of the CNSFRs in the two rings of the spiral NGC 7469, categorized as a luminous infrared galaxy (LIRG) and hosting a Seyfert 1 nucleus. Methods:We have used archival data obtained with the MUSE spectrograph attached to one of the ESO VLT telescopes and we have applied the techniques shown in the first paper of the series. Results:Regions in the studied galaxy show large sizes, which can be explained by the stellar winds produced by WR stars. The inner ring regions seem to be more compact than the outer ones, showing higher electron densities and filling factors. The young stellar population of the clusters has contributions from ionising populations with ages around 5 Ma, and its mass constitutes less than 1% of the total mass of each cluster. Conclusions:The inner ring regions, which are close to the active galactic nucleus, are probably the only ones that have enough mass to survive the action of the AGN. They constitute \(\sim 90\) % of the total inner ring mass. ## 1 Introduction This is the second paper of a series studying the peculiar conditions of star formation in circumnuclear regions of early-type spiral galaxies, in particular the kinematics of the stars and gas involved, using archival data obtained with the MUSE spectrograph attached to one of the ESO VLT telescopes. Circumnuclear star-forming regions (CNSFRs) represent a common mode of star formation found close to galactic nuclei. Some of these regions, being a few hundred pc in size and showing integrated H\(\alpha\) luminosities which overlap with those of HII galaxies (typically higher than \(10^{39}\) erg s\({}^{-1}\)), seem to be composed of several HII regions ionised by luminous compact stellar clusters whose sizes, as measured from high spatial resolution Hubble Space Telescope (HST) images, are seen to be of only a few pc. These regions are young (age \(<10\) Ma) and massive (up to \(2\times 10^{8}\) M\({}_{\odot}\)) (Hagele et al. 2007a, 2013). In the UV and B wavebands, they contribute substantially to the emission of the entire nuclear region, even in the presence of an active nucleus (see e.g. Colina et al. 2002). In fact, in some nearby galaxies presenting circumnuclear star-forming rings this is the strongest organized source of far-UV (FUV) emission, and 30% of the total observed FUV emission is produced within a radius of 10". At redshifts of z = 2-3, this structure would be confined to a region 0.2" in diameter for \(\Omega=1\) and would appear point-like in low-resolution observations. Consequently, in the absence of diagnostic spectroscopy, some of these objects could be mistaken for an active galactic nucleus (AGN). At any rate, it is nowadays generally accepted that some connection exists between star formation and activity in galactic nuclei, and young stars appear as one component of the unified model of AGN, giving rise to the blue featureless continuum which is observed in Seyfert 2 galaxies where the broad line region is obscured (see Gonzalez Delgado et al. 1998, and references therein). NGC 7469 gives us an excellent opportunity to study these phenomena in detail.
It is one of the brightest blue galaxies first listed by Seyfert (1943), included also in Arp's Atlas of Peculiar Galaxies (Arp 1966) with number 298. It is relatively nearby (z = 0.01627), has been classified as an SABa(rs) and categorized as a luminous infrared galaxy (LIRG). The galaxy has a close companion, IC 5283, forming an isolated interacting pair first catalogued by Arp (1966). The companion is located at \(\sim\) 22 kpc (Burbidge et al. 1963). The pair interaction was studied in Marquez & Moles (1994), and Genzel et al. (1995) suggested that this interaction may have taken place more than 150 Ma ago, triggering the powerful starburst found in the central 3 arcsec of the galaxy that is responsible for 60 % of the bolometric luminosity of the whole galaxy. The stellar population of their young stellar clusters has been studied in detail by Diaz-Santos et al. (2007). In Section 2, we describe the observations and the selection of the sample objects. Our results are presented in Section 3. Section 4 is devoted to the ionizing star cluster properties and the dynamical mass derivation is described in Section 5. A discussion of all our results is given in Section 6. Finally, the closing sections summarize this work and present our conclusions. ## 2 Observations and sample selection In this work, we analyse the circumnuclear environment of the almost face-on galaxy NGC 7469, which shows two prominent star-forming rings, using publicly available observations obtained with the IFS MUSE (Bacon et al. 2010). Some characteristics of this galaxy are given in Table 1. The galaxy is a well studied early spiral (SABa) hosting a Seyfert 1 nucleus and an actively star forming ring very close to it, within 1.5 arcsec from the galaxy centre, with circular appearance. Besides, it also shows a second, incomplete, ring further out, of elliptical appearance and with major and minor axes of 21 and 13.2 arcsec respectively (see Buta & Crocker 1993), at the limit of what can be considered as "circumnuclear" according to the definition given in Alvarez-Alvarez et al. (2015). These two structures can be easily identified on moderate resolution images of the galaxy. Throughout this paper we will refer to them as the inner and outer ring, respectively. 1 Footnote 1: Please note that in Buta and Crocker’s notation these structures are referred to as nuclear and inner ring respectively. NGC 7469 was observed as part of the first MUSE Science Verification run on 2014 August 19 under ESO Programme 60.A-9339(A). The observing time was split into four exposures of 600 s with an offset of 1 arcsec in declination and different rotations among observations. Offset sky observations were taken after the target observations for adequate sky subtraction. The median seeing was 1.6 arcsec. The reduction of the data was performed by the Quality Control Group at ESO in an automated process applying version 0.18.5 of the MUSE pipeline (Weilbacher et al. 2014), including all the steps listed in Zamora & Diaz (2023a). We have also used additional data from the Hubble Space Telescope (HST) in the F336W and F606W filters. The UV data were acquired on 2018 October 29 with the Wide Field Camera 3 (WFC3) as part of the program GO/15472, providing high spatial resolution images (\(\simeq\) 0.1 arcsec pixel\({}^{-1}\)) and a FoV of 150 arcsec\({}^{2}\). These data were retrieved from the Hubble Legacy Archive and organised in 3 exposures of 820 s each.
The optical data were acquired on 1994 June 10 with the Wide Field and Planetary Camera 2 (WFPC2) as part of the program ID/5479 and have an exposure time of 500 s. Their reduction was performed by the Space Telescope Science Institute (STScI) using available calibration files taken for this observation and taking into account different dithering positions. ## 3 Results ### Ionised gas The data presented here have been analysed following the methodology already used and tested in Zamora & Diaz (2023a). The analysis is based on: (i) constructing 2D maps for different emission lines and continuum bands; (ii) selecting HII regions from the H\(\alpha\) emission line map; (iii) extracting each region spectrum and measuring the available emission lines; (iv) calculating the integrated magnitudes in the r and i SDSS filters; and (v) deriving chemical abundances for each of the CNSFRs. In this section, only the specific details introduced in this analysis due to the particular characteristics of NGC 7469 are explained. #### 3.1.1 Emission line and continuum maps From the observed data cubes we have constructed 2D maps for different emission lines and two continuum bands. The top left panel of Fig. 1 shows the spatial distribution of the observed H\(\alpha\) flux with contours of HST images from the WFC3 camera in the F336W filter superimposed. In this filter young star clusters should be most clearly visible and hence this comparison provides information about the spatial resolution of our MUSE instrumental configuration. The agreement between the HST contours and the MUSE maps ensures the existence of young ionising stellar populations in the observed clusters. On this map the emission of the nucleus and the inner ring of the galaxy are also clearly distinguished. The top central panel of this figure shows the [OIII]\(\lambda\)5007 Å emission map. The emission from the active nucleus is predominant in this line and it appears smeared along the galaxy disc. The H\(\alpha\) and H\(\beta\) maps have been combined to produce an extinction map, which is shown in the top right panel of the figure. It has been calculated by adopting the Galactic extinction law of Miller & Mathews (1972), with a specific attenuation of R\({}_{V}\) = 2.97, and the theoretical ratio H\(\alpha\)/H\(\beta\) = 2.87 from Osterbrock & Ferland (2006) (n\({}_{e}\) = 100 cm\({}^{-3}\), T\({}_{e}\) = 10\({}^{4}\) K, case B recombination). In this map the inner ring is clearly visible, with the whole of it showing a similar extinction (\(\sim\) 2 mag). At the galaxy nucleus itself, A\({}_{V}\sim\) 0, suggesting some kind of observational problem with the H\(\alpha\) line emission. Actually, the existence of 30 pixels in this area of the galaxy with a H\(\alpha\)/H\(\beta\) ratio < 2.7 has been confirmed. The H\(\alpha\) line seems to be saturated in these pixels, as it probably is in surrounding pixels where the number counts are close to the non-linear regime of the detector. On the other hand, there are apparently higher extinction values at the edges of the HII regions, which could be due to the low S/N ratio in the H\(\beta\) emission line. The two bottom left panels of Fig. 1 show maps of the observed continuum fluxes at blue and red wavelengths, 5400 Å and 8150 Å respectively. Superimposed are the contours of HST-WFC3 data in the F336W filter. In both maps, the dominant continuum emission is shown to proceed from the galaxy disc.
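For concreteness, the extinction estimate just described reduces to one short computation per spaxel. The sketch below is ours, not the authors' pipeline; the only numbers it uses from the paper are the case-B ratio H\(\alpha\)/H\(\beta\) = 2.87 and the reddening-curve value f(H\(\alpha\)) = -0.313 listed in Table 3 (f(H\(\beta\)) = 0 by definition):

```python
# Minimal sketch of the Balmer-decrement extinction estimate: the observed
# ratio satisfies I_obs(Ha)/I_obs(Hb) = 2.87 * 10^(-c(Hb) * f(Ha)), so c(Hb)
# follows directly; lines are then corrected as I_int = I_obs * 10^(c (1 + f)).
import numpy as np

THEORY_HA_HB = 2.87      # case B, n_e = 100 cm^-3, T_e = 1e4 K (quoted above)
F_HA = -0.313            # f(lambda) at Halpha for the adopted law (Table 3)

def c_hbeta(halpha_obs, hbeta_obs):
    """Logarithmic extinction constant c(Hbeta) from observed Balmer fluxes."""
    return np.log10((halpha_obs / hbeta_obs) / THEORY_HA_HB) / (-F_HA)

def deredden(flux_obs, f_lambda, c):
    """Reddening-correct an absolute line flux in the same convention."""
    return flux_obs * 10.0 ** (c * (1.0 + f_lambda))

# An observed decrement of 6.0 (the upper quality-control limit used in
# Sec. 3.1.2) corresponds to c(Hbeta) ~ 1.02:
print(c_hbeta(6.0, 1.0))
```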
A Sersic fit to the stellar surface brightness has been done in order to better understand the behaviour of the continuum in our HII regions. Three different components with different scale-lengths have been fitted, taking the geometrical values of this galaxy into account (PA = 128\({}^{\circ}\), i = 45\({}^{\circ}\); Davies et al. 2004). Fig. 2 shows from left to right the original map, the fitted model and the residuals for the blue (upper panels) and red (lower panels) continuum maps. Only a few HII regions within the outer ring stand out of the galaxy profile, while all inner-ring regions show high fluxes in these bands. Also, there is a diffuse excess in both maps that follows the area of the outer ring and seems to fall towards the central part of the galaxy. \begin{table} \begin{tabular}{c c} \hline Galaxy & NGC7469 \\ \hline RA J2000 (deg)\({}^{a}\) & 345.815095 \\ Dec J2000 (deg)\({}^{a}\) & 8.873997 \\ Morphological type & (R’)SAB(rs)a \\ Nuclear type & Sy 1 \\ z & 0.01627 \\ Distance (Mpc)\({}^{b}\) & 66.47 \\ Scale (pc/arcsec)\({}^{c}\) & 316 \\ \hline \multicolumn{2}{c}{\({}^{a}\) Clements (1981).} \\ \multicolumn{2}{c}{\({}^{b}\) Tully \& Fisher (1988).} \\ \multicolumn{2}{c}{\({}^{c}\) Cosmology corrected scale.} \\ \end{tabular} \end{table} Table 1: NGC 7469 global properties. Finally, in the bottom right panel of Fig. 1 we can see the map of the equivalent width (EW) of H\(\alpha\) (in Å). All circumnuclear regions have values EW(H\(\alpha\)) \(>\) 50 Å, consistent with the presence of recent star formation, not older than 10 Ma. #### 3.1.2 HII region selection Our HII region selection method is described in detail in Zamora & Diaz (2023a). It is based on an iterative procedure that works on a line emission map and detects high intensity clumps. It requires several input parameters: the maximum size of the regions, the diffuse gas emission level, the relative flux intensity of each of the regions with respect to the emission of its centre, and the maximum and minimum extent of regions according to their typical projected size and the point spread function (PSF) of observations. For regions within the outer ring we have used the observed H\(\alpha\) flux map to select HII regions in the same way as was done for the NGC 7742 galaxy. However, for the inner-ring regions we have decided to use the observed HeI\(\lambda\) 6678 Å flux map due to the already mentioned saturation effects in the H\(\alpha\) emission line in the central parts of the galaxy (see Sec. 3.1.1). Thus, we have constructed an observed HeI\(\lambda\) 6678 Å map (see Section 3.1.1) assuming a linear behaviour of the continuum emission and choosing side-bands around the line of a given width (\(\lambda_{c}\) = 6678 Å, \(\Delta\lambda\) = 3 Å, \(\Delta\lambda_{left}\) = 6650 Å and \(\Delta\lambda_{right}\) = 6695 Å, all wavelengths in rest frame). To do that, we first compared the spatial distribution of the hydrogen and helium emission. For this purpose, an exponential fit to the AGN brightness has been done. Fig. 3 shows from left to right the original map, the fitted model and the residuals. We can see that both maps show the same spatial distribution, although the HeI\(\lambda\) 6678 Å emission map has a lower S/N. The two procedures are slightly different since the HeI flux intensity is weaker.
A longer exposure time would have allowed us to obtain a higher S/N in this line and also in other weak lines needed for the determination of the physical conditions of the gas (see Zamora et al. 2022). Figure 1: From left to right and top to bottom: maps of the observed H\(\alpha\) and [OIII]\(\lambda\)5007 Å emission line fluxes (in units of 10\({}^{-20}\) erg/s/cm\({}^{2}\) and logarithmic scale); A\({}_{V}\) extinction (in magnitudes); observed continuum fluxes in the blue and red parts of the spectrum (5400 Å and 8150 Å respectively, in units of 10\({}^{-17}\) erg/s/cm\({}^{2}\) and logarithmic scale); and EW(H\(\alpha\)) in Å. Upper and bottom left and center images show superimposed contours of the HST-UV image described in the text. Orientation is North up and East to the left. Figure 2: Left panels: maps for the observed continua in the blue and red parts of the spectrum (5400 Å and 8150 Å respectively, in logarithmic scale). Central panels: fitted models to the galaxy disc profile (see text for details). Right panels: residuals between the continuum maps and the disc fitted profile. Orientation is North up and East to the left. The panel sizes are 32 arcsec \(\times\) 32 arcsec. Finally, we have imposed the following quality control requirements on the integrated spectra extracted from each selected region to ensure their physical meaning and to be certain that the emission has a star formation origin: EW(H\(\alpha\)) > 6 Å (Sanchez et al. 2015) and 2.7 < H\(\alpha\)/H\(\beta\) < 6.0 (Osterbrock & Ferland 2006, n\({}_{e}\) = 100 cm\({}^{-3}\), T\({}_{e}\) = 10\({}^{4}\) K). At the end of the entire procedure, we have obtained a total of 23 HII regions in the outer ring and 5 in the inner one. Fig. 4 shows the HII regions selected with the use of the described methodology for the two rings and Table 2 lists their characteristics: the position of each HII region in the ring with respect to that of the galaxy centre, its size and its observed integrated H\(\alpha\) emission flux. Next, we have used HST data from the WFC3 camera in the F606W filter in order to determine if our selected regions are associated with single young stellar clusters. Fig. 5 shows the emission maps in this band where we have superimposed our selected CNSFR apertures. The left panel of the figure shows the outer ring of the galaxy, where we can identify two regions, R1 and R3, that seem to encompass large star formation complexes instead of single clusters. Also regions R2, R7 and R9 look like complexes in the HST images, and the 5 mentioned regions show non-symmetric profiles in the H\(\alpha\) emission line (see Fig. 1). We will keep these regions within the CNSFRs study sample, keeping in mind the possible effects of this result on our analysis. The right panel of the figure shows enlarged the inner galaxy ring, where the star clusters identified by Diaz-Santos et al. (2007, DS07) at this wavelength are marked with crosses. Regions Ra, Rc and Rd seem to contain multiple stellar clusters, although Rc only shows one ionising cluster in the HST-WFC3 F336W filter (see Fig. 1). On the other hand, region Re does not exhibit flux excess in either of the two filters, at 3360 Å or 6600 Å wavelengths. #### 3.1.3 Emission line measurements and uncertainties We have extracted each region spectrum by integrating the flux inside its corresponding aperture.
All emission lines with sufficient S/N (higher than 2 \(\sigma_{c}\)) appear to have at least two kinematical components, and four of them (regions R2, R10, R13 and Rd) show three components in the [OIII]\(\lambda\lambda\) 4959,5007 Å emission lines. In order to separate the different kinematical components, we have used the code LiMe (Line Measuring library; Fernandez et al. 2022). We have taken into account only those components that meet the requirement \(A_{B}>3\sigma_{l}\), with \(A_{B}\) and \(\sigma_{l}\) being the Gaussian amplitude and the local standard deviation of the residuals of the Gaussian fit in 30 Å around each line centre. With this criterion, we have assigned only one component to the [SIII]\(\lambda\) 9069 Å emission line in all the outer ring regions. Fig. 6 shows two de-blended examples: outer ring regions R6 and R10, showing two and three kinematical components in the [OIII]\(\lambda\lambda\) 4959,5007 Å lines, and the inner ring region Rc, showing an extra component in the H\(\alpha\) emission line that seems to be associated with high density and high velocity gas. For each line, we have ascribed the most intense and narrow component to the emission of the ionising cluster, and we have subtracted the rest of the components from the total spectrum in order to perform our subsequent analysis. Next, we have measured the intensities of the identified emission lines in our spectra following the procedure described in Zamora & Diaz (2023a). The intensities of all prominent emission lines with a S/N > 3 have been measured, discarding the most uncertain values. These lines are: H\(\beta\) and H\(\alpha\) Balmer lines; [OIII]\(\lambda\lambda\) 4959,5007 Å, [NII]\(\lambda\lambda\) 6548,84 Å, [SII]\(\lambda\lambda\) 6716,31 Å, and [SIII]\(\lambda\) 9069 Å forbidden lines. We have also measured the weak [SIII]\(\lambda\) 6312 Å and HeI\(\lambda\) 6678 Å lines detected with S/N > 1 and additionally, in the inner ring regions, the HeI\(\lambda\) 5875 Å line with the same precision. The [SIII]\(\lambda\) 6312 Å and HeI\(\lambda\) 6678 Å lines have been measured in 8 and 10 outer ring regions, respectively. \begin{table} \begin{tabular}{c c c} \hline Region ID & Area & Offsets from galaxy center \({}^{a}\) \\ & (arcsec\({}^{2}\)) & (arcsec) \\ \hline R1 & 7.72 & 0.2, -7.4 \\ R2 & 7.36 & -6.2, 0.8 \\ R3 & 5.80 & -8.6, -4.8 \\ R4 & 8.44 & 5.6, -1.4 \\ R5 & 5.76 & 3.2, -4.8 \\ R6 & 2.76 & 10.0, -3.0 \\ R7 & 7.04 & 8.2, -4.4 \\ R8 & 6.76 & -3.0, -8.4 \\ R9 & 3.24 & 9.0, 5.0 \\ R10 & 1.48 & -3.2, 4.6 \\ R11 & 6.80 & -8.6, -2.1 \\ R12 & 3.56 & 8.8, 1.6 \\ R13 & 2.76 & -4.6, 3.8 \\ R14 & 3.04 & -5.2, -8.6 \\ R15 & 1.72 & 6.9, -3.4 \\ R16 & 1.76 & 9.0, 3.2 \\ R17 & 0.88 & -7.9, -0.8 \\ R18 & 1.40 & -7.2, 2.6 \\ R19 & 1.36 & -9.8, -6.2 \\ R20 & 1.56 & 5.8, 0.7 \\ R21 & 2.68 & -5.6, 5.8 \\ R22 & 4.52 & 5.6, -4.6 \\ R23 & 1.48 & 6.6, -8.0 \\ \hline Ra & 1.48 & 0.6, -1.6 \\ Rb & 1.00 & 1.8, -0.4 \\ Rc & 1.44 & 1.2, 0.8 \\ Rd & 0.76 & -1.0, 1.2 \\ Re & 0.80 & -1.6, -0.8 \\ \hline \end{tabular} \({}^{a}\) Offsets from centre of the galaxy to the centre of each individual region. \end{table} Table 2: Selection characteristics for observed CNSFRs. Figure 3: Upper panels, from left to right: map for the observed H\(\alpha\) emission line, fitted model to the AGN profile (see text) and residuals between the observed map and the AGN fitted profile. Lower panels: same maps for the observed HeI\(\lambda\) 6678 Å emission line. Orientation is North up, East to the left. The panel sizes are 10 arcsec \(\times\) 10 arcsec.
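The component separation itself is done with LiMe in this work; purely as an illustration of the \(A_{B}>3\sigma_{l}\) acceptance criterion described above, the toy sketch below (ours, with synthetic data and generic scipy fitting, not the authors' pipeline) de-blends a two-component profile and applies the amplitude cut:

```python
# Toy de-blending sketch: fit two Gaussian components to a synthetic blended
# [OIII]-like profile and keep only components with amplitude > 3 sigma_l,
# where sigma_l is the rms of the fit residuals around the line centre.
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

rng = np.random.default_rng(0)
wave = np.linspace(4990.0, 5025.0, 300)                      # Angstrom
flux = two_gauss(wave, 10.0, 5007.0, 1.2, 3.0, 5009.5, 3.5)  # narrow + broad
flux += rng.normal(0.0, 0.3, wave.size)                      # noise

popt, _ = curve_fit(two_gauss, wave, flux,
                    p0=(8.0, 5007.0, 1.0, 2.0, 5010.0, 3.0))
sigma_l = np.std(flux - two_gauss(wave, *popt))              # local residual rms
components = [popt[0:3], popt[3:6]]                          # (A, mu, sigma)
kept = [c for c in components if c[0] > 3.0 * sigma_l]       # A_B > 3 sigma_l
```

The narrowest retained component would then be ascribed to the ionising cluster and the rest subtracted, as done in the text.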
For the inner ring regions, there are three regions with [SIII]\(\lambda\) 6312 A measurements and in all the regions both HeI lines have been measured. #### 3.1.4 Extinction correction For the regions in the outer galaxy ring, the measured line intensities have been corrected using the reddening constant, c(H\(\beta\)), derived from the ratio of the Balmer H\(\alpha\) and H\(\beta\) lines assuming a simple screen distribution of the dust and the same extinction for emission lines and the stellar continuum. We have adopted the Galactic extinction law of Miller & Mathews (1972), with a specific attenuation of R\({}_{v}\) = 2.97. A theoretical value for the H\(\alpha\)/H\(\beta\) ratio of 2.87 corresponding to n\({}_{e}\) = 100 cm\({}^{-3}\) and T\({}_{e}\) = 10\({}^{4}\) K for the electron density and temperature respectively has been adopted. The upper panel of Table 3 shows, for each selected outer ring HII region, the reddening corrected emission line intensities of strong lines relative to H\(\beta\), and its corresponding reddening constant. On the other hand, the study of circumnuclear regions around active galactic nuclei often presents an additional problem due to the high surface brightness at the galaxy centre. If a single exposure image is used for the analysis of outer and inner HII Figure 4: Left panel: HII regions selected using our segregation program on the H\(\alpha\) observed emission line map. Right panel: HII regions selected with our segregation program with the HeI \(\lambda\) 7768 Å observed emission line map. Logarithmic color scale. Orientation is north up, east to the left. The physical scale is represented at the bottom left corner of the map. The outer and inner rings are marked with blue ellipses. Figure 5: Left panel: outer ring selected HII regions superimposed on the HST WPC3 F606W image. The two regions labeled R1 and R2 seem to be associated to large star forming complexes. Right panel: enlargement of the central 5 by 5 arcsec of the galaxy showing the HII regions selected in the inner ring. The young star clusters identified by Díaz-Santos et al. (2007, DS07) are marked with black crosses. Orientation is North up, East to the left. 
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline & Line & Hb & [OIII] & [OIII] & [NII] & H\(\alpha\) & [NII] & [SII] & [SIII] & [SIII] \\ & \(\lambda\) & 4861 & 4959 & 5007 & 6548 & 6563 & 6584 & 6717 & 6731 & 9069 \\ & f(\(\lambda\)) & 0.000 & -0.024 & -0.035 & -0.311 & -0.313 & -0.316 & -0.334 & -0.336 & -0.561 \\ \hline Region ID & c(H\(\beta\)) & 1(H\(\beta\))\({}^{a}\) & & & & & & & & \\ \hline R1 & 0.50 \(\pm\) 0.02 & 7.53 \(\pm\) 0.42 & 42 \(\pm\) 10 & 118 \(\pm\) 9 & 304 \(\pm\) 6 & 2870 \(\pm\) 68 & 936 \(\pm\) 9 & 469 \(\pm\) 7 & 335 \(\pm\) 7 & 82 \(\pm\) 9 \\ R2 & 0.78 \(\pm\) 0.03 & 8.05 \(\pm\) 0.67 & 206 \(\pm\) 18 & 553 \(\pm\) 18 & 457 \(\pm\) 9 & 2870 \(\pm\) 101 & 1394 \(\pm\) 13 & 595 \(\pm\) 13 & 438 \(\pm\) 12 & 76 \(\pm\) 11 \\ R3 & 0.57 \(\pm\) 0.04 & 4.47 \(\pm\) 0.40 & 83 \(\pm\) 17 & 229 \(\pm\) 17 & 387 \(\pm\) 9 & 2870 \(\pm\) 108 & 1290 \(\pm\) 13 & 551 \(\pm\) 9 & 368 \(\pm\) 8 & 111 \(\pm\) 12 \\ R4 & 0.54 \(\pm\) 0.05 & 4.89 \(\pm\) 0.59 & 109 \(\pm\) 23 & 303 \(\pm\) 23 & 415 \(\pm\) 13 & 2870 \(\pm\) 146 & 1188 \(\pm\) 17 & 610 \(\pm\) 15 & 478 \(\pm\) 14 & 135 \(\pm\) 17 \\ R5 & 0.46 \(\pm\) 0.05 & 2.75 \(\pm\) 0.36 & 145 \(\pm\) 24 & 407 \(\pm\) 24 & 498 \(\pm\) 13 & 2870 \(\pm\) 158 & 1504 \(\pm\) 19 & 720 \(\pm\) 16 & 555 \(\pm\) 15 & 149 \(\pm\) 19 \\ R6 & 0.88 \(\pm\) 0.03 & 3.16 \(\pm\) 0.23 & 56 \(\pm\) 14 & 148 \(\pm\) 13 & 315 \(\pm\) 8 & 2870 \(\pm\) 90 & 1025 \(\pm\) 13 & 385 \(\pm\) 10 & 275 \(\pm\) 9 & 154 \(\pm\) 12 \\ R7 & 0.81 \(\pm\) 0.04 & 5.51 \(\pm\) 0.54 & 60 \(\pm\) 18 & 159 \(\pm\) 17 & 335 \(\pm\) 11 & 2870 \(\pm\) 120 & 1118 \(\pm\) 16 & 521 \(\pm\) 12 & 327 \(\pm\) 11 & 122 \(\pm\) 15 \\ R8 & 0.72 \(\pm\) 0.04 & 4.38 \(\pm\) 0.46 & 65 \(\pm\) 22 & 175 \(\pm\) 21 & 313 \(\pm\) 11 & 2870 \(\pm\) 128 & 1089 \(\pm\) 16 & 575 \(\pm\) 13 & 372 \(\pm\) 11 & 60 \(\pm\) 14 \\ R9 & 0.85 \(\pm\) 0.04 & 2.96 \(\pm\) 0.32 & 142 \(\pm\) 25 & 376 \(\pm\) 23 & 433 \(\pm\) 11 & 2870 \(\pm\) 131 & 1122 \(\pm\) 15 & 512 \(\pm\) 11 & 385 \(\pm\) 10 & 182 \(\pm\) 15 \\ R10 & 1.10 \(\pm\) 0.07 & 2.09 \(\pm\) 0.34 & 285 \(\pm\) 32 & 732 \(\pm\) 28 & 465 \(\pm\) 11 & 2870 \(\pm\) 198 & 1346 \(\pm\) 16 & 713 \(\pm\) 18 & 526 \(\pm\) 17 & 99 \(\pm\) 12 \\ R11 & 0.90 \(\pm\) 0.06 & 4.86 \(\pm\) 0.74 & 131 \(\pm\) 34 & 345 \(\pm\) 32 & 406 \(\pm\) 15 & 2870 \(\pm\) 184 & 1283 \(\pm\) 20 & 653 \(\pm\) 15 & 444 \(\pm\) 14 & 48 \(\pm\) 16 \\ R12 & 1.11 \(\pm\) 0.08 & 3.41 \(\pm\) 0.62 & 181 \(\pm\) 44 & 463 \(\pm\) 39 & 476 \(\pm\) 16 & 2870 \(\pm\) 220 & 1373 \(\pm\) 21 & 705 \(\pm\) 15 & 527 \(\pm\) 15 & 143 \(\pm\) 17 \\ R13 & 0.85 \(\pm\) 0.09 & 1.50 \(\pm\) 0.34 & 405 \(\pm\) 49 & 1076 \(\pm\) 41 & 647 \(\pm\) 20 & 2870 \(\pm\) 271 & 1809 \(\pm\) 28 & 887 \(\pm\) 27 & 619 \(\pm\) 25 & 112 \(\pm\) 22 \\ R14 & 0.12 \(\pm\) 0.06 & 0.46 \(\pm\) 0.07 & 46 \(\pm\) 30 & 135 \(\pm\) 30 & 264 \(\pm\) 18 & 2870 \(\pm\) 186 & 1118 \(\pm\) 24 & 662 \(\pm\) 20 & 350 \(\pm\) 18 & 61 \(\pm\) 31 \\ R15 & 0.66 \(\pm\) 0.06 & 0.78 \(\pm\) 0.11 & 102 \(\pm\) 31 & 278 \(\pm\) 30 & 393 \(\pm\) 17 & 2870 \(\pm\) 170 & 1038 \(\pm\) 22 & 606 \(\pm\) 18 & 460 \(\pm\) 17 & 118 \(\pm\) 20 \\ R16 & 0.91 \(\pm\) 0.07 & 1.33 \(\pm\) 0.22 & 118 \(\pm\) 39 & 310 \(\pm\) 36 & 420 \(\pm\) 17 & 2870 \(\pm\) 200 & 1271 \(\pm\) 23 & 598 \(\pm\) 17 & 473 \(\pm\) 16 & 137 \(\pm\) 22 \\ R17 & 0.89 \(\pm\) 0.06 & 0.77 \(\pm\) 0.10 & 74 \(\pm\) 33 & 195 \(\pm\) 30 & 376 \(\pm\) 14 & 2870 \(\pm\) 162 & 1167 \(\pm\) 20 & 641 \(\pm\) 15 & 415 \(\pm\) 13 & 76 \(\pm\) 13 \\ R18 & 0.68 
\(\pm\) 0.06 & 0.62 \(\pm\) 0.09 & 66 \(\pm\) 35 & 181 \(\pm\) 33 & 482 \(\pm\) 16 & 2870 \(\pm\) 167 & 1411 \(\pm\) 22 & 518 \(\pm\) 16 & 388 \(\pm\) 16 & 149 \(\pm\) 22 \\ R19 & 0.59 \(\pm\) 0.06 & 0.53 \(\pm\) 0.07 & 115 \(\pm\) 31 & 316 \(\pm\) 30 & 436 \(\pm\) 18 & 2870 \(\pm\) 172 & 1318 \(\pm\) 25 & 634 \(\pm\) 19 & 397 \(\pm\) 17 & 79 \(\pm\) \\ \hline \end{tabular} \end{table} Table 3: Reddening corrected emission line intensities of strong lines relative to H\(\beta\), and the corresponding reddening constants, for the observed CNSFRs. regions simultaneously, it is probable that in the central part of the galaxy some of the strongest nebular emission lines are saturated or their flux values fall within the non-linearity range of the detector. In our particular case, the H\(\alpha\) line looks saturated in at least 30 pixels around the galaxy centre (H\(\alpha\)/H\(\beta\) \(<\) 2.7, see Sec. 3.1.1). For this reason, we have used the weaker HeI emission lines at \(\lambda\) 5875 Å and \(\lambda\) 6678 Å in order to derive the extinction in regions within the inner ring, following the methodology proposed in Zamora et al. (2022), after checking carefully that the H\(\beta\) and HeI\(\lambda\) 6678 Å lines show the same spatial distribution (Fig. 3), which is also similar to that of the continuum emission map at 6060 Å (see Fig. 5). A theoretical value of 3.52 for the HeI\(\lambda\) 5875 Å / HeI\(\lambda\) 6678 Å ratio (Luridiana et al. 2015, for n\({}_{e}\) = 100 cm\({}^{-3}\) and T\({}_{e}\) = 10\({}^{4}\) K) has been assumed. The reddening constant c(H\(\beta\)) has been derived by performing a linear regression using all the available HI and HeI emission lines (Fig. 7). Table 4 shows, for each inner ring HII region, the results obtained with this procedure and lists in columns 1 to 5: (1) the region ID; (2 and 3) the HeI\(\lambda\) 5875 Å and HeI\(\lambda\) 6678 Å line fluxes respectively; and (4 and 5) the reddening constant calculated using only the H\(\alpha\)/H\(\beta\) ratio and using both the HI and HeI lines in the fit. The results of these two fits are very similar and, in particular, for regions Rb, Rc and Rd they are fully compatible. Also, the intercept of all regression lines is compatible with zero. The lower panel of Table 3 shows, for the selected inner HII regions, the reddening corrected emission line intensities of strong lines relative to H\(\beta\), and its corresponding reddening constant. #### 3.1.5 Chemical abundances CNSFR metallicities have been traced by their sulphur abundances following the methodology described in Diaz & Zamora (2022), well suited to the use of MUSE data since it is based on red-to-near infrared spectroscopy and presents two interesting advantages: reddening effects are decreased due to the longer wavelengths involved and, contrary to the case of oxygen, sulphur does not seem to be depleted in diffuse clouds (Rodriguez-Baras et al. 2021). Additionally, the electron temperature sensitive line of [SIII] at \(\lambda\) 6312 Å can be detected and measured up to, at least, solar abundances (Diaz et al. 2007), such as those expected in the central regions of galaxies. This line has been measured with a S/N higher than 1 in \(\sim\) 35 % (8 out of 23) of the HII regions within the outer galaxy ring and in 3 out of 5 regions within the inner one. Fig. 8 shows the spectrum of the inner ring region with the highest S/N in this sulphur line as an example. For these regions, total sulphur abundances have been derived by the direct method as described in Zamora & Diaz (2023).
Table 5 lists in columns 1 to 7: (1) the region ID; (2) the measured [SIII]\(\lambda\) 6312 Å emission line intensity; (3) the R\({}_{S3}\) line ratio; (4) the [SIII] electron temperature; (5 and 6) the ionic abundances of S\({}^{+}\) and S\({}^{++}\) relative to H\({}^{+}\); and (7) the total S/H abundance. For the rest of the regions we have used the S\({}_{23}\) parameter and the calibration given in Diaz & Zamora (2022) to derive empirical sulphur abundances. The sulphur abundances derived from this calibration for all the objects in our sample are given in Table 6. \begin{table} \begin{tabular}{c c c c c} \hline Region ID & HeI\(\lambda\) 5875Å\({}^{a}\) & HeI\(\lambda\) 6678Å\({}^{a}\) & c(H\(\beta\))\({}_{H\alpha}\) & c(H\(\beta\))\({}_{fit}\) \\ \hline Ra & 6.903 \(\pm\) 0.095 & 2.930 \(\pm\) 0.070 & 0.99 \(\pm\) 0.02 & 1.081 \(\pm\) 0.007 \\ Rb & 5.00 \(\pm\) 0.11 & 1.80 \(\pm\) 0.16 & 0.97 \(\pm\) 0.02 & 0.9506 \(\pm\) 0.0004 \\ Rc & 6.751 \(\pm\) 0.061 & 2.12 \(\pm\) 0.32 & 0.42 \(\pm\) 0.04 & 0.4102 \(\pm\) 0.0001 \\ Rd & 3.382 \(\pm\) 0.041 & 1.20 \(\pm\) 0.18 & 0.62 \(\pm\) 0.05 & 0.661 \(\pm\) 0.001 \\ Re & 2.001 \(\pm\) 0.030 & 1.07 \(\pm\) 0.18 & 1.25 \(\pm\) 0.04 & 1.46 \(\pm\) 0.04 \\ \hline \end{tabular} \({}^{a}\) In units of 10\({}^{-16}\) erg/s/cm\({}^{2}\). \end{table} Table 4: Measured HeI line intensities and logarithmic extinction coefficients. Figure 8: [SIII]\(\lambda\) 6312 Å reddening corrected emission line as detected in region Rd. The positions of the [OI]\(\lambda\lambda\) 6300,6364 Å emission lines are also shown. Figure 7: Linear regressions of c(H\(\beta\)) values from hydrogen and helium lines for inner ring regions. ### Ionising clusters #### 3.2.1 Integrated magnitudes For each region of the sample, we have calculated the fluxes inside the Sloan Digital Sky Survey (SDSS) filters using their reddening corrected integrated spectrum, previously masking the nebular emission lines, and the expressions shown in Zamora & Diaz (2023a). Table 7 shows these integrated magnitudes and the corresponding derived quantities for each HII region within the inner and outer rings, listing in columns 1 to 6: (1) the region ID; (2) the apparent magnitude in the i band; (3) the apparent magnitude in the r band; (4) the absolute magnitude in the i band; (5) the absolute magnitude in the r band; and (6) the r-i colour. Figure 9 shows the colour-magnitude diagram of the studied ionised regions as a first approach to their stellar population properties. Inner ring regions show r-band luminosities larger than the rest. Also, regions Rc and Rd show larger r-i values than the rest. These regions were identified with multiple clusters in the 6060 Å wavelength HST filter in Sec. 3.1.2. In general, HII regions in the outer ring look up to a factor of 40 fainter and somewhat redder than those in the inner ones, an effect that seems to be real given the small reddening correction involved. #### 3.2.2 Stellar absorption lines In all the studied regions the far red CaII\(\lambda\lambda\) 8498, 8542, 8662 Å (CaT) and MgI\(\lambda\) 5171 Å stellar lines are clearly detected. We have calculated their equivalent widths (EW) by measuring the flux in 30 Å continuum bands at both sides of each of the lines and assuming a linear behaviour of the continuum between them. Table 8 gives the identification of each line in column 1, its central wavelength in Å in column 2, its equivalent width in Å in column 3, and the limits of the two continuum side-bands, in Å, in columns 4 and 5.
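The EW measurement just described (a linear pseudo-continuum interpolated between two 30 Å side-bands) can be written compactly. The sketch below is ours; the concrete window limits are those listed in Table 8, and the values used here are only placeholders:

```python
# Sketch of the EW measurement: interpolate a linear continuum between the
# mean points of two side-bands and integrate the normalised line profile.
import numpy as np

def equivalent_width(wave, flux, line, blue_band, red_band):
    """EW in Angstrom (positive for absorption); all windows are (lo, hi)."""
    def band_point(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return wave[m].mean(), flux[m].mean()
    (xb, yb), (xr, yr) = band_point(*blue_band), band_point(*red_band)
    cont = yb + (yr - yb) * (wave - xb) / (xr - xb)   # linear pseudo-continuum
    m = (wave >= line[0]) & (wave <= line[1])
    return np.trapz(1.0 - flux[m] / cont[m], wave[m])

# e.g. for CaII lambda 8542 one might call (illustrative limits only):
# equivalent_width(wave, flux, line=(8530, 8555),
#                  blue_band=(8500, 8530), red_band=(8560, 8590))
```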
To calculate the error of these measurements, we have \begin{table} \begin{tabular}{c c c} \hline Region ID & S23 & 12+log(S/H) \\ \hline R1 & 1.086 \(\pm\) 0.031 & 6.716 \(\pm\) 0.031 \\ R2 & 1.294 \(\pm\) 0.041 & 6.896 \(\pm\) 0.035 \\ R3 & 1.301 \(\pm\) 0.044 & 6.902 \(\pm\) 0.038 \\ R4 & 1.554 \(\pm\) 0.061 & 7.096 \(\pm\) 0.047 \\ R5 & 1.785 \(\pm\) 0.068 & 7.257 \(\pm\) 0.048 \\ R6 & 1.189 \(\pm\) 0.044 & 6.807 \(\pm\) 0.046 \\ R7 & 1.268 \(\pm\) 0.054 & 6.874 \(\pm\) 0.046 \\ R8 & 1.153 \(\pm\) 0.052 & 6.776 \(\pm\) 0.047 \\ R9 & 1.523 \(\pm\) 0.054 & 7.074 \(\pm\) 0.043 \\ R10 & 1.578 \(\pm\) 0.048 & 7.114 \(\pm\) 0.038 \\ R11 & 1.261 \(\pm\) 0.058 & 6.868 \(\pm\) 0.05 \\ R12 & 1.726 \(\pm\) 0.063 & 7.217 \(\pm\) 0.046 \\ R13 & 1.891 \(\pm\) 0.083 & 7.327 \(\pm\) 0.057 \\ R14 & 1.221 \(\pm\) 0.109 & 6.835 \(\pm\) 0.093 \\ R15 & 1.473 \(\pm\) 0.072 & 7.036 \(\pm\) 0.056 \\ R16 & 1.543 \(\pm\) 0.079 & 7.089 \(\pm\) 0.06 \\ R17 & 1.318 \(\pm\) 0.05 & 6.916 \(\pm\) 0.043 \\ R18 & 1.419 \(\pm\) 0.079 & 6.995 \(\pm\) 0.062 \\ R19 & 1.303 \(\pm\) 0.078 & 6.903 \(\pm\) 0.065 \\ R20 & 1.906 \(\pm\) 0.107 & 7.336 \(\pm\) 0.071 \\ R21 & 1.367 \(\pm\) 0.079 & 6.954 \(\pm\) 0.064 \\ R22 & 1.469 \(\pm\) 0.073 & 7.033 \(\pm\) 0.057 \\ R23 & 1.432 \(\pm\) 0.128 & 7.005 \(\pm\) 0.1 \\ \hline Ra & 0.78 \(\pm\) 0.021 & 6.411 \(\pm\) 0.026 \\ Rb & 0.755 \(\pm\) 0.022 & 6.383 \(\pm\) 0.028 \\ Rc & 1.126 \(\pm\) 0.06 & 6.752 \(\pm\) 0.054 \\ Rd & 1.309 \(\pm\) 0.053 & 6.908 \(\pm\) 0.045 \\ Re & 1.032 \(\pm\) 0.029 & 6.667 \(\pm\) 0.03 \\ \hline \end{tabular} \end{table} Table 6: Sulphur abundances of the observed CNSFRs derived by empirical methods. \begin{table} \begin{tabular}{c c c c c c c} \hline Region ID & \(\rm\{([SIII]\lambda 6312)^{a}\}\) & R\({}_{S3}\) & t\({}_{e}\)([SIII])\({}^{b}\) & 12+log(S\({}^{\star}\)/H\({}^{\star}\)) & 12+log(S\({}^{\star}\)/H\({}^{\star}\)) & 12+log(S/H) \\ \hline R1 & 13.4 \(\pm\) 0.8 & 158.7 \(\pm\) 0.9 & 0.6731 \(\pm\) 0.001 & 6.8174 \(\pm\) 0.0058 & 6.5331 \(\pm\) 0.0459 & 6.999 \(\pm\) 0.016 \\ R2 & 6.9 \(\pm\) 0.6 & 306.9 \(\pm\) 0.8 & 0.5773 \(\pm\) 0.0003 & 7.1755 \(\pm\) 0.0075 & 6.6982 \(\pm\) 0.0606 & 7.3 \(\pm\) 0.016 \\ R3 & 12.2 \(\pm\) 1.1 & 139.8 \(\pm\) 0.8 & 0.6958 \(\pm\) 0.0011 & 6.8255 \(\pm\) 0.0061 & 6.6267 \(\pm\) 0.0489 & 7.038 \(\pm\) 0.019 \\ R4 & 19.8 \(\pm\) 2.5 & 115.2 \(\pm\) 1.1 & 0.735 \(\pm\) 0.002 & 6.8197 \(\pm\) 0.0089 & 6.6489 \(\pm\) 0.0542 & 7.044 \(\pm\) 0.023 \\ R5 & 13.1 \(\pm\) 1.7 & 107.3 \(\pm\) 0.8 & 0.7507 \(\pm\) 0.0018 & 6.8585 \(\pm\) 0.0079 & 6.6651 \(\pm\) 0.0549 & 7.074 \(\pm\) 0.022 \\ R6 & 20.0 \(\pm\) 1.6 & 83.4 \(\pm\) 1.2 & 0.8157 \(\pm\) 0.004 & 6.4619 \(\pm\) 0.0101 & 6.5899 \(\pm\) 0.0378 & 6.832 \(\pm\) 0.022 \\ R18 & 3.2 \(\pm\) 0.5 & 101.6 \(\pm\) 1.3 & 0.7637 \(\pm\) 0.0032 & 6.6869 \(\pm\) 0.0119 & 6.648 \(\pm\) 0.0654 & 6.969 \(\pm\) 0.032 \\ R20 & 2.8 \(\pm\) 0.6 & 80.7 \(\pm\) 2.3 & 0.8251 \(\pm\) 0.0084 & 6.8015 \(\pm\) 0.0158 & 6.4714 \(\pm\) 0.1094 & 6.968 \(\pm\) 0.036 \\ \hline Ra & 167.7 \(\pm\) 3.0 & 365.1 \(\pm\) 0.7 & 0.555 \(\pm\) 0.0002 & 6.7187 \(\pm\) 0.0246 & 7.0098 \(\pm\) 0.011 & 7.189 \(\pm\) 0.011 \\ Rb & 49.9 \(\pm\) 0.5 & 430.9 \(\pm\) 1.2 & 0.5337 \(\pm\) 0.0004 & 6.8926 \(\pm\) 0.0185 & 6.9548 \(\pm\) 0.0174 & 7.226 \(\pm\) 0.013 \\ Rd & 208.0 \(\pm\) 1.5 & 60.8 \(\pm\) 1.1 & 0.9209 \(\pm\) 0.0068 & 6.2023 \(\pm\) 0.0347 & 6.6461 \(\pm\) 0.0272 & 6.78 \(\pm\) 0.022 \\ \hline \end{tabular} \end{table} Table 5: Ionic and total sulphur abundances derived by the direct method for 
the CNSFRs with measured [SIII]\(\lambda\) 6312 Å line intensities. Figure 9: Colour-magnitude diagram for outer ring (green dots) and inner ring HII regions (purple squares). taken into account the standard deviation in the continuum bands, propagating them in quadrature. In principle, absorption line EW measurements in objects with different velocity dispersions have to be corrected for the broadening of the spectral lines. This effect decreases the continuum level and the line fluxes integrated in fixed width apertures, providing lower EW values. In order to evaluate this correction in our data, we have convolved some stellar spectra (giant and supergiant stars; Ivanov et al. 2019) with known Gaussian functions of different \(\sigma\) (from 1 Å to 8 Å), measuring the CaII lines EW in each broadened spectrum. The correction (\(\Delta\)EW(CaT)) has been calculated as the average difference between the EW of CaII measurements with and without broadening (see Terlevich et al. 1990). Finally, we have fitted a second order polynomial which applies to \(\sigma\) values higher than 4.5 Å: \[\Delta EW(CaT)=0.02931\cdot\sigma^{2}-0.197\cdot\sigma+0.2985 \tag{1}\] where \(\sigma\) is the velocity dispersion in Å. This correction takes values lower than 0.05 Å for \(\sigma\) = 5, which corresponds to a velocity dispersion around 170 km/s. Since the common values for CNSFRs are lower than this, no correction has been applied. It is well known that, in the presence of an AGN, inferring stellar population properties from stellar absorption lines can be difficult due to the presence of a non-thermal extra component to the continuum coming from the galactic nucleus (see Terlevich et al. 1990). But also in regions with high star formation rates there is a non-negligible contribution by the nebular continuum which usually is not taken into account. Both of them can dilute the starlight, weakening and distorting the absorption features. With this in mind we have calculated a dilution factor for each of the lines as the ratio between the observed EW and a standard value taken as reference, D = EW\({}_{obs}\) / EW\({}_{ref}\). For this comparison, we have used representative values of the EW absorption features measured in the spectra of normal spiral galaxy nuclei, which correspond to old, metal rich stellar populations (EW\({}_{ref}\)(CaII) = 7.7 \(\pm\) 0.5 Å, EW\({}_{ref}\)(MgI) = 5.18 \(\pm\) 0.71 Å; Terlevich et al. 1990; Keel 1983). Figure 10 shows the MgI dilution as a function of CaII dilution for the studied CNSFRs.
The red \begin{table} \begin{tabular}{c c c c c c} \hline Region ID & m\({}_{i}\) (mag) & m\({}_{r}\) (mag) & M\({}_{i}\) (mag) & M\({}_{r}\) (mag) & r-i (mag) \\ \hline R1 & 17.48 \(\pm\) (0.35 \(\times\) 10\({}^{-4}\)) & 17.81 \(\pm\) (0.30 \(\times\) 10\({}^{-4}\)) & -14.25 \(\pm\) (0.35 \(\times\) 10\({}^{-4}\)) & -13.92 \(\pm\) (0.46 \(\times\) 10\({}^{-4}\)) & 0.324 \(\pm\) (0.460 \(\times\) 10\({}^{-4}\)) \\ R2 & 17.36 \(\pm\) (0.31 \(\times\) 10\({}^{-4}\)) & 17.78 \(\pm\) (0.30 \(\times\) 10\({}^{-4}\)) & -14.37 \(\pm\) (0.31 \(\times\) 10\({}^{-4}\)) & -13.95 \(\pm\) (0.43 \(\times\) 10\({}^{-4}\)) & 0.427 \(\pm\) (0.431 \(\times\) 10\({}^{-4}\)) \\ R3 & 17.66 \(\pm\) (0.33 \(\times\) 10\({}^{-4}\)) & 17.99 \(\pm\) (0.28 \(\times\) 10\({}^{-4}\)) & -14.07 \(\pm\) (0.33 \(\times\) 10\({}^{-4}\)) & -13.75 \(\pm\) (0.43 \(\times\) 10\({}^{-4}\)) & 0.322 \(\pm\) (0.429 \(\times\) 10\({}^{-4}\)) \\ R4 & 16.99 \(\pm\) (0.27 \(\times\) 10\({}^{-4}\)) & 17.40 \(\pm\) (0.25 \(\times\) 10\({}^{-4}\)) & -14.74 \(\pm\) (0.27 \(\times\) 10\({}^{-4}\)) & -14.33 \(\pm\) (0.37 \(\times\) 10\({}^{-4}\)) & 0.414 \(\pm\) (0.367 \(\times\) 10\({}^{-4}\)) \\ R5 & 17.52 \(\pm\) (0.29 \(\times\) 10\({}^{-4}\)) & 17.90 \(\pm\) (0.27 \(\times\) 10\({}^{-4}\)) & -14.21 \(\pm\) (0.29 \(\times\) 10\({}^{-4}\)) & -13.83 \(\pm\) (0.40 \(\times\) 10\({}^{-4}\)) & 0.385 \(\pm\) (0.396 \(\times\) 10\({}^{-4}\)) \\ R6 & 19.50 \(\pm\) (0.76 \(\times\) 10\({}^{-4}\)) & 19.85 \(\pm\) (0.67 \(\times\) 10\({}^{-4}\)) & -12.23 \(\pm\) (0.76 \(\times\) 10\({}^{-4}\)) & -11.88 \(\pm\) (1.02 \(\times\) 10\({}^{-4}\)) & 0.347 \(\pm\) (1.017 \(\times\) 10\({}^{-4}\)) \\ R7 & 18.31 \(\pm\) (0.64 \(\times\) 10\({}^{-4}\)) & 18.68 \(\pm\) (0.57 \(\times\) 10\({}^{-4}\)) & -13.42 \(\pm\) (0.64 \(\times\) 10\({}^{-4}\)) & -13.05 \(\pm\) (0.86 \(\times\) 10\({}^{-4}\)) & 0.370 \(\pm\) (0.856 \(\times\) 10\({}^{-4}\)) \\ R8 & 17.86 \(\pm\) (0.42 \(\times\) 10\({}^{-4}\)) & 18.22 \(\pm\) (0.37 \(\times\) 10\({}^{-4}\)) & -13.87 \(\pm\) (0.42 \(\times\) 10\({}^{-4}\)) & -13.52 \(\pm\) (0.56 \(\times\) 10\({}^{-4}\)) & 0.354 \(\pm\) (0.556 \(\times\) 10\({}^{-4}\)) \\ R9 & 18.55 \(\pm\) (0.42 \(\times\) 10\({}^{-4}\)) & 18.92 \(\pm\) (0.37 \(\times\) 10\({}^{-4}\)) & -13.18 \(\pm\) (0.42 \(\times\) 10\({}^{-4}\)) & -12.82 \(\pm\) (0.56 \(\times\) 10\({}^{-4}\)) & 0.366 \(\pm\) (0.560 \(\times\) 10\({}^{-4}\)) \\ R10 & 19.25 \(\pm\) (0.35 \(\times\) 10\({}^{-4}\)) & 19.72 \(\pm\) (0.34 \(\times\) 10\({}^{-4}\)) & -12.48 \(\pm\) (0.35 \(\times\) 10\({}^{-4}\)) & -12.01 \(\pm\) (0.49 \(\times\) 10\({}^{-4}\)) & 0.470 \(\pm\) (0.492 \(\times\) 10\({}^{-4}\)) \\ R11 & 17.63 \(\pm\) (0.36 \(\times\) 10\({}^{-4}\)) & 18.01 \(\pm\) (0.32 \(\times\) 10\({}^{-4}\)) & -14.10 \(\pm\) (0.36 \(\times\) 10\({}^{-4}\)) & -13.73 \(\pm\) (0.48 \(\times\) 10\({}^{-4}\)) & 0.373 \(\pm\) (0.479 \(\times\) 10\({}^{-4}\)) \\ R12 & 18.31 \(\pm\) (0.37 \(\times\) 10\({}^{-4}\)) & 18.68 \(\pm\) (0.33 \(\times\) 10\({}^{-4}\)) & -13.42 \(\pm\) (0.37 \(\times\) 10\({}^{-4}\)) & -13.06 \(\pm\) (0.50 \(\times\) 10\({}^{-4}\)) & 0.364 \(\pm\) (0.497 \(\times\) 10\({}^{-4}\)) \\ R13 & 18.56 \(\pm\) (0.34 \(\times\) 10\({}^{-4}\)) & 19.02 \(\pm\) (0.33 \(\times\) 10\({}^{-4}\)) & -13.17 \(\pm\) (0.34 \(\times\) 10\({}^{-4}\)) & -12.17 \(\pm\) (0.47 \ solid line corresponds to the dilution by the nebular continuum associated with a nebula ionised by a young star cluster synthesised using the PopStar code (Molla et al. 
2009) with Salpeter's IMF (m\({}_{low}\) = 0.85 M\({}_{\odot}\), m\({}_{up}\) = 120 M\({}_{\odot}\); Salpeter 1955) and an age of 5.5 Ma. All of our outer ring regions show MgI dilutions consistent with the contribution by a nebular continuum (about 40%), with the CaT lines looking almost undiluted. This would be expected from young clusters with red supergiant stars, whose EWs are larger than the reference value corresponding to a galaxy nucleus population dominated by red giants. In fact, in a few cases, the CaT EWs are even increased with respect to the reference value. In the case of the inner ring regions, larger dilutions are found in the MgI lines (about 80%), which could be due to an additional continuum originating in the AGN, but, again, the CaT lines show very little dilution, pointing to a larger contribution by red supergiant stars. None of these effects can be ascribed to the additional presence of a metal rich stellar population since EW(MgI) increases with metal abundance faster than EW(CaII) in the high metallicity regime (Burstein et al. 1984) and the MgI feature should then be larger than the CaT one. Thus, in order to explain the observations, the stellar population in our clusters should have stars with CaT features stronger than those in normal early spiral galaxies, like young red supergiants. Other authors have already suggested that the presence of strong CaT features in galactic nuclei is representative of relatively recent starbursts (see for example Terlevich et al. 1990; Garcia Vargas et al. 1993; Oliva et al. 1995). #### 3.2.3 Stellar velocity dispersions Stellar velocity dispersions can be obtained from the stellar absorption lines using the cross-correlation technique proposed by Tonry & Davis (1979), which calculates the line-of-sight velocity dispersion by comparing a stellar template with the observed stellar population spectrum. The method is based on the assumption that a galaxy spectrum is represented by the sum of different stellar spectra with different velocity offsets convolved with a broadening function: \[g(n)\sim\alpha[t(n)*b(n-\delta)] \tag{2}\] where \(*\) means the convolution product, g(n) is the galaxy spectrum, \(\alpha\) is the number of stars, t(n) is the template spectrum, b(n) is the broadening function and \(\delta\) is the offset of the broadening function with respect to the template. The broadening function is assumed to be a Gaussian and the convolution product is applied assuming a periodic spectrum with discrete Fourier transforms. Determining young stellar cluster velocity dispersions at optical wavelengths is particularly complicated due to the shortage of prominent stellar absorption lines. There are some stellar lines from OB stars, but they are weak and coincide in wavelength with the nebular hydrogen and helium emission lines. However, as shown above, our clusters have red supergiant stars (see Section 3.2.2) which dominate the longer optical wavelengths, where we can find the CaII\(\lambda\lambda\) 8498, 8542, 8662 Å triplet lines. The use of the CaT lines to measure the velocity dispersion of stars presents some advantages.
From an observational perspective: (i) the red wavelengths are less affected by the presence of an AGN in the inner part of a galaxy, since a lower dilution of the stellar lines is observed; (ii) the measurement of these lines is not affected by the presence of TiO bands and the nebular line contamination is small; and (iii) the velocity resolution in the near IR is higher than in the blue part of the spectrum for the same spectral dispersion. Also, although these lines show some dependence on metal abundance, at high metallicities like those found for our CNSFRs the surface gravity becomes the dominant parameter, with the CaT strength increasing with decreasing gravity (Diaz et al. 1989). Hence we have used late-type red giants and supergiants as reference templates. They have been obtained from the MUSE stellar library presented by Ivanov et al. (2019). A large sample of stellar types with different luminosities is present in an HII region, but we have selected only two stellar types as templates for our cross-correlation analysis. This fact can introduce errors in our velocity dispersion measures. We have made an effort to minimise these systematic offsets by first estimating the stars which dominate the CaII features. Furthermore, the use of the calcium triplet lines minimises the mismatch between the stellar template and the galaxy lines, since they are very strong in most stars, with the exception of the hottest ones. For this reason, we finally use an average template to be correlated with our cluster spectrum. We have aligned all selected stellar spectra in velocity and then computed a direct average of all of them, verifying that no apparent broadening is introduced in the procedure. The first step in the application of the cross-correlation method is the binning of each spectrum into logarithmic wavelengths in order to get a uniform velocity width. We have used 512 bins that correspond to a velocity resolution of \(\sim\) 27.1 km/s in the wavelength range 8450 Å - 8850 Å. Next, the emission lines present in each spectrum have been removed, the continuum has been subtracted and the spectra have been normalised. Finally, the high and low frequency variations, associated with the noise component and with continuum subtraction errors respectively, have been filtered. This filtering is performed by applying a band-pass to the Fourier transform of the spectrum with minimum and maximum wave numbers, k\({}_{min}\) and k\({}_{max}\) respectively. We have used k\({}_{min}\) = 3, which corresponds to wavelength values lower than 10 Å, and k\({}_{max}\) = 60, corresponding to the nominal MUSE spectral resolution (MUSE User Manual-ESO-261650). Further detail of this procedure can be found in Zamora & Diaz (2023b). The internal error of the method can be estimated from the asymmetric part of the correlation peak by calculating its root mean square (see Davis et al. 1978). However, this error is very small and not considered representative of the real one associated with the measurement. Hence we have calculated the velocity dispersion error as the semi-difference between the largest and smallest Gaussian width that can be fitted considering the asymmetries in the correlation peak, as suggested in Zamora & Diaz (2023b). Fig. 11 shows the correlation function between the R3 region and the stellar template as an example. Figure 10: Dilution of the MgI absorption line vs that of the CaII triplet lines. The red lines represent the dilution due to the nebular continuum in the optical and near IR.
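A toy version of the procedure (ours, with synthetic data; the real analysis uses the Tonry & Davis peak-width relation on MUSE spectra) illustrates how the broadening is read off the cross-correlation peak; 512 bins and 27.1 km/s per bin are the values quoted above, everything else is invented for the demonstration:

```python
# Toy cross-correlation sketch: broaden a delta-line template by a known
# Gaussian in log-wavelength bins, cross-correlate with the template, and fit
# the correlation peak. With delta-like template lines the peak width equals
# the broadening; with real spectra the template autocorrelation width is
# removed in quadrature.
import numpy as np
from scipy.optimize import curve_fit

n, v_bin = 512, 27.1                      # bins and km/s per bin, as in the text
rng = np.random.default_rng(3)

template = np.zeros(n)
template[[150, 260, 400]] = -1.0          # three CaT-like absorption lines
x = np.arange(-60, 61)
kernel = np.exp(-0.5 * (x * v_bin / 110.0) ** 2)       # input sigma = 110 km/s
galaxy = np.convolve(template, kernel / kernel.sum(), mode="same")
galaxy += rng.normal(0.0, 0.002, n)                    # weak noise

ccf = np.correlate(galaxy - galaxy.mean(), template - template.mean(), "same")
peak = int(ccf.argmax())
lags = (np.arange(n) - peak) * v_bin                   # lag axis in km/s
win = slice(peak - 40, peak + 41)

popt, _ = curve_fit(lambda t, a, s: a * np.exp(-0.5 * (t / s) ** 2),
                    lags[win], ccf[win], p0=(ccf[peak], 100.0))
print(f"recovered sigma ~ {abs(popt[1]):.0f} km/s")    # close to the 110 input
```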
We can see that the peak amplitude is lower than 0.5. This is due to an observational artifact that appears between the two strongest calcium lines, mainly in the outer ring regions, which have lower S/N. However, the correlation peak is much greater than the asymmetric component and perfectly distinguishable. Additionally, we can see that the correlation peak has a Gaussian behaviour, so the assumptions of the method are justified. Derived stellar velocity dispersions take values from \(78.7\pm 0.1\) km/s to \(147\pm 19\) km/s for the outer ring regions and between \(157\pm 14\) km/s and \(179\pm 13\) km/s for the inner ring ones, with median values of 97 km/s and 161 km/s respectively. Table 9 gives the identification of each region in column 1, the EW of the CaII lines in column 2, the EW of the MgI line in column 3 and the velocity dispersion measured from the CaII features in column 4. ## 4 Discussion ### Ionising cluster characteristics #### 4.1.1 Ionisation nature The left panels of Fig. 12 show 4 outer ring region spectra at different wavelengths. From left to right, the H\(\beta\), [OIII]\(\lambda\lambda\) 4959,5007 Å, [NII]\(\lambda\lambda\) 6548,84 Å and H\(\alpha\), HeI\(\lambda\) 5875 Å, [SII]\(\lambda\lambda\) 6716,31 Å and [SIII]\(\lambda\) 9069 Å lines are shown. More than one kinematical component can be seen in the [OIII] emission lines, as well as asymmetries with respect to a Gaussian profile, also seen in some other emission lines, as already reported in previous works (Roboleto-Orus et al. 2021; Xu & Wang 2022), evidence of complex velocity flows. All emission lines with the required S/N (see Sec. 3.1.3) appear to have at least two components. Besides, we have found four regions, R2, R10, R13 and Rd, showing three components in the [OIII]\(\lambda\lambda\) 4959,5007 Å emission line. None of them corresponds to the previously identified star-forming complexes (see Fig. 5), suggesting that the third kinematical component might be associated with an outflow coming from the active galaxy nucleus. The right panel of Fig. 12 shows the [OIII]\(\lambda\) 5007 Å / [NII]\(\lambda\) 6584 Å ratio map, spatially smoothed with a Gaussian function of \(\sigma\) = 1.5 pix (0.3 arcsec). The four regions mentioned above are close to each other and located in the same area in which this emission line ratio shows an excess. Fig. 13 shows H\(\alpha\) (upper panel) and [OIII]\(\lambda\) 5007 Å (lower panel) radial velocity maps, showing that the selected young clusters follow the galaxy disc velocity distribution, hence ensuring that the kinematical component associated with the observed HII regions is the correct one. Fig. 14 shows in the upper panel the BPT diagram for the observed HII regions in both the outer and inner rings (narrow and broad components). This is the diagnostic most commonly used to distinguish between star-forming and shocked and/or non-thermal activity in ionised regions, although its sensitivity to the N/O ratio is a recognised caveat. Hence we rather suggest the use of the near-infrared sulphur emission lines, which constitute a powerful diagnostic to distinguish between shock and photo-ionisation mechanisms (see Diaz et al. 1985), being independent of relative abundances and little sensitive to reddening. The lower panel of Fig. 14 shows the location on this diagram of the same regions. Also shown are the shock models calculated by Groves et al. (2004) and the results of photo-ionisation models for nebulae ionised by young star clusters computed using the Cloudy (Ferland et al.
2013) code with S/H abundances and ionisation parameter labeled (see Zamora & Diaz 2023a, for further information). Figure 11: The upper panel shows the cross correlation function of region R3 with the stellar template used. The lower panel shows the asymmetric noise component of this function. \begin{table} \begin{tabular}{c c c c} \hline Region ID & EW(CaII)\({}^{a}\) & EW(MgI) & \(\sigma_{\star}\)(CaT) \\ & (Å) & (Å) & (km/s) \\ \hline R1 & \(7.058\pm 0.091\) & \(2.692\pm 0.054\) & \(96.61\pm 0.02\) \\ R2 & \(6.889\pm 0.087\) & \(3.022\pm 0.085\) & \(111.50\pm 0.05\) \\ R3 & \(7.116\pm 0.086\) & \(2.481\pm 0.049\) & \(78.02\pm 0.03\) \\ R4 & \(6.718\pm 0.082\) & \(3.175\pm 0.091\) & \(100.22\pm 0.15\) \\ R5 & \(6.779\pm 0.079\) & \(3.128\pm 0.076\) & \(104.08\pm 0.07\) \\ R6 & \(7.274\pm 0.292\) & \(2.821\pm 0.145\) & \(104.38\pm 0.25\) \\ R7 & \(7.060\pm 0.175\) & \(3.100\pm 0.148\) & \(97.89\pm 0.06\) \\ R8 & \(7.062\pm 0.132\) & \(2.958\pm 0.071\) & \(96.84\pm 0.11\) \\ R9 & \(6.976\pm 0.151\) & \(2.607\pm 0.084\) & \(78.66\pm 0.00\) \\ R10 & \(6.851\pm 0.085\) & \(2.802\pm 0.092\) & \(115.08\pm 0.14\) \\ R11 & \(8.051\pm 0.194\) & \(2.820\pm 0.068\) & \(97.38\pm 0.01\) \\ R12 & \(6.878\pm 0.118\) & \(2.661\pm 0.077\) & \(79.83\pm 0.07\) \\ R13 & \(7.008\pm 0.114\) & \(3.008\pm 0.092\) & \(108.77\pm 0.08\) \\ R14 & \(7.590\pm 0.151\) & \(2.892\pm 0.081\) & \(89.68\pm 0.04\) \\ R15 & \(6.891\pm 0.106\) & \(3.387\pm 0.109\) & \(91.34\pm 0.20\) \\ R16 & \(7.392\pm 0.141\) & \(2.778\pm 0.076\) & \(82.69\pm 0.13\) \\ R17 & \(7.009\pm 0.148\) & \(2.548\pm 0.087\) & \(94.14\pm 0.05\) \\ R18 & \(6.816\pm 0.146\) & \(2.945\pm 0.103\) & \(102.90\pm 0.21\) \\ R19 & \(7.426\pm 0.204\) & \(2.745\pm 0.087\) & \(83.66\pm 0.06\) \\ R20 & \(7.480\pm 0.125\) & \(3.668\pm 0.149\) & \(104.28\pm 0.09\) \\ R21 & \(6.932\pm 0.135\) & \(3.102\pm 0.107\) & \(90.92\pm 0.14\) \\ R22 & \(7.104\pm 0.114\) & \(3.144\pm 0.118\) & \(99.33\pm 0.07\) \\ R23 & \(8.382\pm 0.502\) & \(3.177\pm 0.302\) & \(146.85\pm 0.35\) \\ \hline Ra & \(5.706\pm 0.033\) & \(0.918\pm 0.010\) & \(157.08\pm 0.25\) \\ Rb & \(5.725\pm 0.055\) & \(0.886\pm 0.008\) & \(168.40\pm 0.30\) \\ Rc & \(6.180\pm 0.025\) & \(0.547\pm 0.007\) & \(178.81\pm 0.25\) \\ Rd & \(6.461\pm 0.074\) & \(0.737\pm 0.012\) & \(160.47\pm 0.20\) \\ Re & \(6.826\pm 0.086\) & \(1.179\pm 0.026\) & \(161.01\pm 0.31\) \\ \hline \end{tabular} \({}^{a}\) (\(\lambda\)8542Å + \(\lambda\)8662Å + \(\lambda\)8498Å). \end{table} Table 9: Equivalent widths of stellar absorption lines and velocity dispersions of the observed CNSFRs. Regions Rd, R2, R10 and R13 (the regions with three kinematical components in the [OIII]\(\lambda\) 5007 Å line) are shown with open diamond symbols for the narrow component and open triangles and inverted triangles for the broad ones. The examination of both figures shows the adequate selection of the HII region components analysed. #### 4.1.2 Characteristics of the observed CNSFRs The H\(\alpha\) luminosity, L(H\(\alpha\)), can be calculated from the extinction corrected H\(\alpha\) fluxes. For the outer ring HII regions this value is between 4.5 \(\times\) 10\({}^{38}\) erg/s and 1.2 \(\times\) 10\({}^{40}\) erg/s, while for the inner ring regions it ranges from 2.4 \(\times\) 10\({}^{40}\) erg/s to 2.0 \(\times\) 10\({}^{41}\) erg/s. These values are respectively on the central and higher side of the distribution found by Alvarez-Alvarez et al. (2015) for a large sample of CNSFRs.
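The conversion between these luminosities and the ionising photon rates discussed next uses the H\(\alpha\) recombination coefficient under the quoted case-B conditions; the numerical coefficient in the sketch below is a standard textbook value we assume here, not one quoted by the authors:

```python
# Rough sketch: Q(H0) from the extinction corrected Halpha luminosity for
# case B at T_e = 1e4 K, Q(H0) ~ 7.3e11 * L(Halpha)[erg/s] photons/s.
# The 7.3e11 coefficient is an assumed Osterbrock-type value, not from
# this paper.
Q_PER_LHA = 7.3e11

def q_h0(l_halpha):
    """Hydrogen ionising photon rate (s^-1) from L(Halpha) in erg/s."""
    return Q_PER_LHA * l_halpha

# The quoted outer ring range then maps to roughly 3e50 - 9e51 photons/s:
print(f"{q_h0(4.5e38):.1e}  {q_h0(1.2e40):.1e}")
```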
The H\(\alpha\) luminosity of the star formation in the five inner regions dominates the emission at the galactic centre with 4.4 \(\times\) 10\({}^{41}\) erg/s, representing 77% of the total H\(\alpha\) luminosity within the central 2.5 arcsec. The corresponding number of hydrogen ionising photons per second has been derived using the recombination coefficient of the H\(\alpha\) line, assuming a constant value of electron density of 100 cm\({}^{-3}\), a temperature of 10\({}^{4}\) K and case B recombination. HII regions in the outer ring have H\(\alpha\) luminosities, and hence numbers of ionising photons, between 1 and 2 orders of magnitude lower than the inner ring regions, implying lower star formation rates (SFR). The dimensionless ionisation parameter, u, has been estimated from the [SII]/[SIII] ratio (see Diaz et al. 1991) and ranges from -4.47 to -3.15 in logarithmic units for the outer ring regions, with a median value of -3.78. Ionisation parameters are higher for the inner HII regions by factors between 5 and 10. The distributions of both quantities, the number of ionising photons, Q(H\({}_{0}\)), and the ionisation parameter, log(u), are shown in the two upper panels of Figure 15 in comparison to the values found for the circumnuclear ring in the galaxy NGC 7742 (Zamora & Diaz 2023a). On average, the outer ring regions in NGC 7469 are more luminous than those in NGC 7742 by an order of magnitude. The missing low-luminosity tail in the distribution of NGC 7469 might be due to a lack of detection given the larger distance, by a factor of 3, to this galaxy. On the other hand, half of the regions show values of Q(H\({}_{0}\)) larger than the maximum obtained in NGC 7742, pointing to larger structures being selected due to the lower linear resolution. In fact, all regions with L(H\(\alpha\)) larger than 10\({}^{39}\) erg/s have several ionising knots, as revealed by HST images. For these regions, larger in extent, lower ionisation parameters are expected, as found. The bottom panels of Figure 15 show, from left to right, the distributions of electron density, filling factor and ionised hydrogen mass for the inner and outer ring HII regions as compared with the NGC 7742 ones. The electron density can be calculated from the [SII]\(\lambda\) 6717 Å / [SII]\(\lambda\) 6731 Å ratio only for n\({}_{e}\) > 50 cm\({}^{-3}\). For the regions within the ring for which only upper limits could be estimated, the electron density has been derived from the observed region sizes, the ionisation parameter and the H\(\alpha\) fluxes. The electron densities range from 50 cm\({}^{-3}\) to 345 cm\({}^{-3}\) for the outer ring regions. Only six regions (R4, R5, R11, R16, R20 and R21) have density values larger than 100 cm\({}^{-3}\) and only one of them (R21) is significantly different from the median value (>3\(\sigma\)). These values are similar to those estimated for the NGC 7742 regions. On the other hand, the inner ring regions present higher values of n\({}_{e}\), between 439 cm\({}^{-3}\) and 1431 cm\({}^{-3}\), with a mean value of 848 cm\({}^{-3}\) which, given their smaller sizes, could be due to their location much closer to the galactic nucleus. Figure 12: Left panels, from top to bottom: spectra of regions R10, R13, R2 and Rd. From left to right the emission lines of: H\(\beta\) and [OIII]\(\lambda\lambda\) 4959,5007 Å; H\(\alpha\) and [NII]\(\lambda\lambda\) 6548,6584 Å; [SII]\(\lambda\lambda\) 6717,6731 Å; and [SIII]\(\lambda\) 9069 Å are shown. 
Right panel: map of the observed [OIII]\(\lambda\) 5007 Å / [NII]\(\lambda\) 6583 Å ratio smoothed with a Gaussian function of \(\sigma\) = 1.5 pix. Filling factors can be derived using the ionisation parameter and the measured angular radius of each observed HII region (see Diaz et al. 1991) with electron density larger than 50 cm\({}^{-3}\). Filling factors for the outer ring HII regions are low, ranging from \(2.14\times 10^{-5}\) to \(5.25\times 10^{-2}\), with a mean value of \(1.26\times 10^{-3}\). These values are lower than those estimated for high metallicity disc HII regions (between 0.008 and 0.52; Diaz et al. 1991; Diaz et al. 2000a; Castellanos et al. 2002), for CNSFRs (from 0.0006 to 0.001; Diaz et al. 2007) and for the case of NGC 7742 (from 0.0007 to 0.45; Zamora & Diaz 2023a). This is consistent with the larger HII region sizes for similar electron densities. For the inner ring regions, filling factors range from \(2.09\times 10^{-4}\) to \(5.13\times 10^{-2}\), with a mean value of \(4.17\times 10^{-3}\), in the higher part of the distribution found for the outer ring regions. Finally, the masses of ionised hydrogen, in solar masses, have been derived using the expression given in Diaz et al. (1991). The mean values are \(3.0\times 10^{5}\) M\({}_{\odot}\) and \(6.2\times 10^{5}\) M\({}_{\odot}\) for the HII regions within the outer and inner ring respectively. Both rings show a similar distribution of this quantity, shifted to larger values with respect to the case of NGC 7742. We can compare the estimated angular radii of the observed ring HII regions, \(\phi\), calculated using the definition of the ionisation parameter, with the actually measured ones (see Sec. 3.1.2). This has been done for regions with derived electron densities larger than 50 cm\({}^{-3}\), using the expressions given in Castellanos et al. (2002). Figure 16 shows this comparison. In most cases the predicted angular sizes are smaller than the measured ones, and only three cases, with very large errors in the derivation of their predicted sizes, could correspond to radiation-bounded ionised nebulae. One of the inner ring regions, Re, shows the opposite behaviour. However, this region shows very little continuum emission at either 3360 Å or 6600 Å (see Fig. 5) and hence the existence of an ionising young star cluster may be questioned. Tab. 10 shows the characteristics of each HII region within the rings, listing in columns 1 to 9: (1) the region ID; (2) the extinction corrected H\(\alpha\) luminosity; (3) the number of hydrogen ionising photons; (4) the ionisation parameter; (5) the estimated angular radius; (6) the measured linear radius; (7) the electron density; (8) the filling factor; and (9) the mass of ionised hydrogen. #### 4.1.3 Chemical abundances Regarding chemical abundances, the upper panel of Fig. 17 shows the distribution of 12+log(S/H) derived from the empirical S\({}_{23}\) calibration. Outer ring regions show values ranging from 6.72 to 7.34 in units of 12+log(S/H) (12+log(S/H)\({}_{\odot}\) = 7.12, Asplund et al. 2009), with a mean of 7.00 and errors between 0.03 and 0.1 dex. Figure 14: Upper panel: the [OIII]/H\(\beta\) vs [NII]/H\(\alpha\) diagnostic diagram. Over-plotted, derived separations between LINER/Seyfert (S+07, Schawinski et al. 2007) and HII regions (K+01 and K+03, Kewley et al. 2001; Kauffmann et al. 2003). Lower panel: the [SII]/H\(\alpha\) vs [SIII]/H\(\alpha\) diagnostic diagram. Over-plotted, dust-free AGN photoionization models (G+04, Groves et al. 2004). 
Figure 13: H\(\alpha\) (upper panel) and [OIII] \(\lambda\) 5007 Å (lower panel) pixel-by-pixel radial velocity maps. The velocity of the narrow component associated with each cluster is superimposed. Inner ring regions, on the other hand, show somewhat lower values (about a factor of two), ranging from 6.38 to 6.91, with a mean value of 6.62 and errors between 0.03 and 0.05 dex. The lower panel of the figure shows the directly derived S/H abundances. In this case, outer ring regions show logarithmic sulphur abundances between 6.832 \(\pm\) 0.022 and 7.300 \(\pm\) 0.016 in units of 12+log(S/H), i.e. between 0.52 and 1.51 times the solar value (12+log(S/H)\({}_{\odot}\) = 7.12, Asplund et al. 2009). The corresponding values for the inner ring regions are comparable within the errors, ranging from 6.780 \(\pm\) 0.022 to 7.226 \(\pm\) 0.013, i.e. between 0.46 and 1.28 times the solar value. In both cases a comparison is made with the CNSFRs in NGC 7742. Figure 16: The ionisation derived angular radius against the angular radius measured from the HII region segmentation (see Sec. 3.1.2). Figure 17: Distribution of the total empirically (upper panel) and directly (lower panel) derived sulphur abundances for the CNSFRs. The dashed line corresponds to the solar value (12+log(S/H)\({}_{\odot}\) = 7.12, Asplund et al. 2009). Figure 15: The different histograms in the figure show for the ring HII regions, in green, and regions outside, in purple, the distributions of: the number of hydrogen ionising photons (upper left), the ionisation parameter (upper right), the electron density (bottom left), the filling factor (bottom centre) and the mass of ionised hydrogen (bottom right). We have represented in Fig. 18 the empirical S\({}_{23}\) calibration together with red and blue contours, corresponding to data for disc HII regions and HII galaxies respectively, and individual symbols representing directly derived abundances for the 8 outer ring regions (green solid circles) and the 3 inner ring regions (purple squares). In two inner HII regions, Ra and Rb, we have found empirically derived abundances somewhat lower than the directly derived ones. However, the empirical S\({}_{23}\) calibration could be somewhat affected by the effective temperature of the ionising radiation. Low values of this temperature could move this calibration to higher abundances by up to 14 per cent in the logarithm, which would reconcile the directly and empirically derived abundances. The spectral energy distribution of the ionising radiation can be estimated from the quotient between the numbers of helium and hydrogen ionising photons, Q(He\({}_{0}\))/Q(H\({}_{0}\)) (see Zamora & Diaz 2023a). This ratio can be used when there is no direct measurement of the ionic abundances of oxygen and sulphur, and it is equivalent to the calculation made with the use of the \(\eta\) parameter. We have calculated the number of ionising He\({}_{0}\) photons from the observed luminosity in the HeI\(\lambda\) 6678 Å emission line, using its corresponding flux, the distance to NGC 7469, which has been taken as 66.47 Mpc (see Tab. 1), and the recombination coefficient of the HeI\(\lambda\) 6678 Å emission line, assuming a constant value of electron density of 100 cm\({}^{-3}\), a temperature of \(10^{4}\) K and case B recombination (Osterbrock & Ferland 2006). 
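The ionised hydrogen masses listed in Table 10 follow from Q(H\({}_{0}\)) and n\({}_{e}\) once the gas is assumed to be in ionisation equilibrium. As a rough cross-check (a minimal sketch of the standard case B balance, not the exact Díaz et al. 1991 expression used in the text), one can write M(HII) = Q(H\({}_{0}\)) m\({}_{p}\) / (n\({}_{e}\) \(\alpha_{B}\)):

```python
M_P = 1.6726e-24      # proton mass, g
ALPHA_B = 2.59e-13    # case B recombination coefficient at 1e4 K, cm^3/s
M_SUN = 1.989e33      # solar mass, g

def ionised_hydrogen_mass(q_h0, n_e):
    """M(HII) = Q(H0) * m_p / (n_e * alpha_B): ionisations balance recombinations."""
    return q_h0 * M_P / (n_e * ALPHA_B) / M_SUN

# Region R1 from Table 10: Q(H0) ~ 8.37e51 photons/s, n_e ~ 35 cm^-3
print(f"M(HII) ~ {ionised_hydrogen_mass(8.37e51, 35):.1e} Msun")
# ~8e5 Msun, within a factor of ~1.5 of the tabulated (51.2 +/- 9.7)e4 Msun
```

The residual offset with respect to the tabulated value reflects the more detailed expression, involving region sizes and filling factors, used by the authors.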
We have detected and measured the HeI\(\lambda\) 6678 Å line in 10 outer ring regions and in all the inner ring ones, finding mean values of Q(He\({}_{0}\)) of \(2.5\times 10^{48}\) photons s\({}^{-1}\) and \(4.4\times 10^{49}\) photons s\({}^{-1}\) for outer and inner ring regions respectively (see Table 11). Fig. 19 shows the relation between the logarithmic numbers of HeI and HI ionising photons. Superimposed are the lines corresponding to different Cloudy models with ionisation parameter values from -4.0 to -2.5, solar metallicity and a constant value of the electron density of 100 cm\({}^{-3}\). \begin{table} \begin{tabular}{l c c c c c c c c} \hline Region & L(H\(\alpha\)) & Q(H\({}_{0}\)) & log(u) & \(\phi\) & R & n\({}_{e}\) & log(\(\epsilon\)) & M(HII) \\ ID & (erg s\({}^{-1}\)) & (photons s\({}^{-1}\)) & & (arcsec) & (arcsec) & (cm\({}^{-3}\)) & & (M\({}_{\odot}\)) \\ \hline R1 & \((114.3\pm 6.4)\times 10^{38}\) (\(83.7\pm 4.7)\times 10^{50}\) -3.756 \(\pm\) 0.077 & - & \(1.57\pm 0.05\) & \(35\pm 22\) & -3.65 \(\pm\) 0.16 \((51.2\pm 9.7)\times 10^{4}\) \\ R2 & \((12.2\pm 1.4)\times 10^{39}\) (\(89.4\pm 7.5)\times 10^{50}\) -3.992 \(\pm\) 0.102 \(\pm\) 0.199 \(\pm\) 0.63 \(1.53\pm 0.05\) & \(59\pm 34\) & -4.14 \(\pm\) 0.21 \((28.4\pm 6.9)\times 10^{4}\) \\ R3 & \((67.8\pm 7.8)\times 10^{38}\) (\(49.6\pm 4.4)\times 10^{50}\) -3.630 \(\pm\) 0.082 & - & \(1.36\pm 0.05\) & \(31\pm 7\) & -3.11 \(\pm\) 0.17 \((5.1\pm 1.0)\times 10^{5}\) \\ R4 & \((7.4\pm 1.0)\times 10^{39}\) (\(54.4\pm 6.6)\times 10^{50}\) -3.609 \(\pm\) 0.091 \(0.71\pm 0.17\) & \(1.64\pm 0.05\) & \(119\pm 48\) & -3.19 \(\pm\) 0.19 \((7.9\pm 1.7)\times 10^{5}\) \\ R5 & \((41.8\pm 6.2)\times 10^{38}\) (\(30.6\pm 4.0)\times 10^{50}\) -3.657 \(\pm\) 0.092 \(0.60\pm 0.14\) & \(1.35\pm 0.05\) & \(103\pm 41\) & -2.95 \(\pm\) 0.19 \((4.8\pm 1.1)\times 10^{5}\) \\ R6 & \((47.9\pm 5.0)\times 10^{38}\) (\(35.1\pm 2.6)\times 10^{50}\) -3.152 \(\pm\) 0.060 & - & \(0.94\pm 0.05\) & \(34\pm 30\) & -1.84 \(\pm\) 0.13 \((7.4\pm 1.3)\times 10^{5}\) \\ R7 & \((8.4\pm 1.0)\times 10^{39}\) (\(61.1\pm 6.1)\times 10^{50}\) -3.503 \(\pm\) 0.090 & - & \(1.50\pm 0.05\) & \(23\pm 6\) & -2.98 \(\pm\) 0.19 \((8.4\pm 1.8)\times 10^{5}\) \\ R8 & \((66.5\pm 8.5)\times 10^{38}\) (\(48.7\pm 5.1)\times 10^{50}\) -4.106 \(\pm\) 0.177 & - & \(1.47\pm 0.05\) & \(78\pm 33\) & -4.08 \(\pm\) 0.36 \((20.0\pm 8.3)\times 10^{4}\) \\ R9 & \((45.0\pm 5.9)\times 10^{38}\) (\(32.9\pm 3.6)\times 10^{50}\) -3.252 \(\pm\) 0.062 \(0.44\pm 0.11\) \(1.02\pm 0.05\) & \(80\pm 37\) & -2.04 \(\pm\) 0.13 \((6.9\pm 1.2)\times 10^{5}\) \\ R10 & \((31.7\pm 5.7)\times 10^{38}\) (\(23.2\pm 3.8)\times 10^{50}\) -3.935 \(\pm\) 0.090 \(0.94\pm 0.34\) 0.69 \(\pm\) 0.05 & \(60\pm 40\) & -3.09 \(\pm\) 0.20 \((6.5\pm 1.6)\times 10^{4}\) \\ R11 & \((7.4\pm 1.2)\times 10^{39}\) (\(54.0\pm 8.2)\times 10^{50}\) -4.377 \(\pm\) 0.243 & - & \(1.47\pm 0.05\) & \(160\pm 93\) & -4.67 \(\pm\) 0.49 \((108.6\pm 6.1)\times 10^{4}\) \\ R12 & \((5.2\pm 1.0)\times 10^{39}\) (\(37.9\pm 6.9)\times 10^{50}\) -3.657 \(\pm\) 0.089 \(0.78\pm 0.23\) & \(1.06\pm 0.05\) & \(75\pm 39\) & -2.94 \(\pm\) 0.20 \((29.6\pm 6.7)\times 10^{4}\) \\ R13 & \((22.8\pm 5.4)\times 10^{38}\) (\(16.7\pm 3.7)\times 10^{50}\) -3.985 \(\pm\) 0.143 & - & \(0.94\pm 0.05\) & \(49\pm 20\) & -3.18 \(\pm\) 0.30 \((10.8\pm 3.7)\times 10^{4}\) \\ R14 & \((7.0\pm 1.2)\times 10^{38}\) (\(51.3\pm 7.9)\times 10^{49}\) -4.141 \(\pm\) 0.370 & - & \(0.98\pm 0.05\) & \(20\pm 17\) & -3.00 \((0.74\pm 8.3)\times 7.10^{4}\) \\ R15 & \((11.8\pm 1.9)\times 10^{38}\) (\(8.6\pm 1.2)\times 10^{50}\) -3.693 \(\pm\) 0.123 \(0.36\pm 0.12\) & \(0.74\pm 0.05\) & \(89\pm 52\) & 
-2.21 \(\pm\) 0.26 \((13.2\pm 4.1)\times 10^{4}\) \\ R16 & \((20.2\pm 3.6)\times 10^{38}\) (\(14.8\pm 2.4)\times 10^{50}\) -3.587 \(\pm\) 0.118 \(0.34\pm 0.09\) 0.75 \(\pm\) 0.05 & \(129\pm 54\) & -2.24 \(\pm\) 0.25 \((17.2\pm 5.2)\times 10^{4}\) \\ R17 & \((11.7\pm 1.8)\times 10^{38}\) (\(8.6\pm 1.2)\times 10^{50}\) -4.006 \(\pm\) 0.129 & - & \(0.53\pm 0.05\) & \(84\pm\) \\ \hline \end{tabular} \end{table} Table 10: Characteristics of the observed ring HII regions. In these models the nebula is ionised by stellar atmospheres from Mihalas (1978, non-LTE models for B and O stars, \(\log(g)=4\) and T\({}_{eff}\) from 30000 K to 55000 K). According to these models, we can deduce that the He\({}^{+}\) nebular zone is much smaller than that of H\({}^{+}\) in all the analysed ionising clusters. All of them seem to have similar effective temperatures, around 34400 K, although the outer ring regions show much larger errors. ### Cluster stellar populations #### 4.2.1 Ionising and photometric masses We have estimated the mass of the ionising clusters powering the circumnuclear HII regions from the number of Lyman continuum photons, using single stellar population (SSP) models to obtain the number of ionising photons per unit solar mass, Q(H\({}_{0}\))/M\({}_{\odot}\), which decreases with the age of the cluster. We have used the equivalent width of the H\(\beta\) emission line, EW(H\(\beta\)), to parametrise this age, obtaining a linear relation between the ionising photon number and EW(H\(\beta\)) (see Zamora & Diaz 2023a). In order to do this, we have used PopStar models (Molla et al. 2009) for ages under 10 Ma and metallicities between 0.004 and 0.02. The slope of the initial mass function (IMF) and the lower mass limit affect this relation, thus we have used the Salpeter IMF with \(\phi(m)=m^{-\alpha}\), \(\alpha=2.35\), \(m_{low}(M_{\odot})=0.85\) and \(m_{up}(M_{\odot})=120\), which seems the most suitable for young regions. EW(H\(\beta\)) values for the selected HII regions are between 2.2 and 12.24 Å for outer ring regions and from 8.91 to 15.00 Å for the inner ring ones, corresponding to regions of active star formation. The larger Balmer emission line luminosities and equivalent widths shown by inner ring regions could, in principle, imply an earlier evolutionary stage. The upper panel of Fig. 20 shows the distribution of ionising masses for the HII regions in the two rings of NGC 7469 compared with the NGC 7742 ones. Ionising cluster masses in the former galaxy are higher than in the latter, something to be expected given their larger sizes. Inner ring regions harbour the most massive clusters, with a mean mass value of \(2.1\times 10^{7}\) M\({}_{\odot}\), higher by an order of magnitude than the ionising clusters in the outer ring. At any rate, all the studied clusters have masses higher than \(10^{4}\) M\({}_{\odot}\), which is the lower limit for a cluster to fully sample the IMF (Garcia Vargas & Diaz 1994; Villaverde et al. 2010). Furthermore, our results are only lower limits to the ionising masses since we are assuming that: (i) there is no dust absorption and re-emission at infrared wavelengths; and (ii) there is no photon escape from the HII regions. Using the same SSP models, we have derived the photometric masses of our CNSFRs from their absolute r-magnitudes. The bottom panel of Fig. 20 shows the distribution of these photometric masses. 
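The ionising mass estimate is simply the observed Q(H\({}_{0}\)) divided by the SSP prediction of Q(H\({}_{0}\)) per solar mass at the age traced by EW(H\(\beta\)). A minimal sketch of this bookkeeping follows; the coefficients of the log-linear calibration are placeholders, not the actual PopStar fit of Zamora & Diaz (2023a).

```python
import math

# Placeholder coefficients for log10(Q(H0)/Msun) as a linear function of
# log10(EW(Hbeta)); the real values come from a fit to PopStar SSP models.
A, B = 44.5, 1.0

def q_per_msun(ew_hbeta):
    """SSP ionising photon rate per solar mass at the age traced by EW(Hbeta)."""
    return 10.0 ** (A + B * math.log10(ew_hbeta))

def ionising_mass(q_h0_obs, ew_hbeta):
    """Lower limit to the ionising cluster mass (no dust, no photon escape)."""
    return q_h0_obs / q_per_msun(ew_hbeta)

# Hypothetical inputs in the range of Tables 10 and 11 (Q(H0) and EW of R1):
print(f"M_ion ~ {ionising_mass(8.4e51, 10.5):.1e} Msun")
```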
For the regions within the outer ring we have obtained values between \(2.4\times 10^{5}\) and \(9.9\times 10^{6}\) M\({}_{\odot}\), while those for the inner ring regions range from \(1.0\times 10^{7}\) to \(3.7\times 10^{7}\) M\({}_{\odot}\). The photometric masses of the outer ring regions follow the ionising cluster masses in a constant proportion of about 3. However, this relation seems to be lost in the inner ring regions, where the most massive ionising clusters deviate, lying close to the 1:1 relation. Table 11 summarises these results, listing in columns 1 to 5: (1) region ID; (2) Q(He\({}_{0}\)); (3) EW(H\(\beta\)); (4) ionising cluster mass; (5) integrated photometric mass. #### 4.2.2 CNSFR evolutionary stage The evolutionary stage of the analysed clusters can be interpreted with the help of evolutionary population synthesis models, which predict the EW of Balmer lines as a function of the continuum colour of an SSP. Under this assumption, these EWs, mainly EW(H\(\beta\)), can be considered a good estimator of the age of a given cluster (Dottori 1981) since, in fact, EW(H\(\beta\)) provides the ratio between the present and past star formation rates: the former is related to the evolutionary time scale of the ionising star clusters, decreasing with age up to 10 Ma, while the latter samples a longer time scale (\(\geq\) 300 Ma), with the continuum becoming redder with age (see Diaz et al. 2000b). Fig. 21 shows the relation between logEW(H\(\beta\)) and the r-i colour together with evolutionary tracks calculated with the SSP PopStar models described above, with IMF parameters more appropriate for the case of evolved star clusters (m\({}_{low}\) = 0.15 M\({}_{\odot}\), m\({}_{up}\) = 100 M\({}_{\odot}\)). Our observed ring regions, taken at face value (green and purple open symbols for outer and inner regions respectively), lie to the right of the line defined by the SSPs, showing low values of logEW(H\(\beta\)) at mean ages of 6.9 Ma for the outer ring regions and 5.8 Ma for the inner ring ones. However, as shown in Martin-Manjon et al. (2010), star clusters older than 5.2 Ma do not produce a detectable emission-line spectrum. After deprojecting the outer galaxy ring using the observed inclination angle (45\({}^{\circ}\)) and a position angle of the major axis of 128\({}^{\circ}\) from Davies et al. (2004), we have calculated a radial profile for the r and i continuum bands and for the continuum underlying the H\(\beta\) emission line, fitting a Sérsic disc and subtracting it from the outer galaxy ring. We have repeated this procedure for the inner ring, but using an additional component to fit the central emission, and remeasured both EW(H\(\beta\)) and the r-i colour. Solid symbols (green for outer ring regions and purple for inner ring ones) show the locations of the isolated young ionising clusters at evolutionary track mean ages of 5.7 Ma for the outer ring regions and younger ages, between 5.1 and 5.7 Ma, for the inner ring ones. These regions can be identified as the young (5-6 Ma) stellar population accounting for most of the IR luminosity in the central region of the galaxy (Diaz-Santos et al. 2007). 
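The disc-subtraction step amounts to fitting a smooth radial profile to the deprojected continuum and removing it before remeasuring EW(H\(\beta\)) and the r-i colour. A minimal sketch using astropy's Sersic1D model is shown below; the inclination and position angle are the values quoted in the text, while the data arrays and initial guesses are placeholders.

```python
import numpy as np
from astropy.modeling import models, fitting

INC, PA = np.radians(45.0), np.radians(128.0)  # inclination and major-axis PA

def deprojected_radius(x, y):
    """Galactocentric radius after rotating to the major axis and stretching
    the minor axis by 1/cos(i)."""
    xr = x * np.cos(PA) + y * np.sin(PA)
    yr = (-x * np.sin(PA) + y * np.cos(PA)) / np.cos(INC)
    return np.hypot(xr, yr)

# Placeholder radial continuum profile (radius in arcsec, arbitrary flux units)
r = np.linspace(0.5, 12.0, 100)
profile = 10.0 * np.exp(-r / 4.0) + 0.05 * np.random.randn(r.size)

disc = models.Sersic1D(amplitude=5.0, r_eff=4.0, n=1.0)  # initial guesses
fitter = fitting.LevMarLSQFitter()
best_disc = fitter(disc, r, profile)

ring_only = profile - best_disc(r)  # disc-subtracted profile used for EWs and colours
```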
#### 4.2.3 Wolf-Rayet stellar population We have detected carbon Wolf-Rayet (WC) broad stellar features in the spectra of all the analysed HII regions. The presence of these features would place the age of the regions between 3.2 and 5.25 Ma according to PopStar models. Figure 21: Relation between the equivalent width of the H\(\beta\) emission line and the r-i colour. The solid line has been calculated using PopStar models (Mollá et al. 2009). The beginning and end of the line correspond to ages of 0.1 and 8.5 Ma. Observational errors are inside the symbols in the graph. \begin{table} \begin{tabular}{c c c c c} \hline Region ID & Q(He\({}_{0}\)) & EW(H\(\beta\)) & M\({}_{ion}\) & M\({}_{phot}\) \\ & (photons s\({}^{-1}\)) & (Å) & (M\({}_{\odot}\)) & (M\({}_{\odot}\)) \\ \hline R1 & \((4.9\pm 1.9)\times 10^{48}\) & \(10.54\pm 0.52\) & \((30.0\pm 2.9)\times 10^{5}\) & \((54.4\pm 9.8)\times 10^{5}\) \\ R2 & \((5.5\pm 3.7)\times 10^{48}\) & \(6.69\pm 0.38\) & \((47.4\pm 5.4)\times 10^{5}\) & \((8.9\pm 1.2)\times 10^{6}\) \\ R3 & \((2.2\pm 1.4)\times 10^{48}\) & \(6.24\pm 0.36\) & \((28.0\pm 3.3)\times 10^{5}\) & \((53.8\pm 7.5)\times 10^{5}\) \\ R4 & \((2.6\pm 2.8)\times 10^{48}\) & \(4.98\pm 0.34\) & \((37.2\pm 5.5)\times 10^{5}\) & \((9.1\pm 1.5)\times 10^{6}\) \\ R5 & \((0.8\pm 1.6)\times 10^{48}\) & \(5.10\pm 0.38\) & \((20.5\pm 3.2)\times 10^{5}\) & \((50.3\pm 8.3)\times 10^{5}\) \\ R6 & \((2.4\pm 1.0)\times 10^{48}\) & \(12.24\pm 0.87\) & \((11.0\pm 1.3)\times 10^{5}\) & \((14.5\pm 2.7)\times 10^{5}\) \\ R7 & \((3.8\pm 1.8)\times 10^{48}\) & \(9.09\pm 0.73\) & \((24.9\pm 3.4)\times 10^{5}\) & \((38.4\pm 2.5)\times 10^{5}\) \\ R8 & - & \(5.67\pm 0.36\) & \((29.8\pm 4.0)\times 10^{5}\) & \((56.8\pm 8.7)\times 10^{5}\) \\ R9 & \((1.4\pm 1.2)\times 10^{48}\) & \(5.64\pm 0.37\) & \((20.3\pm 2.8)\times 10^{5}\) & \((37.3\pm 5.7)\times 10^{5}\) \\ R10 & - & \(5.25\pm 0.51\) & \((15.2\pm 2.9)\times 10^{5}\) & \((27.6\pm 4.4)\times 10^{5}\) \\ R11 & - & \(3.58\pm 0.26\) & \((49.3\pm 8.6)\times 10^{5}\) & \((9.9\pm 1.7)\times 10^{6}\) \\ R12 & - & \(2.67\pm 0.20\) & \((44.5\pm 9.0)\times 10^{5}\) & \((8.0\pm 1.4)\times 10^{6}\) \\ R13 & - & \(3.44\pm 0.36\) & \((15.8\pm 3.9)\times 10^{5}\) & \((35.8\pm 6.3)\times 10^{5}\) \\ R14 & - & \(5.19\pm 0.46\) & \((33.9\pm 6.2)\times 10^{4}\) & \((9.0\pm 1.5)\times 10^{5}\) \\ R15 & - & \(4.19\pm 0.30\) & \((6.9\pm 1.1)\times 10^{5}\) & \((15.7\pm 2.6)\times 10^{5}\) \\ R16 & - & \(3.65\pm 0.29\) & \((13.3\pm 2.5)\times 10^{5}\) & \((26.1\pm 4.5)\times 10^{5}\) \\ R17 & - & \(4.64\pm 0.34\) & \((6.3\pm 1.0)\times 10^{5}\) & \((12.5\pm 2.1)\times 10^{5}\) \\ R18 & \((4.2\pm 4.0)\times 10^{47}\) & \(4.26\pm 0.31\) & \((54.4\pm 8.9)\times 10^{4}\) & \((12.4\pm 2.1)\times 10^{5}\) \\ R19 & - & \(4.27\pm 0.31\) & \((45.7\pm 7.6)\times 10^{4}\) & \((10.3\pm 1.7)\times 10^{5}\) \\ R20 & - & \(3.30\pm 0.26\) & \((5.8\pm 1.2)\times 10^{5}\) & \((16.3\pm 2.9)\times 10^{5}\) \\ R21 & \((8.0\pm 8.0)\times 10^{47}\) & \(4.67\pm 0.32\) & \((9.4\pm 1.4)\times 10^{5}\) & \((20.4\pm 3.4)\times 10^{5}\) \\ R22 & - & \(2.20\pm 0.18\) & \((5.3\pm 1.2)\times 10^{6}\) & \((9.6\pm 1.7)\times 10^{6}\) \\ R23 & - & \(10.90\pm 1.56\) & \((11.4\pm 2.4)\times 10^{4}\) & \((23.5\pm 4.2)\times 10^{4}\) \\ \hline Ra & \((83.2\pm 2.4)\times 10^{48}\) & \(15.00\pm 0.68\) & \((38.0\pm 3.0)\times 10^{6}\) & \((37.2\pm 6.3)\times 10^{6}\) \\ Rb & \((41.9\pm 3.8)\times 10^{48}\) & \(8.91\pm 0.33\) & \((27.0\pm 1.9)\times 10^{6}\) & \((30.8\pm 6.2)\times 10^{6}\) \\ Rc & \((21.4\pm 3.3)\times 10^{48}\) & \(10.93\pm 0.84\) & \((11.2\pm 1.0)\times 10^{6}\) & \((18.8\pm 3.3)\times 10^{6}\) \\ Rd 
& \((17.8\pm 2.6)\times 10^{48}\) & \(9.87\pm 0.84\) & \((66.8\pm 6.5)\times 10^{5}\) & \((10.4\pm 1.9)\times 10^{6}\) \\ Re & \((5.5\pm 1.0)\times 10^{49}\) & \(10.67\pm 0.81\) & \((23.4\pm 3.0)\times 10^{6}\) & \((21.8\pm 3.9)\times 10^{6}\) \\ \hline \end{tabular} \end{table} Table 11: Ionising cluster properties. Wolf-Rayet (WR) stars are massive stars (M \(>\) 25 M\({}_{\odot}\)) which have left the main sequence, having lost part of their hydrogen-rich envelope by means of powerful stellar winds. They can be classified as nitrogen stars (WN) and carbon stars (WC and WO), which are in the CNO and He burning evolutionary phases respectively. The WR population of NGC 7469 was studied by Miralles-Caballero et al. (2016) as part of a systematic search for extragalactic regions including this kind of stars. They used the PMAS-PPAK IFU at the CAHA 3.5m telescope, with a field of view (FoV) of 74 arcsec \(\times\) 64 arcsec providing a spatial sampling of 1 arcsec/pix and a spectral resolution R = 850. They observed the blue WR bump around HeII\(\lambda\) 4686 Å, which is associated with nitrogen WR stars (WN). However, the red feature is less prominent than the blue one and hence they did not detect the red WR bump around CIV\(\lambda\) 5808 Å, associated with carbon WR stars (WC). The larger sensitivity of the MUSE spectrograph in the red part of the spectrum has allowed the measurement of the fluxes in the so-called "red bump" centred at about 5808 Å, assuming a linear behaviour of the continuum and choosing the same sidebands as Miralles-Caballero et al. (2016) (5600 - 5700 Å, 5920 - 6000 Å). We have integrated the bump flux from 5798 Å to 5850 Å. Fig. 22 shows as an example the spectrum of region R21, marking the position of the CIV\(\lambda\lambda\) 5801,12 Å and CIII\(\lambda\) 5696 Å lines (see Massey et al. 1992) and the red WR bump. We can see in the spectrum of this region the two CIV lines, although we cannot detect the CIII line; hence we assume that the red bump emission originates in early WC stars (WCE). Assuming the average luminosity of WCE stars to be L\({}_{WCE}\)(CIV\(\lambda\) 5808 Å) \(\sim\) 3.3 \(\times\) 10\({}^{36}\) erg/s (Vacca & Conti 1992), their number can be calculated as: \[N_{WCE}=36.26\cdot 10^{-3}\left(\frac{F(Red-bump)}{10^{-17}}\right)\left(\frac{ D}{10}\right)^{2} \tag{3}\] where F(Red-bump) is expressed in erg s\({}^{-1}\) cm\({}^{-2}\) and D is the distance to NGC 7469, which has been taken as 66.47 Mpc (see Tab. 1). Outer ring regions contain between 12 and 415 of these stars, while the numbers increase to between 185 and 1477 in the inner ring regions. This is something to be expected since the inner regions are larger, more massive and their metallicity is oversolar. Miralles-Caballero et al. (2016) reported 21772 late-type WN stars (WNL) inside a radius of 1306 pc (4.146 arcsec; see Tab. 1) from the galactic centre. This area includes our 5 inner regions, which we have found to include a total of 3672 WCE. This gives a WCE/WNL ratio of 0.17, consistent with the range calculated by these authors (15 - 25 %). The large number of WR stars in these CNSFRs could affect their properties. In fact, the low filling factors and large radii measured in the HII regions of this galaxy can be explained by the stellar winds produced by these WR stars. 
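Eq. (3) is straightforward to apply; the sketch below reproduces the Table 12 entry for region R1 from its integrated red-bump flux.

```python
def n_wce(red_bump_flux, distance_mpc=66.47):
    """Number of early WC stars from Eq. (3): flux in erg/s/cm^2, D in Mpc."""
    return 36.26e-3 * (red_bump_flux / 1e-17) * (distance_mpc / 10.0) ** 2

# Region R1 (Table 12): F(Red-bump) = (135.4 +/- 7.5)e-17 erg/s/cm^2
print(round(n_wce(135.4e-17)))  # -> 217, matching Table 12
```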
The radius associated with the kinetic energy produced by the combined wind of a population of WR stars can be found using the expression in Castor et al. (1975) as: \[R_{WR}=1.6\ (\epsilon/n)^{1/5}\cdot t^{3/5} \tag{4}\] where \(\epsilon\) is the total ejected energy in units of 10\({}^{36}\) erg/s, n is the interstellar medium density in cm\({}^{-3}\), t is the age of the expanding shell in units of 10\({}^{4}\) yr and R\({}_{WR}\) is expressed in pc. Using this information, PopStar models provide this radius, R\({}_{PopStar}\), for the number of WR stars present in ionising clusters of a given age, metallicity and IMF. Hence we estimate R\({}_{WR}\) for our CNSFRs as: \[R_{WR}=R_{PopStar}\left[\frac{n_{WR}}{n_{WR,PopStar}}\cdot\frac{\epsilon_{WR, PopStar}}{\epsilon_{total,PopStar}}\right]^{1/5} \tag{5}\] where n\({}_{WR}\) is the number of WR stars in each observed cluster and n\({}_{WR,PopStar}\), \(\epsilon_{WR,PopStar}\) and \(\epsilon_{total,PopStar}\) are, respectively, the number of WR stars, the total energy ejected by WR winds and the total energy ejected by WR and supernova winds in the adopted model. In our case we have assumed an age of 5.5 Ma, solar metallicity and the IMF parameters given in Sec. 4.2. Fig. 23 shows the measured radii of the selected HII regions (see Sec. 3.1.2) vs those blown out by WR winds. We can see that both radii are in good agreement, contrary to the case of the ionisation derived angular radii (\(\phi\), see Fig. 16 in Sec. 4.1.2), and all regions look spatially resolved, as seen in the H\(\alpha\) map. Most regions show predicted angular radii fully compatible with those measured, so we can assume that their ionised bubbles are inflated by the winds from WR stars. Figure 23: The derived angular radius associated with the kinetic energy produced by the carbon WR stars against the angular radius measured from the HII region segmentation. The arrows show this value with the addition of the energy produced by the nitrogen WR stars assuming the ratio proposed by PopStar models (63 % WC and 37 % WN). Open marks show the HII complexes identified in H\(\alpha\). Figure 22: Emission lines of WR stars in region R21. The red solid line shows the side-bands selected to calculate the continuum, shown with a blue dashed line. The integrated red bump is shown in grey. 
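A sketch of the Eq. (5) scaling is given below. All four PopStar reference values (R_PopStar, n_WR,PopStar and the two ejected energies) are placeholders standing in for the model quantities at 5.5 Ma and solar metallicity, which are not reproduced in the text; only the functional form is taken from Eq. (5).

```python
# Placeholder PopStar reference values at 5.5 Ma, solar metallicity (assumed):
R_POPSTAR = 50.0       # pc, wind-blown radius of the reference model cluster
N_WR_POPSTAR = 100.0   # number of WR stars in the reference model cluster
EPS_WR = 1.0e38        # erg/s ejected by WR winds in the model
EPS_TOTAL = 3.0e38     # erg/s ejected by WR + supernova winds in the model

def r_wr(n_wr_observed):
    """Eq. (5): rescale the model wind-blown radius to the observed WR count."""
    return R_POPSTAR * ((n_wr_observed / N_WR_POPSTAR) * (EPS_WR / EPS_TOTAL)) ** 0.2

# The 1/5 exponent makes the radius depend only weakly on the WR count:
print(f"R_WR ~ {r_wr(217):.1f} pc")  # 217 WCE stars, as in region R1
```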
Table 12 summarizes our results listing for each region in columns 1 to 4: (1) the region ID; (2) the integrated red bump flux; (3) the number of early WC stars; and (4) the radius of the WR wind blown region. #### 4.2.4 Dynamical masses We have calculated the dynamical mass for each observed cluster using their measured stellar velocity dispersion (see 3.2.3) and size assuming the system to be virialized and that : (i) it has spherical symmetry; (ii) it is gravitationally bounded; and (iii) it has an isotropic velocity distribution. Then, the dynamical mass is given by (Ho & Filippenko 1996a,b) as: \[M^{dyn}=3\cdot\sigma^{2}\cdot\frac{R}{G} \tag{6}\] where \(\sigma\) is the velocity dispersion of the system, R is its radius and G is the Gravitational Constant. In Section 3.1.2, we measured the radii of our CNSFRs from the selected HII regions on the observed H\(\alpha\) image thus including the total gas emission. However, in order to better identify the stellar clusters, we have now calculated the star cluster sizes from the F606W WFPC2-HST image. First, we have calculated and subtracted the region background by fitting a three-order polynomial. Next, we have fitted each knot present in the observed clusters assuming a two-dimensional Gaussian profile. The radius of each knot has been taken as 1/2 - FWHM. Fig. 24 shows two examples of the described procedure. All the observed regions are composed by more than one knot. Only regions R17, R19, R20 and Rb seem to host single clusters. There are two large complexes R1 and R13, with 20 and 19 knots respectively. The radii of the single knots vary between 7.3 pc and 58.2 pc for outer ring regions. Inner ring regions have knots with very similar sizes, with their radii taking values from 11.0 to 27.5 pc. No knot is appreciable in the continuum at 6060 A in the position of region Re and hence we have not measured any size for Re region. Also, no cluster can be seen either in UV-HST images (see Diaz-Santos et al. 2007). However, this region shows CaII triplet lines with the same equivalent width as the rest of regions and is identified in H\(\alpha\), [ArII]\(\lambda\) 6.99 \(\mu\)m, PAH\(\lambda\) 6.2 \(\mu\)m and in 11.7 \(\mu\)m imaging (U et al. 2022; Garcia-Bernete et al. 2022; Miles et al. 1994). 
A possible explanation could come from its high interstellar extinction value, since photons at short wavelengths could be absorbed. \begin{table} \begin{tabular}{c c c c} \hline \hline Region ID & Red-bump & N\({}_{WCE}\) & R\({}_{WR}\) \\ & (erg s\({}^{-1}\) cm\({}^{-2}\)) & & (arcsec) \\ \hline R1 & \((135.4\pm 7.5)\times 10^{-17}\) & \(217\pm 2\) & 0.94 \\ R2 & \((23.6\pm 1.9)\times 10^{-16}\) & \(378\pm 4\) & 1.05 \\ R3 & \((13.0\pm 1.1)\times 10^{-16}\) & \(208\pm 3\) & 0.93 \\ R4 & \((21.5\pm 2.5)\times 10^{-16}\) & \(345\pm 4\) & 1.03 \\ R5 & \((12.9\pm 1.6)\times 10^{-16}\) & \(207\pm 2\) & 0.93 \\ R6 & \((45.2\pm 3.3)\times 10^{-17}\) & \(72\pm 1\) & 0.75 \\ R7 & \((12.9\pm 1.2)\times 10^{-16}\) & \(206\pm 3\) & 0.93 \\ R8 & \((14.5\pm 1.5)\times 10^{-16}\) & \(232\pm 3\) & 0.95 \\ R9 & \((10.1\pm 1.1)\times 10^{-16}\) & \(162\pm 2\) & 0.89 \\ R10 & \((7.2\pm 1.2)\times 10^{-16}\) & \(115\pm 2\) & 0.83 \\ R11 & \((24.1\pm 3.6)\times 10^{-16}\) & \(386\pm 4\) & 1.05 \\ R12 & \((18.0\pm 3.2)\times 10^{-16}\) & \(289\pm 4\) & 0.99 \\ R13 & \((10.0\pm 2.2)\times 10^{-16}\) & \(161\pm 2\) & 0.88 \\ R14 & \((20.4\pm 3.1)\times 10^{-17}\) & \(33\pm 0\) & 0.64 \\ R15 & \((40.2\pm 5.5)\times 10^{-17}\) & \(64\pm 1\) & 0.74 \\ R16 & \((7.0\pm 1.1)\times 10^{-16}\) & \(113\pm 1\) & 0.82 \\ R17 & \((34.0\pm 4.4)\times 10^{-17}\) & \(54\pm 1\) & 0.71 \\ R18 & \((30.5\pm 4.1)\times 10^{-17}\) & \(49\pm 1\) & 0.70 \\ R19 & \((24.2\pm 3.4)\times 10^{-17}\) & \(39\pm 1\) & 0.66 \\ R20 & \((35.8\pm 6.2)\times 10^{-17}\) & \(57\pm 1\) & 0.72 \\ R21 & \((51.9\pm 6.4)\times 10^{-17}\) & \(83\pm 1\) & 0.77 \\ R22 & \((25.9\pm 5.4)\times 10^{-16}\) & \(415\pm 5\) & 1.07 \\ R23 & \((7.2\pm 1.1)\times 10^{-17}\) & \(12\pm 0\) & 0.52 \\ \hline Ra & \((92.2\pm 2.0)\times 10^{-16}\) & \(1477\pm 20\) & 1.38 \\ Rb & \((47.9\pm 4.0)\times 10^{-16}\) & \(767\pm 14\) & 1.21 \\ Rc & \((58.1\pm 5.1)\times 10^{-16}\) & \(931\pm 10\) & 1.26 \\ Rd & \((20.6\pm 2.4)\times 10^{-16}\) & \(330\pm 3\) & 1.02 \\ Re & \((11.5\pm 1.5)\times 10^{-16}\) & \(185\pm 2\) & 0.91 \\ \hline \end{tabular} \end{table} Table 12: WR features and numbers for the observed CNSFRs. Figure 24: Results of the star cluster radius measurement procedure for regions R1 and R17 (upper and lower panels respectively). Left panels show the F606W WFPC2-HST image for each selected region and the right panels show the same image after the background subtraction. Selected clusters are shown with blue circles and the angular scale is shown in the corner of each panel. We have used the total velocity dispersion corresponding to the entire selected region to infer the mass of each knot inside it. Then, we have added all the knot masses to calculate the mass of each complete CNSFR. This method can overestimate the mass of the individual clusters, but the MUSE spatial resolution is insufficient to allow the integration of each individual knot. Table 13 shows our results, listing for each region in columns 1 to 5: (1) the region ID; (2) the number of stellar knots present in each region; (3) their mean radius; (4) the dynamical mass; and (5) the ratio of ionising to dynamical mass. Figure 25 shows the distribution of the calculated dynamical masses for each CNSFR. Contrary to the case of the ionising and photometric masses, the clusters in both rings show similar masses, with median values of \(1.07\times 10^{9}\) M\({}_{\odot}\) and \(6.58\times 10^{8}\) M\({}_{\odot}\) for inner and outer ring regions respectively. 
These values are larger than the ones found in the CNSFRs reported for other galaxies, which range from \(5.0\times 10^{6}\) to \(2.0\times 10^{8}\) M\({}_{\odot}\) (Hagele et al. 2007b, 2009, 2010). The total masses of the outer and inner rings are \((197.3\pm 3.1)\times 10^{8}\) M\({}_{\odot}\) and \((41.1\pm 2.3)\times 10^{8}\) M\({}_{\odot}\). From gas and stellar dynamics, Genzel et al. (1995) inferred a dynamical mass for the inner ring of \(4.5\times 10^{9}\) M\({}_{\odot}\), very similar to the value found here, whereas Davies et al. (2004) estimated a dynamical mass of \(6.5\times 10^{9}\) M\({}_{\odot}\) within a radius of 2.5 arcsec, including \(\sim 15\%\) corresponding to the galaxy nucleus. A more significant insight into the characteristics of star formation in CNSFRs comes from the ratio of ionising to dynamical masses. This ratio ranges between 0.02 and 0.91 % for the outer ring regions and between 0.76 and 4.86 % for the inner ring ones, implying a larger contribution by recent star formation for the regions closer to the galactic nucleus. For comparison, this percentage ranges from 1% to 11% for the CNSFRs analysed in the galaxies NGC 3351, NGC 2903 and NGC 3310 (see respectively Hagele et al. 2007b, 2009, 2010), whose average distances to their host galaxy nuclei are about 300 pc, comparable to the inner regions of NGC 7469. ## 5 Conclusions In this second paper of a series, we analyse the circumnuclear environment of the almost face-on galaxy NGC 7469, an early spiral (SABa) hosting a Seyfert 1 nucleus. The galaxy shows two prominent star-forming rings, one of them very close to its active galactic nucleus, within 1.5 arcsec from the galaxy centre, and a second, incomplete ring with an elliptical appearance and dimensions of 21 and 13.2 arcsec for its major and minor axes respectively. We have used publicly available observations obtained with the MUSE integral field spectrograph as part of its first Science Verification run. These data have been analysed following the methodology already described in the first paper of the series. We have constructed 2D flux maps for different emission lines and two continuum bands. The [OIII]\(\lambda\) 5007 Å line is predominant in the emission from the active nucleus and it seems to blur along the galaxy disc. A map of the EW(H\(\alpha\)) emission shows the circumnuclear regions within the rings, the objects of this study, having EW(H\(\alpha\)) > 50 Å, consistent with the presence of recent star formation. This map has been used to select the ionised outer ring regions. For the inner ring regions we have used the observed HeI\(\lambda\) 6678 Å flux map, due to the presence of saturation effects in the central parts of the galaxy. At the end of the entire procedure, we have selected a total of 23 HII regions in the outer ring and 5 in the inner one. In the same way, extinction corrections have been estimated using the H\(\alpha\)/H\(\beta\) and HeI\(\lambda\) 5875 Å/HeI\(\lambda\) 6678 Å ratios for the outer and inner ring regions respectively. All emission lines appear to have at least two kinematical components, and four regions show three components in the [OIII]\(\lambda\lambda\) 4959,5007 Å emission lines, which might be associated with an outflow coming from the active galaxy nucleus. 
We have ascribed the most intense and narrow component to the emission lines originated by the ionising clusters, since they follow the radial velocity of the galaxy disc, and their emission and emission line ratios are consistent with the predictions of star-forming models, thus assuring that the kinematical component associated with the observed HII regions is the appropriate one. \begin{table} \begin{tabular}{c c c c c} \hline Region & & R\({}_{mean}^{*}\) & M\({}_{dyn}\) & M\({}_{ion}\)/M\({}_{dyn}\) \\ ID & N\({}_{knots}\) & (pc/knot) & (M\({}_{\odot}\)) & (per cent) \\ \hline R1 & 20 & 21.0 & \((2734.8\pm 8.1)\times 10^{6}\) & 0.11 \\ R2 & 4 & 28.0 & \((98.3\pm 2.5)\times 10^{7}\) & 0.48 \\ R3 & 10 & 20.3 & \((88.1\pm 2.6)\times 10^{7}\) & 0.32 \\ R4 & 7 & 16.5 & \((80.6\pm 5.7)\times 10^{7}\) & 0.46 \\ R5 & 5 & 19.5 & \((74.1\pm 2.8)\times 10^{7}\) & 0.28 \\ R6 & 4 & 20.9 & \((57.7\pm 6.0)\times 10^{7}\) & 0.19 \\ R7 & 4 & 36.4 & \((99.6\pm 6.7)\times 10^{7}\) & 0.25 \\ R8 & 8 & 24.6 & \((13.1\pm 1.3)\times 10^{8}\) & 0.23 \\ R9 & 3 & 28.2 & \((3681.5\pm 5.0)\times 10^{5}\) & 0.55 \\ R10 & 3 & 23.3 & \((62.3\pm 4.4)\times 10^{7}\) & 0.24 \\ R11 & 7 & 25.9 & \((1229.2\pm 6.9)\times 10^{6}\) & 0.40 \\ R12 & 6 & 32.9 & \((88.6\pm 6.1)\times 10^{7}\) & 0.50 \\ R13 & 19 & 21.7 & \((34.2\pm 1.4)\times 10^{8}\) & 0.05 \\ R14 & 6 & 17.7 & \((568.3\pm 6.6)\times 10^{6}\) & 0.06 \\ R15 & 3 & 27.6 & \((44.6\pm 4.7)\times 10^{7}\) & 0.15 \\ R16 & 4 & 22.4 & \((39.6\pm 3.0)\times 10^{7}\) & 0.33 \\ R17 & 1 & 16.1 & \((102.9\pm 2.8)\times 10^{6}\) & 0.61 \\ R18 & 2 & 18.6 & \((25.6\pm 2.4)\times 10^{7}\) & 0.21 \\ R19 & 1 & 24.3 & \((114.4\pm 2.9)\times 10^{6}\) & 0.40 \\ R20 & 1 & 8.5 & \((64.1\pm 3.8)\times 10^{6}\) & 0.91 \\ R21 & 4 & 22.6 & \((48.9\pm 2.3)\times 10^{7}\) & 0.19 \\ R22 & 3 & 31.8 & \((628.1\pm 4.5)\times 10^{6}\) & 0.84 \\ R23 & 2 & 19.6 & \((54.0\pm 5.3)\times 10^{7}\) & 0.02 \\ \hline Ra & 3 & 21.3 & \((78.1\pm 5.7)\times 10^{7}\) & 4.86 \\ Rb & 1 & 16.9 & \((30.8\pm 4.0)\times 10^{7}\) & 8.77 \\ Rc & 3 & 23.8 & \((14.8\pm 1.1)\times 10^{8}\) & 0.76 \\ Rd & 2 & 26.5 & \((72.4\pm 1.4)\times 10^{7}\) & 0.92 \\ \hline \end{tabular} \end{table} Table 13: Stellar cluster masses and sizes for observed CNSFRs. Figure 25: Histograms of the distribution of dynamical masses for outer and inner ring HII regions, in green and purple respectively. The first part of this work concerns the properties of the ionising gas of the CNSFRs, derived from the emission lines measured in their spectra: the H\(\alpha\) and H\(\beta\) Balmer lines; the [OIII]\(\lambda\lambda\) 4959,5007 Å, [NII]\(\lambda\lambda\) 6548,84 Å, [SII]\(\lambda\lambda\) 6716,31 Å, [ArIII]\(\lambda\) 7136 Å and [SIII]\(\lambda\) 9069 Å forbidden lines; and also the weaker lines of [SIII]\(\lambda\) 6312 Å, HeI\(\lambda\) 6678 Å and [OII]\(\lambda\lambda\) 7320,30 Å. For each observed ring HII region we have derived: (1) the number of hydrogen ionising photons per second, Q(H\({}_{0}\)); (2) the electron density of the emitting gas per cubic centimetre, n\({}_{e}\); (3) the ionisation parameter, u; (4) the corresponding angular radius, \(\phi\); (5) the filling factor, \(\epsilon\); and (6) the mass of ionised hydrogen in solar masses, M(HII). The inner ring regions seem to be more compact, showing smaller sizes and higher filling factors and electron densities than the outer ring ones. 
The outer ring regions in NGC 7469 are more luminous than those in NGC 7742, the first galaxy studied in this series. They also have higher electron densities, lower filling factors, lower ionisation parameters and higher masses of ionised hydrogen, consistent with their larger HII region sizes. Although their model-predicted angular sizes, \(\phi\), are found to be smaller than the measured ones, implying that their behaviour does not correspond to a radiation or matter bounded nebula, the larger radii measured in the HII regions of this galaxy can be explained by the stellar winds produced by the WR stars present in all the observed HII regions, in numbers up to a few hundred for the outer ring and as large as \(\sim\) 1500 in the inner one. We have used sulphur as a tracer for the chemical abundances of the selected HII regions, with the temperature sensitive [SIII]\(\lambda\) 6312 Å emission line having been measured in \(\sim\)48 % of the outer ring regions and in 3 out of the 5 regions within the inner ring, which allows deriving abundances by the direct method. For the rest of the regions the empirical calibration S\({}_{23}\) has been used. Outer ring regions show sulphur abundances, 12+log(S/H), ranging from 6.72 to 7.34, with the inner ring regions showing similar values within the errors. Ionising and photometric cluster masses have been estimated from the number of Lyman continuum photons and the absolute r-magnitudes respectively, using SSP models. The inner ring regions show masses more than an order of magnitude larger than the outer ring ones. Additionally, the photometric masses of the outer ring regions follow the ionising star masses in a constant proportion of about 3, although this relation seems to be lost in the inner ring regions. In order to infer stellar population properties from the stellar absorption lines, we have calculated a dilution factor for the CaII and MgI lines. The outer ring regions show MgI dilutions consistent with the contribution of a nebular continuum, but the CaT lines look almost undiluted, pointing to a larger contribution by red supergiant stars. In the inner regions the behaviour of the CaT is similar, and we have found larger dilutions in the MgI lines, which could be due to an additional continuum originating in the AGN. Stellar velocity dispersions have been derived from the measured CaT absorption lines using the cross-correlation technique proposed by Tonry & Davis (1979). The dynamical masses have been derived from the measured CaT velocity dispersions and the stellar sizes of each cluster, assuming virialisation. Contrary to the case of the ionising and photometric masses, the clusters in both rings show similar masses, with median values of 1.07 \(\times\) 10\({}^{9}\) M\({}_{\odot}\) and 6.58 \(\times\) 10\({}^{8}\) M\({}_{\odot}\) for inner and outer ring regions respectively. The ratio between the ionising and the dynamical masses takes mean values of 3.8 % for the inner ring regions and 0.3 % for the outer ring ones. The evolutionary state of the analysed clusters can be inferred from SSP evolutionary population synthesis models using the EW of the Balmer lines as a function of the continuum colour. Our observed ring regions show mean ages of 5.7 Ma for the outer ring regions and younger ages, between 5.1 and 5.7 Ma, for the inner ring ones according to SSP models. 
These regions can be identified as the young (5-6 Ma) stellar population accounting for most of the IR luminosity in the central region of the galaxy (Diaz-Santos et al. 2007). The detection of carbon Wolf-Rayet star features places the age of the regions between 3.2 and 5.25 Ma, which is compatible with the ages found using SSP models. Finally, the comparison between the characteristics of the inner and outer ring ionising clusters, together with their derived dynamical masses, points to circumnuclear regions close to the active galactic nucleus being more compact and having higher gas density. This suggests that they may be survivors in the hostile environment powered by nuclear activity. ###### Acknowledgements. This research has made use of the services of the ESO Science Archive Facility and NASA's Astrophysics Data System Abstract Service. It is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 60.A-9301(A) and data products created thereof. We have also used observations obtained with the NASA/ESA HST and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This work has been supported by Spanish grants from the former Ministry of Economy, Industry and Competitiveness through the MINECO-FEDER research grant AYA2016-79724-C4-1-P, and the present Ministry of Science and Innovation through the research grant PID2019-107408GB-C42. S.Z. acknowledges support from contract BES-2017-080509, associated with the first of these grants.
2307.12282
Milimili. Collecting Parallel Data via Crowdsourcing
We present a methodology for gathering a parallel corpus through crowdsourcing, which is more cost-effective than hiring professional translators, albeit at the expense of quality. Additionally, we have made available experimental parallel data collected for Chechen-Russian and Fula-English language pairs.
Alexander Antonov
2023-07-23T10:23:00Z
http://arxiv.org/abs/2307.12282v1
# Milimili. Collecting Parallel Data via Crowdsourcing ###### Abstract We present a methodology for gathering a parallel corpus through crowdsourcing, which is more cost-effective than hiring professional translators, albeit at the expense of quality. Additionally, we have made available experimental parallel data collected for Chechen-Russian and Fula-English language pairs. ## 1 Introduction Neural machine translation (NMT) (Vaswani et al., 2017) has made significant advancements in recent years, particularly in enhancing the quality of translations for low-resource languages. However, the relative scarcity of data on the internet makes creating parallel data from scratch quite challenging. Large multilingual datasets like NLLB (Team et al., 2022) and NTREX (Federmann et al., 2022), which have been collected with the help of professional translators, can be quite expensive. In response to this, we conducted an experiment to ascertain whether it is feasible to create affordable parallel data with crowd assistance. We included the Chechen-Russian and Fula-English language pairs in our research--languages diverse enough to allow evaluating different scenarios. All the data is available on GitHub1. Footnote 1: [https://github.com/AlAntonov/milimili](https://github.com/AlAntonov/milimili) ## 2 Crowd Settings All the experiments were conducted on the Toloka crowd platform2 (Pavlichenko et al., 2021), where anyone can register and carry out various data labeling tasks. Footnote 2: [https://toloka.ai](https://toloka.ai) In our case, an additional prerequisite for task participation was knowledge of the two languages. We did not create specific language tests; it was sufficient for the user to indicate their language proficiency in the settings. The complete pipeline consisted of two tasks: translation and quality control. Additionally, automatic control was employed. ### Translation Task The translation task was straightforward: we displayed a source sentence and requested a translation into the target language. The instruction was as simple as "Translate the sentence from SRCLANG to TGTLANG". Only workers who had indicated in their settings that they were proficient in both SRCLANG and TGTLANG were allowed to undertake this task. In total, we had four translation tasks: from Chechen to Russian, from Russian to Chechen, from Fula to English, and from English to Fula. ### Quality Control Task Translated sentences were sent to the quality control task, where we asked workers to evaluate whether the translation of the source sentence was good or bad. To ensure the accuracy of the evaluation, we gave the same translation to three workers and selected the majority vote. For the quality control task, we created an additional exam, so performers had to prove their language proficiency. However, the exam was fairly easy. It consisted of ten sentence pairs. Five pairs were correct translations, mined from various parallel resources. The other five pairs were incorrect translations: two were translations of different sentences, one was a translation into a different language, and two were word-for-word translations collected from an online dictionary, specifically Glosbe3. Footnote 3: [https://glosbe.com](https://glosbe.com) In total, we had the same four tasks for the different language directions. ### Automatic Control Certain bad translations could be detected automatically, so we didn't need to send all translated sentences for human verification. 
To verify that the translation was in the appropriate language, we used a language detector, specifically Fasttext (Joulin et al., 2016). We also conducted a length-ratio check between the source and translated sentences: if their lengths differed by more than a factor of three, we rejected the translation. The entire pipeline was prepared so that it could run automatically; however, we carried out manual verification from time to time to check the quality. ### Crowd Quality We utilized basic crowdsourcing tools to detect poor quality, such as excessively fast responses. Generally, our method is not recommended for language pairs that are already represented in machine translation systems, as people began using machine translation to complete the task. Furthermore, we want to highlight that the quality of the translated sentences cannot be compared with professional translation. Therefore, we recommend using crowd-translated data only in training sets, excluding test sets. ### Price Addressing the problem of low-resource languages can entail significant costs. Consider a scenario where there are approximately 7,000 languages, each needing a million parallel sentences. This totals 7 billion sentences. If we estimate the translation of one sentence at around one dollar, the total expense would rise to 7 billion dollars, clearly quite a substantial amount. We paid $0.02 for each translation and $0.01 for every set of 10 verifications. In fact, the total expenditure for all the experiments discussed here amounted to approximately $100. ## 3 Data As mentioned above, we used source sentences in four languages. We decided to utilize two different language pairs to better understand scalability. One language in each pair was a low-resource language, and the other was a high-resource language. It is worth noting that the low-resource language could potentially be linked to other languages through the high-resource language. ### Chechen We took all the source sentences in Chechen from Wikipedia4. From the raw data, we discarded the template sentences with frequently repeated words. Additionally, we filtered out sentences in other languages using Fasttext (Joulin et al., 2016). Footnote 4: [https://ce.wikipedia.org](https://ce.wikipedia.org) For the Russian language, we used data from the WMT21 test set (Akhbardeh et al., 2021), which was presented at the News Task. ### Fula All the source sentences in Fula were taken from Wikipedia5, in the same way as for the Chechen language. It is worth noting that even monolingual data for Fula was quite difficult to find on the Internet. One more important point about Fula is that it has several dialects which differ significantly from each other. We attempted to work with Nigerian Fulfulde but did not impose strict restrictions. Footnote 5: [https://ff.wikipedia.org](https://ff.wikipedia.org) ### English For English, we also used the WMT21 News Task test set (Akhbardeh et al., 2021). ## 4 Results and Discussion ### Results Table 1 shows the main results of the experiment. The first row reports how many sentences were initially translated. The second row shows how many of them were verified; in the case of Fula, this quantity is lower than the number of translated sentences, since we did not have enough users who had passed our exam. The final row tells us how many sentences were marked as good and hence were included in the corpus. All the data is available on GitHub. 
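The automatic control step can be reproduced in a few lines. Below is a minimal sketch assuming the published fastText language-identification model (lid.176.ftz) and the factor-of-three length threshold described above; the example sentence pair is illustrative.

```python
import fasttext

# Pre-trained fastText language ID model, downloadable from fasttext.cc
model = fasttext.load_model("lid.176.ftz")

def auto_check(source: str, translation: str, tgt_lang: str) -> bool:
    """Return True if the translation passes both automatic checks."""
    # 1. Language check: the predicted label must match the target language.
    labels, _ = model.predict(translation.replace("\n", " "))
    if labels[0] != f"__label__{tgt_lang}":
        return False
    # 2. Length-ratio check: reject if lengths differ by more than a factor of 3.
    ratio = len(translation) / max(len(source), 1)
    return 1 / 3 <= ratio <= 3

print(auto_check("Мы собираем параллельные данные", "We collect parallel data", "en"))
```

Translations that pass both checks can then be routed to the human quality control task, while failures are rejected outright, saving verification budget.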
### Scaling to Other Languages Much to our chagrin, we must acknowledge that scaling to other languages is exceedingly difficult due to the scarcity of users who are bilingual and willing to contribute to a crowdsourcing platform. Although we can confidently state that the proposed method is able to gather sufficient data for training for the Chechen-Russian pair, the overall experiment cannot be deemed a success, because we encountered a shortage of contributors for the Fula-English pair. Other language pairs faced similar challenges. We believe that the future expansion of crowdsourcing platforms could help ameliorate this issue. ### Corpus Quality At the moment, we do not have any proof that the quality of the corpus is good enough to be used for training machine translation models. However, we believe that it is at least as good as that of automatically mined parallel corpora.
2303.00371
AI-Based Multi-Object Relative State Estimation with Self-Calibration Capabilities
The capability to extract task specific, semantic information from raw sensory data is a crucial requirement for many applications of mobile robotics. Autonomous inspection of critical infrastructure with Unmanned Aerial Vehicles (UAVs), for example, requires precise navigation relative to the structure that is to be inspected. Recently, Artificial Intelligence (AI)-based methods have been shown to excel at extracting semantic information such as 6 degree-of-freedom (6-DoF) poses of objects from images. In this paper, we propose a method combining a state-of-the-art AI-based pose estimator for objects in camera images with data from an inertial measurement unit (IMU) for 6-DoF multi-object relative state estimation of a mobile robot. The AI-based pose estimator detects multiple objects of interest in camera images along with their relative poses. These measurements are fused with IMU data in a state-of-the-art sensor fusion framework. We illustrate the feasibility of our proposed method with real world experiments for different trajectories and number of arbitrarily placed objects. We show that the results can be reliably reproduced due to the self-calibrating capabilities of our approach.
Thomas Jantos, Christian Brommer, Eren Allak, Stephan Weiss, Jan Steinbrener
2023-03-01T09:52:15Z
http://arxiv.org/abs/2303.00371v1
# AI-Based Multi-Object Relative State Estimation with Self-Calibration Capabilities

###### Abstract

The capability to extract task-specific, semantic information from raw sensory data is a crucial requirement for many applications of mobile robotics. Autonomous inspection of critical infrastructure with Unmanned Aerial Vehicles (UAVs), for example, requires precise navigation relative to the structure that is to be inspected. Recently, Artificial Intelligence (AI)-based methods have been shown to excel at extracting semantic information such as 6 degree-of-freedom (6-DoF) poses of objects from images. In this paper, we propose a method combining a state-of-the-art AI-based pose estimator for objects in camera images with data from an inertial measurement unit (IMU) for 6-DoF multi-object relative state estimation of a mobile robot. The AI-based pose estimator detects multiple objects of interest in camera images along with their relative poses. These measurements are fused with IMU data in a state-of-the-art sensor fusion framework. We illustrate the feasibility of our proposed method with real world experiments for different trajectories and numbers of arbitrarily placed objects. We show that the results can be reliably reproduced due to the self-calibrating capabilities of our approach.

## I Introduction

Mobile robots, such as unmanned aerial vehicles (UAVs), rely on the information of their on-board sensors to autonomously navigate the world. Semantic information, i.e., the higher-level meaning of sensor data, can improve a robot's ability to navigate its surroundings and allows for more complicated tasks [1]. In semantic navigation, the robot moves depending on context or task, in many cases with respect to objects of interest in the scene. Such tasks include infrastructure inspection [2] or object tracking [3]. While the goal for the latter is to keep the moving object in the field of view of the camera, infrastructure inspection requires accurate positioning of the robot with respect to a typically static object of interest. Semantic information extracted from the robot's sensor data, namely the detection of the object of interest and its pose relative to the robot, are important elements for achieving this task. For example, monitoring power pole insulators for possible damages requires a UAV to fly around the desired insulator and take high resolution images from specific positions to allow for detection of damage or changes over time. Current autonomous mission execution is typically based on global navigation satellite systems (GNSS) for localization of the UAV. GNSS always provides a global position and not a relative position with respect to an object of interest. Moreover, the accuracy is too low for precise, centimeter-range navigation, and GNSS is prone to signal loss in proximity to large structures. In such cases, other sensor modalities are often considered, e.g., visual-inertial odometry (VIO) [4]. With VIO, a local pose can be estimated by combining the movement of geometrical features (edges, corners) in monocular camera images with data from an inertial measurement unit (IMU). Classical, feature-extraction-based algorithms are not well suited for semantic navigation, as they rely on raw features that do not provide information about any objects in the scene, and they struggle with fast or slow motion [5].
Recent advancements in artificial intelligence (AI) led to a breakthrough in the extraction of semantic information from raw sensor measurements, like path detection using camera images [6], object recognition with laser scanners [7], semantic segmentation for scene understanding [8], and recently, 6 degree-of-freedom (6-DoF) pose estimation of objects for robotic grasping [9]. Furthermore, the availability of AI-capable edge computing devices enables the usage of such methods on mobile robots. In this paper, we investigate the suitability of AI-based pose estimation for full 6-DoF, object relative state estimation for mobile robotics. We consider a minimal sensor configuration consisting of a single monocular camera and an IMU, in line with the size, weight and computational power constraints of mobile robotic platforms such as UAVs. We utilize an AI-based pose estimator to detect, classify and estimate the 6-DoF poses of objects of interest contained in each camera image, and then fuse the information with IMU measurements in a state-of-the-art sensor fusion framework to infer the 6-DoF object-relative pose of the robot. A schematic overview of our approach is presented in Fig. 1.

Fig. 1: Visualization of the coordinate frames in this work. We estimate the state of a fixed rigid body consisting of IMU \(I\) and camera \(C\) relative to up to \(N\) different objects \(O_{k}\) with respect to a fixed but arbitrary navigation world \(W\). In addition to the core states (red), we also estimate the calibration between IMU and camera (blue), as well as the pose of the object frames with respect to the world (blue). Our pose sensor consists of AI-based 6-DoF relative pose measurements between camera and objects (green).

Our main contributions can be summarized as follows:

* Extracting semantic information from images with AI and fusing this relative pose information with IMU data for accurate, 6-DoF, object relative state estimation.
* Formulating a filter-based method to estimate the state of the mobile robot and the pose of multiple, different objects based on 6-DoF relative pose measurements.
* Providing a self-calibrating formulation of the filter that does not require any assumptions about the global or relative positions of the different objects in a scene.
* Validating the proposed approach with several real world experiments using objects of a popular 6-DoF object pose challenge data set, showing that our method works for different trajectories and different numbers of objects with reproducible performance.

The remainder of the paper is organized as follows. In Section II, we summarize the related work. In Section III, we present how we integrate object relative pose measurements into a state estimation framework. In Section IV, the experiments and the corresponding results are discussed. Finally, the paper is concluded in Section V.

## II Related Work

For state estimation in mobile robotics, typically IMU data and one or more pose sensors such as GNSS are fused together. GNSS provides global position information but not 3D orientation information. In the absence of GNSS signals, VIO, the combination of monocular camera and IMU data, can estimate the pose of a robot by triangulating the position of the camera given geometrical features from an image and estimating the remaining scale factor with inertial data [4, 10]. Fusing multiple sensors yields a more robust and reliable estimate of the robot's state.
There exist two main approaches to sensor fusion: filter-based, recursive methods and optimization-based methods. The latter can yield more accurate state estimates but is computationally more demanding due to optimizing across several sensor measurements [11]. In comparison, filtering-based methods, such as Extended Kalman Filters (EKFs) [12, 13], are computationally more efficient and thus well-suited for mobile robotics. GNSS and classical VIO do not provide object relative pose measurements and thus are not suitable for object relative state estimation. However, image-based 6-DoF relative object pose estimation methods can be utilized as a pose sensor in state estimation frameworks. There exist classical approaches and AI methods based on deep learning. Classical approaches are either template-based, where the object pose is determined by finding a matching template for the current image [14], or feature-based, where keypoints are extracted from the image and then matched to the 3D object model [15]. On the other hand, deep learning-based approaches are mostly end-to-end learned methods, where the 6-DoF pose is directly estimated from the input image using convolutional neural networks (CNNs). Deep learning-based methods can be divided even further depending on the amount of additional information used. [9, 16] take a single RGB image as input to their network and employ symmetry-aware losses during training that make use of 3D object model information. 3D object models can also be provided as an additional input to the network [17], utilized for refining an initial pose estimate [18, 19, 20], or used for matching keypoints regressed by the network [21]. Other forms of additional information consist of taking multiple images [22, 23] or depth maps [24, 25]. Recently, we have proposed PoET [26], a 6-DoF multi-object pose estimation framework that achieves state-of-the-art results on benchmark datasets, only takes a single RGB image as input, and does not require any additional information during training or inference. An alternative to object relative state estimation is simultaneous localization and mapping (SLAM) on an object level. [27] uses depth images to extract 6-DoF object pose information. Similar to us, [28] use an AI-based pose estimator to predict 6-DoF relative object poses from images. Both approaches fuse the 6-DoF relative pose information from multiple viewpoints together and combine it with graph optimization to estimate the pose of the camera and objects with respect to a map. However, they do not use any other sensors, such as an IMU, in their approach. [29] combines IMU measurements, geometric features from images, and 6-DoF object poses in a SLAM approach. Graph optimization still needs to be performed for a graph containing all object poses. In general, the requirement for a map and optimization in SLAM results in a higher computational load for the mobile robot. Our approach allows for object relative state estimation without this requirement. Object relative state estimation for mobile robotics has been shown in [30], where a UAV localizes itself with respect to cylinder-shaped infrastructure by extracting geometrical features from images and assuming a known radius. Similarly, [31] used a color-based ellipse-detection algorithm to first detect the object of interest in the image and then used the knowledge about object size, visual appearance and camera parameters to calculate the relative pose to the object.
Meanwhile, [32] investigated different classical 6-DoF object pose estimation approaches for object-relative state estimation. They also investigated the use of machine learning to detect the presence of objects by training simple classifiers on classical features extracted from images. In contrast to that, we propose here a fully AI-based method to extract semantic information from camera images. We do not need to define a geometric model, keypoints or templates to map object appearances in images to relative 6-DoF poses. Moreover, AI-based models are not limited to specific geometric object shapes and remove the need for handcrafted features. In our previous work [26], we introduced PoET for 6-DoF pose estimation of objects in RGB images using state-of-the-art AI methods. There, we mainly focused on the definition, the training, a thorough ablation study, and a comparison to other deep learning-based methods on benchmark datasets for 6-DoF multi-object pose estimation. In this work, we present a detailed investigation of the suitability of our AI-based object pose estimator as a pose sensor for 6-DoF object-relative state estimation of a mobile robot, using a state-of-the-art sensor fusion framework and multiple real world experiments.

## III Method

In this section, we present the design of our approach. First, we explain the notation used for the measurement equations and transformations of coordinate frames. Second, we reason about the choice of frameworks for 6-DoF pose estimation and state estimation. Finally, we describe how the estimated 6-DoF poses of several known objects can be combined to estimate the 6-DoF pose of the robot. This includes a detailed description of how our choice of sensor fusion algorithm is extended to include 6-DoF pose measurements of each individual object.

### _Notation_

Throughout this paper we use the following notation: given three coordinate frames \(A\), \(B\) and \(C\), the transformation \({}_{A}\mathbf{T}_{BC}\) defines frame \(C\) with respect to frame \(B\), expressed in frame \(A\). If the left subscript \(A\) is omitted, the transformation is expressed in frame \(B\). Furthermore, the transformation \({}_{A}\mathbf{T}_{AB}\) can be split into two parts, namely \({}_{A}\mathbf{p}_{AB}\) and \(\mathbf{R}_{AB}\), which describe the translation and rotation, respectively. Alternatively, the rotation can also be expressed by a quaternion \(\mathbf{q}_{AB}\). Each quaternion \(\mathbf{q}\) can be represented by \(\mathbf{q}=[\mathbf{q}_{\mathbf{v}}\ q_{w}]^{T}=[q_{x}\ q_{y}\ q_{z}\ q_{w}]^{T}\). The quaternion multiplication is represented by \(\otimes\). \(\mathbf{I}_{3}\) and \(\mathbf{0}_{3}\) refer to the identity and the null matrix in \(\mathbb{R}^{3\times 3}\), respectively. \([\omega]_{\times}\) is the skew-symmetric operator as defined in [33].

### _Pose and State Estimation Frameworks_

Mobile robots, in particular UAVs, are subject to payload constraints, which impose not only limitations on the size and number of sensors a robot can carry, but also on the computational power available for data processing. Hence, the necessity arises for efficient and computationally light algorithms. Therefore, we chose our object pose estimation framework PoET [26] as a 6-DoF pose sensor, as it only uses RGB images and does not rely on any depth information or 3D object models, removing the need for additional hardware components and reducing the computational load by not having to process 3D models.
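To make the notation above concrete, the following is a minimal sketch of the skew-symmetric operator \([\omega]_{\times}\) and the quaternion product \(\otimes\) for quaternions stored as \([q_{x}\ q_{y}\ q_{z}\ q_{w}]\); these helpers are illustrative and are not taken from the MaRS or PoET codebases:

```python
import numpy as np

def skew(w: np.ndarray) -> np.ndarray:
    """Skew-symmetric matrix [w]_x such that skew(w) @ v == np.cross(w, v)."""
    return np.array([
        [0.0, -w[2], w[1]],
        [w[2], 0.0, -w[0]],
        [-w[1], w[0], 0.0],
    ])

def quat_mul(q: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Hamilton product q (x) p for quaternions in [qx, qy, qz, qw] order."""
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    vec = qw * pv + pw * qv + np.cross(qv, pv)
    w = qw * pw - np.dot(qv, pv)
    return np.concatenate([vec, [w]])
```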
In a first step, PoET detects all objects it was trained on in an image and predicts their classes. Afterwards, the predicted bounding boxes and multi-scale feature maps are fed to a transformer architecture to predict the relative, up-to-scale 6-DoF pose between the camera and each object. The predicted rotation and translation are unique for non-symmetric objects. For objects with one or more symmetry axes, the rotation or translation for some object poses becomes ambiguous, with more than one possible solution. The obvious negative effects of this ambiguity on the pose estimation of the robot can be minimized by considering multiple objects in a heterogeneous configuration and fusing the individual measurements in a proper sensor fusion framework. This mimics an inspection workflow, where typically several distinct parts of interest of the structure to be inspected are visible at the same time. For the sensor fusion framework, we use MaRS [12] for multi-sensor fusion and state estimation, due to it being lightweight and computationally efficient, as it was developed specifically with mobile robotics in mind. MaRS was designed for modularity and separates the propagation of the core state variables based on inertial data from the state updates based on the measurements of the individual sensors. It also uses abstract sensor classes that are type agnostic. This allows for straightforward integration of new sensor modules. For our method, we define a multi-pose sensor, where a single measurement consists of a single RGB image. From each image, we then extract the 6-DoF relative poses of all detected objects with PoET and use them for the EKF update step as described below.

### _EKF State and Update_

As reference frame for the mobile robotic system, we chose the frame of its IMU (\(I\)). Thus, our goal is to estimate the pose of the IMU (\(I\)) with respect to the world (\(W\)) by measuring the 6-DoF relative poses between the camera (\(C\)) and a set of objects (\(O_{k}\)). The different coordinate frames are visualized in Fig. 1. As mentioned earlier, the relative poses of the objects with respect to the camera are extracted from the image by our AI-based pose estimator dubbed PoET. PoET will only consider objects that it was trained for. Given a single RGB image, a 6-DoF pose \(\mathbf{T}_{CO_{k}}\) is predicted for each detected object of interest, and the assignment of the predicted pose to an object is based on the predicted class. For details about the architectural choices in PoET, we refer the reader to [26]. Depending on the total number of objects \(N\) in a scene, the full state vector \(\mathbf{X}\) is then defined as:

\[\mathbf{X}=[\mathbf{p}_{WI}^{T},\mathbf{v}_{WI}^{T},\mathbf{q}_{WI}^{T},\mathbf{b}_{\omega}^{T},\mathbf{b}_{a}^{T},\mathbf{p}_{IC}^{T},\mathbf{q}_{IC}^{T},\mathbf{p}_{O_{0}W}^{T},\mathbf{q}_{O_{0}W}^{T},\ldots,\mathbf{p}_{O_{N}W}^{T},\mathbf{q}_{O_{N}W}^{T}]^{T} \tag{1}\]

The core states necessary for state propagation are the position \(\mathbf{p}_{WI}\) of the IMU, its velocity \(\mathbf{v}_{WI}\) and its orientation \(\mathbf{q}_{WI}\), as well as the gyroscopic bias \(\mathbf{b}_{\omega}\) and the accelerometer bias \(\mathbf{b}_{a}\).
The pose and velocity dynamics are given as [4]

\[\dot{\mathbf{p}}_{WI} =\mathbf{v}_{WI} \tag{2}\]
\[\dot{\mathbf{v}}_{WI} =\mathbf{R}_{WI}\,(\mathbf{a}_{m}-\mathbf{b}_{a}-\mathbf{n}_{a})-\mathbf{g} \tag{3}\]
\[\dot{\mathbf{q}}_{WI} =\frac{1}{2}\Omega(\omega_{m}-\mathbf{b}_{\omega}-\mathbf{n}_{\omega})\,\mathbf{q}_{WI} \tag{4}\]

where \(\mathbf{a}_{m}\) is the measured acceleration in the IMU frame, \(\mathbf{n}_{a}\) is the accelerometer noise parameter, \(\mathbf{g}\) is the gravity vector in \(W\), \(\omega_{m}\) is the measured angular velocity in the IMU frame, \(\mathbf{n}_{\omega}\) is the gyroscopic noise parameter, and \(\Omega(\omega)\) is the quaternion multiplication matrix of \(\omega\). The IMU biases are modeled as random walks. Furthermore, we estimate the calibration between the IMU and the camera, given by \(\mathbf{p}_{IC}\) and \(\mathbf{q}_{IC}\). Due to implementation reasons we assume the number of objects in a scene to be known a priori, but neither the global poses of the objects nor the relative poses between objects are known. Therefore, we additionally estimate an object-world which describes the transformation \((\mathbf{p}_{O_{k}W},\mathbf{q}_{O_{k}W})\) between the object frame and the navigation world. Both the camera-IMU extrinsics \(\mathbf{p}_{IC},\mathbf{q}_{IC}\) as well as the object poses \(\mathbf{p}_{O_{k}W},\mathbf{q}_{O_{k}W}\) are modeled to remain constant over time. For each image, every measured relative pose is treated as an independent measurement with which the pose of the camera can be estimated. To calculate the required Jacobians for the update step, we consider the inverted relative pose measurements \(\mathbf{T}_{O_{k}C}\), i.e., the camera pose relative to the object frame. Based on the relationship between the different coordinate frames and the independent relative position \(z_{\mathbf{p}_{O_{k}}}\) and orientation \(z_{\mathbf{q}_{O_{k}}}\) measurements, the residuals for position \(\tilde{z}_{\mathbf{p}_{O_{k}}}\) and orientation \(\tilde{z}_{\mathbf{R}_{O_{k}}}\) can be written as:

\[\tilde{z}_{\mathbf{p}_{O_{k}}} =z_{\mathbf{p}_{O_{k}}}-\hat{z}_{\mathbf{p}_{O_{k}}}=\mathbf{p}_{O_{k}C}-(\mathbf{p}_{O_{k}W}+\mathbf{R}_{O_{k}W}(\mathbf{p}_{WI}+\mathbf{R}_{WI}\,\mathbf{p}_{IC})) \tag{5}\]
\[\tilde{z}_{\mathbf{R}_{O_{k}}} =2\,\frac{\tilde{\mathbf{z}}_{\mathbf{q}_{O_{k}},\mathbf{v}}}{\tilde{z}_{\mathbf{q}_{O_{k}},w}} \tag{6}\]
\[\tilde{z}_{\mathbf{q}_{O_{k}}} =\hat{z}_{\mathbf{q}_{O_{k}}}^{-1}\otimes z_{\mathbf{q}_{O_{k}}}=(\mathbf{q}_{O_{k}W}\otimes\mathbf{q}_{WI}\otimes\mathbf{q}_{IC})^{-1}\otimes\mathbf{q}_{O_{k}C} \tag{7}\]
\[\tilde{z}_{O_{k}} =\begin{bmatrix}\tilde{z}_{\mathbf{p}_{O_{k}}}\\ \tilde{z}_{\mathbf{R}_{O_{k}}}\end{bmatrix} \tag{8}\]

Given these residuals and depending on a single pose measurement from object \(O_{k}\), the Jacobians for the position (\(H_{\mathbf{p}}\)) and orientation (\(H_{\mathbf{R}}\)) with respect to the states are [33]:

\[H_{\mathbf{p},\mathbf{p}_{WI}} =\mathbf{R}_{O_{k}W} \tag{9}\]
\[H_{\mathbf{p},\mathbf{R}_{WI}} =-\mathbf{R}_{O_{k}W}\mathbf{R}_{WI}[\mathbf{p}_{IC}]_{\times} \tag{10}\]
\[H_{\mathbf{p},\mathbf{p}_{IC}} =\mathbf{R}_{O_{k}W}\mathbf{R}_{WI} \tag{11}\]
\[H_{\mathbf{p},\mathbf{p}_{O_{k}W}} =\mathbf{I}_{3} \tag{12}\]
\[H_{\mathbf{p},\mathbf{R}_{O_{k}W}} =-\mathbf{R}_{O_{k}W}[\mathbf{p}_{WI}]_{\times}-\mathbf{R}_{O_{k}W}[\mathbf{R}_{WI}\,\mathbf{p}_{IC}]_{\times} \tag{13}\]
\[H_{\mathbf{R},\mathbf{R}_{WI}} =\mathbf{R}_{IC}^{T} \tag{14}\]
\[H_{\mathbf{R},\mathbf{R}_{IC}} =\mathbf{I}_{3} \tag{15}\]
\[H_{\mathbf{R},\mathbf{R}_{O_{k}W}} =\mathbf{R}_{IC}^{T}\mathbf{R}_{WI}^{T} \tag{16}\]
where, e.g., \(H_{\mathbf{p},\mathbf{p}_{WI}}\) only considers the part of the residual \(\tilde{z}_{\mathbf{p}_{O_{k}}}\) that depends on the state \(\mathbf{p}_{WI}\). The rest of the Jacobians are equal to \(\mathbf{0}_{3}\). As relative pose measurements for different objects are independent of each other, the Jacobians for the other (\(i\neq k\)) object-world states, i.e., \(H_{\mathbf{p},\mathbf{p}_{O_{i}W}}\), \(H_{\mathbf{p},\mathbf{R}_{O_{i}W}}\), \(H_{\mathbf{R},\mathbf{p}_{O_{i}W}}\), and \(H_{\mathbf{R},\mathbf{R}_{O_{i}W}}\), are all equal to \(\mathbf{0}_{3}\). For a single object \(O_{k}\), the Jacobian is given by stacking the individual components:

\[H_{\mathbf{p},O_{k}} =[H_{\mathbf{p},\mathbf{p}_{WI}}\ H_{\mathbf{p},\mathbf{v}_{WI}}\ H_{\mathbf{p},\mathbf{R}_{WI}}\ H_{\mathbf{p},\mathbf{b}_{\omega}}\ H_{\mathbf{p},\mathbf{b}_{a}}\ H_{\mathbf{p},\mathbf{p}_{IC}}\ H_{\mathbf{p},\mathbf{R}_{IC}}\ H_{\mathbf{p},\mathbf{p}_{O_{0}W}}\ H_{\mathbf{p},\mathbf{R}_{O_{0}W}}\ \ldots\ H_{\mathbf{p},\mathbf{p}_{O_{N}W}}\ H_{\mathbf{p},\mathbf{R}_{O_{N}W}}] \tag{17}\]
\[H_{\mathbf{R},O_{k}} =[H_{\mathbf{R},\mathbf{p}_{WI}}\ H_{\mathbf{R},\mathbf{v}_{WI}}\ H_{\mathbf{R},\mathbf{R}_{WI}}\ H_{\mathbf{R},\mathbf{b}_{\omega}}\ H_{\mathbf{R},\mathbf{b}_{a}}\ H_{\mathbf{R},\mathbf{p}_{IC}}\ H_{\mathbf{R},\mathbf{R}_{IC}}\ H_{\mathbf{R},\mathbf{p}_{O_{0}W}}\ H_{\mathbf{R},\mathbf{R}_{O_{0}W}}\ \ldots\ H_{\mathbf{R},\mathbf{p}_{O_{N}W}}\ H_{\mathbf{R},\mathbf{R}_{O_{N}W}}] \tag{18}\]
\[H_{O_{k}} =\begin{bmatrix}H_{\mathbf{p},O_{k}}\\ H_{\mathbf{R},O_{k}}\end{bmatrix} \tag{19}\]

Depending on the current image, the final residual \(z\) and observation matrix \(\mathbf{H}\) for the state update are determined by vertically stacking the residuals and Jacobians from Eq. (8) and Eq. (19), respectively, for each object that was detected for the current update step. Similar to hardware sensors, our AI-based pose sensor might return faulty or inaccurate measurements. In a similar fashion as described in [34], we conduct a \(\chi^{2}\) test based on the EKF innovation \(\mathbf{S}\) and the residual to detect outlier measurements. This test is applied to the measurement of each object individually. Outlier measurements for object \(O_{k}\) are rejected, and the final residual and Jacobian have to be rebuilt by masking the corresponding rows. The correction is then calculated based on this final total residual and the associated Jacobian. In the update step, measurement uncertainties for each measurement can be considered. For the present work, these uncertainties have been fixed to 10 cm and 20 degrees for the position and orientation measurement of each object, respectively. These numbers were determined based on the standard deviation of PoET across a video sequence, as reported in [26]. The proper initialization of the individual frames is an important aspect to consider. At the beginning of the recording, we initialize an arbitrary but fixed navigation world \(W\). Without loss of generality, the IMU frame is initialized at the origin of \(W\). The initial extrinsic calibration between the IMU and camera is determined through visual-inertial calibration [35]. Each object-world is initialized when the corresponding object is seen by the camera for the first time.
The object frame is then placed with respect to the world frame by taking the currently estimated pose of the camera and the relative pose measurement:

\[\mathbf{R}_{O_{k}W} =\mathbf{R}_{O_{k}C}\mathbf{R}_{IC}^{T}\mathbf{R}_{WI}^{T} \tag{20}\]
\[\mathbf{p}_{O_{k}W} =\mathbf{p}_{O_{k}C}-\mathbf{R}_{O_{k}W}(\mathbf{R}_{WI}\,\mathbf{p}_{IC}+\mathbf{p}_{WI}) \tag{21}\]

In our problem formulation, the robot's pose \(I\) is estimated relative to a set of object-worlds \(O_{k}\) through relative pose measurements. As the robot's pose is relative to a world frame and the measurements are relative to the corresponding object frames, the object-worlds can be placed freely with respect to the world frame, which causes observability issues. By fixing the state of one object-world reference frame, the system is rendered observable. This fixed object, from now on called the main object \(O_{m}\), serves as the anchor point for the object-relative 6-DoF state estimation. The position of the main object's frame with respect to the navigation world is not changed, i.e., the corresponding Jacobians \(H_{\mathbf{p},\mathbf{p}_{O_{m}W}}\) and \(H_{\mathbf{R},\mathbf{p}_{O_{m}W}}\) are set to \(\mathbf{0}_{3}\). Measurements in which the main object is not visible in the image are directly rejected; otherwise, estimating the object-world of additional objects while the anchor is not visible leads to ambiguous updates. The propagation step is performed at the rate of the IMU sensor readings, while the update step happens with the frequency of the camera images.

## IV Experiments & Results

In this section, we present the experiments conducted and discuss the results in detail. We trained PoET on the YCB-V dataset [9], a benchmark dataset for 6-DoF pose estimation, as described in [26], and chose a subset of objects to serve as objects of interest in our experiments. The images and IMU data were recorded using an Intel Realsense D435i with an RGB resolution of 1280x720 at 30 FPS. After undistorting the images, they were cropped to a resolution of 640x480, which is the standard resolution of the YCB-V dataset. We recorded our own real data by placing the objects in our motion capture room and moving around the objects with the camera while tracking the body of the camera. This mimics the inspection of a set of objects of interest with a mobile robotic platform with 6-DoFs. An example object configuration and image can be found in Fig. 2. Because we do not record any information regarding the global position of the objects in the room, the trajectory derived from the object-relative state estimation has to be aligned with the ground truth trajectory to calculate the error metrics. It is important to note the differences between the benchmark data and our own real data. The camera which was used to record the YCB-V dataset has a different recording resolution, field of view, and focal point than the camera used during our experiments. In addition, some real world objects had a slightly different appearance (size and coloring) than the ones used for the data set that PoET was trained with. The results reported here have been obtained with the YCB-V-trained model of PoET; we did not perform any retraining or fine-tuning of the model to adapt to these differences. We expect that estimating individual measurement uncertainties for each object and measurement would lead to better results than working with the fixed values described above. The integration of aleatoric and epistemic uncertainties for the predictions of PoET is subject to future work.
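Returning briefly to Section III, a small numerical sketch of the object-world initialization in Eqs. (20)-(21) is given below, assuming rotations are stored as 3x3 NumPy matrices; the function and variable names are illustrative, not code from the actual filter:

```python
import numpy as np

def init_object_world(R_OkC, p_OkC, R_WI, p_WI, R_IC, p_IC):
    """Place a newly observed object frame O_k relative to the navigation
    world W from the current camera pose estimate and the relative pose
    measurement, following Eqs. (20)-(21)."""
    # Eq. (20): R_OkW = R_OkC * R_IC^T * R_WI^T
    R_OkW = R_OkC @ R_IC.T @ R_WI.T
    # Eq. (21): p_OkW = p_OkC - R_OkW (R_WI p_IC + p_WI)
    p_OkW = p_OkC - R_OkW @ (R_WI @ p_IC + p_WI)
    return R_OkW, p_OkW
```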
To illustrate the reproducibility of our approach, we chose two out of the 8 sequences (sequence 4 and sequence 6) and repeated each sequence 10 times, resulting in similar but not exactly the same trajectories. For each sequence, the RMSE across the whole trajectory for each run, and the average RMSE and standard deviation across all runs, are summarized in Table II and Table III, respectively. For both sequences, we are able to reproduce the performance of our method across 10 independent runs with a low standard deviation. This shows that an AI-based component can be reliably incorporated into the state estimation of a robot. Moreover, we compare the estimated and the ground truth position and orientation across the whole trajectory for an example recording in Fig. 3. This example shows that our approach reliably estimates the position and orientation for the whole duration of the recording. Furthermore, the graphs show that the raw measurements of a single object sometimes lead to a reprojected IMU pose that does not align with the ground truth trajectory. However, by fusing IMU information with pose measurements from multiple objects, our method is able to reliably estimate the trajectory despite outlier measurements. In addition to that, we show in Fig. 4 an example of the self-calibration capabilities of our approach with respect to the object-world states. The object-world is wrongly initialized after it is first observed, due to a possibly noisy measurement. Nonetheless, the state converges after about 5 seconds.

Fig. 3: Comparison of the estimated position and orientation in Euler angles (mars) and the ground truth (gt) for run 8 of sequence 4. The components of the position (x, y, z) and orientation (roll, pitch, yaw) are plotted individually for the whole sequence. Additionally, we compare the reprojected IMU pose given the raw PoET estimates for object 3 (obj). The black arrows enclose a section in which the reprojected IMU pose is out of plotting range. Note that the object was not visible in the camera images between 6.4 s and 8.8 s.

Fig. 4: Visualization of the estimated object-world state (\(\mathbf{p}_{O_{k}W},\mathbf{q}_{O_{k}W}\)) and the corresponding state covariance, represented by the standard deviation, for a non-main object for run 8 of sequence 4. The position is split up into (x, y, z), while the orientation is represented by Euler angles. The states are plotted from the point in time the object is first observed (at about 10 s) until the states converge. At the beginning, the object state is wrongly initialized, likely due to a noisy measurement. However, after about 5 seconds the state converges and the uncertainty becomes minimal.

## V Conclusions

In this paper, we investigated object relative state estimation for mobile robots with an AI-based method to extract semantic information (object class and pose) from single RGB images. We defined a minimal sensor configuration, consisting of an RGB camera and an IMU, and an experimental scenario in which object relative state estimation is required, mimicking the task of inspection and monitoring. We derived and implemented a filter-based solution for full state estimation of a mobile robot given 6-DoF relative pose measurements. Additionally, our method does not require any initial information about the global and relative poses of the objects. By defining object-world states, the coordinate frame of each object is estimated concurrently with respect to a common navigation world, using one of the objects as an anchor point. Our experiments with our own real data showed that our method can be used for state estimation of the mobile robot in different scenarios and that the results can be reliably reproduced. Our results show that AI-based, semantic information from a single sensor is sufficient, in combination with IMU data, for accurate state estimation. Future work will consider incorporating aleatoric and epistemic uncertainties of the AI-based predictions in the sensor fusion framework for improved outlier rejection, as well as the integration of our proposed approach on a real UAV for closed-loop experiments.
2305.04299
Evaluation of P3-type layered oxides as K-ion battery cathodes
Given increasing energy storage demands and limited natural resources of Li, K-ion batteries (KIBs) could be promising next-generation systems having natural abundance, similar chemistry and energy density. Here, we have investigated the P3-type K$_{0.5}$TMO$_2$ (where TM = Ti, V, Cr, Mn, Co, or Ni) systems using density functional theory calculations, as potential positive intercalation electrodes (or cathodes) for KIBs. Specifically, we have identified the ground state configurations, and calculated the average topotactic voltages, electronic structures, on-site magnetic moments, and thermodynamic stabilities of all P3-K$_{0.5}$TMO$_2$ compositions and their corresponding depotassiated P3-TMO$_2$ frameworks. We find that K adopts the honeycomb or zig-zag configuration within each K-layer of all P3 structures considered, irrespective of the TM. In terms of voltages, we find the Co- and Ti-based compositions to exhibit the highest (4.59 V vs. K) and lowest (2.24 V) voltages, respectively, with the TM contributing to the redox behavior upon K (de-)intercalation. We observe all P3-K$_{0.5}$TMO$_2$ to be (meta)stable, and hence experimentally synthesizeable according to our 0 K convex hull calculations, while all depotassiated P3-TMO$_2$ configurations are unstable and may appear during electrochemical cycling. Also, we verified the stability of the prismatic coordination environment of K compared to octahedral coordination at the K$_{0.5}$TMO$_2$ compositions using Rouxel and cationic potential models. Finally, combining our voltage and stability calculations, we find P3-K$_x$CoO$_2$ to be the most promising cathode composition, while P3-K$_x$NiO$_2$ is worth exploring. Our work should contribute to the exploration of strategies and materials required to make practical KIBs.
Pawan Kumar Jha, Sanyam Nitin Totade, Prabeer Barpanda, Gopalakrishnan Sai Gautam
2023-05-07T14:58:49Z
http://arxiv.org/abs/2305.04299v1
## Evaluation of P3-type layered oxides as K-ion battery cathodes

## Abstract

Given increasing energy storage demands and the limited natural resources of Li, K-ion batteries (KIBs) could be promising next-generation systems, having natural abundance, similar chemistry, and comparable energy density. Here, we have investigated the P3-type K\({}_{0.5}\)TMO\({}_{2}\) (where TM = Ti, V, Cr, Mn, Co, or Ni) systems using density functional theory calculations, as potential positive intercalation electrodes (or cathodes) for KIBs. Specifically, we have identified the ground state configurations, and calculated the average topotactic voltages, electronic structures, on-site magnetic moments, and thermodynamic stabilities of all P3-K\({}_{0.5}\)TMO\({}_{2}\) compositions and their corresponding depotassiated P3-TMO\({}_{2}\) frameworks. We find that K adopts the honeycomb or zig-zag configuration within each K-layer of all P3 structures considered, irrespective of the TM. In terms of voltages, we find the Co- and Ti-based compositions to exhibit the highest (4.59 V vs. K) and lowest (2.24 V) voltages, respectively, with the TM contributing to the redox behavior upon K (de-)intercalation. We observe all P3-K\({}_{0.5}\)TMO\({}_{2}\) to be (meta)stable, and hence experimentally synthesizeable according to our 0 K convex hull calculations, while all depotassiated P3-TMO\({}_{2}\) configurations are unstable and may appear during electrochemical cycling. Also, we verified the stability of the prismatic coordination environment of K compared to octahedral coordination at the K\({}_{0.5}\)TMO\({}_{2}\) compositions using the Rouxel and cationic potential models. Finally, combining our voltage and stability calculations, we find P3-K\({}_{x}\)CoO\({}_{2}\) to be the most promising cathode composition, while P3-K\({}_{x}\)NiO\({}_{2}\) is worth exploring. Our work should contribute to the exploration of strategies and materials required to make practical KIBs.

## Introduction

Lithium-ion batteries (LIBs) have played a pre-eminent role in energy storage for the last three decades [1, 2, 3]. However, our excessive dependence on LIBs raises several challenges concerning the natural abundance of critical elements, fragile supply chains, and cost [4], which has motivated the search for intercalation chemistries that can be an alternative to LIBs. Notably, K-ion batteries (KIBs) have emerged as a viable alternative for economic grid-scale storage applications, owing to the natural abundance of K, reversible (de)intercalation into graphite (as the anode), and the low standard redox potential of K (K/K\({}^{+}\), -2.936 V vs. the standard hydrogen electrode, SHE) [5, 6, 7]. Despite several advantages over Li (and Na), the practical development of KIBs is constrained by the need for robust cathode materials that can (de)intercalate K\({}^{+}\) reversibly. Several classes of materials have been explored as K-intercalation cathodes, namely layered oxides, polyanionic frameworks, Prussian blue analogues, and organic compounds [5]. Out of these, layered transition metal oxides (TMOs), similar to those used for Li and Na (de)intercalation, are promising in terms of their high theoretical energy density, high rate capability owing to the large two-dimensional K\({}^{+}\) diffusion pathways, and possible structural stability during cycling, due to the large slab spacing that can suppress detrimental transition metal (TM) migration into the K-layers [8].
Typical K-containing layered structures, of composition K\({}_{x}\)TMO\({}_{2}\) (x \(\leq\) 1, TM = transition metal), exhibit a variety of stacking sequences, including prismatic-based P3 or P2, and octahedral-based O3 or O2, where the P3, P2, O3, and O2 notations are defined as per the nomenclature of Delmas and co-workers [9]. The type of coordination environment preferred by K (i.e., octahedral vs. prismatic; see **Figure S1** of the supporting information, SI, for an illustration) in a given framework, which in turn determines the type of stacking sequence of the structure, is primarily determined by the TM itself, the oxidation state(s) of the TM, and the K-concentration. For example, KScO\({}_{2}\) and KCrO\({}_{2}\) exhibit the O3 framework, while K\({}_{0.5}\)MnO\({}_{2}\) adopts the P3 framework [10, 11, 12, 13, 14, 15]. The relative stability of octahedral and prismatic coordination can also be quantified via the "cationic potential" model and/or the Rouxel diagram [9, 16]. Notably, prismatic coordination is often stabilized at intermediate (x \(\sim\) 0.5) or non-stoichiometric (x \(<\) 1) K-concentrations in layered frameworks, similar to observations in analogous Na-containing layered systems [17, 18, 19, 20], as illustrated by the K\({}_{x}\)CrO\({}_{2}\), K\({}_{x}\)MnO\({}_{2}\), and K\({}_{x}\)CoO\({}_{2}\) systems.[11, 19] Indeed, Hagenmuller and co-workers' experiment-derived phase diagram of A\({}_{x}\)MO\({}_{2}\) (A = Na or K; M = Cr, Mn, or Co) indicates that either the P3 or the P'3 phase is stable at intermediate Na or K concentrations, irrespective of the TM.[11] Experimentally, stoichiometric KTMO\({}_{2}\) (x = 1) has been synthesized only in the K\({}_{x}\)ScO\({}_{2}\), K\({}_{x}\)CrO\({}_{2}\), K\({}_{x}\)FeO\({}_{2}\), and K\({}_{x}\)MnO\({}_{2}\) systems, with limitations on the observable electrochemical capacity.[10, 11, 12, 13, 14, 15] As a result, previous studies have investigated non-stoichiometric K\({}_{x}\)TMO\({}_{2}\) frameworks, often dealing with prismatic phases. For example, Vaalma _et al_. demonstrated K\({}_{0.3}\)MnO\({}_{2}\) as a possible K-intercalation host,[21] which was followed by Kim _et al_.'s report that found P3-K\({}_{0.5}\)MnO\({}_{2}\) to be a viable candidate as well.[22] Interestingly, the analogous P3-Na\({}_{x}\)MnO\({}_{2}\) compound is metastable and cannot be trivially synthesized.[23, 24] Hironaka _et al_. showed P3-K\({}_{x}\)CoO\({}_{2}\) to be an efficient reversible K-intercalation host,[25] while Hwang _et al_. developed P3-K\({}_{0.69}\)CrO\({}_{2}\) _via_ electrochemical ion exchange from the parent O3-NaCrO\({}_{2}\) compound.[26] Notably, previous computational studies have revealed that the diffusivity of K\({}^{+}\) in prismatic stacking is higher than in octahedral stacking.[24, 26, 27] Thus, the existing literature indicates that K-containing layered TMOs with prismatic stacking sequences can be easily synthesized, exhibit reasonable cyclability, and offer good rate performance. However, systematic computational or experimental studies of P3-type K-containing layered TMOs have been missing so far. Here, we have used density functional theory (DFT [28, 29]) calculations to systematically evaluate various K-containing P3-layered oxides as candidate electrodes for KIBs.
Specifically, we have calculated the lattice parameters, average intercalation voltages, thermodynamic stabilities, electronic properties, and on-site magnetic moments of the K\({}_{x}\)TMO\({}_{2}\) systems, where TM = Ti, V, Cr, Mn, Co, or Ni. We enumerate the possible in-plane K-ion orderings for the K\({}_{0.5}\)TMO\({}_{2}\) compositions and determine the ground states using DFT. Subsequently, we evaluate the aforementioned properties for the ground state K\({}_{0.5}\)TMO\({}_{2}\) configuration and its corresponding depotassiated composition, namely TMO\({}_{2}\). Notably, we observe that all P3-type K\({}_{0.5}\)TMO\({}_{2}\) systems are thermodynamically stable, except the V and Cr ones, with the Co (Ti) system exhibiting the highest (lowest) predicted voltage of 4.59 V (2.24 V) vs. K. The ground state configurations for all K\({}_{0.5}\)TMO\({}_{2}\) systems are identical, with K exhibiting a honeycomb or zig-zag ordering in each K-layer. Based on the calculated projected density of states (pDOS) and on-site magnetic moments, we expect the TM to be redox-active upon K (de)intercalation in P3-K\({}_{x}\)TMO\({}_{2}\). Also, we demonstrate the stability of prismatic over octahedral coordination in the K\({}_{0.5}\)TMO\({}_{2}\) systems considered, via the Rouxel and cationic potential model approaches. Finally, based on the voltage and stability metrics, we expect P3-K\({}_{x}\)CoO\({}_{2}\) and K\({}_{x}\)NiO\({}_{2}\) to be promising candidates. We hope that our study will reinvigorate the computational and experimental investigation of P3-K\({}_{x}\)TMO\({}_{2}\) systems as K-intercalating hosts.

### Computational Methods

We used the Vienna Ab initio Simulation Package[30, 31, 32] to perform the DFT calculations, using a plane wave basis set with a kinetic energy cut-off of 520 eV and projector-augmented wave (PAW[32, 33]) potentials to model the ionic cores, consistent with our previous work[34, 35]. We sampled the irreducible Brillouin zone using \(\Gamma\)-centered Monkhorst-Pack[36] \(k\)-point meshes with a density of 32 points per Å, and we integrated the Fermi surface with a Gaussian smearing of width 0.05 eV. We relaxed the cell volume, shape, and ionic positions of all our structures without any symmetry constraints, till the atomic forces and the total energy were converged within \(|0.05|\) eV/Å and \(10^{-5}\) eV, respectively[37]. All our calculations were spin-polarized, and we initialized the magnetic moments of all TMs in a high-spin ferromagnetic ordering, except for Co and Ni, which we initialized in a low-spin ferromagnetic ordering for both the \(+3\) and \(+4\) oxidation states. For describing the electronic exchange-correlation, we utilized the Hubbard \(U\) corrected, strongly constrained and appropriately normed (SCAN[38]) functional, i.e., SCAN\(+U\). As derived in previous work, we used Hubbard \(U\) corrections[39] of 2.5, 1.0, 2.7, 3.0, and 2.5 eV for Ti, V, Mn, Co, and Ni, respectively[34, 35]. All pDOS calculations were performed with the 'fake-self-consistent-field' procedure, as detailed in previous work[40, 41]. The starting structure for the Ti, V, Mn, and Ni K\({}_{0.5}\)TMO\({}_{2}\) compositions was P3-K\({}_{0.3}\)MnO\({}_{2}\), as obtained from the inorganic crystal structure database (ICSD[42]), where we created the Ti, V, and Ni structures via ionic substitution of Mn in P3-K\({}_{0.3}\)MnO\({}_{2}\). We constructed the Cr- and Co-based structures based on previous reports[10, 25].
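For reference, the settings stated above map onto a VASP input along the following lines, assembled here with pymatgen's `Incar` helper and shown for a K-Mn-O cell; this is a sketch of the described setup, not the authors' exact input files:

```python
from pymatgen.io.vasp.inputs import Incar

# SCAN+U settings described in the text, illustrated for a cell with
# species ordered as K, Mn, O in the POSCAR (LDAUL/LDAUU follow that order).
incar = Incar({
    "METAGGA": "SCAN",         # SCAN meta-GGA exchange-correlation
    "LDAU": True,              # Hubbard U correction (SCAN+U)
    "LDAUTYPE": 2,             # Dudarev scheme
    "LDAUL": [-1, 2, -1],      # apply U to Mn d states only
    "LDAUU": [0.0, 2.7, 0.0],  # U = 2.7 eV for Mn, per the text
    "LDAUJ": [0.0, 0.0, 0.0],
    "ENCUT": 520,              # plane-wave cutoff (eV)
    "EDIFF": 1e-5,             # electronic convergence (eV)
    "EDIFFG": -0.05,           # ionic convergence on forces (eV/Angstrom)
    "ISMEAR": 0,               # Gaussian smearing
    "SIGMA": 0.05,             # smearing width (eV)
    "ISPIN": 2,                # spin-polarized
    "ISIF": 3,                 # relax cell volume, shape, and ions
})
incar.write_file("INCAR")
```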
Since several K-vacancy configurations are possible at the target K\({}_{0.5}\)TMO\({}_{2}\) composition, we used the pymatgen package to enumerate all symmetrically distinct orderings[43] within each K\({}_{x}\)TMO\({}_{2}\) supercell of size 2x2x1. We used VESTA for the visualization and illustration of the structures used in our calculations[44]. The redox reaction of topotactic (de-)intercalation of K\({}^{+}\) in a P3-type TMO\({}_{2}\) can be represented as:

\[\mathrm{yK^{+}+ye^{-}+K_{0.5-y}TMO_{2}\to K_{0.5}TMO_{2}} \tag{1}\]

K\({}_{0.5}\)TMO\({}_{2}\) and K\({}_{0.5-y}\)TMO\({}_{2}\) represent the potassiated and depotassiated structures, respectively. The average intercalation voltage can be calculated via the Nernst equation from the difference in the Gibbs energies (\(G\)) of the potassiated and depotassiated compositions. We approximated the Gibbs energies with the DFT-calculated total energies (i.e., \(G\approx E\)), thus ignoring the \(p-V\) and entropic contributions [45, 46]. Within this approximation, and with \(F\) being the Faraday constant, the average K-intercalation voltage _vs._ K/K\({}^{+}\) is:

\[V=\frac{-\left(E_{\text{K}_{0.5}\text{TMO}_{2}}-E_{\text{K}_{0.5-y}\text{TMO}_{2}}-yE_{\text{K}}\right)}{yF} \tag{2}\]

\(E_{\text{K}_{0.5}\text{TMO}_{2}}\), \(E_{\text{K}_{0.5-y}\text{TMO}_{2}}\), and \(E_{\text{K}}\) are the DFT-calculated total energies of the ground state potassiated configuration, the depotassiated composition, and the body-centered-cubic phase of pure K, respectively. In order to assess the thermodynamic stability of the P3-type K\({}_{0.5}\)TMO\({}_{2}\) and TMO\({}_{2}\) compositions, we computed the 0 K phase diagram (or the convex hull) of each ternary K-TM-O system, based on the DFT-calculated total energies of all elements and compounds (i.e., binaries and ternaries) whose experimentally reported structures are available in the ICSD. We used the pymatgen package to construct the phase diagrams [43]. We quantify the instability (stability) of K\({}_{0.5}\)TMO\({}_{2}\) and TMO\({}_{2}\) by calculating the energy above (below) the hull, denoted by \(E^{hull}\), based on the 0 K phase diagrams [47, 48, 49]. Note that we used an \(E^{hull}\leq 50\) meV/atom as a threshold value for a structure being experimentally synthesizeable, but this threshold is arbitrary and is highly chemistry-dependent [49].

## Results

### Structure, K-ordering, and lattice parameters

**Figure 1a** illustrates the typical unit cell of P3-K\({}_{x}\)TMO\({}_{2}\), consisting of three TMO\({}_{2}\) layers, denoted by the brown polyhedra. The topologically distinct prismatic sites of K are shown by the blue and orange polyhedra in **Figure 1a** and, equivalently, by the blue and orange spheres in **Figure 1b**. Combined, the two prismatic sites arrange themselves in a hexagonal lattice, as shown by the black guidelines in panels b and c of **Figure 1**. Each KO\({}_{6}\) prism shares one of its triangular faces with one TMO\({}_{6}\) octahedron, thus violating Pauling's third rule [50, 51], while the other triangular face shares its three edges with three different TMO\({}_{6}\) octahedra. The oxygen packing in P3 compounds follows the ABBCCA sequence. The K-vacancy arrangement in each K-layer is an optimization of the steric and electrostatic interactions between the K-ions for any K\({}_{0.5}\)TMO\({}_{2}\).
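As an illustration of the enumeration step described in the Computational Methods, the following pymatgen-based sketch marks the K sites of a hypothetical P3 cell as half-occupied and generates distinct K-vacancy orderings in a 2x2x1 supercell. For a self-contained example, we use the Ewald-ranked ordering transformation; the enumlib-backed `EnumerateStructureTransformation` is the closer analogue of enumerating all symmetrically distinct orderings, and the file names and oxidation-state assignments here are placeholders:

```python
from pymatgen.core import Structure
from pymatgen.transformations.standard_transformations import (
    OrderDisorderedStructureTransformation,
)

# Hypothetical starting cell: a P3-type K_xMnO2 structure file.
struct = Structure.from_file("P3_KxMnO2.cif")

# Mark every K site as half-occupied to target the K0.5MnO2 composition.
for i, site in enumerate(struct):
    if site.species_string == "K":
        struct.replace(i, {"K": 0.5})

# Assign nominal charges so the Ewald-energy ranking is meaningful
# (Mn averaged to +3.5 at the K0.5 composition).
struct.add_oxidation_state_by_element({"K": 1, "Mn": 3.5, "O": -2})

# Build the 2x2x1 supercell and generate distinct K-vacancy orderings,
# ranked by electrostatic energy as a pre-screen for DFT relaxation.
struct.make_supercell([2, 2, 1])
transform = OrderDisorderedStructureTransformation()
ranked = transform.apply_transformation(struct, return_ranked_list=10)
for rank, entry in enumerate(ranked):
    entry["structure"].to(filename=f"K05MnO2_ordering_{rank}.cif")
```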
Interestingly, we found the ground state K-vacancy configuration of all K\({}_{x}\)TMO\({}_{2}\) systems to be identical, as indicated in **Figure 1c** by the honeycomb or zig-zag ordering of K\({}^{+}\) in each K-layer. Specifically, in each of the blue\(+\)orange hexagons of **Figure 1b**, K\({}^{+}\) occupies the farthest possible combination of one blue and one orange site, resulting in the honeycomb ordering of **Figure 1c**. This is equivalent to a K-K distance of 2\(a\) if the side-length of the hexagon in **Figure 1b** is \(a\). Also, our in-plane K-ordering is similar to previous experimental and theoretical studies [52, 53, 54, 55], with marginal differences in the construction of the honeycomb ordering. Note that the scales of panels b and c in **Figure 1** are different, where the pink guidelines in both panels connect identical sets of K-sites.

Figure 1: (a) Unit cell of a typical P3-K\({}_{x}\)TMO\({}_{2}\) structure. (b) Visualization of the different types of prismatic sites (blue and orange spheres) that are available for K-occupation, per K-layer in P3-K\({}_{x}\)TMO\({}_{2}\). The blue and orange spheres are equivalent to the blue and orange polyhedra shown in panel a. (c) Honeycomb or zig-zag ordering in each K-layer that constitutes the ground state configuration for all P3-K\({}_{0.5}\)TMO\({}_{2}\) considered. Pink guidelines in panels b and c connect identical K-sites.

The SCAN\(+U\)-calculated lattice parameters for the ground state configurations of all K\({}_{0.5}\)TMO\({}_{2}\) systems considered are compiled in **Table S1**. The computed lattice parameters are in good agreement with the available experimental values [22, 25, 26], with a maximum overestimation (underestimation) of the \(c\) parameter of 4.69% (2.79%). An increase in the atomic number of the TM monotonically decreases the \(a\) and \(b\) lattice parameters (see trendlines in **Figure 2a**), caused primarily by a decrease in TM-O bond lengths, except for Mn and Ni, which can be attributed to the Jahn-Teller distortion of Mn\({}^{3+}\) and Ni\({}^{3+}\) (**Figure S2**). As shown in **Figure 2a** and **Table S1**, the Jahn-Teller distortion also causes a decrease in the \(c\) parameter for Mn compared to Cr and Co, whereas in Ni, the \(c\) parameter remains similar to Co.

**Average Voltages**

**Figure 2b** presents the calculated average topotactic intercalation voltage, referenced against K/K\({}^{+}\), for the P3-type K\({}_{0.5}\)TMO\({}_{2}\)-TMO\({}_{2}\) systems considered in this work. Notably, the predicted voltages range from 2.24 V for the K\({}_{0.5}\)TiO\({}_{2}\)-TiO\({}_{2}\) system to 4.59 V for the K\({}_{0.5}\)CoO\({}_{2}\)-CoO\({}_{2}\) system, which is within the stable window of typical electrolytes used for KIBs.[5] The calculated intercalation voltage increases progressively as we move from Ti to Co, in accordance with the standard reduction potentials of the corresponding TMs, as noted in a previous study.[40] Importantly, the Mn and Ni systems show markedly lower voltages compared to their neighboring TMs, which can be partly attributed to the Jahn-Teller distortions of Mn\({}^{3+}\) and Ni\({}^{3+}\). Also, the predicted voltage drop from Co to Ni is similar to the observation in Li-containing layered oxides, caused by the filling of the antibonding _e\({}_{g}\)_ orbital of NiO\({}_{2}\).[40] Finally, the low intercalation voltage of 2.24 V for the K\({}_{x}\)TiO\({}_{2}\) system suggests that this system can be explored as an anode for KIBs.
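To make Eq. (2) concrete, a short sketch that converts DFT total energies into an average intercalation voltage is given below; the energy values are placeholder numbers (in eV), not the values computed in this work:

```python
# Average topotactic voltage from Eq. (2): V = -(E_pot - E_depot - y*E_K) / (y*F).
# With energies in eV and charge counted in electrons, F drops out and V is in volts.

def average_voltage(E_potassiated: float, E_depotassiated: float,
                    E_K_per_atom: float, y: float) -> float:
    """E_potassiated, E_depotassiated: total energies (eV) of K0.5TMO2 and
    K(0.5-y)TMO2 cells containing the same number of formula units;
    E_K_per_atom: energy (eV/atom) of bcc K; y: K transferred per formula unit."""
    return -(E_potassiated - E_depotassiated - y * E_K_per_atom) / y

# Placeholder energies for one K0.5TMO2 formula unit and the fully
# depotassiated TMO2 limit (y = 0.5):
E_pot, E_depot, E_K = -25.40, -22.80, -1.05  # hypothetical values, eV
print(f"V = {average_voltage(E_pot, E_depot, E_K, y=0.5):.2f} V vs. K/K+")
```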
The calculated pDOS of all K\({}_{0.5}\)TMO\({}_{2}\) ground state structures are displayed in **Figure 3**, with **Figure S3** compiling the pDOS of all TMO\({}_{2}\) structures. The red, blue, orange, dotted blue, and dashed black lines represent the K \(s\) states, O \(p\) states, TM \(d\) states, band edges, and Fermi level, respectively, with the numbers in each panel indicating the band gaps. Except for K\({}_{0.5}\)CrO\({}_{2}\), K\({}_{0.5}\)VO\({}_{2}\), and CrO\({}_{2}\), all K\({}_{0.5}\)TMO\({}_{2}\) and TMO\({}_{2}\) structures are predicted to be semiconductors by SCAN\(+U\). The calculated band gaps exhibit a non-monotonic trend as we move along the 3\(d\) series in K\({}_{0.5}\)TMO\({}_{2}\), decreasing from 0.49 eV in Ti to 0 eV in Cr, subsequently increasing up to 1.66 eV in Co, and further decreasing to 0.30 eV in Ni. Band gap trends in the TMO\({}_{2}\) structures (**Figure S3**) are similar to K\({}_{0.5}\)TMO\({}_{2}\), with the gap decreasing from 2.85 eV to 0 eV from Ti to Cr, then increasing to 2.40 eV in Mn, and finally decreasing to 1.39 eV in Ni.

Figure 3: SCAN\(+U\)-calculated pDOS for all P3-K\({}_{0.5}\)TMO\({}_{2}\) systems. Blue, orange, and red curves correspond to TM \(d\), O \(p\), and K \(s\) states, respectively. Positive (negative) values of pDOS correspond to up (down) spin electrons. Dotted blue lines represent the valence and conduction band edges, with the numbers indicating band gap values. Dashed black lines signify the Fermi level. The zero on the energy scale in each panel is referenced either to the valence band maximum or to the Fermi level.

While TM \(d\) states dominate the valence band edge (VBE) or the Fermi level in the Ti, V, and Cr versions of K\({}_{0.5}\)TMO\({}_{2}\) (**Figure 3**), both O \(p\) and TM \(d\) states contribute equally in the case of the Mn, Co, and Ni analogs, attributed to the increased hybridization of the TM-O bonds as we move across the 3\(d\) series. In the case of the conduction band edges (CBEs), the TM \(d\) states dominate from Ti to Mn, while O \(p\) states contribute significantly alongside TM \(d\) states in the Co and Ni structures. In the case of the depotassiated TMO\({}_{2}\) structures (**Figure S3**), O \(p\) states dominate the VBE in TiO\({}_{2}\), CoO\({}_{2}\), and NiO\({}_{2}\), TM \(d\) states dominate the VBE in VO\({}_{2}\), while a mixture of O \(p\) and TM \(d\) states contributes to the VBE/Fermi level in MnO\({}_{2}\) and CrO\({}_{2}\). The CBEs of the TMO\({}_{2}\) structures are dominated by TM \(d\) states, with the exception of NiO\({}_{2}\), where a mixture of O \(p\) and Ni \(d\) states contributes. Given that TM \(d\) states contribute significantly to the VBE/Fermi level (responsible for oxidation) of all K\({}_{0.5}\)TMO\({}_{2}\) and the CBE/Fermi level (responsible for reduction) of all TMO\({}_{2}\) structures (except NiO\({}_{2}\)), we expect the TM to be predominantly redox-active during K\({}^{+}\) (de)intercalation across all P3 systems (with Ni being a possible exception). To further probe the possible origins of redox activity in the P3 frameworks, we analysed the calculated on-site magnetic moments of the TM in each system, as tabulated in **Table S2**. In all ground state K\({}_{0.5}\)TMO\({}_{2}\) configurations, we observe that half the TM ions are in the 3+ and the rest in the 4+ oxidation state, except in K\({}_{0.5}\)CrO\({}_{2}\), where the \(d\) electrons appear delocalized across the Cr centers due to its metallic nature. Upon K-removal, all the transition metals in all TMO\({}_{2}\) structures are in a uniform 4+ oxidation state, as suggested by the calculated magnetic moments (see **Table S2**), highlighting that the TM exclusively contributes to the redox activity with K\({}^{+}\) (de)intercalation. Also, we observe from the magnetic moments that each K\({}^{+}\) in K\({}_{0.5}\)TMO\({}_{2}\) shares a triangular face with a TM\({}^{4+}\) octahedron and triangular edges with three TM\({}^{3+}\) octahedra.

### Thermodynamic stability

The \(E^{hull}\) values for the K\({}_{0.5}\)TMO\({}_{2}\) and TMO\({}_{2}\) compositions are displayed as a heatmap in **Figure 4**, where blue (red) tiles indicate compositions that are stable (unstable). The solid green line across the legend bar in **Figure 4** signifies the 50 meV/atom stability threshold. The 0 K convex hulls of the K-TM-O ternaries (relevant for the potassiated compositions) and TM-O binaries (relevant for the depotassiated compositions) are compiled in **Figures S4** and **S5**, respectively.
Importantly, the _E\({}^{hull}\)_ data indicates a high degree of stability for all K\({}_{0.5}\)TMO\({}_{2}\) frameworks, with the exception of K\({}_{0.5}\)VO\({}_{2}\) and K\({}_{0.5}\)CrO\({}_{2}\), which are metastable with _E\({}^{hull}\)_ of 47 and 13 meV/atom, respectively, below the 50 meV/atom threshold. Thus, we expect all P3-type potassiated compositions considered in this work to be experimentally synthesizable. In the case of depotassiated compositions, we find all P3-TMO\({}_{2}\) structures to be unstable, with \(E^{hull}\) more than 50 meV/atom. Thus, we do not expect the synthesis of P3-TMO\({}_{2}\) configurations to be facile. However, during electrochemical cycling, the P3-TMO\({}_{2}\) structures may exist in a metastable manner, due to kinetic barriers to transform to the corresponding stable states. Notably, the lower extent of instability displayed by P3-CoO\({}_{2}\) and P3-MnO\({}_{2}\) (\(E^{hull}\)\(\sim\)83 meV/atom) makes these two frameworks more promising than the others in terms of their ability to appear during electrochemical cycling and not decompose to other stable compositions. Finally, combining both stability and voltage metrics, we find P3-K\({}_{x}\)CoO\({}_{2}\) to be the most promising cathode composition, while P3-K\({}_{x}\)NiO\({}_{2}\) can also be explored as a candidate.

Figure 4: DFT-calculated \(E^{hull}\) for P3-K\({}_{0.5}\)TMO\({}_{2}\) (top row) and P3-TMO\({}_{2}\) (bottom row) compositions. Each column represents a given TM. Blue (red) squares indicate high degrees of stability (instability), with the specific \(E^{hull}\) value of each composition listed as a text annotation in the corresponding square. The green line on the legend bar indicates the rule-of-thumb \(E^{hull}\)\(\sim\) 50 meV/atom threshold for experimental synthesizability.

## Discussion

Using DFT-based calculations, we have explored the P3-K\({}_{0.5}\)TMO\({}_{2}\) frameworks as potential intercalation hosts for KIBs in this work. Specifically, we have computed the lattice parameters, ground state K-vacancy configurations, average K-intercalation voltage, electronic structure, on-site magnetic moments, and 0 K thermodynamic stability for P3-K\({}_{0.5}\)TMO\({}_{2}\) and the corresponding depotassiated TMO\({}_{2}\) structures, where TM is Ti, V, Cr, Mn, Co, or Ni. We found that all K\({}_{0.5}\)TMO\({}_{2}\) ground states adopted the honeycomb or zig-zag ordering of K-ions. With respect to voltage predictions, we observed the highest (lowest) voltages to arise from the Co (Ti) system, consistent with trends in standard reduction potentials. While we found all potassiated P3-K\({}_{0.5}\)TMO\({}_{2}\) compositions to be stable or metastable (i.e., \(E^{hull}\leq 50\) meV/atom), highlighting experimental synthesizability, all depotassiated P3-TMO\({}_{2}\) compositions were unstable, indicating that they may not be synthesizable experimentally but may appear during electrochemical cycling due to kinetic barriers for decomposition. Also, we observed the TM to be the primary participant in the redox process, characterized by the electronic structure and on-site magnetic moments of the potassiated and depotassiated compositions. Finally, combining voltage and stability metrics, we find the P3 frameworks of K\({}_{x}\)CoO\({}_{2}\) and K\({}_{x}\)NiO\({}_{2}\) to be promising cathode candidates, while P3-K\({}_{x}\)TiO\({}_{2}\) may be explored as an anode.
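For concreteness, the average voltage metric recapped above follows from total energies via the standard 0 K expression, \(V=-\left[E(\mathrm{K}_{x}\mathrm{TMO}_{2})-E(\mathrm{TMO}_{2})-x\,E(\mathrm{K})\right]/x\) (in V vs. K/K\({}^{+}\), with energies in eV per formula unit). The snippet below sketches this arithmetic with hypothetical energy values; the actual numbers require the SCAN\(+U\) total energies of the potassiated, depotassiated, and bcc-K phases.

```python
# Minimal sketch of the average topotactic intercalation voltage,
# V = -[E(K0.5TMO2) - E(TMO2) - 0.5*E(K_metal)] / 0.5,
# with hypothetical total energies in eV per formula unit.
# Entropic and pV contributions are neglected, as usual at 0 K.

E_potassiated   = -22.00  # E(K0.5TMO2), hypothetical
E_depotassiated = -20.00  # E(TMO2), hypothetical
E_K_metal       = -1.05   # E per K atom in bcc K, hypothetical

x = 0.5  # amount of K exchanged per formula unit
voltage = -(E_potassiated - E_depotassiated - x * E_K_metal) / x
print(f"Average voltage vs. K/K+: {voltage:.2f} V")
```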
Typically, in layered oxide frameworks, K can occupy either prismatic or octahedral coordination (**Figure S1**), depending on the K-concentration and the associated steric and electrostatic interactions within the structure. The relative stability of the prismatic vs. octahedral coordination environment of K (and hence the stacking sequence of the layered structure) can be modelled via the modified Rouxel diagram[9] and the cationic potential[16] phase map, which are displayed in panels a and b of **Figure 5**, respectively. Numerical details of the Rouxel diagram and cationic potential frameworks are described in the SI, with **Tables S3-S6** compiling the relevant parameters and values. In the Rouxel diagram framework, the critical parameter (\(\beta\)) is used as a classifier of prismatic and octahedral phases (see the formulation provided in the SI). \(\beta\) depends on the ionic or covalent nature of the bonds between the cations (K and TM) and anions (O), and on the K concentration (x).

Figure 5: (a) Rouxel diagram and (b) cationic potential phase map for various K\({}_{x}\)TMO\({}_{2}\) compositions. The dashed line in both panels separates the region of stability of octahedral-coordinated phases from prismatic-coordinated phases. Each column of data points in panel a represents a distinct TM while the symbols represent various K compositions (x). In panel b, each row of data points corresponds to a unique x while the symbols distinguish the TMs.

**Figure 5a** plots \(\beta\) at different K compositions (x = 1/4, 1/3, 1/2, 2/3, 3/4, and 1) as the TM is varied in K\({}_{x}\)TMO\({}_{2}\). Importantly, we observe that all K\({}_{0.5}\)TMO\({}_{2}\) compositions prefer prismatic coordination, while several KTMO\({}_{2}\) compositions (except KScO\({}_{2}\), KTiO\({}_{2}\), and KMnO\({}_{2}\)) also favour prismatic coordination at full K content. Cationic potential (\(\phi_{cation}\)) is a descriptor of interslab interactions: the higher the cationic potential of a metal, the higher the ionic polarizability and the more covalent the bond between the metal and an anion. A higher cationic potential also indicates stronger repulsion between adjacent TMO\({}_{2}\) octahedra (resulting from the larger electrostatic repulsion of higher oxidation state metals) and weaker interaction between adjacent KO\({}_{2}\) slabs. The larger interlayer distance in a prismatic structure usually coincides with a higher cationic potential, implying more covalent TM-O bonds. Conversely, the smaller interlayer distances in O3 structures (at similar compositions as the corresponding P3 structures) coincide with a lower cationic potential. Apart from interlayer distances, \(\phi_{cation}\) is also dependent on the K-concentration. For example, at low x in K\({}_{x}\)TMO\({}_{2}\) (or equivalently a lower mean ionic potential of K, \(\bar{\phi}_{K}\)), the binding of TMO\({}_{2}\) layers by K via electrostatic attraction between K\({}^{+}\) and O\({}^{2-}\) is weaker, resulting in larger interlayer spacings and prismatic stacking, which is what we observe in **Figure 5b**. Additionally, the variations in \(\bar{\phi}_{K}\) and \(\phi_{cation}\) for various K\({}_{x}\)TMO\({}_{2}\) compositions indicate that prismatic structures should be observed at x = 1/2 for all TM, consistent with the Rouxel diagram framework as well (**Figure 5a**). Trends in panels a and b of **Figure 5** are also in agreement with the literature reported so far for the K-based layered oxides,[5] highlighting the utility of such empirical frameworks.
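A schematic numerical illustration of the cationic-potential classifier is given below. It assumes the ionic-potential definition \(\phi=n/r\) (formal charge over Shannon radius) and a simple composition-weighted average over the cation sites; the radii quoted and the precise weighting scheme are assumptions for illustration only, with the values actually used detailed in the SI (**Tables S3-S6**).

```python
# Schematic sketch of the cationic-potential descriptor for K_xTMO2,
# assuming Phi = n/r (charge over Shannon radius, in Angstrom) and the
# composite form Phi_cation = Phi_TM_mean * Phi_K_mean / |Phi_O|.
# Radii and weighting below are illustrative assumptions.

def phi(charge, radius):
    return charge / radius

x = 0.5                              # K content in K_xMnO2 (illustrative)
phi_O = phi(-2, 1.40)                # O2- anion
phi_K = x * phi(1, 1.38)             # weighted mean over K sites
# charge balance: x Mn3+ and (1 - x) Mn4+ in K_xMnO2
phi_TM = x * phi(3, 0.645) + (1 - x) * phi(4, 0.53)

phi_cation = phi_TM * phi_K / abs(phi_O)
print(f"Phi_cation (K_{x}MnO2) ~ {phi_cation:.2f}")
# Larger Phi_cation (at a given mean K potential) favours prismatic (P-type)
# stacking; smaller values favour octahedral (O-type) stacking.
```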
Additionally, \(\phi_{cation}\) can be tuned by the addition/doping of multiple TMs within the same layered oxide, thereby stabilizing either octahedral or prismatic coordination for the K-ions. We find the honeycomb or zig-zag arrangement of K\({}^{+}\) to be the ground state configuration of all K\({}_{0.5}\)TMO\({}_{2}\) compositions, as displayed in **Figure 1c**. Owing to the larger size and higher ionicity of K\({}^{+}\), there is strong in-plane electrostatic and steric repulsion that tends to maximize the K\({}^{+}\)-K\({}^{+}\) distance, irrespective of the TM. Thus, K\({}^{+}\) can be considered to effectively screen TM-TM interactions across the layers (at x=0.5), which may explain our observation of identical K\({}_{0.5}\)TMO\({}_{2}\) ground states. Additionally, we find that each K\({}^{+}\) shares its triangular face and three triangular edges with TM\({}^{4+}\) and TM\({}^{3+}\), respectively, which may be a result of electrostatic interactions as well. Another consequence of the large size and higher ionicity of K\({}^{+}\) is the observed lower intercalation voltages than in analogous Na-layered compounds, despite K/K\({}^{+}\) exhibiting a more negative standard reduction potential than Na/Na\({}^{+}\).[5, 40, 56] Specifically, the stronger electrostatic interactions between K\({}^{+}\) and the TM ions can result in an increased interlayer distance, and could weaken the TM-O bonds by increasing the TM-O bond lengths compared to the Na-analogues.[57] While we have used the SCAN\(+U\) framework for describing the electronic exchange and correlation, recent studies have reported that SCAN\(+U\) overestimates average voltages in Li-intercalation electrodes.[40, 41] Indeed, we observe a similar overestimation of average voltages in P3-K\({}_{x}\)CrO\({}_{2}\), K\({}_{x}\)MnO\({}_{2}\) and K\({}_{x}\)CoO\({}_{2}\), where our predicted values are \(\sim\)3.94 V, \(\sim\)3.47 V and \(\sim\)4.59 V vs. K, respectively (**Figure 2b**), compared to the experimental \(\sim\)2.7 V in Cr (cathode composition was K\({}_{0.69}\)CrO\({}_{2}\)), \(\sim\)2.7 V in Mn (K\({}_{0.5}\)MnO\({}_{2}\)), and \(\sim\)3.1 V in Co (K\({}_{2/3}\)CoO\({}_{2}\)).[22, 25, 26] Such overestimation of voltages may arise from an underestimation of energies (i.e., total DFT-calculated energies are less negative) of metastable/unstable phases, such as the depotassiated TMO\({}_{2}\) structures.[40] Note that all our calculated voltages, despite being overestimated by SCAN\(+U\), are within the stability window of the commonly used electrolyte, KPF\({}_{6}\) in ethylene carbonate:diethyl carbonate. Finally, when it comes to phase stability, SCAN\(+U\) frequently does not provide quantitative precision, owing to the underestimation of total energies of metastable phases. Yet, our calculations predict that the traditional chemical synthesis pathway can produce P3-K\({}_{0.5}\)TMO\({}_{2}\), owing to the calculated (meta)stability of P3-K\({}_{0.5}\)TMO\({}_{2}\), whereas the depotassiated P3-TMO\({}_{2}\) may appear during electrochemical cycling. Apart from single transition metal-based systems, investigating the possible inclusion of multiple TMs within the P3 framework could result in better insertion host(s) for KIBs, where higher voltages arising from the presence of one TM can be combined with the stability contributed by another TM.
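To make the voltage overestimation quoted above concrete, the short check below tabulates the gap between our SCAN\(+U\) predictions and the experimental plateaus; note that the experimental compositions differ slightly from the calculated ones, so the percentages are only indicative.

```python
# Overestimation of the SCAN+U average voltages relative to the experimental
# plateaus (values taken from the text; compositions differ slightly between
# calculation and experiment, so this comparison is indicative only).

systems = {          # (predicted V vs. K, experimental V vs. K)
    "K_xCrO2": (3.94, 2.7),
    "K_xMnO2": (3.47, 2.7),
    "K_xCoO2": (4.59, 3.1),
}

for name, (v_calc, v_exp) in systems.items():
    print(f"{name}: +{v_calc - v_exp:.2f} V ({100 * (v_calc / v_exp - 1):.0f}% high)")
```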
## Conclusion

We explored the K-containing P3-type layered TMOs as potential intercalation hosts for KIBs using DFT calculations and the SCAN\(+U\) framework for describing electronic exchange and correlation. We considered six different TMOs, namely, K\({}_{0.5}\)TiO\({}_{2}\), K\({}_{0.5}\)VO\({}_{2}\), K\({}_{0.5}\)CrO\({}_{2}\), K\({}_{0.5}\)MnO\({}_{2}\), K\({}_{0.5}\)CoO\({}_{2}\), and K\({}_{0.5}\)NiO\({}_{2}\), and their corresponding depotassiated compositions, as the candidate K-intercalation hosts. Apart from estimating the ground state K-vacancy configuration in each TMO system considered, we evaluated the DFT-relaxed lattice parameters, the topotactic average intercalation voltage, and the 0 K thermodynamic stability. Additionally, we probed the nature of redox activity upon K (de)intercalation in these compounds by analysing the electronic structure and on-site TM magnetic moments in the potassiated and depotassiated structures. Importantly, we find that K\({}^{+}\) prefers the honeycomb or zig-zag ordering in K\({}_{0.5}\)TMO\({}_{2}\), irrespective of the TM, highlighting the dominance of electrostatic interactions between K\({}^{+}\) ions within the same layer. Our calculated voltages follow the general trend of standard reduction potentials of the TMs involved, with the low-voltage P3-K\({}_{\mathrm{x}}\)TiO\({}_{2}\) framework being more suitable as an anode, and P3-K\({}_{\mathrm{x}}\)CoO\({}_{2}\) exhibiting the highest predicted voltage. Notably, we find the redox activity to be centered on the TM sites in all K\({}_{\mathrm{x}}\)TMO\({}_{2}\) systems, with negligible contribution from redox on the anionic sites. In terms of thermodynamic stability, we find all P3-K\({}_{0.5}\)TMO\({}_{2}\) frameworks considered to be below the \(E^{hull}=50\) meV/atom threshold, indicating that synthesis of such compounds is likely to be facile. Finally, given the combination of our thermodynamic stability and average voltage estimates, we find P3-K\({}_{\mathrm{x}}\)CoO\({}_{2}\) and K\({}_{\mathrm{x}}\)NiO\({}_{2}\) to be potential KIB cathode candidates.

## Conflicts of interest

There are no conflicts of interest to declare.

## Acknowledgments

G.S.G. acknowledges financial support from the Indian Institute of Science (IISc) Seed Grant, SG/MHRD/20/0020 and SR/MHRD/20/0013, and support from the Science and Engineering Research Board (SERB) of the Government of India, under sanction numbers SRG/2021/000201 and IPA/2021/000007. P.B. is grateful to the Alexander von Humboldt Foundation (Bonn, Germany) for a 2022 Humboldt fellowship for experienced researchers. P.B. acknowledges financial support from the HP Green R&D Centre (Bangalore). P.K.J. and S.N.T. would like to thank the Ministry of Human Resource Development (MHRD), Government of India, for financial assistance. We also acknowledge the computational resources provided by the Supercomputer Education and Research Centre (SERC), IISc.
2303.10162
Probing the interference between non-linear, axionic and space-time-anisotropy effects in the QED vacuum
We pursue the investigation of a generic non-linear extension of axionic electrodynamics in a Carroll-Field-Jackiw (CFJ) scenario that implements Lorentz-symmetry violation (LSV). The model we inspect consists of an arbitrary non-linear electrodynamic action coupled to the axion field in presence of an anisotropy four-vector that realizes the breaking of Lorentz symmetry under the particle point of view. The non-linear electromagnetic field is expanded around a constant and uniform magnetic background up to second order in the propagating photon field. The focus of our attention is the study of the material properties of the vacuum in the particular case of a space-like CFJ $4$-vector. The dispersion relations associated to the plane wave solutions are explicitly worked out in two situations: the magnetic background perpendicular and parallel to the wave direction. We extend these results to consider the analysis of the birefringence phenomenon in presence of non-linearity, the axion and the LSV manifested through the spatial anisotropy. Three specific proposals of non-linear electrodynamics are contemplated: Euler-Heisenberg, Born-Infeld and the Modified Maxwell electrodynamics. Throughout the paper, we shall justify why we follow the unusual path of connecting, in a single Lagrangian density, three pieces of physics beyond the Standard Model, namely, non-linearity, axions and LSV. Our true goal is to actually inspect and describe how axionic, non-linear and LSV effects interfere with one another whenever physical entities like group velocity, refraction indices, birefringence and effective masses of physical excitations are computed in presence of an external constant and homogeneous magnetic field.
J. M. A. Paixão, L. P. R. Ospedal, M. J. Neves, J. A. Helayël-Neto
2023-03-13T15:09:29Z
http://arxiv.org/abs/2303.10162v3
Probing the interference between non-linear, axionic and space-time-anisotropy effects in the QED vacuum

###### Abstract

In this paper, we pursue the investigation of a generic non-linear extension of axionic electrodynamics in a Carroll-Field-Jackiw (CFJ) scenario that implements Lorentz-symmetry violation (LSV). The model we inspect consists of an arbitrary non-linear electrodynamic action coupled to the axion field in the presence of an anisotropy four-vector that realizes the breaking of Lorentz symmetry from the particle point of view. For the sake of our considerations, the non-linear electromagnetic field is expanded around a constant and uniform magnetic background up to second order in the propagating photon field. The focus of our attention is the study of the material properties of the vacuum in the particular case of a space-like CFJ 4-vector. The dispersion relations associated with the plane wave solutions are explicitly worked out in two situations: the magnetic background perpendicular and parallel to the wave direction. We extend these results to consider the analysis of the birefringence phenomenon in the presence of non-linearity, the axion and the LSV manifested through the spatial anisotropy. Three specific proposals of non-linear electrodynamics are contemplated: Euler-Heisenberg (EH), Born-Infeld (BI) and the Modified Maxwell electrodynamics (ModMax). Throughout the paper, we shall justify why we follow the unusual path of connecting, in a single Lagrangian density, three pieces of physics beyond the Standard Model, namely, non-linearity, axions and LSV.

## I Introduction

The strong CP problem is still an intriguing question in the Standard Model (SM) of elementary particles. Certainly, the mechanism proposed by Peccei and Quinn is the most popular and elegant approach to solve this issue by introducing the axion [1; 2]. Axion-like particles (ALPs) have been the subject of investigation in diverse branches of high-energy physics. A good motivation is that such particles are strong candidates for the dark matter content [3; 4; 5]. Furthermore, ALPs naturally arise in string theories [6]. Over the past decades, a considerable effort has been made toward the detection of ALPs, both in astrophysical experiments [7; 8; 9; 10; 11] and in particle accelerators [12; 13; 14; 15]. The challenge is that the ALPs couple very weakly to the SM matter, so the bounds obtained delimit a stringent parameter space. For example, the CAST experiment, which searches for ALPs produced in the solar core, provides a well-established limit for the ALP-photon interaction, with coupling constant \(g_{a\gamma}<0.66\times 10^{-10}\,\mathrm{GeV}^{-1}\) and ALP mass restricted to \(m_{a}<0.02\,\mathrm{eV}\) at 95% CL [10]. We also highlight that ALPs can be produced by ALP-photon conversion in the presence of an intense magnetic background field, as described by the Primakoff process. In the presence of intense magnetic fields close to the Schwinger critical magnetic field, _i.e._, \(|\mathbf{B}|_{S}=m_{e}^{2}/q_{e}=4.41\times 10^{9}\,\mathrm{T}\), non-linear effects acquire relevance [16]. In the work [17], a general approach has been followed to investigate ALPs in non-linear electrodynamic scenarios, where some optical properties of the vacuum, such as the vacuum magnetic birefringence (VMB) and the Kerr effect, have been studied.
Furthermore, it has been shown that the presence of the axion generates dispersion relations that depend on the wavelength, so that dispersive refractive indices show up that would not be present if only the non-linearity were considered. In a seminal work [18], the authors connected axionic physics with the Euler-Heisenberg electrodynamics and discussed birefringence experiments, photon-axion conversion, as well as the axion-graviton conversion in the vicinity of stars with intense magnetic fields. It is well known that the ALP-photon conversion in a magnetic background changes the optical properties of the vacuum. Therefore, the measurement of the VMB can provide bounds on the axion mass and coupling constant \(g_{a\gamma}\) [19; 20; 21]. Although VMB is an effect predicted by quantum electrodynamics (QED), there is still no laboratory evidence of its existence. The PVLAS Collaboration was one of the most notable projects in this search, having ended its activity in 2017 after 25 years of efforts to measure the birefringence and vacuum dichroism phenomena, providing very reliable limits for such quantities [22; 23; 24]. Even so, indirect evidence of vacuum birefringence was found from the measurement of the optical polarization of the neutron star RX J1856.5-3754 [25]. At this stage, it is worth mentioning that the axionic interaction term can be generated via radiative corrections in a theory with Lorentz symmetry violation (LSV) [26]. From this perspective, we are motivated to study what influence an LSV background would have on an axionic theory. To generalize our approach, we consider that the electrodynamic model may have non-linear contributions, although, in specific cases, it easily reduces to the usual Maxwell theory. Moreover, we point out that LSV theories introduce an anisotropy in space-time, so that it is reasonable to expect a birefringence effect [27; 28]. This characteristic, added to the non-linear ALP-photon mixing model [17], generates a very rich effective model with implications for the optical properties of the vacuum. In particular, for the LSV term, we adopt the Carroll-Field-Jackiw (CFJ) electrodynamics [29], which is a generalization of a Chern-Simons term to \((3+1)\) dimensions. For the consistency of the model, a background four-vector is introduced that guarantees the gauge symmetry of the theory, but does not preserve the Lorentz and CPT symmetries. There is a vast literature on the CFJ electrodynamic model. In the work [30], limits were obtained for the CFJ Lorentz-breaking parameter in the time-like case through laboratory probes, such as quantum corrections to the spectrum of the hydrogen atom, the electric dipole moment, as well as the interparticle potential between fermions. Studies on the possible effects of contributions of the CFJ model to the cosmic microwave background (CMB) were carried out in ref. [31]. Recently, in the supersymmetric scenario, the gauge boson-gaugino mixing was investigated taking into account the effects of the LSV due to a CFJ term [32]. Furthermore, in the work of ref. [33], the electrodynamics of CFJ was studied in a pre-metric framework, where the author discussed the relation between the birefringence phenomenon and the Lorentz and CPT symmetry violation. It is possible to associate the non-observation of birefringence with the preservation of these symmetries. For more details on LSV, we indicate the review [34] and references therein.
Before going on and starting to work out the developments of our paper, we would like to clarify our motivation to bring together three different physical scenarios beyond the Standard Model in a single Lagrangian, namely: axions, non-linear electrodynamic extensions and Lorentz-symmetry violating physics (LSV is here realized by means of the Carroll-Field-Jackiw term). The usual procedure is to consider each of these physical situations separately, since we expect that their respective individual effects correspond to tiny corrections to current physics. Connecting these three diverse pieces of physics in a single action might appear as a waste of effort or, simply, an exercise in mixing up different effects. Nevertheless, what we truly wish, by coupling axions to non-linear electrodynamics and LSV physics, is to show how the parameters associated with the axion and LSV sectors couple to the external electric and magnetic fields brought in through the non-linearity. Actually, the main effort we endeavor is to inspect to what extent strong external electric and magnetic fields may amplify the effects of the tiny axionic and LSV parameters on physical properties such as birefringence, refractive indices, dichroism and group velocity. This is investigated with the help of the dispersion relations we shall derive in different situations characterized by particular configurations of external fields. In this contribution, we investigate the propagation effects of a general axionic non-linear ED in the presence of a CFJ term. As mentioned, the CFJ term introduces a 4-vector that breaks the Lorentz symmetry and the isotropy of space-time. We introduce a uniform magnetic field and expand the propagating field of the model up to second order around this background. The properties of the medium are discussed in the presence of the magnetic background. We obtain the dispersion relations of the linearized theory in terms of the magnetic background, the CFJ 4-vector, and the axion coupling constant. The case of a space-like 4-vector is analysed, such that the plane wave frequencies are functions of the wave vector (\(\mathbf{k}\)), the magnetic background (\(\mathbf{B}\)), and the CFJ background vector (\(\mathbf{v}\)). Thereby, we consider two cases: (a) when \(\mathbf{k}\), \(\mathbf{B}\) and \(\mathbf{v}\) are mutually perpendicular, and (b) when \(\mathbf{k}\) is parallel to \(\mathbf{B}\), but both vectors remain perpendicular to \(\mathbf{v}\). The solutions of these cases define the perpendicular and parallel frequencies, respectively. Using these dispersion relations, we calculate the birefringence through the perpendicular and parallel refractive indices. We apply our results to the non-linear electrodynamics of Euler-Heisenberg [35], Born-Infeld [36], and Modified Maxwell (ModMax) [37; 38; 39]. This paper is organized according to the following outline: in Section (II), the axionic non-linear theory is presented with the CFJ term in an electromagnetic background field. In Section (III), we consider a purely magnetic background field and obtain the permittivity and permeability tensors, as well as the dispersion relations associated with the plane wave solutions. Next, in Section (IV), the birefringence phenomenon is discussed in the framework of the Euler-Heisenberg, Born-Infeld, and ModMax electrodynamics. Finally, the Conclusions and Perspectives are cast in Section (V).
We adopt the natural units in which \(\hbar=c=1\), \(4\pi\epsilon_{0}=1\), and the electric and magnetic fields have squared-energy dimension. Thereby, the conversion of Volt/m and Tesla (T) to the natural system is as follows: \(1\,\mathrm{Volt/m}=2.27\times 10^{-24}\,\mathrm{GeV}^{2}\) and \(1\,\mathrm{T}=6.8\times 10^{-16}\,\mathrm{GeV}^{2}\), respectively. The metric convention is \(\eta^{\mu\nu}=\mathrm{diag}\,(+1,-1,-1,-1)\).

## II The non-linear axion-photon electrodynamics including the Carroll-Field-Jackiw term

We initiate the description of the model with its Lagrangian density, which reads as follows:
\[\mathcal{L} = \mathcal{L}_{nl}(\mathcal{F}_{0},\mathcal{G}_{0})+\frac{1}{2}\,\left(\partial_{\mu}\phi\right)^{2}-\frac{1}{2}\,m^{2}\,\phi^{2}+g\,\phi\,\mathcal{G}_{0}+\frac{1}{4}\,\epsilon^{\mu\nu\kappa\lambda}\,v_{\mu}\,A_{0\nu}\,F_{0\kappa\lambda}-J_{\mu}\,A_{0}^{\ \mu}\;, \tag{1}\]
where \({\cal L}_{nl}({\cal F}_{0},{\cal G}_{0})\) denotes the most general Lagrangian of a non-linear electrodynamics that is a function of the Lorentz- and gauge-invariant bilinears: \({\cal F}_{0}=-\frac{1}{4}\,F_{0\mu\nu}^{2}=\frac{1}{2}\left({\bf E}_{0}^{2}-{\bf B}_{0}^{2}\right)\) and \({\cal G}_{0}=-\frac{1}{4}\,F_{0\mu\nu}\widetilde{F}_{0}\,^{\mu\nu}={\bf E}_{0}\cdot{\bf B}_{0}\). These definitions introduce the antisymmetric field strength tensor \(F_{0}^{\ \mu\nu}=\partial^{\mu}A_{0}^{\ \nu}-\partial^{\nu}A_{0}^{\ \mu}=\left(-E_{0}^{\ i}\,,\,-\epsilon^{ijk}B_{0}^{\ k}\right)\), and the corresponding dual tensor \(\widetilde{F}_{0}^{\ \mu\nu}=\epsilon^{\mu\nu\alpha\beta}F_{0\alpha\beta}/2=\left(-B_{0}^{\ i}\,,\,\epsilon^{ijk}E_{0}^{\ k}\right)\), which satisfies the Bianchi identity \(\partial^{\mu}\widetilde{F}_{0\mu\nu}=0\). The CFJ term introduces the background 4-vector \(v^{\mu}=(v^{0},{\bf v})\), whose components do not depend on the space-time coordinates. It has mass dimension in natural units and is responsible for the Lorentz symmetry violation in the gauge sector of the model. In addition, \(\phi\) is the axion scalar field with mass \(m\), and \(g\) is the non-minimal coupling constant (with dimension of length) of the axion with the electromagnetic field, _i.e._, the usual coupling with the \({\cal G}_{0}\)-invariant in the axion-photon model. There are many investigations and experiments to constrain the possible regions in the space of the parameters \(g\) and \(m\), which still remains with a wide range of allowed values, depending on the phenomenological scale under analysis. We expand the abelian gauge field as \(A_{0}^{\ \mu}=a^{\mu}+A_{B}^{\ \mu}\), in which \(a^{\mu}\) is the photon 4-potential, and \(A_{B}^{\ \mu}\) denotes a background potential. Accordingly, the tensor \(F_{0}^{\ \mu\nu}\) is also written as the combination \(F_{0}^{\ \mu\nu}=f^{\mu\nu}+F_{B}^{\ \mu\nu}\), in which \(f^{\mu\nu}=\partial^{\mu}a^{\nu}-\partial^{\nu}a^{\mu}=\left(-e^{i},\,-\epsilon^{ijk}b^{k}\right)\) is the EM field strength tensor that propagates in the space-time, and \(F_{B}^{\ \mu\nu}=\left(-E^{i}\,,\,-\epsilon^{ijk}B^{k}\,\right)\) corresponds to the EM background field. The notation of the 4-vector and tensors with the index \((B)\) indicates that they are associated with the background. At this stage, we consider the general case in which the background depends on the space-time coordinates.
Under this prescription, we also expand the Lagrangian (1) around the background up to second order in the propagating field \(a^{\mu}\), to yield the expression
\[{\cal L}^{(2)} = -\frac{1}{4}\,c_{1}\,f_{\mu\nu}^{\ 2}-\frac{1}{4}\,c_{2}\,f_{\mu\nu}\widetilde{f}^{\mu\nu}+\frac{1}{8}\,Q_{B\mu\nu\kappa\lambda}\,f^{\mu\nu}f^{\kappa\lambda}+\frac{1}{2}\,\left(\partial_{\mu}\widetilde{\phi}\right)^{2}-\frac{1}{2}\,m^{2}\,\widetilde{\phi}^{2}-\frac{1}{2}\,g\,\widetilde{\phi}\,\widetilde{F}_{B\mu\nu}\,f^{\mu\nu}+\frac{1}{4}\,\epsilon^{\mu\nu\kappa\lambda}\,v_{\mu}\,a_{\nu}\,f_{\kappa\lambda}-\bar{J}_{\nu}\,a^{\nu}\;, \tag{2}\]
where \(\bar{J}_{\nu}=J_{\nu}-\partial^{\mu}\left(H_{B\mu\nu}\right)-v^{\mu}\widetilde{F}_{B\mu\nu}\) represents an effective external current that couples to the photon field; it includes an eventual matter current and the contributions that stem from the background electromagnetic fields. The tensors associated with this electromagnetic background are defined in what follows:
\[H_{B\mu\nu} = c_{1}\,F_{B\mu\nu}+c_{2}\,\widetilde{F}_{B\mu\nu}+\frac{g^{2}}{m^{2}}\,{\cal G}_{B}\,\widetilde{F}_{B\mu\nu}\;, \tag{3a}\]
\[Q_{B\mu\nu\kappa\lambda} = d_{1}\,F_{B\mu\nu}\,F_{B\kappa\lambda}+d_{2}\,\widetilde{F}_{B\mu\nu}\,\widetilde{F}_{B\kappa\lambda}+d_{3}\,F_{B\mu\nu}\,\widetilde{F}_{B\kappa\lambda}+d_{3}\,\widetilde{F}_{B\mu\nu}\,F_{B\kappa\lambda}\;. \tag{3b}\]
The axion field was shifted as \(\phi\rightarrow\widetilde{\phi}+\phi_{0}\) in order to eliminate the \(g\,\phi\,{\cal G}_{B}\) term that would otherwise appear in the Lagrangian (2). The coefficients \(c_{1}\), \(c_{2}\), \(d_{1}\), \(d_{2}\) and \(d_{3}\) are evaluated at \({\bf E}\) and \({\bf B}\), as follows:
\[c_{1} = \left.\frac{\partial{\cal L}_{nl}}{\partial{\cal F}_{0}}\right|_{{\bf E},{\bf B}},\ c_{2}=\left.\frac{\partial{\cal L}_{nl}}{\partial{\cal G}_{0}}\right|_{{\bf E},{\bf B}},\ d_{1}=\left.\frac{\partial^{2}{\cal L}_{nl}}{\partial{\cal F}_{0}^{\ 2}}\right|_{{\bf E},{\bf B}},\ d_{2}=\left.\frac{\partial^{2}{\cal L}_{nl}}{\partial{\cal G}_{0}^{2}}\right|_{{\bf E},{\bf B}},\ d_{3}=\left.\frac{\partial^{2}{\cal L}_{nl}}{\partial{\cal F}_{0}\partial{\cal G}_{0}}\right|_{{\bf E},{\bf B}}, \tag{4}\]
and they depend on the EM field magnitude and may also be functions of the space-time coordinates. Following the previous definitions, the background tensors satisfy the properties \(H_{B\mu\nu}=-H_{B\nu\mu}\), whereas \(Q_{B\mu\nu\kappa\lambda}\) is symmetric under the exchange \(\mu\nu\leftrightarrow\kappa\lambda\), and antisymmetric under \(\mu\leftrightarrow\nu\) and \(\kappa\leftrightarrow\lambda\). Note that the current \(J^{\mu}\) couples to the external potential \(A_{B}^{\ \mu}\), but this term and \({\cal L}_{nl}\left({\cal F}_{B},{\cal G}_{B}\right)\) are irrelevant for the field equations in which we are interested. Using the action principle in relation to \(a^{\mu}\), the Lagrangian (2) yields the EM field equations with source \(\bar{J}_{\nu}\),
\[\partial^{\mu}\left[\,c_{1}\,f_{\mu\nu}+c_{2}\,\widetilde{f}_{\mu\nu}-\frac{1}{2}\,Q_{B\mu\nu\kappa\lambda}\,f^{\kappa\lambda}\,\right]+v^{\mu}\,\widetilde{f}_{\mu\nu}=-g\left(\partial^{\mu}\widetilde{\phi}\right)\widetilde{F}_{B\mu\nu}+\bar{J}_{\nu}\;, \tag{5}\]
and the Bianchi identity remains the same for the photon field, namely, \(\partial_{\mu}\widetilde{f}^{\mu\nu}=0\).
The action principle in relation to \(\widetilde{\phi}\) in (2) yields the axion field equation evaluated at the EM background:
\[\left(\Box+m^{2}\right)\widetilde{\phi}=-\frac{1}{2}\,g\,\widetilde{F}_{B\mu\nu}\,f^{\mu\nu}\;. \tag{6}\]
Since we consider a non-linear ED with CPT invariance, the \(c_{2}\)-coefficient of this expansion vanishes, so we can discard it in the previous expressions. Furthermore, for all the non-linear EDs studied in the literature, such as Born-Infeld, ModMax, Euler-Heisenberg, the logarithmic one and others, the \(d_{3}\)-coefficient is also null when a purely magnetic background field is fixed. These considerations simplify the results that we will obtain in the sections ahead. The usual axionic ED coupled to the CFJ term is recovered when \(d_{1}\to 0\), \(d_{2}\to 0\) and \(c_{1}\to 1\) in all the cases of non-linear ED mentioned previously.

## III The dispersion relations in the presence of a uniform magnetic field

In this Section, we obtain the dispersion relations of the axion and photon fields in a uniform magnetic background. Thus, we can set \({\bf E}=0\) in the equations of Section II. Thereby, from now on, all the coefficients defined in (4) do not depend on the space-time coordinates and are functions only of the magnetic vector \({\bf B}\). We start the description of the propagating field with the equations written in terms of \({\bf e}\) and \({\bf b}\), in the presence of a constant and uniform magnetic background field. For the analysis of the free wave propagation, we consider just the linear terms in \({\bf e}\), \({\bf b}\) and \(\widetilde{\phi}\), as well as the equations with no source, \(\mathbf{\bar{J}}=\mathbf{0}\) and \(\bar{\rho}=0\). Under these conditions, the electrodynamic equations in terms of the propagating fields read as follows:
\[\nabla\cdot\mathbf{D} = \mathbf{v}\cdot\mathbf{b}\;, \tag{7a}\]
\[\nabla\times\mathbf{e}+\frac{\partial\mathbf{b}}{\partial t} = \mathbf{0}\;, \tag{7b}\]
\[\nabla\cdot\mathbf{b} = 0\;, \tag{7c}\]
\[\nabla\times\mathbf{H}+\mathbf{v}\times\mathbf{e} = v^{0}\,\mathbf{b}+\frac{\partial\mathbf{D}}{\partial t}\;, \tag{7d}\]
where the vectors \(\mathbf{D}\) and \(\mathbf{H}\) are, respectively, given by
\[\mathbf{D} = c_{1}\,\mathbf{e}+d_{2}\,\mathbf{B}\left(\mathbf{B}\cdot\mathbf{e}\right)+g\,\widetilde{\phi}\,\mathbf{B}\;, \tag{8a}\]
\[\mathbf{H} = c_{1}\,\mathbf{b}-d_{1}\,\mathbf{B}\left(\mathbf{B}\cdot\mathbf{b}\right)\;. \tag{8b}\]
The scalar field equation (6) in terms of the magnetic background field leads to
\[\left(\Box+m^{2}\right)\widetilde{\phi}=g\left(\mathbf{e}\cdot\mathbf{B}\right)\;. \tag{9}\]
We substitute the plane wave solutions of \(\mathbf{e}\), \(\mathbf{b}\) and \(\widetilde{\phi}\) in the field equations (7a)-(7d) and (9).
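For plane waves \(\mathbf{e},\,\mathbf{b},\,\widetilde{\phi}\propto e^{\,i(\mathbf{k}\cdot\mathbf{x}-\omega t)}\), this substitution is immediate: Faraday's law (7b) and the axion equation (9) fix the magnetic and axion amplitudes in terms of the electric one,
\[\mathbf{b}_{0}=\frac{\mathbf{k}\times\mathbf{e}_{0}}{\omega}\;,\qquad\widetilde{\phi}_{0}=\frac{g\left(\mathbf{e}_{0}\cdot\mathbf{B}\right)}{\mathbf{k}^{2}-\omega^{2}+m^{2}}\;,\]
the latter being the origin of the axion pole that appears in the function \(\xi(\omega,\mathbf{k})\) defined below.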
Conveniently eliminating the amplitudes of \(\mathbf{b}\) and \(\widetilde{\phi}\) in terms of the electric field amplitude, the wave equation in momentum space reads:
\[M^{ij}(\omega,\mathbf{k})\,e_{0}^{\;j}=0\;, \tag{10}\]
where \(e_{0}^{\;j}\,(j=1,2,3)\) are the components of the electric amplitude, and the matrix elements \(M^{ij}\) are given by
\[M^{ij}(\omega,\mathbf{k}) = a\,\delta^{ij}+b\,k^{i}\,k^{j}+c\,B^{i}\,B^{j}+d\left(\mathbf{B}\cdot\mathbf{k}\right)\left(B^{i}\,k^{j}+B^{j}\,k^{i}\right)-i\,\epsilon^{ijm}\,\left(\,v^{0}\,k^{m}\,-\,\omega\,v^{m}\,\right)\;, \tag{11}\]
whose coefficients \(a\), \(b\), \(c\) and the function \(\xi\) are defined by
\[a = \omega^{2}-\mathbf{k}^{2}+d\,\left(\mathbf{k}\times\mathbf{B}\right)^{2}\;, \tag{12a}\]
\[b = 1-d\,\mathbf{B}^{2}\;, \tag{12b}\]
\[c = \xi(\omega,\mathbf{k})\,\omega^{2}-d\,\mathbf{k}^{2}\;, \tag{12c}\]
\[\xi(\omega,\mathbf{k}) = f+\frac{g_{a}^{2}}{\mathbf{k}^{2}-\omega^{2}+m^{2}}\;, \tag{12d}\]
in which \(d:=d_{1}/c_{1}\), \(f:=d_{2}/c_{1}\) and \(g_{a}:=\sqrt{g^{2}/c_{1}}\) for notational simplicity. Thus, the non-linearity evaluated on the magnetic background is manifested in the parameters \(d\) and \(f\), and \(g_{a}\) is the axion coupling constant corrected by the coefficient \(c_{1}\). Notice that the \(b\)-coefficient depends only on the magnetic background, but the other ones depend on the \(\omega\)-frequency and on the \(\mathbf{k}\)-wave vector. Going back to the expressions of \(\mathbf{D}\) and \(\mathbf{H}\) in (8a)-(8b) with the plane wave solutions, the components of \(\mathbf{D}\) and \(\mathbf{H}\) in terms of the electric and magnetic amplitudes can be written as
\[D_{i}=\epsilon_{ij}(\mathbf{k},\omega)\,e_{j}\quad\text{and}\quad H_{i}=(\mu_{ij})^{-1}\,b_{j}\;, \tag{13}\]
where \(\epsilon_{ij}\) and \((\mu_{ij})^{-1}\) are the permittivity and inverse permeability tensors, respectively,
\[\epsilon_{ij}(\mathbf{k},\omega) = c_{1}\,\delta_{ij}+c_{1}\,\xi(\mathbf{k},\omega)\,B_{i}\,B_{j}\;, \tag{14a}\]
\[(\mu_{ij})^{-1} = c_{1}\,\delta_{ij}-d_{1}\,B_{i}\,B_{j}\;. \tag{14b}\]
The permeability tensor is obtained by computing the inverse of (14b):
\[\mu_{ij}=\frac{1}{c_{1}}\frac{\left(1-d\,\mathbf{B}^{2}\right)\delta_{ij}+d\,B_{i}\,B_{j}}{1-d\,\mathbf{B}^{2}}\;. \tag{15}\]
Notice that the electric permittivity depends on the \(\omega\)-frequency and the \(\mathbf{k}\)-wave vector due to the axion coupling \(g\neq 0\). Also, the definitions of these tensors do not include the components of the CFJ 4-vector \(v^{\mu}\); thereby, this LSV background does not contribute to the material properties encoded in these tensors. For reasons of simplicity, we choose a space-like CFJ 4-vector, _i.e._, \(v^{0}=0\) in the matrix elements \(M^{ij}\) of (11). The dispersion relations come from the non-trivial solutions of the wave equation (10).
The condition for a non-trivial solution is \(\det M^{ij}=0\), which, for the space-like CFJ case, reduces to the \(\omega\)-polynomial equation:
\[a^{3}+a^{2}\,\left[\,\mathbf{k}^{2}+\omega^{2}\,\xi(\omega,\mathbf{k})\,\mathbf{B}^{2}-2d\,(\mathbf{B}\times\mathbf{k})^{2}\,\right]+a\left[(1-d\,\mathbf{B}^{2})\,\xi(\omega,\mathbf{k})\,(\mathbf{B}\times\mathbf{k})^{2}\,\omega^{2}-d\,\mathbf{k}^{2}(\mathbf{B}\times\mathbf{k})^{2}+d^{2}(\mathbf{B}\times\mathbf{k})^{4}-\omega^{2}\,\mathbf{v}^{2}\right]-\,\omega^{2}\left[(1-d\,\mathbf{B}^{2})\,(\mathbf{k}\cdot\mathbf{v})^{2}-d\,\mathbf{k}^{2}\,(\mathbf{B}\cdot\mathbf{v})^{2}+\omega^{2}\,\xi(\omega,\mathbf{k})\,(\mathbf{B}\cdot\mathbf{v})^{2}+2d\,(\mathbf{B}\cdot\mathbf{k})\,(\mathbf{k}\cdot\mathbf{v})\,(\mathbf{B}\cdot\mathbf{v})\,\right]=0\;. \tag{16}\]
The solutions to equation (16) are hard to work out in view of the coefficients (12a)-(12d). For simplicity, we consider the two cases below:
* The case of the vectors \(\mathbf{B}\), \(\mathbf{k}\) and \(\mathbf{v}\) mutually perpendicular: \(\mathbf{B}\cdot\mathbf{k}=\mathbf{B}\cdot\mathbf{v}=\mathbf{k}\cdot\mathbf{v}=0\). Considering this condition, the equation (16) is reduced to:
\[\omega_{\perp}^{2}\left[\omega_{\perp}^{2}-k^{2}+d\,B^{2}\,k^{2}\right]\left[\left(1+\xi\,B^{2}\right)\omega_{\perp}^{2}-k^{2}-v^{2}\right]=0\,, \tag{17}\]
where we denote the perpendicular frequency by \(\omega_{\perp}\), and \(B\), \(k\) and \(v\) are the magnitudes of the previous vectors. The first solution is \(\omega_{\perp}=0\), and the non-trivial solutions from (17) are given by
\[\omega_{1\perp}(k) = k\,\sqrt{1-d\,B^{2}}\;, \tag{18a}\]
\[\omega_{2\perp}(k) = \left\{\frac{2k^{2}+m^{2}+v^{2}+g_{a}^{2}\,B^{2}+f\,B^{2}\left(k^{2}+m^{2}\right)}{2\left(1+f\,B^{2}\right)}-\frac{\sqrt{\left(f\,B^{2}\left(k^{2}+m^{2}\right)+g_{a}^{2}\,B^{2}+2k^{2}+m^{2}+v^{2}\right)^{2}-4\left(1+f\,B^{2}\right)\left(k^{2}+m^{2}\right)\left(k^{2}+v^{2}\right)}}{2\left(1+f\,B^{2}\right)}\right\}^{1/2}, \tag{18b}\]
\[\omega_{3\perp}(k) = \left\{\frac{2k^{2}+m^{2}+v^{2}+g_{a}^{2}\,B^{2}+f\,B^{2}\left(k^{2}+m^{2}\right)}{2\left(1+f\,B^{2}\right)}+\frac{\sqrt{\left(f\,B^{2}\left(k^{2}+m^{2}\right)+g_{a}^{2}\,B^{2}+2k^{2}+m^{2}+v^{2}\right)^{2}-4\left(1+f\,B^{2}\right)\left(k^{2}+m^{2}\right)\left(k^{2}+v^{2}\right)}}{2\left(1+f\,B^{2}\right)}\right\}^{1/2}. \tag{18c}\]
The analysis of the limits, to establish comparisons with the results in the literature, is immediate. The limits \(d\to 0\), \(f\to 0\) and \(c_{1}=1\) yield the dispersion relations of the axionic ED coupled to the CFJ term in the presence of an external magnetic field. Furthermore, taking only \(g_{a}\to 0\), the dispersion relations reduce to \(\omega_{2\perp}(k)=\sqrt{(k^{2}+v^{2})(1+f\,B^{2})^{-1}}\) and \(\omega_{3\perp}(k)=\sqrt{k^{2}+m^{2}}\) for \(m>v\). Note that \(\omega_{2\perp}(k)\) exhibits the characteristic effect of CFJ, in which the Lorentz-breaking parameter endows the photon with a small mass. The usual Maxwell limit reduces all the frequencies to \(\omega_{1\perp}(k)=\omega_{2\perp}(k)=k\) and \(\omega_{3\perp}(k)=\sqrt{k^{2}+m^{2}}\). The refractive (perpendicular) indices are defined by
\[n_{i\perp}({\bf k})=\frac{|{\bf k}|}{\omega_{i\perp}({\bf k})}\;,\;(i=1,2,3)\;, \tag{19}\]
in which the frequencies (18a)-(18c) must be substituted.
* The case in which \(\mathbf{k}\) is parallel to \(\mathbf{B}\), with both vectors perpendicular to \(\mathbf{v}\): \(\mathbf{B}\times\mathbf{k}=\mathbf{0}\) and \(\mathbf{B}\cdot\mathbf{v}=\mathbf{k}\cdot\mathbf{v}=0\). Under this condition, the equation (16) is reduced to:
\[\left(\omega_{\parallel}^{2}-k^{2}\right)\left[\left(\omega_{\parallel}^{2}-k^{2}\right)^{2}+\left(\omega_{\parallel}^{2}-k^{2}\right)\left(k^{2}+\xi(\omega_{\parallel},\mathbf{k})\,\omega_{\parallel}^{2}\,B^{2}\right)-\omega_{\parallel}^{2}\,v^{2}\right]=0\;, \tag{20}\]
where \(\omega_{\parallel}\) denotes the parallel frequency. The non-trivial solutions of (20) are given by
\[\omega_{1\parallel}(k) = k\;, \tag{21a}\]
\[\omega_{2\parallel}(k) = \left\{\frac{\left(2k^{2}+m^{2}\right)\left(1+f\,B^{2}\right)+v^{2}+g_{a}^{2}\,B^{2}}{2\left(1+f\,B^{2}\right)}-\frac{\sqrt{\left[\,g_{a}^{2}\,B^{2}+m^{2}(1+f\,B^{2})\,\right]^{2}+2v^{2}\,g_{a}^{2}\,B^{2}-2v^{2}\,m^{2}\left(1+f\,B^{2}\right)+v^{4}}}{2\left(1+f\,B^{2}\right)}\right\}^{1/2}\;, \tag{21b}\]
\[\omega_{3\parallel}(k) = \left\{\frac{\left(2k^{2}+m^{2}\right)\left(1+f\,B^{2}\right)+v^{2}+g_{a}^{2}\,B^{2}}{2\left(1+f\,B^{2}\right)}+\frac{\sqrt{\left[\,g_{a}^{2}\,B^{2}+m^{2}(1+f\,B^{2})\,\right]^{2}+2v^{2}\,g_{a}^{2}\,B^{2}-2v^{2}\,m^{2}\left(1+f\,B^{2}\right)+v^{4}}}{2\left(1+f\,B^{2}\right)}\right\}^{1/2}\;. \tag{21c}\]
The first solution (21a) is the usual photon DR, due to \({\bf B}\times{\bf k}={\bf 0}\) in the \(a\)-parameter in (12a). The limits \(f\to 0\) and \(c_{1}=1\) also recover the DRs of the axionic ED coupled to the CFJ term in the presence of the external magnetic field \(B\). In the limit \(g_{a}\to 0\), when the axion is decoupled from the CFJ ED, the DRs reduce to \(\omega_{2\parallel}(k)=\sqrt{k^{2}+v^{2}\left(1+f\,B^{2}\right)^{-1}}\) and \(\omega_{3\parallel}(k)=\sqrt{k^{2}+m^{2}}\) for \(m>v\). This confirms the same results recovered in the case (a). The corresponding refractive (parallel) indices are defined by
\[n_{i\parallel}({\bf k})=\frac{|{\bf k}|}{\omega_{i\parallel}({\bf k})}\;,\;(i=1,2,3)\;, \tag{22}\]
where we must substitute the DRs (21a)-(21c). Notice that, in both cases (a) and (b), the refractive index of the medium depends on the modulus \(|{\bf k}|\) and, consequently, on the wavelength \(\lambda=2\pi/|{\bf k}|\). To close this section, in possession of the set of dispersion relations (18) and (21), we recall one of the motivations for this work, namely, to keep track of how the three different physical scenarios we bring together in the action of eq. (1) interfere with one another, which is manifested by means of the terms coupling the parameters of the different scenarios. Keeping in mind that the coefficients \(c_{1}\), \(f\) and \(d\) express the non-linearity and that \(g_{a}\) incorporates the axion-photon coupling and the coefficient \(c_{1}\), the presence of the denominator \((1+f\,B^{2})\), common to all frequency solutions, in combination with the terms in \(m^{2}\), \(v^{2}\), \(g_{a}^{2}B^{2}\), \(f\,B^{2}m^{2}\) and \(f\,B^{2}m^{2}v^{2}\), as they appear in eqs. (18a)-(18c) and (21a)-(21c), shows in an explicit way how the three different pieces of physics mix among themselves to produce tiny effects in optical quantities like phase and group velocities and refraction indices. The explicit forms of the coefficients in terms of the non-linear electrodynamic models of Euler-Heisenberg, Born-Infeld and ModMax will be shown in the next section, namely, by equations (29), (35) and (41), respectively.

## IV The birefringence phenomenon

Birefringence is an optical property of an anisotropic medium expressed by the dependence of the refractive index on the polarization and direction of propagation of an electromagnetic wave. Just to recall, the polarization conventionally refers to the configuration of the electric field of the wave.
However, in the previous Section, we have worked out refraction indices associated with the propagation of the waves in two situations, perpendicular and parallel to the background magnetic field (\({\bf k}\cdot{\bf B}=0\) and \({\bf k}\cdot{\bf B}=|{\bf k}||{\bf B}|\), respectively), with no reference to the polarization established by the electric field. Eqs. (19) and (22) explicitly show how the non-linearity (manifested through the external magnetic field), the axion parameters and the LSV vector interfere with one another in the expressions for the perpendicular and parallel refraction indices. And we would like to stress that we are here adopting the point of view that the phenomenon of birefringence manifests itself by the difference between the refractive indices of eqs. (19) and (22), as defined below,
\[\Delta n_{ij}({\bf k})=n_{i\parallel}({\bf k})-n_{j\perp}({\bf k})\;,\;(i,j=1,2,3)\;, \tag{23}\]
where we are contemplating the cases in which \(i=j\) and \(i\neq j\); in general, \(\Delta n_{ij}\neq 0\), and it depends on the wavelength, which characterizes dispersive propagation. Notice also that \(\Delta n_{ij}\neq\Delta n_{ji}\) according to the definition (23). The difference between the refraction indices in these situations is exclusively due to the choice of the wave propagation direction with respect to the external \({\bf B}\)-field. Substituting the results from the previous Section, the variations of the refractive index in the case of \(i=j\) read
\[\Delta n_{11} = 1-\frac{1}{\sqrt{1-d\,B^{2}}}\;, \tag{24a}\]
\[\Delta n_{22}(k) \simeq 1-\sqrt{1+f\,B^{2}}-\frac{g_{a}^{2}B^{2}}{2}\frac{\sqrt{1+f\,B^{2}}}{m^{2}+f\,B^{2}\left(k^{2}+m^{2}\right)}+\frac{v^{2}}{2k^{2}}\left(\sqrt{1+f\,B^{2}}-\frac{1}{1+f\,B^{2}}\right)\;, \tag{24b}\]
\[\Delta n_{33}(k) \simeq \frac{g_{a}^{2}\,B^{2}}{(k^{2}+m^{2})^{3/2}}\frac{k^{2}}{(1+f\,B^{2})\,m^{2}-v^{2}}\times\frac{m^{2}-v^{2}}{m^{2}-v^{2}+f\,B^{2}(k^{2}+m^{2})}\;, \tag{24c}\]
where we have considered the weak-coupling regime, in which \(g_{a}\) is very small in comparison with the inverse of the magnetic background (\(g_{a}^{2}\,B\ll 1\)), and a wave number (\(k=2\pi/\lambda\)) much greater than the CFJ parameter, _i.e._, \(k\gg v\). The birefringence effects for \(i\neq j\) read as follows:
\[\Delta n_{12}(k) \simeq 1-k\,\sqrt{\frac{1+f\,B^{2}}{k^{2}+v^{2}}}\;, \tag{25a}\]
\[\Delta n_{13}(k) \simeq 1-\frac{k}{\sqrt{k^{2}+m^{2}}}+\frac{k}{\sqrt{k^{2}+m^{2}}}\frac{g_{a}^{2}\,B^{2}/2}{f\,B^{2}\,(k^{2}+m^{2})+m^{2}-v^{2}}\;, \tag{25b}\]
\[\Delta n_{23}(k) \simeq \frac{k}{\sqrt{k^{2}+\frac{v^{2}}{1+f\,B^{2}}}}-\frac{k}{\sqrt{k^{2}+m^{2}}}\;, \tag{25c}\]
\[\Delta n_{21}(k) \simeq \frac{k}{\sqrt{k^{2}+\frac{v^{2}}{1+f\,B^{2}}}}-\frac{1}{\sqrt{1-d\,B^{2}}}\;, \tag{25d}\]
\[\Delta n_{31}(k) \simeq \frac{k}{\sqrt{k^{2}+m^{2}}}-\frac{1}{\sqrt{1-d\,B^{2}}}\;, \tag{25e}\]
\[\Delta n_{32}(k) \simeq \frac{k}{\sqrt{k^{2}+m^{2}}}-\frac{k}{\sqrt{k^{2}+\frac{v^{2}}{1+f\,B^{2}}}}\;. \tag{25f}\]
Turning off the magnetic background, the birefringence effect disappears in the results (24a)-(25f). Only the expression (24a) does not depend on the wavelength.
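The closed-form frequencies and index variations above are straightforward to evaluate numerically; the sketch below implements the perpendicular branches (18a)-(18c) and the corresponding indices (19). All parameter values in it are illustrative placeholders in arbitrary natural units and do not correspond to any specific experimental setup.

```python
import numpy as np

# Numerical sketch of the perpendicular dispersion relations (18a)-(18c)
# and refractive indices (19); placeholder parameter values only.

def omega_perp(k, B, m, v, ga, f, d):
    """Return (w1, w2, w3) from eqs. (18a)-(18c)."""
    T = 2*k**2 + m**2 + v**2 + ga**2 * B**2 + f * B**2 * (k**2 + m**2)
    disc = np.sqrt(T**2 - 4*(1 + f*B**2)*(k**2 + m**2)*(k**2 + v**2))
    w1 = k * np.sqrt(1 - d * B**2)
    w2 = np.sqrt((T - disc) / (2*(1 + f*B**2)))
    w3 = np.sqrt((T + disc) / (2*(1 + f*B**2)))
    return w1, w2, w3

k, B, m, v, ga = 1.0, 0.3, 0.1, 0.01, 0.05   # illustrative values
f = d = 0.02                                  # generic non-linear coefficients

for i, w in enumerate(omega_perp(k, B, m, v, ga, f, d), start=1):
    print(f"w{i}_perp = {w:.6f}, n{i}_perp = {k / w:.6f}")
```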
For the usual Maxwell ED coupled to the axion and the CFJ term, the limits \(c_{1}\to 1\), \(d\to 0\) and \(f\to 0\) give the results below:
\[\Delta n_{11} = 0\;, \tag{26a}\]
\[\Delta n_{22} \simeq -\frac{g^{2}\,B^{2}}{2m^{2}}\;, \tag{26b}\]
\[\Delta n_{33} \simeq \frac{g^{2}\,B^{2}}{(k^{2}+m^{2})^{3/2}}\frac{k^{2}}{m^{2}-v^{2}}\;, \tag{26c}\]
\[\Delta n_{12} = -\Delta n_{21}\simeq 1-\frac{k}{\sqrt{k^{2}+v^{2}}}\;, \tag{26d}\]
\[\Delta n_{13} = -\Delta n_{31}\simeq 1-\frac{k}{\sqrt{k^{2}+m^{2}}}\;, \tag{26e}\]
\[\Delta n_{23} = -\Delta n_{32}\simeq\frac{k}{\sqrt{k^{2}+v^{2}}}-\frac{k}{\sqrt{k^{2}+m^{2}}}\;. \tag{26f}\]
In the limit \(g_{a}\to 0\), for which we have a non-linear ED coupled to the CFJ term, the results (24a)-(25f) reduce to
\[\Delta n_{11} = 1-\frac{1}{\sqrt{1-d\,B^{2}}}\;, \tag{27a}\]
\[\Delta n_{22}(k) \simeq 1-\sqrt{1+f\,B^{2}}+\frac{v^{2}}{2k^{2}}\left(\sqrt{1+f\,B^{2}}-\frac{1}{1+f\,B^{2}}\right)\;, \tag{27b}\]
\[\Delta n_{33}(k) = 0\;, \tag{27c}\]
\[\Delta n_{12}(k) \simeq 1-k\,\sqrt{\frac{1+f\,B^{2}}{k^{2}+v^{2}}}\;, \tag{27d}\]
\[\Delta n_{13}(k) \simeq 1-\frac{k}{\sqrt{k^{2}+m^{2}}}\;, \tag{27e}\]
\[\Delta n_{23}(k) \simeq \frac{k}{\sqrt{k^{2}+\frac{v^{2}}{1+f\,B^{2}}}}-\frac{k}{\sqrt{k^{2}+m^{2}}}\;, \tag{27f}\]
\[\Delta n_{21}(k) \simeq \frac{k}{\sqrt{k^{2}+\frac{v^{2}}{1+f\,B^{2}}}}-\frac{1}{\sqrt{1-d\,B^{2}}}\;, \tag{27g}\]
\[\Delta n_{31}(k) \simeq \frac{k}{\sqrt{k^{2}+m^{2}}}-\frac{1}{\sqrt{1-d\,B^{2}}}\;, \tag{27h}\]
\[\Delta n_{32}(k) \simeq \frac{k}{\sqrt{k^{2}+m^{2}}}-\frac{k}{\sqrt{k^{2}+\frac{v^{2}}{1+f\,B^{2}}}}\;. \tag{27i}\]
In this case, the non-linearity plays a key role in the birefringence phenomenon. We shall discuss birefringence in what follows by contemplating three non-linear electrodynamic models: Euler-Heisenberg, Born-Infeld and ModMax.
1. The Euler-Heisenberg ED is described by the Lagrangian:
\[\mathcal{L}_{EH}(\mathcal{F},\mathcal{G})=\mathcal{F}+\frac{2\alpha^{2}}{45m_{e}^{4}}\left(\,4\,\mathcal{F}^{2}\,+\,7\,\mathcal{G}^{2}\,\right)\;, \tag{28}\]
where \(\alpha=e^{2}\simeq(137)^{-1}=0.00729\) is the fine structure constant, and \(m_{e}=0.5\,\)MeV is the electron mass. Taking this Lagrangian and applying the expansion presented in Section (II), the coefficients read as below:
\[d^{EH}\simeq\frac{16\alpha^{2}}{45m_{e}^{4}}\quad\text{and}\quad f^{EH}\simeq\frac{28\alpha^{2}}{45m_{e}^{4}}\;, \tag{29}\]
for a weak magnetic field. Substituting these coefficients in (24a)-(25f), we obtain
\[\Delta n_{11}^{(EH)}\simeq -\frac{8\alpha^{2}B^{2}}{45m_{e}^{4}}\;, \tag{30a}\]
\[\Delta n_{22}^{(EH)}\simeq -\frac{14\alpha^{2}B^{2}}{45m_{e}^{4}}-\frac{g^{2}\,B^{2}}{2m^{2}}\;, \tag{30b}\]
\[\Delta n_{33}^{(EH)}\simeq \frac{g^{2}\,B^{2}}{(k^{2}+m^{2})^{3/2}}\frac{k^{2}}{m^{2}-v^{2}}\;, \tag{30c}\]
\[\Delta n_{12}^{(EH)}\simeq -\frac{14\alpha^{2}B^{2}}{45m_{e}^{4}}+\frac{v^{2}}{2k^{2}}-\frac{g^{2}B^{2}}{2m^{2}}\;, \tag{30d}\]
\[\Delta n_{21}^{(EH)}\simeq -\frac{8\alpha^{2}B^{2}}{45m_{e}^{4}}+\frac{v^{2}}{2k^{2}}\;, \tag{30e}\]
\[\Delta n_{13}^{(EH)}\simeq -\Delta n_{31}^{(EH)}=1-\frac{k}{\sqrt{k^{2}+m^{2}}}\;, \tag{30f}\]
\[\Delta n_{23}^{(EH)}\simeq -\Delta n_{32}^{(EH)}=\frac{k}{\sqrt{k^{2}+v^{2}}}-\frac{k}{\sqrt{k^{2}+m^{2}}}\;. \tag{30g}\]
Using the parameters previously defined, the solution (30a) yields the numeric value
\[\frac{|\Delta n_{11}^{(EH)}|}{B^{2}}\simeq\frac{8\alpha^{2}}{45m_{e}^{4}}=69.4\times 10^{-24}\,\mathrm{T}^{-2}\;, \tag{31}\]
which is of the same order as the result presented by the PVLAS-FE experiment for vacuum magnetic birefringence, _i.e._, \(\Delta n_{PVLAS-FE}/B^{2}=(19\pm 27)\times 10^{-24}\,\mathrm{T}^{-2}\) [22]. The solution \(\Delta n_{13}^{(EH)}\) is finite in both the limits \(B\to 0\) and \(B\to\infty\). Turning off the magnetic background, the axion mass and the \(v\)-parameter contribute to the birefringence as follows:
\[\Delta n_{13}^{(EH)}\stackrel{{B\to 0}}{{\simeq}}\left\{\begin{array}{l}1-\frac{k}{\sqrt{k^{2}+m^{2}}}\simeq\frac{m^{2}}{2k^{2}}\;,\;\mathrm{if}\;\;m>v\;,\\ 1-\frac{k}{\sqrt{k^{2}+v^{2}}}\simeq\frac{v^{2}}{2k^{2}}\;,\;\mathrm{if}\;\;v>m\;.\end{array}\right. \tag{32}\]
In the case of \(\Delta n_{21}^{(EH)}\), the CFJ \(v\)-parameter contributes to the birefringence when \(B\to 0\); from (30e),
\[\Delta n_{21}^{(EH)}\stackrel{{B\to 0}}{{\simeq}}\frac{v^{2}}{2k^{2}}\;. \tag{33}\]
2. The Born-Infeld ED is defined by the Lagrangian:
\[\mathcal{L}_{BI}(\mathcal{F},\mathcal{G})=\beta^{2}\left[\,1-\sqrt{1-\frac{2\,\mathcal{F}}{\beta^{2}}-\frac{\mathcal{G}^{2}}{\beta^{4}}}\,\right]\;, \tag{34}\]
where \(\beta\) is the Born-Infeld parameter, with squared-mass dimension. For a weak magnetic background, the coefficients of the expansion reduce to
\[d^{BI}\simeq\frac{1}{\beta^{2}}\quad\text{and}\quad f^{BI}\simeq\frac{1}{\beta^{2}}\;. \tag{35}\]
Substituting these coefficients in (24a)-(24c), we obtain the Born-Infeld variations of the refractive index, (36a)-(36c); in particular,
\[\Delta n_{11}^{(BI)}=1-\frac{1}{\sqrt{1-B^{2}/\beta^{2}}}\;. \tag{36a}\]
In the limit \(B\to 0\), the birefringence vanishes in (36a)-(36c). For a weak magnetic field, (36a) reduces to
\[\Delta n_{11}^{(BI)}\simeq-\frac{B^{2}}{2\beta^{2}}\;. \tag{37a}\]
Using that \(\sqrt{\beta}=16\) MeV, the solution (37a) has the numeric value
\[\frac{|\Delta n^{(BI)}_{11}|}{B^{2}}\simeq 3.2\times 10^{-24}\,{\rm T}^{-2}\;, \tag{38}\]
which is of the same order as the PVLAS-FE experiment. When \(B\to 0\), the variation \(\Delta n^{(BI)}_{23}\) depends only on the axion mass and the \(v\)-parameter:
\[\Delta n^{(BI)}_{23}\stackrel{{B\to 0}}{{\simeq}}\frac{\sqrt{2}\,k}{\sqrt{2k^{2}+m^{2}+v^{2}-|m^{2}-v^{2}|}}-\frac{\sqrt{2}\,k}{\sqrt{2k^{2}+m^{2}+v^{2}+|m^{2}-v^{2}|}}\simeq\frac{|m^{2}-v^{2}|}{2k^{2}}\;, \tag{39}\]
if \(k^{2}\gg(m^{2},v^{2})\).
3. The Modified Maxwell (ModMax) ED is set by the Lagrangian
\[{\cal L}_{MM}({\cal F},{\cal G})=\cosh\gamma\,{\cal F}+\sinh\gamma\sqrt{{\cal F}^{2}+{\cal G}^{2}}\;, \tag{40}\]
where \(\gamma\) is a real and positive parameter of this theory. In the limit \(\gamma\to 0\), the ModMax Lagrangian reduces to the Maxwell ED. This non-linear ED has been well motivated in the literature due to its conformal invariance. Thus, it is the only non-linear ED that preserves both the duality and the conformal symmetries in the same Lagrangian [37].
The coefficients of the expansion in the magnetic background, in this case, are
\[d^{MM}=0\quad\mbox{and}\quad f^{MM}=2\,e^{\gamma}\,\frac{\sinh\gamma}{B^{2}}\;. \tag{41}\]
Thus, the variations of the refractive index for a weak axion coupling constant read as follows:
\[\Delta n^{(MM)}_{11} = 0\;, \tag{42a}\]
\[\Delta n^{(MM)}_{22} \simeq \frac{k\,e^{\gamma}}{\sqrt{e^{2\gamma}\,k^{2}+v^{2}}}-\frac{k\,e^{\gamma}}{\sqrt{k^{2}+v^{2}}}+\frac{1}{2}\frac{k}{\sqrt{k^{2}+v^{2}}}\frac{g^{2}\,B^{2}\,e^{2\gamma}}{k^{2}+v^{2}-e^{2\gamma}\,(k^{2}+m^{2})}\;, \tag{42b}\]
\[\Delta n^{(MM)}_{33} \simeq -\frac{g^{2}\,B^{2}}{2(k^{2}+m^{2})^{3/2}}\frac{m^{2}-v^{2}}{e^{2\gamma}\,m^{2}-v^{2}}\times\frac{e^{\gamma}\,k^{3}}{k^{2}+v^{2}-e^{2\gamma}\,(k^{2}+m^{2})}\;, \tag{42c}\]
\[\Delta n^{(MM)}_{12} \simeq 1-e^{\gamma}\;, \tag{42d}\]
\[\Delta n^{(MM)}_{13} \simeq \Delta n^{(MM)}_{23}\simeq 1-\frac{k}{\sqrt{k^{2}+m^{2}}}\;, \tag{42e}\]
\[\Delta n^{(MM)}_{21} \simeq -e^{-2\gamma}\,\frac{v^{2}}{2k^{2}}\;, \tag{42f}\]
\[\Delta n^{(MM)}_{32} \simeq \frac{k}{\sqrt{k^{2}+m^{2}}}-e^{\gamma}\;. \tag{42g}\]
The results (26a)-(26f) are also recovered in the limit \(\gamma\to 0\). Notice that, with \(\gamma\neq 0\), the birefringence remains in the second solution (42b) when \(B\to 0\):
\[\Delta n^{(MM)}_{22}\simeq\frac{k\,e^{\gamma}}{\sqrt{e^{2\gamma}\,k^{2}+v^{2}}}-\frac{k\,e^{\gamma}}{\sqrt{k^{2}+v^{2}}}\;. \tag{43}\]
This particular result corresponds to the case in which the ModMax ED is added to the CFJ term without the presence of the axion. The result (42f) shows the birefringence solution that depends directly on the CFJ \(v\)-parameter, and it goes to zero when \(v\to 0\). When the magnetic field is null, the solution \(\Delta n^{(MM)}_{21}\) is
\[\Delta n^{(MM)}_{21}\stackrel{{B\to 0}}{{=}}-1+\frac{\sqrt{2}\,k}{\sqrt{2k^{2}+m^{2}+e^{-2\gamma}\,v^{2}-e^{-2\gamma}\,|m^{2}e^{2\gamma}-v^{2}|}}\;, \tag{44}\]
which depends on the conditions \(m\,e^{\gamma}>v\) and \(m\,e^{\gamma}<v\):
\[\Delta n^{(MM)}_{21} = -1+\frac{k}{\sqrt{k^{2}+m^{2}}}\simeq-\frac{m^{2}}{2k^{2}}\;, \tag{45a}\]
\[\Delta n^{(MM)}_{21} = -1+\frac{k}{\sqrt{k^{2}+e^{-2\gamma}\,v^{2}}}\simeq-e^{-2\gamma}\,\frac{v^{2}}{2k^{2}}\;, \tag{45b}\]
respectively. This result confirms (42f): when the CFJ \(v\)-parameter predominates over the axion mass, the birefringence is controlled by \(v\) and is null in the limit \(v\to 0\). In the case of \(\Delta n^{(MM)}_{23}\), the birefringence is null in an intense magnetic field. The variation \(\Delta n^{(MM)}_{31}\) is finite in both the limits \(B\to 0\) and \(B\to\infty\). When the magnetic background is intense, \(\Delta n^{(MM)}_{31}=-1\), which does not depend on any parameter of the theory.

## V Conclusions and perspectives

We propose a general non-linear electrodynamics coupled to a scalar axion, to which we adjoin the Carroll-Field-Jackiw (CFJ) term. We expand the Lagrangian of the model around a uniform electromagnetic background field up to second order in the photon field. The CFJ term introduces a background 4-vector \(v^{\mu}=(v^{0},{\bf v})\) that, consequently, breaks the Lorentz symmetry of the theory. The case with only a uniform magnetic background field (\({\bf B}\)) is analyzed, and the properties of the wave propagation are discussed. Thereby, we calculate the dispersion relations of the model for a space-like (\(v^{0}=0\)) CFJ term. The wave propagation is affected by the three vectors \({\bf B}\), \({\bf k}\) (wave vector) and \({\bf v}\).
The dispersion relations are obtained for two cases: (a) when \({\bf B}\), \({\bf k}\) and \({\bf v}\) are mutually perpendicular, and (b) when \(\mathbf{v}\) is perpendicular to \(\mathbf{B}\) and \(\mathbf{k}\), but \(\mathbf{B}\) and \(\mathbf{k}\) are parallel vectors. These results allow us to define the refractive index of this medium and, subsequently, to discuss the birefringence phenomena under these conditions. Since there are three different solutions for the dispersion relations, we discuss the possible cases of birefringence, in which the variation of the refractive index in the medium is \(\Delta n_{ij}\), with \(i,j=1,2,3\). We apply the birefringence results to three cases of non-linear ED well known in the literature: Euler-Heisenberg, Born-Infeld, and the ModMax ED. When the non-linearity is null, the birefringence effect emerges due to the axion coupling with the magnetic background. In some situations, when the magnetic field is turned off, the birefringence is due to the CFJ parameter, the axion mass and the parameter of the non-linear ED. One of the solutions of the Euler-Heisenberg ED exhibits the birefringence result \(\Delta n_{11}^{(EH)}/B^{2}\simeq 69.4\times 10^{-24}\,\mathrm{T}^{-2}\), which is compatible with the PVLAS-FE experiment for vacuum magnetic birefringence, _i.e._, \(\Delta n_{PVLAS-FE}/B^{2}=(19\pm 27)\times 10^{-24}\,\mathrm{T}^{-2}\). The third solution (30c) shows that the birefringence is positive as a function of the magnetic background field. In the case of the Born-Infeld ED, one of the solutions for the birefringence yields \(|\Delta n_{11}^{(BI)}|/B^{2}\simeq 3.2\times 10^{-24}\,\mathrm{T}^{-2}\) when the Born-Infeld parameter is bounded by the finite electron self-energy. This numeric value is of the same order as the PVLAS-FE experiment result. In the case of the ModMax ED, the birefringence of \(\Delta n_{33}^{(MM)}\) assumes negative values depending on the magnetic background field. When the solutions of \(\Delta n_{ij}\), for \(i\neq j\), are analyzed, the CFJ spatial parameter (\(v\)) plays a fundamental role in the case of \(\Delta n_{21}^{(MM)}\) in the ModMax ED. In the range \(v>e^{\gamma}\,m\), where \(m\) is the axion mass and \(\gamma\) is the ModMax parameter, the birefringence emerges thanks to the \(v\)-parameter. In line with our purpose of investigating how different kinds of new physics interfere with one another through the photon sector, we point out that, in an interesting recent article, Li and Ma [42] inspect the effects stemming from Loop Quantum Gravity (LQG) corrections to both the photon and fermionic matter sectors of Electrodynamics. Among these corrections, there appears a non-linear (actually, cubic) term in the extended Ampere-Maxwell equation. Though modulated by LQG parameters, very strong external magnetic fields in astrophysical environments, or those generated in relativistic heavy-ion colliders, may be sufficient to enhance the associated LQG effects; therefore, one can compute how these latter effects contribute to axion physics through the photon-axion coupling, as we have considered here. Finally, still in view of our motivation to relate non-linear photon effects with axion physics, we recall that we have here considered only constant and uniform electromagnetic backgrounds. 
It remains to contemplate, for example, situations with non-uniform external electric/magnetic fields that exchange energy and momentum with the photon-axion system, and to compute the modified dispersion relations, the corresponding group velocities, refractive indices and birefringence, which become space-dependent as a consequence of the non-uniformity of the background. **Acknowledgments**: L.P.R. Ospedal expresses his gratitude to FAPERJ for his postdoctoral fellowship. J.M.A. Paixao is grateful to the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) for supporting his work.
2301.06179
Equitable Data-Driven Facility Location and Resource Allocation to Fight the Opioid Epidemic
The opioid epidemic is a crisis that has plagued the United States (US) for decades. One central issue is inequitable access to treatment for opioid use disorder (OUD), which puts certain populations at a higher risk of opioid overdose. We integrate a predictive dynamical model and a prescriptive optimization problem to compute high-quality opioid treatment facility and treatment budget allocations for each US state. Our predictive model is a differential equation-based epidemiological model that captures opioid epidemic dynamics. We use a process inspired by neural ODEs to fit this model to opioid epidemic data for each state and obtain estimates for unknown parameters in the model. We then incorporate this epidemiological model into a mixed-integer optimization problem (MIP) that aims to minimize opioid overdose deaths and the number of people with OUD. We develop strong relaxations based on McCormick envelopes to efficiently compute approximate solutions to our MIPs with a mean optimality gap of 3.99%. Our method provides socioeconomically equitable solutions, as it incentivizes investments in areas with higher social vulnerability (from the US Centers for Disease Control's Social Vulnerability Index) and opioid prescribing rates. On average, our approach decreases the number of people with OUD by 9.03 $\pm$ 1.772%, increases the number of people in treatment by 88.75 $\pm$ 26.223%, and decreases opioid-related deaths by 0.58 $\pm$ 0.111% after 2 years compared to baseline epidemiological model predictions. Our solutions show that policy-makers should target adding treatment facilities to counties that have fewer facilities than their population share and are more socially vulnerable. We demonstrate that our optimization approach should help inform these decisions, as it yields population health benefits in comparison to benchmarks based solely on population and social vulnerability.
Joyce Luo, Bartolomeo Stellato
2023-01-15T20:22:46Z
http://arxiv.org/abs/2301.06179v5
Equitable Data-Driven Resource Allocation to Fight the Opioid Epidemic: A Mixed-Integer Optimization Approach ###### Abstract The opioid epidemic is a crisis that has plagued the United States (US) for decades. One central issue of the epidemic is inequitable access to treatment for opioid use disorder (OUD), which puts certain populations at a higher risk of opioid overdose. We integrate a predictive dynamical model and a prescriptive optimization problem to compute the optimal locations of opioid treatment facilities and the optimal treatment budget distribution in each US state. Our predictive model is a differential equation-based epidemiological model that captures the dynamics of the opioid epidemic. We use neural ordinary differential equations to fit this model to opioid epidemic data for each state and obtain estimates for unknown parameters in the model. We then incorporate this epidemiological model for each state into a corresponding mixed-integer optimization problem (MIP) for treatment facility location and resource allocation. Our MIPs aim to minimize the number of opioid overdose deaths and the number of people with OUD, and to target socioeconomic equitability by considering social vulnerability (from the US Centers for Disease Control's Social Vulnerability Index) and opioid prescribing rates in each county. Overall, our MIPs' proposed solutions on average decrease the number of people with OUD by \(5.70\pm 0.738\%\), increase the number of people in treatment by \(21.17\pm 3.162\%\), and decrease the number of opioid-related deaths by \(0.51\pm 0.086\%\) after 2 years compared to the baseline epidemiological model's predictions. Rather than only evaluating the effectiveness of potential policies as in past literature, our approach is decision-focused and directly yields actionable insights for policy-makers. It provides concrete opioid treatment facility and budget allocations and quantifies the impact of these allocations on pertinent population health measures. Future iterations of this approach could be implemented as a decision-making tool to tackle the issue of opioid treatment inaccessibility. ## 1 Introduction The opioid epidemic is one of the foremost public health crises in the United States (US). The epidemic has been driven by increases in prescription, illicit, and synthetic opioid use, which have in turn increased rates of opioid use disorder (OUD) and overdose deaths. According to the Centers for Disease Control (CDC), around 500,000 people have died from overdoses involving both illicit and prescription opioids from 1999 to 2019 [Centers for Disease Control, 2021c]. The COVID-19 pandemic has further exacerbated the opioid epidemic, with recent data showing a spike in overdose deaths during 2020. In the period from September 2019 through August 2020, there were 88,295 predicted deaths, which is about 27% more than in the preceding 12-month period [Centers for Disease Control, 2019, Baumgartner and Radley, 2021]. The pandemic has brought to the forefront the need for expanded access to opioid addiction treatment services. Currently, the main treatment for OUD is medication-assisted treatment (MAT), which has been proven to sustain patient recovery and prevent future overdoses. Methadone and buprenorphine are the two main medications approved to treat OUD [Amiri et al., 2020]. 
Methadone is administered through maintenance treatment, which requires that patients go to federally-approved facilities called opioid treatment programs (OTPs) regularly to take a prescribed amount of methadone [National Institute on Drug Abuse, 2018]. Buprenorphine is similarly used in maintenance treatment to treat OUD but can be administered by any health practitioner who obtains specialized training. This has led to a more rapid expansion of buprenorphine in comparison to methadone in the last few years. However, a recent study illustrated that methadone is still considered to be more effective at retaining patients compared to buprenorphine, especially when doses of either drug are low [Mattick et al., 2014]. Buprenorphine is also more expensive than methadone [Mattick et al., 2014]. Although access to both drugs has expanded in the last decade, there are still major gaps in access to these treatments across the US, especially in rural areas with under-developed health infrastructures. Those seeking care are often required to travel long distances to OTPs or other treatment facilities, which is another major factor that affects treatment retention [Amiri et al., 2020]. Implementing a method that proposes more equitable and concretely impactful treatment facility and budget allocations could help improve policy decision-making related to this issue. In this work, we formulate an approach that provides optimal opioid treatment facility location and treatment budget allocation decisions to address the issue of inequitable opioid treatment facility access. Our approach integrates a dynamical model of the opioid epidemic with a prescriptive mixed-integer optimization problem (MIP) for each state. We model the state-level opioid epidemic using an ODE-based epidemiological model. In order to fit the model to real-world data obtained from the CDC, National Institute on Drug Abuse (NIDA), and Substance Abuse and Mental Health Services Administration (SAMHSA), we use neural ODEs. Representing the ODE model as a neural network layer through neural ODEs allows us to exploit the power of gradient descent for more efficient parameter estimation compared to zeroth-order methods [Chen et al., 2018]. We then formulate an MIP for each state to compute optimal resource allocation interventions that minimize the effect of the opioid epidemic. We do this by including a discretized version of the state-level dynamical model within the constraints of the respective state's MIP and setting the objective to minimize overdose deaths and the number of people with OUD. We capture the impact of the interventions by showing how they affect a particular parameter of our discretized epidemic model in each time period. Our approach also incorporates information about the social vulnerability of each county, which is a measure of how susceptible a community is to the adverse impacts caused by external stresses on human health (Centers for Disease Control/Agency for Toxic Substances and Disease/Geospatial Research, Analysis, and Services Program, 2021). This information helps our MIPs target socioeconomic equitability across counties. Our MIP solutions provide concrete recommendations about how many additional treatment facilities should be in each county and how much of a limited treatment budget per time period should be allocated to each county. ### Related Work Past computational research related to the opioid epidemic has mainly centered around modeling epidemic dynamics. 
This research uses compartmental models, which have traditionally been used in the modeling of infectious diseases, to capture the dynamics of the opioid epidemic. The Susceptible-Infected-Recovered (SIR) model, developed by Kermack et al. (1927), is a fundamental compartmental model used in epidemiology to simulate the spread of infectious diseases such as influenza, SARS, and most recently COVID-19 (Li et al., 2022). The SIR model uses a system of ordinary differential equations (ODEs) to model transitions between the different compartments of susceptible, infected, and recovered people within a population. A modified version of this model can be developed with regard to the opioid epidemic, as the fundamental dynamics of the opioid epidemic are similar to those of infectious diseases. Becoming addicted to opioids can be seen as analogous to being "infected" by a disease, and entering treatment for opioid addiction can be seen as entering recovery. Although these models are a simplification of the true dynamics of disease spread, they are very useful for assessing the impact of different interventions on the way the population compartments evolve over time. White and Comiskey (2007) detail one of the first dynamical models of opiate addiction, with a focus on heroin use. Their ODE-based compartmental model gives insight into the progression of drug users, from initiation, to regular use, to addiction, to treatment, and finally, to recovery or relapse. Battista et al. (2019) expand on White and Comiskey's compartmental model, proposing a new model based on the commonly-used Susceptible-Exposed-Infected-Recovered (SEIR) model from epidemiology. Their model specifically focuses on capturing the dynamics of the prescription opioid epidemic with four compartments: Susceptible, Prescribed, Addicted, and Rehabilitation. The transitions between each class are determined by yearly rate parameters deduced from literature or from testing ranges of parameter values (Battista et al., 2019). We expand upon this model and other opioid epidemic compartmental modeling work by disaggregating to state-level dynamics rather than national dynamics, including more population compartments, and modifying the transition dynamics. In addition to modeling the epidemic, there is a body of literature that uses these models to project the impact of certain policy interventions on epidemic dynamics, opioid misuse, and overdose deaths. Chen et al. (2019) formulate a compartmental model of the US opioid epidemic to project opioid overdose deaths under status quo conditions and subject to interventions like lowering the prescription opioid supply. Pitt et al. (2018) and Rao et al. (2021) aim to project overdose deaths, life years, and quality-adjusted life years for several different policy responses (_e.g._, reducing opioid prescribing rates, expanding excess opioid disposal programs) using a compartmental model. The effect of a policy intervention is simulated by varying compartmental model parameters based on an "assumed magnitude" of impact and then projecting future outcomes (Pitt et al., 2018; Rao et al., 2021). In contrast, within our approach, we directly connect specific proposed policy decisions to exact changes in our compartmental model parameter values by integrating a dynamical model of the opioid epidemic into an optimization problem. To our knowledge, no previous work takes this approach within the context of the opioid epidemic. 
Rather than the traditional approach of assessing the effectiveness of a broad swath of potential policies, we focus on providing a streamlined decision-making process for one type of intervention related to improving opioid treatment access. For other applications, there have been previous efforts to integrate epidemiological models and optimization methods to inform policy decisions. Rao and Brandeau (2021) and Zaric and Brandeau (2002) use compartmental models to inform simple optimization routines that can be solved using heuristics for vaccine and budget allocation, respectively. Bertsimas et al. (2021) integrate a compartmental model of the COVID-19 pandemic, called the DELPHI model, into a prescriptive optimization problem to make decisions about the optimal locations of mass vaccination facilities and optimally allocate COVID vaccines. They formulate a bilinear non-convex optimization problem which includes a modified version of the time-discretized DELPHI model within its constraints (Bertsimas et al., 2021). Our work builds on the DELPHI model and this framework by specializing to the opioid epidemic context. Additionally, for our compartmental model of the opioid epidemic, we take a different approach to parameter estimation by using neural ODEs. We also estimate unique model parameters for each state and formulate separate MIPs for each state, rather than having a single national-level optimization problem as in Bertsimas et al. (2021). In doing this, our approach explicitly takes into account the unique opioid epidemic dynamics of each state, allowing for more targeted county-level solutions. Rather than only considering population-based equity, our MIPs also aim to ensure that the distribution of treatment facilities in a state is more socioeconomically equitable by considering the social vulnerability of each county. ### Our Contributions Our work has several contributions. Firstly, we seamlessly integrate a predictive dynamical model and a prescriptive optimization problem to create an operationally viable and streamlined approach for opioid treatment facility location and treatment budget allocation. This approach is novel within opioid epidemic modeling and policy-related literature. Secondly, we show that a simple neural ODE model which is informed by a dynamical structure can accurately estimate interpretable parameters from sparse time series data in the context of the opioid epidemic. These interpretable parameters can help quantify the differences between the opioid epidemic dynamics of different states. Finally, in terms of practical contributions, we show that optimizing interventions related to opioid treatment facility location and treatment budget allocation could have a positive impact, even in the short term, on population health measures like the number of people with OUD, the number of people receiving treatment, and the number of overdose deaths. This work could help support future decision-making efforts related to improving opioid treatment access. Reproducible code can be found here. ## 2 Epidemiological Model ### Model Definition We formulate a general US state-level compartmental model, partitioning the population of a state into the following 6 exhaustive population classes (compartments): * **Susceptible (S):** Individuals who are not using opioids. * **Prescribed (P):** Individuals who use or misuse prescription opioids but are not addicted. * **Illicit Use (I):** Individuals who use illicit opioids like heroin. 
* **Addicted (A):** Individuals who are addicted to prescription or illicit opioids. * **Rehabilitating (R):** Individuals who are getting treatment for their addiction. * **Deceased (D):** Individuals who have died from opioid overdoses. Figure 1 shows a flow diagram of the compartmental model for a particular state. The model schematic shows the different population compartments, and the arrows depict how individuals transition between these different compartments. Figure 1: Flow diagram of our state-level compartmental ODE model of the opioid epidemic. This deterministic model can be represented by a system of ODEs, which depends on 9 parameters. These parameters are the rates by which individuals move from one compartment to another in the model, as illustrated by their locations on particular arrows in Figure 1. We assume that each state has unique opioid epidemic dynamics and therefore a unique model parameterization. We model the opioid epidemic using the system \[\frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t}=f(\mathbf{z}(t),\rho,t),\] with state vector \(\mathbf{z}(t)=(S(t),P(t),I(t),A(t),R(t),D(t))\in\mathbf{R}^{6}\), and we have the initial condition \(\mathbf{z}(0)=\mathbf{z}_{0}=(S_{0},P_{0},I_{0},A_{0},R_{0},D_{0})\) consisting of estimated data for each compartment from the year 1999 for a particular US state. Vector \(\rho\in\mathbf{R}^{9}\) represents the parameters that determine how the process evolves over time. The dynamics are represented by the following system of ODEs: \[\frac{\mathrm{d}S}{\mathrm{d}t} =\epsilon P+\delta R-\alpha S\] \[\frac{\mathrm{d}P}{\mathrm{d}t} =\alpha S-(\epsilon+\gamma+\beta)P\] \[\frac{\mathrm{d}I}{\mathrm{d}t} =\beta P-\phi I\] \[\frac{\mathrm{d}A}{\mathrm{d}t} =\gamma P+\sigma R+\phi I-\zeta A-\mu A\] \[\frac{\mathrm{d}R}{\mathrm{d}t} =\zeta A-(\delta+\sigma)R\] \[\frac{\mathrm{d}D}{\mathrm{d}t} =\mu A,\] where \(N=S+P+I+A+R+D\) is the total population of a particular US state. We assume that interactions between compartments are linear, as non-linear interactions were deemed negligible through parameter estimation. We estimate the unknown parameters in \(\rho\) from real-world data to ensure that our dynamics function \(f(\mathbf{z}(t),\rho,t)\) approximates the true dynamics as closely as possible. ### Parameters The epidemic model is based on parameters \(\rho=(\alpha,\gamma,\delta,\sigma,\mu,\zeta,\epsilon,\phi,\beta)\) described in Table 1. All parameters represent constant annual transition rates between particular compartments. We assume the parameters are time invariant (Battista et al., 2019). These parameters are similar to those within previous literature (Battista et al., 2019; Pitt et al., 2018), but we add parameters which take into account the effect of illicit opioids on the dynamics of the opioid epidemic. In particular, we are interested in the illicit drug-induced addiction rate (\(\phi\)) and how people transition from prescription to illicit opioid use (\(\beta\)). In a previous iteration of the model, we also considered the illicit opioid use initiation rate. However, through parameter estimation, we determined that this parameter had a negligible effect on the dynamics of the model, and it was removed. Therefore, we assume that individuals can only initiate illicit use if they previously used prescription opioids. This is also substantiated by previous research indicating that the majority of heroin users have misused prescription opioids in the past (Muhuri et al., 2013; Lankenau et al., 2012). 
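For concreteness, the system above can be written as a right-hand-side function in Julia (the implementation language used for the neural ODE model below). This is a minimal sketch: the initial condition is a hypothetical normalized state, and the parameter vector simply collects the fixed values and initial guesses discussed in this section, not fitted values.

```julia
using DifferentialEquations

# Right-hand side of the compartmental model; z = (S, P, I, A, R, D),
# ρ = (α, γ, δ, σ, μ, ζ, ε, φ, β), with compartments normalized by population.
function opioid!(dz, z, ρ, t)
    S, P, I, A, R, D = z
    α, γ, δ, σ, μ, ζ, ε, φ, β = ρ
    dz[1] = ε*P + δ*R - α*S                 # dS/dt
    dz[2] = α*S - (ε + γ + β)*P             # dP/dt
    dz[3] = β*P - φ*I                       # dI/dt
    dz[4] = γ*P + σ*R + φ*I - ζ*A - μ*A     # dA/dt
    dz[5] = ζ*A - (δ + σ)*R                 # dR/dt
    dz[6] = μ*A                             # dD/dt
end

z0 = [0.85, 0.13, 0.003, 0.01, 0.004, 0.003]   # hypothetical 1999 proportions
ρ0 = [0.15, 0.00744, 0.1, 0.9, 0.01159, 0.5, 0.9, 0.3, 0.19]
prob = ODEProblem(opioid!, z0, (0.0, 20.0), ρ0)  # 1999 (t = 0) to 2019 (t = 20)
sol = solve(prob, Tsit5(), saveat = 1.0)         # yearly trajectory
```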
We set parameters \((\alpha,\gamma,\delta,\sigma)=(0.15,0.00744,0.1,0.9)\) based on Battista et al. (2019) and estimate parameters \((\zeta,\epsilon,\phi,\beta,\mu)\) using our neural ODE model. ### Data We collected state-level data for the years 1999 to 2019 for compartments \(D\), \(P\), \(I\), \(A\), and \(R\) to estimate the unknown model parameters. We define the time horizon to be \(H=20\) (_i.e._, the year 1999 represents \(t=\tau=0\) and the year 2019 represents \(\tau=H\), where \(\tau\) is a discrete time unit from \(0,1,\ldots,H\)). The Multiple Cause-of-Death dataset in the CDC WONDER (Wide-ranging Online Data for Epidemiologic Research) database (Centers for Disease Control, 2019) was our source for yearly overdose death counts (\(D\)) from 1999-2019. To identify opioid-specific deaths, we filtered the dataset using the multiple cause-of-death (ICD) codes for heroin (T40.1), natural opioid analgesics (T40.2), methadone (T40.3), and synthetic opioid analgesics other than methadone (T40.4). We also used underlying cause-of-death codes X40-X44 (unintentional), X60-X64 (suicide), and Y10-Y14 (undetermined) (Centers for Disease Control, 2021). The CDC suppresses data values below a threshold of 10 to prevent patient identification, so we removed states if over 20% of their data was suppressed. The following states were removed: North Dakota, South Dakota, Alaska, Idaho, Montana, Mississippi, Wyoming, and West Virginia. For states with fewer suppressed measures, these were replaced with a random number between 1 and 9 drawn from a uniform distribution. We made the deaths cumulative, starting from the number of opioid-related deaths in 1999. This is because we assume that the Deceased compartment is absorbing, since those who die from opioid overdoses cannot transition into other compartments. We approximated the number of people using prescription opioids per year per state (\(P\)) using data sources from the CDC. The CDC provides data regarding opioid dispensing rates for each state from 2006 to 2019 [Centers for Disease Control, 2021b]. We used this data to calculate ratios of the number of prescription opioids dispensed in each state to the number dispensed nationally. In addition, the CDC's National Health and Nutrition Examination Survey (NHANES) provides biyearly estimates of the percentage of adults nationwide who used a prescription opioid in the past 30 days for the years 1999-2018 [Centers for Disease Control, National Center for Health Statistics, 2019]. From these percentages, we estimated the _number_ of adults nationwide who used a prescription opioid per year. To obtain state-level estimates, we multiplied our state-to-national ratios by the nationwide estimate calculated from NHANES for each year from 2006-2018. From SAMHSA's National Survey on Drug Use and Health (NSDUH), we obtained data on the yearly prevalence of OUD (\(A\)) for each state from 2016-2019. We also obtained the yearly estimated number of heroin users for each state from 2016-2019 (\(I\)) [U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Center for Behavioral Health Statistics and Quality, 2019]. For the number of people in treatment (\(R\)), we obtained data from the National Survey of Substance Abuse Treatment Services (N-SSATS) for the years 2000, 2002-2013, 2015-2017, and 2019 [U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Center for Behavioral Health Statistics and Quality, 2021]. 
The data measure was the aggregated number of clients receiving MAT across all facilities in a state within a day each year. We calculated the number of susceptible people (\(S\)) based on data from the other 5 compartments and the populations of each US state. We assume that \(N^{(\tau)}=S^{(\tau)}+P^{(\tau)}+I^{(\tau)}+A^{(\tau)}+R^{(\tau)}+D^{(\tau)}\), where \(N^{(\tau)}\) is the state population in year \(\tau\). The death counts are included in this summation because they are negligible compared to the total population. We implicitly consider overall birth and death rates of the population by allowing \(N^{(\tau)}\) to vary. We calculated the \(S\) compartment only for the time points with complete data for all other compartments. For other time points, the \(S\) value was set to 0. We detail how we set the model initial conditions in Appendix A. We created data matrices for every included US state. Each data matrix is \(21\times 6\), and each row represents \(\mathbf{z}^{(\tau)}\), the data observation at time \(t=\tau\) for \(\tau=0,1,\ldots,H\). Since there are 6 different compartments, \(\mathbf{z}^{(\tau)}\) lives in \(\mathbf{R}^{6}\). Missing compartment values were set to 0. In order to ensure convergence and a better model fit, we normalized each compartment's value by \(N^{(\tau)}\) at each time point. ### Neural ODE Model The neural ODE framework represents ODEs and their solvers as a neural network layer, which is called a neural ODE layer in Chen et al. (2018). ODEs and ODE solvers fit perfectly into the neural network framework, as they have been proven to be differentiable [Chen et al., 2018]. Using this neural ODE framework, we are able to estimate unknown parameters of our ODE-based models by training a simple neural network. We construct a single-layer neural ODE model, where the layer is defined by the ODEs of our compartmental model (described in Section 2.1). We use gradient-based optimizers to minimize the following loss function: \[\mathcal{L}(\rho)=\sum_{\tau=0}^{H}\ell(\hat{\mathbf{z}}^{(\tau)},\mathbf{z}^{( \tau)}),\] where \(\hat{\mathbf{z}}^{(\tau)}\) is the ODE model's prediction of the vector of compartment values and \(\mathbf{z}^{(\tau)}\) is our observation of the vector of compartment values from the data at time \(t=\tau\) for \(\tau=0,1,\ldots,H\). The vectors \(\hat{\mathbf{z}}^{(\tau)}\) and \(\mathbf{z}^{(\tau)}\) lie in \(\mathbf{R}^{6}\). Our neural ODE model minimizes the distance between \(\hat{\mathbf{z}}^{(\tau)}\) and \(\mathbf{z}^{(\tau)}\) by varying the parameters \(\rho\). We define: \[\ell(\hat{\mathbf{z}}^{(\tau)},\mathbf{z}^{(\tau)})=\|q_{\tau}\odot(\hat{ \mathbf{z}}^{(\tau)}-\mathbf{z}^{(\tau)})\|_{2}^{2},\] where \(q_{\tau}\in\mathbf{R}^{6}\) is defined to accordingly penalize differences between the predictions and observations based on data availability at time point \(\tau\). For instance, if there is no data for a compartment at \(\tau\), the corresponding element in \(q_{\tau}\) is set to \(0\) and the data element is ignored in the loss calculations. However, if the rest of the data at \(\tau\) is available, the corresponding elements in \(q_{\tau}\) are set to \(1\) and only those elements are used to calculate the loss. Here, \(\odot\) denotes the element-wise product of \(q_{\tau}\) and \(\hat{\mathbf{z}}^{(\tau)}-\mathbf{z}^{(\tau)}\). 
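A schematic of this masked-loss fitting follows, reusing the `opioid!`, `z0`, and `ρ0` objects from the earlier sketch. The data matrix and mask below are random placeholders standing in for a real state's (transposed) 21×6 data matrix; the actual pipeline uses DiffEqFlux.jl with ADAM followed by BFGS, as described under Implementation below, whereas this assumption-light variant uses plain ForwardDiff with Adam.

```julia
using DifferentialEquations, Optimization, OptimizationOptimisers

data = rand(6, 21)          # placeholder: column τ+1 holds z^(τ), τ = 0, …, 20
mask = rand(Bool, 6, 21)    # q_τ: true where an observation exists, false otherwise

# L(ρ) = Σ_τ ‖q_τ ⊙ (ẑ^(τ) − z^(τ))‖²: entries without data are zeroed out.
function loss(ρ, _)
    pred = Array(solve(prob, Tsit5(); p = ρ, saveat = 1.0))  # 6 × 21 prediction
    return sum(abs2, mask .* (pred .- data))
end

optf = OptimizationFunction(loss, Optimization.AutoForwardDiff())
res = solve(OptimizationProblem(optf, ρ0),
            OptimizationOptimisers.Adam(1e-4); maxiters = 20_000)
ρ_hat = res.u               # fitted parameters (would then be refined, e.g. with BFGS)
```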
In contrast to traditional neural networks, the neural ODE framework does not require a large amount of data to estimate parameters accurately, which makes it ideal for applications with limited data, as is the case for the opioid epidemic. It also allows us to estimate parameters in a more computationally efficient way, because we are able to estimate the parameters based on the direction of the gradient (Chen et al., 2018). Implementation. We use Julia to implement our neural ODE model. In particular, we use the DiffEqFlux.jl (Rackauckas et al., 2019) and DifferentialEquations.jl (Rackauckas and Nie, 2017) libraries. We train our neural ODE model using this method for each individual state. Our initial condition is a vector of the normalized compartment values in 1999 for each respective state. We set an initial guess for the unknown parameters: \((\phi,\epsilon,\beta,\zeta,\mu)=(0.3,0.9,0.19,0.5,0.01159)\), according to parameter ranges and estimates from previous literature (Battista et al., 2019). We restrict the parameters to be non-negative by representing them as the square of the actual parameters we estimate (_e.g._, \(\phi=\hat{\phi}^{2}\), where \(\hat{\phi}\) is the actual neural ODE parameter that we learn). We use the ODE solver Tsit5 and the ForwardDiffSensitivity method to calculate the gradients. To perform stochastic gradient descent, we run the ADAM optimization algorithm for 20000 iterations with a step size of 0.0001, followed by the BFGS algorithm. ## 3 Mixed-Integer Optimization Problem The overarching goal of our approach is to offer solutions that ensure MAT and treatment facilities are more accessible and allocated equitably. Accordingly, for each US state, we formulate a prescriptive MIP to address two main objectives: opioid treatment facility location and treatment budget allocation. In particular, we focus on treatment facilities that offer MAT. Our MIPs mainly aim to minimize overdose deaths and the number of people with OUD, but also take into account socioeconomic considerations so that treatment facilities are distributed more equitably. The starting point for our problem is 2017, as that year has sufficient data availability. We set our modeling period to be 2 years. ### Data We obtained data related to the current number of treatment facilities that offer MAT in each county, using the SAMHSA Behavioral Health Treatment Services Locator. This tool helped us create a dataset that indicated the number of treatment facilities that offered "Outpatient methadone/buprenorphine or naltrexone treatment" in each county [Substance Abuse and Mental Health Services Administration, 2022]. We also obtained data from the CDC's Social Vulnerability Index (SVI), which provides a value for each county that captures 15 factors from the US Census, including poverty, lack of vehicle access, and crowded housing [Centers for Disease Control/Agency for Toxic Substances and Disease/Geospatial Research, Analysis, and Services Program, 2021]. This index is intended to help identify populations that are vulnerable during public health emergencies like the opioid epidemic. The SVI ranking is a value between 0 and 1, with a ranking closer to 1 indicating that the region is more socially vulnerable [Centers for Disease Control/Agency for Toxic Substances and Disease/Geospatial Research, Analysis, and Services Program, 2021]. 
The CDC provides SVI data every 2 years, and we obtained county-level data for 2018. Additionally, we obtained county-level data regarding opioid dispensing/prescribing rates per 100 people for 2018 [Centers for Disease Control, 2021b] and county population totals from the Census Bureau [United States Census Bureau, 2021]. Budgetary information for each state was obtained for the constraints of our MIPs. We obtained total grant funding data for each state in 2018 from the US Department of Health and Human Services Opioid Grants Dashboard [U.S. Department of Health and Human Services, 2020]. According to previous opioid grant spending analyses, around 65% of grant funding was used for treatment initiatives in a particular year [Murrin, 2020]. In addition, the estimated cost of opening a treatment facility ranges from $300-600K for an intensive outpatient facility [Ascension Recovery Services, 2019]. We rounded up the cost to $1,000,000 to have a higher estimate. For each state, we budgeted 65% of the total grant funding to be used for opening new treatment facilities and divided this number by $1,000,000 to get the maximum _additional_ number of treatment facilities that can be opened in that state. We added this number to the current number of treatment facilities to get a cap on the number of facilities that can be opened in that state, which we call \(N\). SAMHSA recently distributed a grant to states for the purposes of expanding MAT [Murrin, 2020]. We divided the amount distributed to each state by 4 to get quarterly estimates of the treatment budget, which we call \(d_{k}\) at time \(k\). For the scope of this work, we chose to only focus on methadone-based MAT. Accordingly, we also obtained data on the weekly cost of treating a patient with methadone-based MAT, which was $37.38 [Centers for Medicare and Medicaid Services, 2021]. Multiplying this number by 12 (roughly the number of weeks in a quarter) gave us \(d=\$448.56\), the quarterly cost of methadone-based MAT. In future iterations of this MIP, we could additionally take into account buprenorphine-based MAT. ### Problem Formulation We consider an optimization problem over a time horizon \(K\) with time periods \(k\in\mathcal{K}=\{1,\ldots,K\}\). We denote each county in a state as \(i\) with \(i\in\mathcal{C}=\{1,\ldots,C\}\), where \(C\) is the total number of counties in a state. The decision variables are \(x_{i}\)--denoting the number of opioid treatment facilities with MAT needed for county \(i\in\mathcal{C}\), and \(\bar{d}_{ik}\)--denoting the treatment budget distributed to county \(i\in\mathcal{C}\) at time \(k\in\mathcal{K}\). The optimization problem will feature the parameters in Table 2. To use the continuous-time compartmental model described in Section 2.1 within the constraints of our optimization problem, we discretize it using the forward Euler method [Willcox and Wang, 2014]. We set \(t=k\Delta\), where \(\Delta\) is the time discretization interval and \(k\in\mathcal{K}\). In order to mimic the continuous-time trajectory as closely as possible while also not making \(\Delta\) too small, we set \(\Delta=0.25\). This represents a time increment of 3 months. 
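Written out, the forward-Euler discretization is the standard one-step update \[\mathbf{z}_{k+1}=\mathbf{z}_{k}+\Delta\,f(\mathbf{z}_{k},\rho,k\Delta)\;,\] which, with \(\Delta=0.25\), advances the state by one quarter per period; the component-wise form, including the intervention term, is given below. 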
\begin{table} \begin{tabular}{l l} \hline \hline & Description \\ \hline \(n_{i}\) & Number of opioid treatment facilities with MAT already in county \(i\in\mathcal{C}\) \\ \(\text{SVI}_{i}\) & Social Vulnerability Index in county \(i\in\mathcal{C}\) \\ \(\text{pr}_{i}\) & Prescribing rate per 100 people in county \(i\in\mathcal{C}\) \\ \(\text{Pop}_{i}\) & Population in county \(i\in\mathcal{C}\) \\ \(N\) & Maximum number of treatment facilities that can be open in the state \\ \(d_{k}\) & Treatment budget limit for time \(k\in\mathcal{K}\) \\ \(d\) & Quarterly per-patient cost of MAT \\ \(\alpha\) & Prescription rate per person per year \\ \(\gamma\) & Prescription-induced addiction rate \\ \(\delta\) & Successful treatment rate \\ \(\sigma\) & Natural relapse rate of an individual in treatment \\ \(\mu\) & Death rate of addicts \\ \(\zeta\) & Rate of individuals with OUD entering into rehabilitation \\ \(\epsilon\) & Rate of ending prescription without addiction \\ \(\phi\) & Illicit drug-induced addiction rate \\ \(\beta\) & Transition rate from prescription to illicit opioid use \\ \hline \hline \end{tabular} \end{table} Table 2: Parameters of the optimization problem. Our decision variables regarding opening additional treatment facilities and establishing treatment budgets for counties act as proposed interventions that affect state-level opioid epidemic dynamics. The effect of interventions in the system can be shown by changing related compartmental model parameters (Pitt et al., 2018; Rao et al., 2021). We model the impact of our decision variables on state-level opioid epidemic dynamics by showing how they affect the estimated parameter \(\zeta\) in each time period. We choose to affect \(\zeta\) because it represents the transition rate from the \(A\) to \(R\) compartment. Having greater access to opioid treatment facilities that offer MAT helps more people who have OUD get the treatment they need. Therefore, optimizing the treatment facility distribution and treatment budget allocation should increase the transition rate from \(A\) to \(R\). We specifically affect \(\zeta\) based on the proportion of extra people that could transition from the \(A\) to \(R\) compartment if a certain number of new treatment facilities offering MAT were established. We define this proportion as \[\ell_{k}=\frac{\sum_{i=1}^{C}(x_{i}-n_{i})\,\bar{d}_{ik}/(d\,x_{i})}{A_{k}},\] which is added to \(\zeta\) in each time period. Here, \(\bar{d}_{ik}/(d\,x_{i})\) represents the number of patients treated per facility per time increment and depends on two of the decision variables. The numerator of \(\ell_{k}\) represents the additional number of people who could be treated in the state due to the new treatment facilities. We then divide this number by \(A_{k}\) to get the added rate of transition from the \(A\) to \(R\) compartments. For a particular state, we have the following dynamics: \[\begin{array}{ll}S_{k+1}&=S_{k}+(\epsilon P_{k}+\delta R_{k}-\alpha S_{k})\Delta\\ P_{k+1}&=P_{k}+(\alpha S_{k}-(\epsilon+\gamma+\beta)P_{k})\Delta\\ I_{k+1}&=I_{k}+(\beta P_{k}-\phi I_{k})\Delta\\ A_{k+1}&=A_{k}+(\gamma P_{k}+\sigma R_{k}+\phi I_{k}-(\zeta+\ell_{k})\,A_{k}-\mu A_{k})\Delta\\ R_{k+1}&=R_{k}+((\zeta+\ell_{k})\,A_{k}-(\delta+\sigma)R_{k})\,\Delta\\ D_{k+1}&=D_{k}+(\mu A_{k})\Delta.\end{array}\] Collectively, we call this dynamics function \(\bar{f}\) since this is our approximated compartmental dynamics function with estimated model parameters (one parameter of which is affected by our decision variables). 
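To make the role of \(\ell_{k}\) concrete, here is a minimal Julia sketch of one update step of this discretized system; the function and variable names are ours (hypothetical), following the paper's notation.

```julia
# One forward-Euler step of the discretized dynamics (Δ = 0.25, one quarter).
# x    : vector of facility counts per county (decision variable)
# n    : vector of facilities already in each county
# dbar : vector of county treatment budgets for this period (decision variable)
# d    : quarterly per-patient cost of MAT
function euler_step(z, ρ, x, n, dbar, d, Δ)
    S, P, I, A, R, D = z
    α, γ, δ, σ, μ, ζ, ε, φ, β = ρ
    # ℓ_k: extra A → R rate from the (xᵢ − nᵢ) new facilities,
    # each treating dbarᵢ / (d * xᵢ) patients in this period.
    ℓ = sum((x .- n) .* dbar ./ (d .* x)) / A
    Sn = S + (ε*P + δ*R - α*S) * Δ
    Pn = P + (α*S - (ε + γ + β)*P) * Δ
    In = I + (β*P - φ*I) * Δ
    An = A + (γ*P + σ*R + φ*I - (ζ + ℓ)*A - μ*A) * Δ
    Rn = R + ((ζ + ℓ)*A - (δ + σ)*R) * Δ
    Dn = D + μ*A*Δ
    return [Sn, Pn, In, An, Rn, Dn]
end
```

Iterating this step \(K=8\) times (2 years of quarters) reproduces the trajectory that the MIP constraints encode.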
For a particular US state, we define the following treatment facility location and budget allocation problem: minimize \[\begin{array}{ll}&D_{K}+\lambda_{A}\sum_{k=0}^{K}A_{k}+\lambda_{\rm pr}\psi_{\rm pr}(\mathbf{x})+\lambda_{\rm SVI}\psi_{\rm SVI}(\mathbf{x})+\lambda_{\rm Pop}\psi_{\rm Pop}(\bar{\mathbf{d}}_{k})+\lambda_{\rm inf}\max\Big{(}0,\sum_{i=1}^{C}x_{i}-N\Big{)}\\ \text{subject to}&x_{i}\geq n_{i},\quad\forall i\in\mathcal{C}\\ &\sum_{i=1}^{C}\bar{d}_{ik}\leq d_{k},\quad\forall k\in\mathcal{K}\\ &\mathbf{z}_{k+1}=\bar{f}(\mathbf{z}_{k},\mathbf{x},\bar{\mathbf{d}}_{k}),\quad k=1,\ldots,K-1\\ &x_{i}\in\mathbf{Z},\quad x_{i}\geq 1,\quad\forall i\in\mathcal{C}\\ &\mathbf{z}_{k}=(S_{k},P_{k},I_{k},A_{k},R_{k},D_{k})\geq 0,\quad\forall k\in\mathcal{K}\\ &\bar{d}_{ik}\geq 0,\quad\forall i\in\mathcal{C},k\in\mathcal{K}.\end{array}\] Objective. Within the MIP objective function, we prioritize minimizing the total number of overdose deaths and the number of people with OUD, which are the first two terms of the objective. In the next two terms, we minimize the 1-norm distance between \(\mathbf{x}\), which is a vector of decision variables for the number of treatment facilities in every county, and the ideal distribution of treatment facilities based on the prescribing rates (vector \(\mathbf{pr}\)) or SVI rankings (vector \(\mathbf{SVI}\)) of each county. We define these functions as \(\psi_{\text{pr}}(\mathbf{x})=\|\mathbf{x}-N\cdot\mathbf{pr}/(\sum_{i=1}^{C}\text{pr}_{i})\|_{1}\) and \(\psi_{\text{SVI}}(\mathbf{x})=\|\mathbf{x}-N\cdot\mathbf{SVI}/(\sum_{i=1}^{C}\text{SVI}_{i})\|_{1}\), respectively. The parameters \(\lambda_{\text{pr}}\) and \(\lambda_{\text{SVI}}\) determine whether the distribution of treatment facilities is closer to the prescribing rate distribution or the SVI ranking distribution. These terms ensure that the distribution of treatment facilities is more equitable. In the fifth term, we penalize the difference between the ideal treatment budget distribution based on county population (vector \(\mathbf{Pop}\)) and our MIP's treatment budget allocation in each time period (vector \(\bar{\mathbf{d}}_{k}\)). We define this function as \(\psi_{\text{Pop}}(\bar{\mathbf{d}}_{k})=\|\bar{\mathbf{d}}_{k}-d_{k}\cdot\mathbf{Pop}/(\sum_{i=1}^{C}\text{Pop}_{i})\|_{\infty}\). We set the following hyperparameters: \(\lambda_{A}=0.9\), \(\lambda_{\text{pr}}=0.3\), \(\lambda_{\text{SVI}}=1-\lambda_{\text{pr}}\), and \(\lambda_{\text{Pop}}=0.1\). \(\lambda_{\text{inf}}\) is adjusted for each state, which is described in Appendix C. The final term of the objective function is essential for solution feasibility. In most cases, \(\sum_{i=1}^{C}x_{i}=N\), as we aim to cap the total number of treatment facilities in the state at \(N\). If \(\sum_{i=1}^{C}x_{i}>N\), this indicates that the state's solution must be over budget to be feasible. In order to minimize the amount that a solution is over budget, we penalize \(\sum_{i=1}^{C}x_{i}-N\) by setting a large \(\lambda_{\text{inf}}\). This objective function approach gives us more insight into potential solutions if particular states exceed their budget. Constraints. The first constraint ensures that the optimal number of treatment facilities in a county is greater than or equal to the number of treatment facilities already in that county. 
The second constraint limits the sum of the treatment budgets distributed to each county by the state treatment budget limit for time \(k\). The third constraint describes the discretized compartmental model for the particular state. The fourth constraint restricts the \(x_{i}\)'s to be integer-valued and requires that there is at least one treatment facility in each county. This ensures that there will not be any issues with dividing by 0, as we divide by \(x_{i}\) within our \(\ell_{k}\) term. The fifth constraint describes the domain of the compartment values at each time \(k\), and the sixth constraint similarly describes the domain of the \(\bar{d}_{ik}\) decision variables. The problem as shown here is not easily solved by a numerical MIP solver, since we are dividing two decision variables within \(\ell_{k}\) and have a non-convex objective. We reformulate the problem to have at most quadratic constraints and a linear objective, which is shown in Appendix B. Implementation. We use the Gurobi Optimizer to solve the optimization problem for each state. The Gurobi 10.0 bilinear solver can handle non-convex quadratic constraints in a mixed-integer programming context. We implement each state MIP with the JuMP.jl mathematical optimization framework in Julia (Dunning et al., 2017). We set the parameter NonConvex to 2. Depending on the size of the optimization problem, we also set the TimeLimit parameter to 2000 seconds to prioritize the solver finding a good feasible solution rather than an optimal solution. Reproducible code can be found here. ## 4 Results and Discussion ### Epidemiological Model Parameter Estimation using Neural ODEs Figure 2 shows the estimated parameters from our neural ODE model for each US state. The exact parameter values are included in Appendix D. Figure 2 illustrates that the \(\phi\) and \(\beta\) parameters seem to be correlated. In addition, they tend to be approximately 0 for many of the states. Both parameters have to do with illicit opioid use, which is likely why they are correlated for each state. We also hypothesize that the parameter values are negligible for many states because the NSDUH surveys tend to underestimate the prevalence of illicit opioid use in particular. However, for states where these parameters are non-negligible, we can interpret them. For New York, we see that \(\phi=0.05511\), which implies that around 5 in 100 New Yorkers who use illicit opioids will develop OUD within a year. This rate is higher than the prescription-induced addiction rate \(\gamma=0.00744\), which we set based on previous literature. This makes sense because illicit opioids like heroin tend to be more addictive than prescription opioids. We see that \(\beta=0.005029\) for New York, which means that around 5 in 1000 New Yorkers who are using prescription opioids will begin using illicit opioids within a year. This seems to align with estimates which state that around 4-6% of individuals who misuse prescription opioids transition to heroin [National Institute on Drug Abuse, 2021]. Since our \(P\) compartment also includes people who properly use prescription opioids, it makes sense that our parameter estimate would be smaller. Notably, Vermont has the largest \(\phi\) and \(\beta\) parameters, which means it has the highest illicit addiction rate and the highest transition rate from prescription to illicit opioids. This could be the result of Vermont having better data quality than other states related to illicit opioid use. 
However, it could also be indicative that Vermont has a problem with people transitioning from prescription to illicit opioid use and becoming addicted. More states have non-negligible \(\beta\) parameters compared to \(\phi\) parameters, which means that the transition between prescription opioid use and illicit use is a contributing factor to the state-level opioid epidemic. The other parameters \(\mu\), \(\zeta\), and \(\epsilon\) are consistently non-negligible across all states. The death rate of addicts \(\mu\) is similar in magnitude to previously calculated death rates. For instance, Battista et al. [2019] calculate the death rate of addicts to be 0.01159, which we use as our initial guess for \(\mu\). After training, our \(\mu\) parameter is still around this value for all states. The states with the highest death rate of addicts are Oklahoma (0.02198) and New Mexico (0.02149) according to our parameter estimation. The rate of entry into rehabilitation \(\zeta\) is also within the approximate range determined by Battista et al. [2019], which was 0.2-2. Our estimated \(\zeta\) values are within the range 0.1-0.52, with Maryland having the largest \(\zeta\) value. A \(\zeta\) value of 0.2 means that 20 addicted people enter treatment out of 100 addicted people. The rate of ending prescription without addiction per prescription user and year \(\epsilon\) is also within the range 0.8-8 determined by Battista et al. [2019]. Battista et al. determine this parameter range based on a study conducted by Shah et al. [2017], which shows that the time individuals spend using prescription opioids ranges from 1 month to 3 years. The estimated \(\epsilon\) values range from 1-4 for different states, indicating that most patients are ending their prescriptions without addiction within a year. Hawaii and New York have the largest \(\epsilon\) values, which means that patients there end their prescriptions without addiction faster. Figure 2: Estimated parameters for each US state. ### Epidemiological Model Validation Figure 3 illustrates the neural ODE model predicted dynamics in comparison to the actual population proportion values for the compartments \(P,I,A,R\), and \(D\) from 1999-2019 for New York. The model captures the steady increase in the \(D\) proportion values and the slight increase over the years in the \(R\) proportion values. For the \(I\) compartment, the data shows a slight decrease in the proportion of heroin users from 2016-2019, but an overall slight increase since 1999. Our model is linear, so it is only able to capture the slight increase in the \(I\) proportion values since 1999. If we were able to incorporate fentanyl use data, there would likely have been a larger increase in the \(I\) proportion values. For the \(A\) compartment, the model predicts that the proportion of people with OUD has been increasing since 1999, which fits with the data. Since the data is sparse, the model does not capture the slight decrease from 2018-2019. Compared to the other compartments, a much larger proportion of the population is captured within the \(P\) compartment, since many people are prescribed prescription opioids for pain but are not abusing them. The model captures the drastic increase in opioid prescriptions which occurred in the 1990s as doctors began to prescribe opioids to many more patients. The trend in the proportion of prescription opioid users has since flattened and slightly decreased, which the model captures as well. Figure 3: Neural ODE model predictions in comparison to the actual population proportion values in each year from 1999–2019 for New York. The loss progression throughout training is shown in Appendix D. 
### Optimization Problem Solutions We obtain solutions for the majority of US states from our MIPs, which give the treatment facility and treatment budget distributions across counties in each state. Figure 4 shows the additional treatment facilities and treatment budget distributions determined by the MIP solutions for 4 states: Maine, California, Indiana, and Florida. Figure 4: State MIP solutions consisting of additional facilities and treatment budget distributions. We display the solution of one state from each main geographic region (Northeast, West, Midwest, and South). The MIPs for Maine and Indiana have optimal solutions, and the MIPs for California and Florida have feasible solutions. Figures 4(a), 4(c), 4(e), and 4(g) show that most counties in each state either do not need any additional treatment facilities or only need one additional facility to be opened. There is one main county in each state that requires more additional treatment facilities than the others. The solution to the California MIP (shown in Figure 4(c)) recommends that there should be 43 additional treatment facilities in Los Angeles County. California has more grant funding in comparison to other states, which is why it can afford to add 43 additional treatment facilities to a particular county. For Maine, Indiana, and Florida, the counties with the greatest number of recommended additional treatment facilities are Cumberland, Marion, and Miami-Dade, respectively. These counties also have the greatest population in each corresponding state. Nevertheless, we can still see that incorporating the social vulnerability and prescribing rate in each county had some effect on making the solutions more equitable. As an example, Figure 5 shows the Florida treatment facilities solution in comparison to the SVI rankings and prescribing rate distributions. We set \(\lambda_{\rm pr}=0.3\) and \(\lambda_{\rm SVI}=0.7\) so that the MIP prioritizes its treatment facility solution to be closer to the SVI ranking distribution rather than the prescribing rate distribution. The recommended number of treatment facilities per county in Figure 5(c) more closely matches up with the high SVI ranking counties in Figure 5(a) compared to the high prescribing rate counties in Figure 5(b). The recommended number of treatment facilities in a county is not exactly proportional to the SVI ranking because some counties already have more initial treatment facilities due to their population sizes. In addition, the MIP prioritizes minimizing deaths and the number of people with OUD when allocating facilities. However, comparing Figure 4(g) and Figure 5(a), we can see that at least 1 additional facility is recommended for most counties with notably higher SVI rankings. Overall, our solutions are more equitable because we are able to incorporate these factors into our MIPs. Figures 4(b), 4(d), 4(f), and 4(h) show the treatment budget distributions yielded by the MIP solutions for Maine, California, Indiana, and Florida in a given three-month time frame. For each state, the recommended budget distributions are the same for every 3-month interval within our modeling period. 
We want the treatment budget distribution to be as close to the population distribution as possible, in order to distribute larger treatment budgets to counties with larger populations. Figure 5: Comparison between the SVI rankings distribution, the prescribing rate distribution, and the treatment facilities distribution determined by the Florida MIP solution. Accordingly, our solutions distribute the largest portion of each state's treatment budget to the counties with the largest populations in each state--Cumberland, Los Angeles County, Marion, and Miami-Dade, respectively. Solution Impact. Table 3 quantifies the effect of optimizing the locations of additional treatment facilities and the treatment budget allocation on the compartments \(A\), \(R\), and \(D\) after 2 years for almost all US states. In comparison to our baseline compartmental model predictions, the proposed solutions to the respective state MIPs on average decrease the number of people with OUD by \(5.70\pm 0.738\%\), increase the number of people getting treatment by \(21.17\pm 3.162\%\), and decrease the number of opioid-related deaths by \(0.51\pm 0.086\%\) after 2 years (Figure 6). Figure 6: Average effect of our MIP solutions on compartments \(A\), \(R\), and \(D\) for various solution groupings. Figure 6 additionally shows the average effect of the MIP solutions on compartments \(A\), \(R\), and \(D\) for the main US geographic regions: Northeast, West, Midwest, and South. From Table 3, we see that Rhode Island has the greatest percentage decrease in the number of people with OUD at -13.66%. Iowa has the greatest percentage increase in the number of people receiving treatment at 50.3%. Nebraska has the greatest percentage decrease in opioid-related deaths at -1.55%. Across states, our MIP solutions have the greatest impact on the number of people in rehabilitation (\(R\)) because our decision variables directly affect the parameter \(\zeta\), which dictates how the \(R\) compartment evolves over time. Within Table 3, we also differentiate between whether the state MIP yields an optimal solution, a feasible solution within a time limit, or an over budget solution. States with no symbols within the table have MIPs that yielded an optimal, within-budget solution. For states like Texas, North Carolina, and Florida, Gurobi only found a feasible solution within the time limit because those states have a large number of counties, which adds to the complexity of the MIP. There were only 10 states for which Gurobi was not able to certify 
\begin{table} \begin{tabular}{l c c c} \hline \hline State & \(A\) & \(R\) & \(D\) \\ \hline AL\(\dagger\) & -8.07 & 26.92 & -1.05 \\ AR\(\dagger\) & -5.13 & 22.53 & -0.78 \\ AZ & -5.14 & 18.05 & -0.37 \\ CA* & -8.3 & 28.06 & -0.57 \\ CO & -4.98 & 23.04 & -0.35 \\ CT & -3.22 & 6.46 & -0.24 \\ DE & -3.67 & 12.78 & -0.34 \\ FL* & -4.17 & 21.37 & -0.32 \\ GA\(\dagger\) & -6.25 & 17.61 & -0.63 \\ HI & -9.32 & 47.03 & -0.7 \\ IA\(\dagger\) & -5.58 & 50.3 & -0.64 \\ IL & -2.44 & 9.03 & -0.21 \\ IN & -3.13 & 11.96 & -0.31 \\ KS\(\dagger\) & -7.18 & 21.28 & -0.77 \\ KY\(\dagger\) & -4.81 & 23.27 & -0.35 \\ LA\(\dagger\) & -8.66 & 27.76 & -1.01 \\ MA*\(\dagger\) & -4.53 & 13.63 & -0.26 \\ MD* & -5.68 & 9.56 & -0.47 \\ ME & -1.96 & 5.78 & -0.12 \\ MI & -4.88 & 18.43 & -0.49 \\ MN\(\dagger\) & -5.87 & 12.58 & -0.51 \\ MO\(\dagger\) & -4.95 & 19.35 & -0.4 \\ NC* & -3.38 & 11.47 & -0.33 \\ NE\(\dagger\) & -9.94 & 36.35 & -1.55 \\ NH & -5.86 & 25.24 & -0.48 \\ NJ*\(\dagger\) & -3.51 & 17.26 & -0.24 \\ NM\(\dagger\) & -6.18 & 17.73 & -0.55 \\ NV\(\dagger\) & -6.31 & 38.72 & -0.53 \\ NY* & -4.07 & 8.49 & -0.3 \\ OH* & -5.47 & 43.08 & -0.33 \\ OK\(\dagger\) & -9.42 & 28.41 & -0.89 \\ OR & -4.02 & 14.89 & -0.3 \\ PA* & -6.52 & 26.92 & -0.55 \\ RI & -13.66 & 24.12 & -0.65 \\ SC & -3.79 & 12.4 & -0.41 \\ TN\(\dagger\) & -9.26 & 27.16 & -0.66 \\ TX*\(\dagger\) & -7.33 & 19.34 & -0.77 \\ UT & -2.82 & 10.86 & -0.2 \\ VA\(\dagger\) & -4.7 & 24.17 & -0.39 \\ VT & -8.45 & 25.33 & -0.92 \\ WA & -3.81 & 16.65 & -0.3 \\ WI\(\dagger\) & -3.05 & 13.7 & -0.24 \\ \hline \hline \end{tabular} * indicates a feasible solution; \(\dagger\) indicates that this solution was over budget. \end{table} Table 3: Effect of our state MIP solutions on the values of compartments \(A\), \(R\), and \(D\) in comparison to baseline opioid epidemic dynamics after 2 years (in percentages).

Feasible solutions still provide insight into treatment facility distributions that would be more ideal than the current distribution. Table 3 shows that the feasible solutions have a positive impact on each population compartment, although this impact could be greater with optimal solutions. We also identify whether the solution for a state is over budget by looking at the value of the parameter \(h\), which indicates how many more treatment facilities in addition to \(N\) were necessary for the MIP to yield an optimal or feasible solution. If \(h>0\), the solution is over budget, and if \(h=0\), the solution stays within budget. Table 4 shows the value of \(h\) for each state whose MIP yields an over budget solution. In particular, Texas, Nevada, Georgia, Kansas, and Iowa need to obtain significantly more funding at the state level for opioid treatment facility expansion. Texas's solution is the most over budget, allocating 170 more facilities than the state has the budget for. This is likely because Texas has many counties, most of which currently have 0 treatment facilities offering MAT. The solver needed to allocate at least 1 treatment facility to every county, which caused the solution to be over budget. Figure 7 shows that the solution to the Texas MIP allocates 1 additional treatment facility to the majority of counties, which aligns with our explanation. States with a larger number of counties tend to have more counties that currently have 0 treatment facilities, which leads to over budget MIP solutions.
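To make the role of the slack variable \(h\) concrete, the following minimal gurobipy sketch encodes just the soft budget constraint from our MIP. The county count, existing facility counts, and the stand-in objective are hypothetical placeholders, not values from our study.

```python
# A minimal sketch (hypothetical data) of the soft budget constraint:
# the slack h absorbs any facilities beyond the state budget N and is
# penalized in the objective, so the solver only exceeds the budget
# when feasibility requires it.
import gurobipy as gp
from gurobipy import GRB

C, N = 5, 12          # hypothetical: 5 counties, budget of 12 facilities
n = [1, 0, 3, 2, 0]   # hypothetical: facilities already open in each county
lam_inf = 450.0       # soft-constraint penalty (cf. Appendix C)

m = gp.Model("facility_budget")
x = m.addVars(C, vtype=GRB.INTEGER, lb=1, ub=N, name="x")  # facilities per county
h = m.addVar(lb=0.0, name="h")                             # amount over budget

m.addConstrs((x[i] >= n[i] for i in range(C)), name="keep_existing")
m.addConstr(gp.quicksum(x[i] for i in range(C)) - N <= h, name="soft_budget")

# Stand-in objective: in the full MIP this term is added to the epidemic
# and equity terms; here we only minimize facilities plus the penalty.
m.setObjective(gp.quicksum(x[i] for i in range(C)) + lam_inf * h, GRB.MINIMIZE)
m.optimize()
print("over budget by h =", h.X)   # h > 0 flags an over-budget solution
```

Penalizing \(h\) with a large \(\lambda_{\mathrm{inf}}\) lets the solver exceed the budget only when forced to by the lower bound of one facility per county, which is exactly how the over budget solutions reported in Table 4 arise.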
In turn, states that have over budget solutions likely have greater percent changes related to each compartment value than they would have had if their solutions were within budget. As shown in Figure 6, the within-budget solutions still on average decrease the number of people with OUD by \(5.07\pm 1.011\%\), increase the number of people getting treatment by \(18.47\pm 4.058\%\), and decrease the number of opioid-related deaths by \(0.39\pm 0.070\%\) after 2 years.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c c c c} \hline \hline **State** & AL & AR & GA & IA & KS & KY & LA & MA & MN & MO & NE & NJ & NM & NV & OK & TN & TX & VA & WI \\ \hline \hline **_h_** & 29 & 29 & 70 & 61 & 67 & 18 & 18 & 5 & 32 & 12 & 77 & 1 & 3 & 10 & 31 & 30 & 170 & 49 & 4 \\ \hline \hline \end{tabular} \end{table} Table 4: Number of facilities over budget (_h_) for states with over budget solutions.

Figure 7: Amount of additional treatment facilities that should be opened in each county determined by the Texas MIP solution.

## 5 Conclusions

In this work, we develop a novel optimization approach that considers complex opioid epidemic dynamics to compute optimal opioid treatment facility locations and treatment budget distributions. The integration of a prescriptive MIP with a dynamical model gives us the ability to show the direct impact of the MIP solutions on epidemic dynamics, and helps the MIP yield solutions that maximize positive impact on population health measures described by the epidemic model. Our compartmental ODE model formulation expands on previous models of the opioid epidemic by including additional representative compartments: an illicit opioid use population compartment and a deceased compartment. This helps us capture illicit opioid use dynamics and incorporate cumulative overdose death data into our model more concretely. Although there have been previous state- and national-level compartmental ODE models of the opioid epidemic defined in the literature, no past work estimates unique parameters for almost every US state. We are able to capture the differences in the dynamics of the epidemic between states and interpret these differences through the model parameters. This helps us more accurately see the impact of our proposed interventions that affect each set of model parameters.

We then compute optimal resource allocation interventions using our MIP formulation for each state and quantify the differential impact of the varying state-level MIP solutions. Our MIP solutions provide insight into the additional number of treatment facilities that should be in each county to minimize opioid-related deaths and OUD prevalence under budgetary constraints. The solutions also indicate how much of a limited treatment budget per time period should be allocated to each county. Importantly, we prioritize social vulnerability within our MIPs so that their proposed solutions make the distribution of treatment facilities more socioeconomically equitable within states. In order to ensure MIP feasibility, we allow some solutions to be "over budget" in terms of the number of treatment facilities that can be opened within a particular state. We find that certain states like Oklahoma, Alabama, and Nevada should consider obtaining more funding to address opioid treatment facility expansion.

Our approach has limitations. Even though neural ODEs can deal with irregularly-sampled time series data, data quality and availability were a challenge when estimating parameters for our compartmental models.
Improved and more refined data related to illicit opioids, particularly fentanyl use, could benefit the quality of the parameter estimation; nevertheless, we have established that a neural ODE framework can be used for this application. We also estimated the overall budget and some of the parameters involved in our MIPs. Nevertheless, these parameters can be updated by decision-makers who have more insight into the parameters' actual values. Additionally, we could not capture certain populations within our epidemiological model, like individuals who started out using illicit opioids rather than transitioning from prescription opioids. However, our integrated approach is general enough to accommodate possible epidemiological model variations, different MIP formulations, and different interventions. For instance, if policy-makers are interested in how to optimize the distribution of the overdose-reversing drug naloxone, they could still use this approach; the intervention would just affect a different compartmental model parameter (which would likely be \(\mu\), the death rate of addicts). In addition, it could also be possible to make decisions for combined interventions that affect several model parameters (_e.g._, optimizing the naloxone distribution, opioid treatment facility locations, and treatment budget allocations).

Our contributions are two-fold: (1) we provide interpretable parameters which quantify the differences between the opioid epidemic dynamics of different states through parameter estimation with neural ODEs, and (2) we formulate a novel MIP approach that proposes more equitable, optimal solutions for opioid treatment facility location and treatment budget allocation. We show that the proposed solutions could have a positive impact, even in the short term, on population health measures. To our knowledge, the approach of integrating a predictive dynamical model with a prescriptive optimization problem has not been explored within opioid epidemic modeling and policy-related literature. In contrast to previous work, our approach directly provides concrete and actionable decisions based on real-world data regarding opioid treatment allocation. Combined with easy-to-use graphical visualization tools, this approach could be used by policy-makers to inform decision-making regarding the opioid epidemic in the future.

## Appendix A Compartmental Model Initial Condition

Since data was not available for every compartment in 1999, we calculated most of the initial compartment values for each state based on previous research. For the \(P\) compartment, a CDC Vital Signs report [Paulozzi et al., 2011] indicated that the sales of prescription opioids in 2010 were 4 times those in 1999, which we used as a proxy for the comparison between the number of people using prescription opioids in 1999 and 2010, respectively. The value for \(P\) in 2010 was divided by 4 to get the initial \(P\) compartment value for each state. For the \(I\) compartment, we performed calculations based on data that stated that the annual average rate of past-year heroin use was 1.6 per 1000 persons in 2002-2004 nationally compared to 2.6 per 1000 persons in 2011-2013 nationally [Jones et al., 2015]. We approximated the initial \(I\) compartment value by multiplying the value for \(I\) in 2016 (the earliest year of data we had for this compartment) by 1.6/2.6 for each state. We assumed that the illicit users population did not change much from 1999 to 2002 and from 2013 to 2016.
For the \(A\) compartment, we utilized data from a study that indicated the prevalence of prescription opioid use disorder was 0.6% in 2003 compared to 0.9% in 2013 [Han et al., 2015]. We multiplied the value for \(A\) in 2016 (the earliest year of data we had for this compartment) by 6/9 for each state. The resulting value was used as the initial \(A\) compartment value, as we assumed the addicted population did not change much between 1999-2003 and 2013-2016. For the \(R\) compartment, we multiplied the data from 2000 by 0.75 to get the initial value for each state. The multiplier was determined based on the data trend. For the \(D\) compartment, we used the data we had for 1999 as the initial \(D\) compartment value for each state. The initial \(S\) compartment value was calculated based on data from the other 5 compartments and the populations of each US state in 1999.

## Appendix B Reformulation of State-Level MIP

To ensure that our state-level MIP can be solved by Gurobi, we reformulate the problem to have at most quadratic constraints and a linear objective. We add new decision variables \(u_{i}\), \(v_{i}\), and \(z_{i}\) for all \(i\in\mathcal{C}\). The \(u_{i}\)'s and \(v_{i}\)'s are defined to reformulate the 1-norm terms in the objective. The \(z_{i}\)'s are continuous variables introduced to turn the constraint involving division by a decision variable into a quadratic constraint. Our formulation contains the expression \(\bar{d}_{ik}/(d\,x_{i})\), and we define \(z_{i}=1/x_{i}\) by adding the constraint \(x_{i}\cdot z_{i}=1\) for all \(i\in\mathcal{C}\). We then reformulate the expression as \(\bar{d}_{ik}z_{i}/d\). We bound \(x_{i}\) such that \(x_{i}\in[1,N]\): each county should have at least one treatment facility offering MAT, and \(N\) is the maximum number of facilities that can be opened within the state. We also define constants \(w_{k}\) for all \(k\in\mathcal{K}\) to linearize the sum of infinity norms objective term, and a constant \(h\) to linearize the final objective term.
We can write the resulting problem as follows: minimize \[D_{K}+\lambda_{A}\sum_{k=0}^{K}A_{k}+\lambda_{\mathrm{pr}}\sum_{i=1}^{C}u_{i}+\lambda_{\mathrm{SVI}}\sum_{i=1}^{C}v_{i}+\lambda_{\mathrm{Pop}}\sum_{k=0}^{K}w_{k}+\lambda_{\mathrm{inf}}h\] subject to \[\sum_{i=1}^{C}x_{i}-N\leq h,\quad 0\leq h\] \[x_{i}\geq n_{i},\quad x_{i}\cdot z_{i}=1,\quad\forall i\in\mathcal{C}\] \[\sum_{i=1}^{C}\bar{d}_{ik}\leq d_{k},\,\forall k\in\mathcal{K}\] \[-u_{i}\leq x_{i}-\frac{\mathrm{pr}_{i}}{\sum_{i=1}^{C}\mathrm{pr}_{i}}N\leq u_{i},\quad\forall i\in\mathcal{C}\] \[-v_{i}\leq x_{i}-\frac{\mathrm{SVI}_{i}}{\sum_{i=1}^{C}\mathrm{SVI}_{i}}N\leq v_{i},\quad\forall i\in\mathcal{C}\] \[-w_{k}\leq\bar{d}_{ik}-\frac{\mathrm{Pop}_{i}}{\sum_{i=1}^{C}\mathrm{Pop}_{i}}d_{k}\leq w_{k},\quad\forall i\in\mathcal{C},k\in\mathcal{K}\] \[S_{k+1}=S_{k}+(\epsilon P_{k}+\delta R_{k}-\alpha S_{k})\Delta,\quad k=1,\ldots,K-1\] \[P_{k+1}=P_{k}+(\alpha S_{k}-(\epsilon+\gamma+\beta)P_{k})\Delta,\quad k=1,\ldots,K-1\] \[I_{k+1}=I_{k}+(\beta P_{k}-\phi I_{k})\Delta,\quad k=1,\ldots,K-1\] \[A_{k+1}=A_{k}+(\gamma P_{k}+\sigma R_{k}+\phi I_{k}-(\zeta+\ell_{k})A_{k}-\mu A_{k})\Delta,\quad k=1,\ldots,K-1\] \[R_{k+1}=R_{k}+(\left(\zeta+\ell_{k}\right)A_{k}-(\delta+\sigma)R_{k})\,\Delta,\quad k=1,\ldots,K-1\] \[D_{k+1}=D_{k}+(\mu A_{k})\Delta,\quad\forall k\in\mathcal{K}\] \[x_{i}\in\mathbf{Z},\quad x_{i}\in[1,N],\quad\forall i\in\mathcal{C}\] \[\bar{d}_{ik},u_{i},v_{i},z_{i},w_{k},S_{k},P_{k},I_{k},A_{k},R_{k},D_{k}\geq 0,\quad\forall i\in\mathcal{C},k\in\mathcal{K},\] where we define \[\ell_{k}=\frac{\sum_{i=1}^{C}d^{-1}(x_{i}-n_{i})\bar{d}_{ik}z_{i}}{A_{k}}.\] This mixed-integer non-convex optimization problem is solved by Gurobi as a bilinear problem.

## Appendix C MIP Hyperparameter Tuning

We tune the \(\lambda_{\mathrm{inf}}\) hyperparameter for each state-level MIP to account for the fact that the magnitude of each MIP differs (_i.e._, each state has a different number of counties and a different population magnitude). We set \(\lambda_{\mathrm{inf}}\) to a value large enough to ensure that the MIP solver can find feasible solutions while minimizing the amount by which a solution is over budget. In this way, \(\lambda_{\mathrm{inf}}\) acts as a soft-constraint penalty parameter, since it is a large parameter set in order to help the solver find feasible solutions. We estimate this parameter specifically for certain states by testing a range of parameter values within [100, 500]. After tuning, we set \(\lambda_{\mathrm{inf}}=475\) for Michigan, Florida, and Washington; \(\lambda_{\mathrm{inf}}=500\) for Illinois; \(\lambda_{\mathrm{inf}}=276\) for New Jersey; \(\lambda_{\mathrm{inf}}=175\) for Maryland; \(\lambda_{\mathrm{inf}}=123\) for Massachusetts; and \(\lambda_{\mathrm{inf}}=450\) otherwise.

## Appendix D Tables and Figures

Table 5 shows the exact values of the parameters we estimate using the neural ODE model for each state. Figure 8 shows how the training loss evolves over 20000 iterations of training for the New York neural ODE model. The loss decreases and eventually flattens off as the number of iterations increases. The final loss is below \(10^{-4}\) in magnitude. This indicates that the model has found the best possible parameter values. We use the results from New York as an illuminating example of how our models tend to fit the data.
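As a complement to the reformulation in Appendix B, the sketch below shows one way the bilinear equality \(x_{i}\cdot z_{i}=1\) can be passed to Gurobi. The toy dimensions and the stand-in objective are hypothetical placeholders.

```python
# A minimal sketch of the z_i = 1/x_i device: the bilinear equality
# x_i * z_i = 1 is handed to Gurobi directly, with the NonConvex
# parameter set so the solver accepts nonconvex quadratic constraints.
import gurobipy as gp
from gurobipy import GRB

C, N = 3, 10
m = gp.Model("bilinear_sketch")
x = m.addVars(C, vtype=GRB.INTEGER, lb=1, ub=N, name="x")
z = m.addVars(C, lb=1.0 / N, ub=1.0, name="z")   # z_i = 1/x_i lives in [1/N, 1]

m.addConstrs((x[i] * z[i] == 1 for i in range(C)), name="inverse")
m.setObjective(gp.quicksum(z[i] for i in range(C)), GRB.MINIMIZE)  # stand-in objective
m.Params.NonConvex = 2   # allow nonconvex quadratic equalities (Gurobi >= 9)
m.optimize()
print([x[i].X for i in range(C)], [z[i].X for i in range(C)])
```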
\begin{table} \begin{tabular}{l l l l l l} \hline \hline State & \(\phi\) & \(\epsilon\) & \(\beta\) & \(\zeta\) & \(\mu\) \\ \hline AL & 0 & 1.307 & 0.0000925 & 0.2871 & 0.00788 \\ AR & 0 & 1.464 & 0.001 & 0.2361 & 0.01636 \\ AZ & 0 & 2.291 & 0 & 0.2693 & 0.00885 \\ CA & 0 & 3.57 & 0 & 0.2731 & 0.00845 \\ CO & 0 & 2.724 & 0 & 0.2132 & 0.0093 \\ CT & 0.04267 & 2.769 & 0.004863 & 0.4428 & 0.00978 \\ DE & 0.03041 & 1.938 & 0.003776 & 0.2426 & 0.00571 \\ FL & 0 & 2.354 & 0 & 0.1843 & 0.0099 \\ GA & 0 & 2.112 & 0 & 0.3405 & 0.01338 \\ HI & 0 & 3.99 & 0.000394 & 0.185 & 0.01035 \\ IA & 0 & 2.765 & 0.001285 & 0.1064 & 0.00969 \\ IL & 0 & 2.973 & 0.001353 & 0.2537 & 0.01157 \\ IN & 0 & 1.8 & 0.000142 & 0.2515 & 0.00761 \\ KS & 0 & 2.137 & 0.000574 & 0.3308 & 0.01461 \\ KY & 0 & 1.458 & 0 & 0.1902 & 0.00818 \\ LA & 0 & 1.606 & 0 & 0.3065 & 0.01038 \\ MA & 0.0459 & 3.027 & 0.003099 & 0.2823 & 0.00517 \\ MD & 0.05173 & 2.74 & 0.004663 & 0.5151 & 0.01392 \\ ME & 0.06146 & 2.21 & 0.003482 & 0.2867 & 0.00458 \\ MI & 0 & 1.928 & 0.000784 & 0.2571 & 0.01402 \\ MN & 0 & 3.412 & 0 & 0.4333 & 0.01297 \\ MO & 0 & 2.03 & 0.00014 & 0.2512 & 0.01486 \\ NC & 0 & 1.972 & 0.000378 & 0.2718 & 0.01356 \\ NE & 0 & 2.689 & 0.000172 & 0.2757 & 0.01276 \\ NH & 0 & 2.353 & 0 & 0.1853 & 0.00933 \\ NJ & 0.01966 & 3.186 & 0.003131 & 0.1817 & 0.00676 \\ NM & 0.01186 & 2.524 & 0.001433 & 0.3059 & 0.02149 \\ NV & 0 & 1.967 & 0 & 0.1622 & 0.01304 \\ NY & 0.05511 & 3.916 & 0.005029 & 0.4383 & 0.01064 \\ OH & 0 & 2.006 & 0 & 0.1098 & 0.00793 \\ OK & 0 & 1.556 & 0 & 0.3297 & 0.02198 \\ OR & 0 & 1.969 & 0 & 0.2563 & 0.0084 \\ PA & 0 & 2.361 & 0.001773 & 0.2184 & 0.00812 \\ RI & 0 & 2.435 & 0.000279 & 0.4883 & 0.00889 \\ SC & 0 & 1.831 & 0.00000316 & 0.2997 & 0.01279 \\ TN & 0 & 1.388 & 0 & 0.3383 & 0.01268 \\ TX & 0 & 2.721 & 0 & 0.378 & 0.01317 \\ UT & 0 & 2.181 & 0 & 0.2526 & 0.01542 \\ VA & 0 & 2.565 & 0 & 0.1838 & 0.01007 \\ VT & 0.1004 & 3.187 & 0.01647 & 0.2776 & 0.01075 \\ WA & 0 & 2.349 & 0 & 0.2085 & 0.00887 \\ WI & 0 & 2.621 & 0.000607 & 0.2131 & 0.0123 \\ \hline \hline \end{tabular} Note: We treated all parameters that were less than \(10^{-7}\) as approximately 0. \end{table} Table 5: Estimated parameters for each US state.
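For readers who wish to experiment with the dynamics, the following sketch rolls the discretized compartmental recursions from Appendix B forward with no intervention (\(\ell_{k}=0\)). The parameters \(\phi\), \(\epsilon\), \(\beta\), \(\zeta\), and \(\mu\) are the New York values from Table 5; \(\alpha\), \(\gamma\), \(\delta\), \(\sigma\), the step size, and the initial state are hypothetical placeholders.

```python
# A minimal sketch of the discretized compartmental dynamics used as
# MIP constraints, simulated forward as a baseline (l_k = 0).
phi, eps, beta, zeta, mu = 0.05511, 3.916, 0.005029, 0.4383, 0.01064  # NY, Table 5
alpha, gamma, delta, sigma = 0.15, 0.01, 0.1, 0.05      # hypothetical
Delta, K = 0.25, 8                                      # 3-month steps, 2 years

# Hypothetical initial compartment values.
S, P, I, A, R, D = 18.0e6, 1.0e6, 0.05e6, 0.1e6, 0.02e6, 0.0
for k in range(K):
    l_k = 0.0   # baseline: no additional treatment capacity
    # The tuple on the right is evaluated first, so every update uses
    # the compartment values from step k, as in the recursions above.
    S, P, I, A, R, D = (
        S + (eps * P + delta * R - alpha * S) * Delta,
        P + (alpha * S - (eps + gamma + beta) * P) * Delta,
        I + (beta * P - phi * I) * Delta,
        A + (gamma * P + sigma * R + phi * I - (zeta + l_k) * A - mu * A) * Delta,
        R + ((zeta + l_k) * A - (delta + sigma) * R) * Delta,
        D + mu * A * Delta,
    )
print(f"after 2 years: A={A:.0f}, R={R:.0f}, D={D:.0f}")
```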
2310.07526
Interaction-aware Traffic Prediction and Scenario-based Model Predictive Control for Autonomous Vehicles on Highways
This paper addresses the problem of traffic prediction and control of autonomous vehicles on highways. A modified Interacting Multiple Model Kalman filter algorithm is applied to predict the motion behavior of the traffic participants by considering their interactions. A scenario generation component is used to produce plausible scenarios of the vehicles based on the predicted information. A novel integrated decision-making and control system is proposed by applying a Scenario-based Model Predictive Control approach. The designed controller considers safety, driving comfort, and traffic rules. The recursive feasibility of the controller is guaranteed under the inclusion of the `worst case' as an additional scenario to obtain safe inputs. Finally, the proposed scheme is evaluated using the HighD dataset. Simulation results indicate that the vehicle performs safe maneuvers in different traffic situations under the designed control framework.
Xiaorong Zhang, Sahar Zeinali, Georg Schildbach
2023-10-11T14:26:02Z
http://arxiv.org/abs/2310.07526v1
# Interaction-aware Traffic Prediction and Scenario-based Model Predictive Control for Autonomous Vehicles on Highways*

###### Abstract

This paper addresses the problem of traffic prediction and control of autonomous vehicles on highways. A modified Interacting Multiple Model Kalman filter algorithm is applied to predict the motion behavior of the traffic participants by considering their interactions. A scenario generation component is used to produce plausible scenarios of the vehicles based on the predicted information. A novel integrated decision-making and control system is proposed by applying a Scenario-based Model Predictive Control approach. The designed controller considers safety, driving comfort, and traffic rules. The recursive feasibility of the controller is guaranteed under the inclusion of the 'worst case' as an additional scenario to obtain safe inputs. Finally, the proposed scheme is evaluated using the HighD dataset. Simulation results indicate that the vehicle performs safe maneuvers in different traffic situations under the designed control framework.

## I Introduction

### _Motivation_

During the past decades, the design of control systems for autonomous vehicles on highways has been extensively studied. The primary purpose of these systems is to safely control the ego vehicle (EV) by utilizing the predicted motion states of the surrounding target vehicles (TVs) [1]. The predicted states are usually uncertain, so generating safe, comfortable, energy-efficient, and real-time capable control strategies is challenging.

### _Literature Review_

Physics-based [2], maneuver-based [3], and interaction-aware motion models [4] are used for the state prediction of vehicles. Since the interconnections between traffic participants are considered in the interaction-aware models, they are a good choice for describing realistic scenarios. Specifically, the mutual influence of the vehicles is usually expressed through a finite set of trajectory clusters or Dynamic Bayesian Networks (DBNs) [5][6]. Moreover, a novel interaction-aware traffic model is proposed by Lefkopoulos et al. [7], combining the physics of the vehicles, the intention of the drivers, and a no-collision assumption using an Interacting Multiple Model Kalman filter (IMM-KF), which establishes a new scheme with improved computational efficiency. Model Predictive Control (MPC) has been widely applied in designing a planner and controller for the EV by considering several constraints, such as the traffic rules, safety, and the comfort of driving [8, 9, 10]. The underlying reason for the prevalence of MPC is its ability to handle explicit constraints in an optimization problem with a moving horizon [11]. As a significant variant of Stochastic MPC, Scenario-based MPC (SCMPC) generates corresponding constraints in terms of the possible situational context [12]. It has been successfully implemented under several highway traffic conditions [13][14], as it is easily compatible with the traffic prediction component and can handle uncertainty using the information contained in a few scenarios. Safety is the most critical aspect of controlling the EV. This feature becomes more challenging in emergency scenarios, e.g., unexpected deceleration of the leading vehicle (LV) or a sudden cut-in of a TV. Such circumstances are identified as safety-critical events (SCEs) [15][16], where the EV applies immediate braking to avoid a crash, possibly decelerating until a standstill.
Adaptive Cruise Control (ACC) is a helpful tool for dealing with SCEs, where the EV makes decisions based on the information about the LV. However, a large time headway in this algorithm may lead to over-conservative actions [17]. A safety controller is proposed based on ACC in [18], where the EV uses a predefined deceleration profile when the LV performs a maneuver in an SCE manner. Another representative solution to SCEs is the rigorous formalizing mathematical model of Responsibility-Sensitive-Safety (RSS) [19]. In this model, a safety distance is defined by assuming a 'worst case' scenario, and the EV responds to an SCE by decelerating at a predefined rate and without applying a full braking force. This approach might be sensitive to the parameter design, and a subtle change of the parameter set might lead to a totally different decision strategy [20].

### _Contribution_

This paper presents a method for controlling the EV on highways with a comprehensive but simple structure, combining the interaction-aware motion prediction of the IMM-KF with the decision-making and control design based on SCMPC. Furthermore, the safety of the EV under the control inputs is guaranteed by theoretically proving the recursive feasibility of the SCMPC under the consideration of the 'worst case' scenario. To the best of the authors' knowledge, no work handles all the mentioned problems with an integrated SCMPC architecture and proves the feasibility of the algorithm at the same time. The performance of the proposed approach is evaluated for different HighD dataset scenarios, indicating that the EV performs safe and desirable maneuvers under the designed control architecture.

## II Control Architecture

The proposed control architecture is shown in Fig. 1 and works as follows. First, the mode states of the TVs are predicted based on the IMM-KF. This information is sent to a scenario generation component to produce all possible maneuvers of the TVs. Then, the most likely scenarios are filtered based on a predefined probability threshold. Finally, an SCMPC-based control system consisting of two control modes is established, corresponding to 'following the current lane' and 'changing the lane'. In addition to the filtered scenarios, the 'worst case' scenario is also included in the design of both controllers. A decision-making module chooses the desired maneuver of the EV based on the cost function values of the two control modes.

## III Scenario Generation

### _Intention-based Policy Mode_

In this subsection, the longitudinal and lateral policy modes of the vehicles are described. In the longitudinal direction, the 'velocity tracking' (VT) and 'distance keeping' (DK) modes [7] are utilized: the EV tracks a reference velocity in the VT mode and keeps a safe distance from its LV in the DK mode. In the lateral direction, three modes corresponding to 'lane 1', 'lane 2', and 'lane 3' are applied to represent the target lane of the vehicles, as shown in Fig. 2. For example, if the EV is currently in lane 2, it can change to the right lane, lane 1, or the left lane, lane 3, according to the associated lateral mode. Thus, the total number of modes is 6, \(M=6\).
The common state vector \(x_{k}\) in all modes at time \(k\) is \[x_{k}\triangleq\big[\underbrace{p_{\text{lon},k}\;\;v_{\text{lon},k}\;\;a_{\text{lon},k}}_{x_{\text{lon},k}}\;\;\underbrace{p_{\text{lat},k}\;\;v_{\text{lat},k}\;\;a_{\text{lat},k}}_{x_{\text{lat},k}}\big]^{\top}, \tag{1}\] where \(p_{*,k}\), \(v_{*,k}\), \(a_{*,k}\) are respectively the position, velocity, and acceleration in the corresponding direction, \(*\in\{\text{lon},\text{lat}\}\). The unknown reference velocity \(v_{\text{ref},k}\) (VT mode) or the reference time gap \(t_{\text{gap},k}\) (DK mode) is also included in the longitudinal policy mode to be estimated. Therefore, the full state vector \(z_{k}\) in each policy mode at time \(k\) is \[z_{k}\triangleq\big[\underbrace{x_{\text{lon},k}\;\;r_{\text{ref},k}}_{z_{\text{lon},k}}\;\;\underbrace{x_{\text{lat},k}}_{z_{\text{lat},k}}\big]^{\top}, \tag{2}\] where \(r_{\text{ref},k}\) indicates the unknown reference velocity \(v_{\text{ref},k}\) or reference time gap \(t_{\text{gap},k}\). We use \(x_{k}\) or \(z_{k}\) to denote the common or full states of an arbitrary vehicle, and \(x_{k}^{(n)}\) or \(z_{k}^{(n)}\) for those of a specific vehicle \(n\). Consider the position \(p^{(\lambda)}\) of the center line of the target lane \(\lambda\); the system matrix and input matrix for the lateral direction and for the target lane \(\lambda\) (\(\lambda=1,2,3\)) are as follows: \[F_{\text{lat},k}^{(\lambda)} =\begin{bmatrix}-\frac{K_{\text{lat},1}T^{3}}{6}&-\frac{K_{\text{lat},2}T^{3}}{6}&-\frac{K_{\text{lat},3}T^{3}}{6}\\ -\frac{K_{\text{lat},1}T^{2}}{2}&-\frac{K_{\text{lat},2}T^{2}}{2}&-\frac{K_{\text{lat},3}T^{2}}{2}\\ -K_{\text{lat},1}T&-K_{\text{lat},2}T&-K_{\text{lat},3}T-1\end{bmatrix}, \tag{5a}\] \[E_{\text{lat},k}^{(\lambda)} =\begin{bmatrix}\frac{K_{\text{lat},1}T^{3}}{6}p^{(\lambda)}\\ \frac{K_{\text{lat},1}T^{2}}{2}p^{(\lambda)}\\ K_{\text{lat},1}Tp^{(\lambda)}\end{bmatrix}. \tag{5b}\]

### _Interaction-aware Estimation and Prediction_

Since the motion behavior of each vehicle is influenced by its surrounding vehicles, the state estimation and prediction of each vehicle are calculated in descending priority order by considering the interaction between vehicles, as proposed in [7]. In particular, the priority criteria are given by

1. If two vehicles are in the same lane, the preceding vehicle has higher priority.

2. If two vehicles are in different lanes, the vehicle with the higher longitudinal progress over a specific horizon has higher priority.

For the sake of clarity, a policy mode corresponds to one model of the IMM-KF in this work. In the IMM-KF [7], we consider the Markov jump linear system (4), where the transition probability from mode \(i\) to mode \(j\) is denoted as \(\pi^{(i|j)}\), and \(\pi^{(i|j)}\in[0,1]\), \(i,j\in\{1,2,...,6\}\).
Since the reference parameters of the VT and DK modes differ, we mix the individual common estimates and initialize each mode in the first step as: \[c^{(i)} =\sum_{j=1}^{M}\pi^{(j|i)}\mu_{k-1}^{(j)}, \tag{6a}\] \[\mu_{k-1}^{(j|i)-} =\frac{\pi^{(j|i)}\mu_{k-1}^{(j)}}{c^{(i)}}, \tag{6b}\] \[\bar{x}_{k-1}^{(i)-} =\sum_{j=1}^{M}\mu_{k-1}^{(j|i)-}\hat{x}_{k-1}^{(j)-}, \tag{6c}\] \[\bar{P}_{k-1}^{(i)-} =\sum_{j=1}^{M}\mu_{k-1}^{(j|i)-}\left[P_{k-1}^{(j)-}+\left(\bar{x}_{k-1}^{(i)-}-\hat{x}_{k-1}^{(j)-}\right)\left(\bar{x}_{k-1}^{(i)-}-\hat{x}_{k-1}^{(j)-}\right)^{\top}\right], \tag{6d}\] where \(\mu_{k-1}^{(j|i)-}\) is the mixing conditional mode probability, and \(\hat{x}_{k-1}^{(j)-}\), \(P_{k-1}^{(j)-}\) are the common state estimate of mode \(j\) and its associated covariance, which are part of the full state estimate \(\hat{z}_{k-1}^{(j)-}\) and its associated covariance \(\mathbb{P}_{k-1}^{(j)-}\). The fused common state estimate and its associated covariance are \(\bar{x}_{k-1}^{(i)-}\) and \(\bar{P}_{k-1}^{(i)-}\); the corresponding fused full state estimate and covariance are \(\bar{z}_{k-1}^{(i)-}\) and \(\bar{\mathbb{P}}_{k-1}^{(i)-}\).

The second step is to predict and update each policy mode as \[\hat{z}_{k-1}^{(i)+} =F_{k-1}^{(i)}\bar{z}_{k-1}^{(i)-}+E_{k-1}^{(i)}, \tag{7a}\] \[\mathbb{P}_{k-1}^{(i)+} =F_{k-1}^{(i)}\bar{\mathbb{P}}_{k-1}^{(i)-}F_{k-1}^{(i)\top}+Q_{k-1}^{(i)}, \tag{7b}\] \[\bar{y}_{k}^{(i)} =y_{k}^{(i)}-H\hat{z}_{k-1}^{(i)+}, \tag{7c}\] \[r_{k}^{(i)} =H\mathbb{P}_{k-1}^{(i)+}H^{\top}+R_{k}^{(i)}, \tag{7d}\] \[L_{k}^{(i)} =\mathbb{P}_{k-1}^{(i)+}H^{\top}\big(r_{k}^{(i)}\big)^{-1}, \tag{7e}\] \[\hat{z}_{k}^{(i)-} =\hat{z}_{k-1}^{(i)+}+L_{k}^{(i)}\bar{y}_{k}^{(i)}, \tag{7f}\] \[\mathbb{P}_{k}^{(i)-} =\big(\mathbf{I}-L_{k}^{(i)}H\big)\mathbb{P}_{k-1}^{(i)+}, \tag{7g}\] with the measurement matrix \(H\), the prior state estimate \(\hat{z}_{k-1}^{(i)+}\) and its covariance \(\mathbb{P}_{k-1}^{(i)+}\), the innovation residual \(\bar{y}_{k}^{(i)}\) and its covariance \(r_{k}^{(i)}\), the Kalman gain \(L_{k}^{(i)}\), and the posterior predicted state estimate \(\hat{z}_{k}^{(i)-}\) and covariance \(\mathbb{P}_{k}^{(i)-}\). Based on the estimation result of the individual mode \(\hat{z}_{k}^{(i)-}\), the state prediction of each policy mode is \[\hat{z}_{t|k}=\phi(t,k)\hat{z}_{k}^{-}+\sum_{\delta=k+1}^{t}\phi(t,\delta)E_{\delta-1}, \tag{8a}\] \[\phi(t,\delta)=\begin{cases}\big(\Pi_{\eta=\delta}^{t-1}F_{\eta}^{\top}\big)^{\top}&\text{if }t>\delta,\\ \mathbf{I}&\text{if }t=\delta,\end{cases} \tag{8b}\] where \(t=k+1,...,k+1+N\), and \(N\) is the prediction horizon. In order to obtain a collision-free prediction ('no-collision prediction'), a mixed integer quadratic programming (MIQP) problem is formulated to obtain the modified state estimate \(\hat{z}_{k}^{(\text{proj})-}\), where the safety constraints between the studied vehicle and the other vehicles with higher priority are considered. Note that the state estimate of each policy mode is still \(\hat{z}_{k}^{-}\); only the state prediction is modified in terms of \(\hat{z}_{k}^{(\text{proj})-}\) and (8).
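A minimal numpy sketch of the mixing step (6a)-(6d) may help fix the indexing; the mode count, transition matrix, and per-mode estimates below are illustrative placeholders.

```python
# A sketch of the IMM mixing step (6a)-(6d): mode probabilities are
# propagated through the transition matrix and the per-mode estimates
# and covariances are fused with the mixing weights.
import numpy as np

M, nx = 6, 3                       # number of policy modes, common-state dimension
rng = np.random.default_rng(0)
# Pi[j, i] = pi^(j|i), the transition probability from mode j to mode i;
# each row sums to 1 (6 * 0.02 + 0.88 = 1).
Pi = np.full((M, M), 0.02) + 0.88 * np.eye(M)
mu = np.full(M, 1.0 / M)                        # mode probabilities mu_{k-1}^(j)
x_hat = rng.normal(size=(M, nx))                # per-mode estimates x_hat_{k-1}^(j)-
P_hat = np.stack([np.eye(nx)] * M)              # per-mode covariances P_{k-1}^(j)-

c = Pi.T @ mu                                   # (6a): normalization constants c^(i)
mu_cond = (Pi * mu[:, None]) / c[None, :]       # (6b): mu_{k-1}^(j|i)-, indexed [j, i]

x_mix = mu_cond.T @ x_hat                       # (6c): fused estimate for each mode i
P_mix = np.zeros_like(P_hat)
for i in range(M):                              # (6d): fused covariance + spread term
    for j in range(M):
        d = (x_mix[i] - x_hat[j])[:, None]
        P_mix[i] += mu_cond[j, i] * (P_hat[j] + d @ d.T)
```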
The state estimation error between \(\hat{z}_{k}^{-}\) and \(\hat{z}_{k}^{(\text{proj})-}\) and its covariance are used to augment the innovation residual \(\bar{y}_{k}^{(i)}\) and its covariance \(r_{k}^{(i)}\) as \(\tilde{y}_{k}^{(i)}\) and \(\tilde{r}_{k}^{(i)}\). Then, the policy mode probability is updated based on the augmented matrices \[\tilde{L}_{k}^{(i)} =\frac{\exp\big(-\frac{1}{2}\tilde{y}_{k}^{(i)\top}\big(\tilde{r}_{k}^{(i)}\big)^{-1}\tilde{y}_{k}^{(i)}\big)}{\left|2\pi\tilde{r}_{k}^{(i)}\right|^{1/2}}, \tag{9a}\] \[\tilde{\mu}_{k}^{(i)} =\frac{c^{(i)}\tilde{L}_{k}^{(i)}}{\sum_{j=1}^{M}c^{(j)}\tilde{L}_{k}^{(j)}}. \tag{9b}\] The final step is to mix the state estimate and its covariance according to the updated probability of each individual mode \[\hat{x}_{k}^{-} =\sum_{i=1}^{M}\tilde{\mu}_{k}^{(i)}\hat{x}_{k}^{(i)-}, \tag{10a}\] \[P_{k}^{-} =\sum_{i=1}^{M}\tilde{\mu}_{k}^{(i)}\big[P_{k}^{(i)-}+(\hat{x}_{k}^{-}-\hat{x}_{k}^{(i)-})(\hat{x}_{k}^{-}-\hat{x}_{k}^{(i)-})^{\top}\big]. \tag{10b}\] The updated state estimate is also modified in terms of the MIQP problem to guarantee safety over the whole prediction horizon. The readers are referred to Lefkopoulos et al. [7] for more details about this method.

### _Scenario Generation of TVs_

A scenario is defined as a tuple of motion maneuvers for all TVs. Assuming that the number of investigated TVs is \(V\), a total of \(M^{V}\) possible scenarios can be generated. Here, \(\mu_{i}^{(n)}\) is the probability of TV \(n\) being in policy mode \(i\), \(i\in\{1,2,...,6\}\). Assuming statistical independence of each vehicle's no-collision prediction over the prediction horizon, the probability of scenario \(s\) is calculated by \[\text{Pr}(s)=\prod_{n=1}^{V}\mu_{i}^{(n)},\quad s=1,...,M^{V}, \tag{11}\] where \(\sum_{s=1}^{M^{V}}\text{Pr}(s)=1\). To retain only high-probability scenarios, scenarios with a probability less than a predefined threshold \(\underline{P}\) are not considered. The probability of the remaining scenarios is normalized by \[\overline{\text{Pr}}(s)=\frac{\text{Pr}(s)}{1-\sum_{\zeta=1}^{\theta}\text{Pr}(\zeta)},\quad s=1,...,M^{V}-\theta, \tag{12}\] where \(\theta\) is the total number of scenarios with probability less than \(\underline{P}\).

## IV Scenario-based Model Predictive Control

Based on the predicted scenarios of the TVs, a feasible trajectory for the EV is calculated by solving a constrained finite-time optimal control problem (CFTOCP) in a moving horizon fashion. The objective of the optimization problem is to follow the planned reference trajectory with minimum effort while accounting for safety constraints, traffic rules, and driving comfort. The first computed control input of the CFTOCP is fed to the system at each time step.
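Before turning to the controller, the scenario enumeration and filtering of (11)-(12) can be sketched as follows; the per-vehicle mode probabilities are illustrative placeholders.

```python
# A minimal sketch of scenario generation: enumerate all mode tuples over
# V target vehicles, form scenario probabilities as products of per-vehicle
# mode probabilities (11), drop scenarios below the threshold P_low, and
# renormalize the rest (12).
import itertools
import numpy as np

M, V, P_low = 6, 2, 0.05
rng = np.random.default_rng(1)
mu = rng.dirichlet(np.ones(M), size=V)   # mu[n, i]: probability of mode i for TV n

scenarios = {
    modes: float(np.prod([mu[n, i] for n, i in enumerate(modes)]))
    for modes in itertools.product(range(M), repeat=V)
}                                        # M**V candidate scenarios
kept = {s: p for s, p in scenarios.items() if p >= P_low}
total_kept = sum(kept.values())          # equals 1 - (probability mass dropped)
kept = {s: p / total_kept for s, p in kept.items()}   # renormalized as in (12)
print(len(kept), "scenarios kept out of", M ** V)
```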
### _Vehicle Model_

The Jerk Model [21] is applied to represent the dynamics of the EV: \[\underbrace{\begin{bmatrix}p_{*,k+1}^{\text{(EV)}}\\ v_{*,k+1}^{\text{(EV)}}\\ a_{*,k+1}^{\text{(EV)}}\end{bmatrix}}_{x_{*,k+1}^{\text{(EV)}}}=\underbrace{\begin{bmatrix}1&T_{\text{p}}&\frac{1}{2}T_{\text{p}}^{2}\\ 0&1&T_{\text{p}}\\ 0&0&1\end{bmatrix}}_{A}\underbrace{\begin{bmatrix}p_{*,k}^{\text{(EV)}}\\ v_{*,k}^{\text{(EV)}}\\ a_{*,k}^{\text{(EV)}}\end{bmatrix}}_{x_{*,k}^{\text{(EV)}}}+\underbrace{\begin{bmatrix}\frac{1}{6}T_{\text{p}}^{3}\\ \frac{1}{2}T_{\text{p}}^{2}\\ T_{\text{p}}\end{bmatrix}}_{B}\underbrace{j_{*,k}^{\text{(EV)}}}_{u_{*,k}^{\text{(EV)}}}, \tag{13}\] with the prediction time step \(T_{\text{p}}\), the states \(x_{*,k}^{\text{(EV)}}\), the position \(p_{*,k}^{\text{(EV)}}\), the velocity \(v_{*,k}^{\text{(EV)}}\), the acceleration \(a_{*,k}^{\text{(EV)}}\), the control input \(u_{*,k}^{\text{(EV)}}\), and the jerk \(j_{*,k}^{\text{(EV)}}\), where \(*\in\{\text{lon},\text{lat}\}\). Equation (13) can be rewritten as \[x_{k+1}^{\text{(EV)}}=\underbrace{\begin{bmatrix}A&0_{3\times 3}\\ 0_{3\times 3}&A\end{bmatrix}}_{\bar{A}}x_{k}^{\text{(EV)}}+\underbrace{\begin{bmatrix}B&0_{3\times 1}\\ 0_{3\times 1}&B\end{bmatrix}}_{\bar{B}}u_{k}^{\text{(EV)}}, \tag{14}\] where \(x_{k}^{\text{(EV)}}\) is defined in (1), and \(u_{k}^{\text{(EV)}}=\begin{bmatrix}u_{\text{lon},k}^{\text{(EV)}}&u_{\text{lat},k}^{\text{(EV)}}\end{bmatrix}^{\top}\). Note that the prediction time step \(T_{\text{p}}\) differs from the sampling time step \(T\) of the IMM-KF, which usually satisfies \(T_{\text{p}}>T\).

### _Scenario-based Model Predictive Controller_

Considering lane-keeping and lane-change as possible motion behaviors of the EV, two control modes are proposed, leading to different reference trajectories. The first control mode aims to make the EV keep its velocity and stay in the current lane, while the second control mode aims to lead the EV to a target lane while maintaining its speed. The decision-making module for choosing the control input resulting from these controllers is integrated into the controller design. The input corresponding to the minimal cost function value is applied to the system. In addition to the generated scenarios, a so-called 'worst case' scenario is introduced to guarantee the recursive feasibility of the CFTOCP. In this scenario, the LV is assumed to be decelerating with its minimum acceleration over the prediction horizon. We introduce two sequences of control inputs \(u_{0}^{\text{(EV)}},...,u_{N-1}^{\text{(EV)}}\) and \(\tilde{u}_{0}^{\text{(EV)}},...,\tilde{u}_{N-1}^{\text{(EV)}}\). The first input sequence is calculated by avoiding collisions between the EV and the LV/TVs under the generated scenarios, and is used to calculate the value of the cost function with its associated states \(x_{k}^{\text{(EV)}}\). The second input sequence is obtained by considering the safety constraints under the 'worst case' scenario; its associated states are \(\tilde{x}_{k}^{\text{(EV)}}\), and the terminal set of the states is \(\tilde{\mathbb{X}}_{f}^{\text{(EV)}}\), which is described in the proof of recursive feasibility. The first computed inputs \(u_{0}^{\text{(EV)}}\) and \(\tilde{u}_{0}^{\text{(EV)}}\) must be equal in order to guarantee the recursive feasibility.
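A minimal sketch of the jerk model (13)-(14), with a placeholder prediction step \(T_{\text{p}}\) and placeholder state and input values, reads as follows.

```python
# A sketch of the jerk model: a triple integrator in each direction,
# stacked block-diagonally for the longitudinal and lateral axes.
import numpy as np

T_p = 0.2
A = np.array([[1.0, T_p, 0.5 * T_p**2],
              [0.0, 1.0, T_p],
              [0.0, 0.0, 1.0]])
B = np.array([[T_p**3 / 6.0],
              [T_p**2 / 2.0],
              [T_p]])

A_bar = np.block([[A, np.zeros((3, 3))],
                  [np.zeros((3, 3)), A]])      # states: (lon, lat) stacked
B_bar = np.block([[B, np.zeros((3, 1))],
                  [np.zeros((3, 1)), B]])      # inputs: lon and lat jerk

x = np.array([0.0, 20.0, 0.0, 2.0, 0.0, 0.0])  # p, v, a in each direction
u = np.array([0.5, -0.1])                      # jerk commands
x_next = A_bar @ x + B_bar @ u                 # one prediction step of (14)
```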
The CFTOCP is formulated as \[J=\min_{u_{k}^{\text{(EV)}},\,\tilde{u}_{k}^{\text{(EV)}}}\sum_{k=0}^{N-1}\left\|x_{k+1}^{\text{(EV)}}-x_{\text{ref},k+1}^{\text{(EV)}}\right\|_{\bar{Q}}+\left\|u_{k}^{\text{(EV)}}\right\|_{\bar{R}}, \tag{15a}\] \[s.t.\quad x_{k+1}^{\text{(EV)}}=f(x_{k}^{\text{(EV)}},u_{k}^{\text{(EV)}}),\;\;k=0,1,...,N-1, \tag{15b}\] \[\tilde{x}_{k+1}^{\text{(EV)}}=f(\tilde{x}_{k}^{\text{(EV)}},\tilde{u}_{k}^{\text{(EV)}}),\;\;k=0,1,...,N-1, \tag{15c}\] \[x_{k}^{\text{(EV)}}\in\mathbb{X}^{\text{(EV)}},\;\;\tilde{x}_{k}^{\text{(EV)}}\in\tilde{\mathbb{X}}^{\text{(EV)}},\;\;k=0,1,...,N-1, \tag{15d}\] \[u_{k}^{\text{(EV)}}\in\mathbb{U}^{\text{(EV)}},\;\;\tilde{u}_{k}^{\text{(EV)}}\in\tilde{\mathbb{U}}^{\text{(EV)}},\;\;k=0,1,...,N-1, \tag{15e}\] \[u_{0}^{\text{(EV)}}=\tilde{u}_{0}^{\text{(EV)}}, \tag{15f}\] \[\tilde{x}_{N}^{\text{(EV)}}\in\tilde{\mathbb{X}}_{f}^{\text{(EV)}}, \tag{15g}\] \[x_{0}^{\text{(EV)}}=\tilde{x}_{0}^{\text{(EV)}}=x^{\text{(EV)}}(0). \tag{15h}\] Here \(x_{\text{ref},k+1}^{\text{(EV)}}\) is the reference state of the reference trajectory for the relevant control mode. Note that, in line with real traffic situations, we only consider that the EV changes one lane in the lane-change mode, so the number of reference trajectories in the lane-change mode depends on the current lane of the EV. \(\bar{Q}\in\mathbb{R}^{6\times 6}\) and \(\bar{R}\in\mathbb{R}^{2\times 2}\) are positive definite weighting matrices for tuning. The feasible state sets \(\mathbb{X}^{\text{(EV)}}\) and \(\tilde{\mathbb{X}}^{\text{(EV)}}\) and the input sets \(\mathbb{U}^{\text{(EV)}}\) and \(\tilde{\mathbb{U}}^{\text{(EV)}}\) are limited by appropriate constraints, as detailed below.

_Remark 1_.: If there is no LV in reality, it is assumed that there is an LV far away from the EV.

_Remark 2_.: During the lane change of the EV, we call the lane-keeping control mode deactivated when keeping the current lane is infeasible.

### _Constraints_

#### IV-C1 State and input constraints

The traffic rules limit the velocity of the vehicles, and the acceleration and jerk are bounded to ensure a comfortable driving feel. The lateral position of the EV is limited by the lower and upper bounds \([l_{\text{lb}},l_{\text{ub}}]\) of the lane: \[0<v_{\text{lon},k}^{\text{(EV)}},\quad l_{\text{lb}}\leq p_{\text{lat},k}^{\text{(EV)}}\leq l_{\text{ub}}, \tag{16a}\] \[\underline{a}_{\text{lon}}\leq a_{\text{lon},k}^{\text{(EV)}}\leq\overline{a}_{\text{lon}},\quad\underline{a}_{\text{lat}}\leq a_{\text{lat},k}^{\text{(EV)}}\leq\overline{a}_{\text{lat}}, \tag{16b}\] \[\underline{j}_{\text{lon}}\leq j_{\text{lon},k}^{\text{(EV)}}\leq\overline{j}_{\text{lon}},\quad\underline{j}_{\text{lat}}\leq j_{\text{lat},k}^{\text{(EV)}}\leq\overline{j}_{\text{lat}}, \tag{16c}\] where \(\underline{\bullet}\) and \(\overline{\bullet}\) denote the minimum and maximum values of the associated variables.

#### IV-C2 Safety constraints

In highway traffic, drivers are required to maintain a safe distance from the preceding vehicles in the same lane, which is translated into the constraint \[d_{k}\geq\underline{d}, \tag{17}\] where the safety distance \(\underline{d}\) is computed by [22] \[\underline{d}=\tau v_{\text{lon},k}^{\text{(EV)}}+\triangle d, \tag{18}\] with the design parameters \(\tau\) and \(\triangle d\). If the reference point of each vehicle is at its geometric center, one may choose, for example, \(\triangle d\geq\frac{l^{\text{(EV)}}+l^{\text{(LV)}}}{2}\), where \(l^{\text{(EV)}}\) and \(l^{\text{(LV)}}\) are the lengths of the EV and the LV.
During the lane-change period, in addition to keeping a safe distance from the LVs in both the current and the target lane, the EV should also maintain a safe distance from the TV behind it in the target lane [12]. The required safety distance also satisfies (18). The safety constraint under the generated scenarios is based on (18), while the safety distance for the 'worst case' scenario collapses to \(\triangle d\).

### _Recursive Feasibility of the SCMPC_

The proof of the recursive feasibility provides a mathematical guarantee for the feasibility of the designed controller.

**Definition 1**.: (**Recursive Feasibility**) The SCMPC in lane-keeping mode is called recursively feasible if a collision with the current LV is always avoidable. For the SCMPC in lane-change mode, recursive feasibility means that no accident occurs between the EV and the other vehicles during the lane change, and that the EV remains safe in the target lane afterwards.

If the safety constraints in the 'worst case' scenario are satisfied, the EV is able to handle all possible traffic circumstances under the SCMPC. The corresponding capability of the EV is represented by a parameter, the minimal stopping horizon, which is defined as follows.

**Definition 2**.: (**Minimal Stopping Horizon**) Given the initial velocity \(v_{\text{lon},0}^{\text{(EV)}}\) and the minimal acceleration \(\underline{a}_{\text{lon}}^{\text{(EV)}}\) of the EV, the minimal stopping horizon \(\underline{N}\in\mathbb{N}\) satisfies \[\underline{N}=\left\lceil\frac{v_{\text{lon},0}^{\text{(EV)}}}{|\underline{a}_{\text{lon}}^{\text{(EV)}}|T_{\text{p}}}\right\rceil, \tag{19}\] where \(\lceil\bullet\rceil\) is defined as the smallest integer that is not smaller than the real number \(\bullet\).

Considering the general traffic situation and rules, we make the following assumptions.

**Assumption 1**.: All vehicles only drive forwards, and the EV is only responsible for front collisions.

**Assumption 2**.: \(u_{k}^{\text{(EV)}}=\begin{bmatrix}0&0\end{bmatrix}^{\top}\) is an element of the feasible set \(\tilde{\mathbb{U}}\).

The recursive feasibility of the SCMPC is proved below.

**Theorem 1**.: If the SCMPC is initially feasible and the prediction horizon satisfies \(N\geq\underline{N}\), then the controller is recursively feasible under Assumptions 1 and 2.

Proof.: Let the two initial control input sequences for the generated normal scenarios and the 'worst case' scenario be \(\{u_{0|0}^{\text{(EV)}},u_{1|0}^{\text{(EV)}},...,u_{\underline{N}|0}^{\text{(EV)}},...,u_{N|0}^{\text{(EV)}}\}\) and \(\{\tilde{u}_{0|0}^{\text{(EV)}},\tilde{u}_{1|0}^{\text{(EV)}},...,\tilde{u}_{\underline{N}|0}^{\text{(EV)}},...,\tilde{u}_{N|0}^{\text{(EV)}}\}\). Choose the second control sequence as the initially feasible solution \(\{\tilde{u}_{0|0}^{\text{(EV)}\star},\tilde{u}_{1|0}^{\text{(EV)}\star},...,\tilde{u}_{\underline{N}|0}^{\text{(EV)}\star},...,\tilde{u}_{N|0}^{\text{(EV)}\star}\}\) in the proof; its related state sequence is \(\{\tilde{x}_{0|0}^{\text{(EV)}\star},\tilde{x}_{1|0}^{\text{(EV)}\star},...,\tilde{x}_{\underline{N}|0}^{\text{(EV)}\star},...,\tilde{x}_{N|0}^{\text{(EV)}\star}\}\).
Due to the condition (19), the terminal set \(\tilde{\mathbb{X}}_{\text{f}}\) satisfies \[\tilde{x}_{N|0}^{\text{(EV)}\star}=\begin{bmatrix}p_{\text{lon},\underline{N}|0}^{\text{(EV)}}&0&0&p^{(\lambda)}&0&0\end{bmatrix}^{\top}, \tag{20}\] where the stopping longitudinal position \(p_{\text{lon},\underline{N}|0}^{\text{(EV)}}\) of the EV is determined by its initial position \(p_{\text{lon},0}^{\text{(EV)}}\), initial velocity \(v_{\text{lon},0}^{\text{(EV)}}\), and minimal acceleration \(\underline{a}_{\text{lon}}^{\text{(EV)}}\). Moreover, the terminal lateral position of the EV is the position of the center line of the target lane \(p^{(\lambda)}\) under the associated control mode. According to the system dynamics (14), we apply \(\tilde{u}_{0|0}^{\text{(EV)}\star}\) to the system and obtain \[x_{1}^{\text{(EV)}}=\bar{A}x_{0}^{\text{(EV)}}+\bar{B}\tilde{u}_{0|0}^{\text{(EV)}\star}=\tilde{x}_{1|0}^{\text{(EV)}\star}. \tag{21}\] Then the following is a feasible solution for the MPC problem initialized at \(x_{1}^{\text{(EV)}}\): \[\begin{split}\{u_{1|1},u_{2|1},...,u_{\underline{N}|1},...,u_{N-1|1},u_{N|1}\}=\\ \{\tilde{u}_{1|0}^{\star},\tilde{u}_{2|0}^{\star},...,\tilde{u}_{\underline{N}|0}^{\star},...,\tilde{u}_{N-1|0}^{\star},\begin{bmatrix}0&0\end{bmatrix}^{\top}\},\end{split} \tag{22}\] where \(\begin{bmatrix}0&0\end{bmatrix}^{\top}\in\tilde{\mathbb{U}}\). The corresponding state sequence is \[\begin{split}\{x_{2|1},x_{3|1},...,x_{\underline{N}+1|1},...,x_{N|1},x_{N+1|1}\}=\\ \{\tilde{x}_{2|0}^{\star},\tilde{x}_{3|0}^{\star},...,\tilde{x}_{\underline{N}+1|0}^{\star},...,\tilde{x}_{N|0}^{\star},\bar{A}\tilde{x}_{N|0}^{\star}+\bar{B}\begin{bmatrix}0&0\end{bmatrix}^{\top}\},\end{split} \tag{23}\] where \(\bar{A}\tilde{x}_{N|0}^{\star}+\bar{B}\begin{bmatrix}0&0\end{bmatrix}^{\top}=\tilde{x}_{N|0}^{\star}\). Both sequences are feasible for the MPC problem because they satisfy the dynamics and the constraints.

## V Simulation and Discussion

The presented approach is evaluated with one recording (ID: 01) of the HighD dataset [23], which records the motion states of \(1047\) vehicles in \(900s\) with a sampling time of \(0.04s\). We take two specific traffic situations as case studies; each specifies the EV's initial motion states and the TVs' motion states during the simulation time. The designed control system manipulates the motion behavior of the EV in the subsequent time steps.

### _Simulation Setup_

Fig. 3 gives the initial traffic scenes of the case studies. Table I shows the parameters used in the two case studies. The sampling time of the simulation is denoted as \(T_{s}\). Table II shows the initial mode states of the vehicles with the proper units, and the length \(l\) and width \(w\) of the vehicles in Cases 1 and 2. Note that we only consider that the EV changes to lane 1 in the lane-change mode of the simulation.

#### V-C1 Case 1

The traffic prediction results for Case 1 at \(5s\) and \(9s\) are shown in Figs. 7 and 8. At \(5s\), the associated probabilities of the generated scenarios are \(\{\mu_{1}^{(1)}\mu_{2}^{(2)},\mu_{2}^{(1)}\mu_{1}^{(2)},\mu_{2}^{(1)}\mu_{2}^{(2)}\}=\{0.241,0.186,0.573\}\). At \(9s\), TV1 might perform the VT maneuver in lane 2, edge color in black, or in lane 3, edge color in magenta, and TV2 might perform a VT maneuver in lane 2, edge color in black. The associated probabilities of the generated scenarios are \(\{\mu_{2}^{(1)}\mu_{2}^{(2)},\mu_{3}^{(1)}\mu_{2}^{(2)}\}=\{0.913,0.087\}\).

#### V-C2 Case 2

The vehicles' motion trajectories, the velocity profile, and the cost function values of the two controllers are displayed in Figs. 9, 10 and 11. Between \(0s\) and \(2s\), the EV maintains its speed under the control strategy while the cost keeps increasing.
The main reason for this situation is that the EV cannot maintain a safe distance from its LV, TV2, while consistently keeping a higher velocity throughout the prediction horizon, so the mismatch between the desired motion states and the states computed by the controller leads to an increasing cost. From \(2s\), the EV starts to reduce its speed to the speed of TV2 until \(4s\), which accordingly leads to a decrease in the value of the cost function. Note that the lane change becomes available only after about \(5s\), before which we set its associated cost function value to 5000. After that, its cost remains higher than that of lane-keeping. Therefore, the EV stays in the current lane in the following time steps.

The traffic prediction results for Case 2 at \(3s\) and \(6s\) are shown in Figs. 12 and 13. There are two generated scenarios at \(3s\), which include TV1 performing a VT maneuver in lane 1, edge color in black, or in lane 2, edge color in white, and TV2 performing a VT maneuver in lane 2, edge color in black. The associated probabilities are \(\{\mu_{1}^{(1)}\mu_{2}^{(2)},\mu_{2}^{(1)}\mu_{2}^{(2)}\}=\{0.860,0.140\}\). At \(6s\), there are three considered scenarios, where TV1 might perform a VT maneuver in lane 1, edge color in black, and TV2 might perform a VT maneuver in lane 1, in lane 2, or in lane 3, edge color in black, white, and blue. The corresponding probabilities are \(\{\mu_{1}^{(1)}\mu_{1}^{(2)},\mu_{1}^{(1)}\mu_{2}^{(2)},\mu_{1}^{(1)}\mu_{3}^{(2)}\}=\{0.130,0.727,0.143\}\). Simulation results of Cases 1 and 2 indicate that the EV executes safe maneuvers under the designed control structure while considering the interaction with other vehicles.

Fig. 7: Traffic prediction at \(5s\) in Case 1

Fig. 8: Traffic prediction at \(9s\) in Case 1

Fig. 9: The motion trajectory of vehicles in Case 2

Fig. 11: The cost function value of controllers in Case 2

Fig. 12: Traffic prediction at \(3s\) in Case 2

## VI Conclusions

In this paper, the interaction-aware estimation of the motion states of the vehicles has been studied using the IMM-KF, and the associated state predictions have been combined with their probabilities to represent the uncertain environment. The generated scenarios, along with the 'worst case' scenario, have been applied in formulating the safety constraints of the SCMPC. The control system consists of lane-keeping and lane-change control modes, where the control input with the lower cost function value is implemented into the system. Moreover, the recursive feasibility of the method has been guaranteed by ensuring no collision between the EV and the LV before the minimal stopping horizon of the EV. The proposed algorithm has been validated for two highway scenes chosen from the HighD dataset. The simulation results demonstrate the capability of the proposed control architecture to perform safe maneuvers.
2301.07267
Central limit theorems for martingales-I : continuous limits
When the limiting compensator of a sequence of martingales is continuous, we obtain a weak convergence theorem for the martingales; the limiting process can be written as a Brownian motion evaluated at the compensator and we find sufficient conditions for both processes to be independent. Examples of applications are provided, notably for occupation time processes and statistical estimators of financial volatility measures.
Bruno Rémillard, Jean Vaillancourt
2023-01-18T02:18:31Z
http://arxiv.org/abs/2301.07267v3
# Central limit theorems for martingales-I : continuous limits

###### Abstract

When the limiting compensator of a sequence of martingales is continuous, we obtain a weak convergence theorem for the martingales; the limiting process can be written as a Brownian motion evaluated at the compensator and we find sufficient conditions for both processes to be independent. As examples of applications, we revisit some known results for the occupation times of Brownian motion and symmetric random walks. In the latter case, our proof is much simpler than the construction of strong approximations. Furthermore, we extend finite dimensional convergence of statistical estimators of financial volatility measures to convergence as stochastic processes.

Key words and phrases: Brownian motion, stochastic processes, weak convergence, martingales, mixtures. 2020 Mathematics Subject Classification: Primary 60G44, Secondary 60F17. Partial funding in support of this work was provided by the Natural Sciences and Engineering Research Council of Canada.

Our goal is to obtain limits that are Brownian motions evaluated at independent increasing processes, without relying on special properties of the processes, nor involving complicated constructions. In fact, under weak conditions involving the sequence of martingale compensators, we will show that the CLT holds for general martingales, and that the limiting process is a mixture of a Brownian motion with the limiting compensator of the sequence of martingales, and both processes are independent. These conditions are easy to verify and are general enough to be applicable to a wide range of situations. The main results are stated in Section 2, while some examples involving either independent and identically distributed (iid) sequences or Markov processes are found in Section 3. A longer application regarding volatility modeling in mathematical finance is in Section 4, extending previous results of Barndorff-Nielsen and Shephard (2002, 2003). Most proofs are relegated to Appendix A.

## 2. Main results

Let \(D=D[0,\infty)\) be the Polish space of \(\mathbb{R}^{d}\)-valued cadlag trajectories (right continuous with left limits everywhere), equipped with Skorohod's \(\mathcal{J}_{1}\)-topology on \(D\) -- consult either Ethier and Kurtz (1986) or Jacod and Shiryaev (2003) for additional insight, as well as any unexplained terminology. All processes considered here have their trajectories in \(D\) and are adapted to a filtration \(\mathbb{F}=(\mathcal{F}_{t})_{t\geq 0}\) on a probability space \((\Omega,\mathcal{F},P)\) satisfying the usual conditions (notably, right continuity of the filtration and completeness). Trajectories in \(D\) are usually noted \(x(t)\) but occasionally \(x_{t}\). Weak convergence of a sequence of \(D\)-valued processes \(X_{n}\) to another such process \(X\), the latter with continuous trajectories, will be denoted by \(X_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}X\), while the weaker convergence of finite dimensional distributions will be denoted by \(X_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}X\). In this paper the limit \(X\) always has continuous trajectories (even though the processes \(X_{n}\) may well not), so weak convergence in the \(\mathcal{J}_{1}\)-topology coincides with that in the \(\mathcal{C}\)-topology, induced by the supremum norm over compact time sets. We will refer to \(\mathcal{C}\)-tightness etc. without further ado. All processes are written in coordinatewise fashion such as \(X_{n}=(X_{n}^{i})_{1\leq i\leq d}\).
Writing \(|\cdot|\) for the Euclidean norm, square integrable processes \(X\) are those satisfying \(E\{|X(t)|^{2}\}<\infty\) for every \(t\geq 0\), while \(L_{2}\)-bounded ones also satisfy \(\sup_{t\geq 0}E\{|X(t)|^{2}\}<\infty\). Suppose that \(M_{n}\) is a sequence of \(D\)-valued square integrable \(\mathbb{F}\)-martingales started at \(M_{n}(0)=0\). Because of the discontinuity of trajectories, the (matrix-valued or cross) quadratic variation \([M_{n}]:=([M_{n}^{i},M_{n}^{j}])_{1\leq i,j\leq d}\) is distinct from its matrix-valued (predictable) compensator \(A_{n}:=\langle M_{n}\rangle\) -- another writing economy which means \((A_{n}^{ij})_{1\leq i,j\leq d}:=(\langle M_{n}^{i},M_{n}^{j}\rangle)_{1\leq i,j\leq d}\) -- and it is the latter that is of interest from a practical point of view. Coordinatewise, we use \([M_{n}^{i}]=[M_{n}^{i},M_{n}^{i}]\) and \(\langle M_{n}^{i}\rangle=\langle M_{n}^{i},M_{n}^{i}\rangle\) as well. The largest jump of \(X\in D\) over \([0,t]\) is denoted by \(J_{t}(X):=\sup_{s\in[0,t]}|X(s)-X(s-)|\). The following assumption will be used repeatedly.

**Hypothesis 2.1**.: All of the following hold:

1. \(A_{n}^{ii}(t)=\langle M_{n}^{i}\rangle_{t}\to\infty\) as \(t\to\infty\) almost surely, for each fixed \(n\geq 1\) and \(i\in\{1,2,\ldots,d\}\);

2. There is a \(D\)-valued process \(A\) such that (i) \(A_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}A\) and \(A^{ij}=0\) for all \(i\neq j\); (ii) for all \(t\geq 0\) and \(i\), \(\lim_{n}E\left\{A_{n}^{ii}(t)\right\}=E\left\{A^{ii}(t)\right\}<\infty\); (iii) for all \(i\), \(A^{ii}(t)\to\infty\) as \(t\to\infty\) almost surely.

Writing the inverse process for \(A_{n}^{ii}\) as \(\tau_{n}^{i}(s)=\inf\{t\geq 0;A_{n}^{ii}(t)>s\}\), one defines the rescaled \(\mathbb{F}_{\tau_{n}^{i}}\)-martingale \(W_{n}^{i}=M_{n}^{i}\circ\tau_{n}^{i}\), with compensator \(A_{n}^{ii}\circ\tau_{n}^{i}\). Note that, by definition and using the right-continuity of \(A_{n}^{ii}\), \(A_{n}^{ii}\circ\tau_{n}^{i}(t)\geq t\). Actually, \(W_{n}^{i}\) is a Brownian motion with respect to the filtration \(\mathbb{F}_{\tau_{n}^{i}}=\{\mathcal{F}_{\tau_{n}^{i}(t)}\}_{t\geq 0}\), whenever Hypothesis 2.1.a holds and \(M_{n}^{i}\) is continuous everywhere, by Dambis (1965) or Dubins and Schwarz (1965). In the latter case, the continuity of \(M_{n}^{i}\) implies the continuity of both \([M_{n}^{i}]\) and \(\langle M_{n}^{i}\rangle\), as well as their equality \([M_{n}^{i}]=\langle M_{n}^{i}\rangle\). In this special case, the sequence \(M_{n}\) therefore comprises a naturally associated sequence of Brownian motions \(W_{n}^{i}\) coordinatewise. This is no longer the case as soon as at least one of the \(M_{n}^{i}\)'s has a discontinuity anywhere. Obtaining a CLT therefore requires building such a Brownian motion, possibly on an enlargement of the stochastic basis \((\Omega,\mathcal{F},\mathbb{F},P)\). Such enlargements will be used systematically in this paper and are understood to affect some statements implicitly, without further ado, for instance in some of the proofs. Equality in law is denoted by \(\stackrel{{ Law}}{{=}}\); convergence in probability is denoted by \(\stackrel{{ P}}{{\to}}\), convergence in law by \(\stackrel{{ Law}}{{\to}}\), and almost sure convergence by \(\stackrel{{ a.s.}}{{\to}}\).
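As an informal numerical illustration of this time change (not part of the formal development), the following sketch evaluates a simulated continuous martingale at the inverse of its quadratic variation; the resulting process behaves like a standard Brownian motion. All discretization choices are placeholders.

```python
# A sketch of the Dambis / Dubins-Schwarz time change: M = int sigma dW,
# evaluated at tau(s) = inf{t : A(t) > s} with A(t) = int sigma^2 ds,
# recovers a standard Brownian motion.
import numpy as np

rng = np.random.default_rng(2)
n, T = 200_000, 10.0
dt = T / n
t = np.arange(n) * dt
sigma = 1.0 + 0.5 * np.sin(t)                    # an illustrative volatility path
dW = rng.normal(scale=np.sqrt(dt), size=n)

M = np.concatenate([[0.0], np.cumsum(sigma * dW)])     # M(t) = int_0^t sigma dW
A = np.concatenate([[0.0], np.cumsum(sigma**2 * dt)])  # A(t) = int_0^t sigma^2 ds

s_grid = np.linspace(0.0, A[-1] * 0.99, 1000)
tau_idx = np.searchsorted(A, s_grid)             # discrete version of tau(s)
W = M[tau_idx]                                   # W = M o tau

incr = np.diff(W)                                # increments over equal s-steps
print(incr.var() / np.diff(s_grid)[0])           # should be close to 1
```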
**Theorem 2.1**.: _Assume that Hypothesis 2.1 holds with \(A\) continuous everywhere; that \(J_{t}(M_{n})\stackrel{{ Law}}{{\rightarrow}}0\) for any \(t>0\); that there exists an \(\mathbb{F}\)-adapted sequence of \(D\)-valued square integrable martingales \(B_{n}\) started at \(B_{n}(0)=0\) so that_ 1. \((A_{n},B_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(A,B)\) _holds, where_ \(B\) _is a Brownian motion with respect to its natural filtration_ \(\mathbb{F}_{B}=\{\mathcal{F}_{B,t}:\;t\geq 0\}\) _and_ \(A\) _is_ \(\mathbb{F}_{B}\)_-measurable;_ 2. \(\langle M_{n}^{i},B_{n}^{j}\rangle_{t}\stackrel{{ Law}}{{\rightarrow}}0\)_, for any_ \(i,j\in\{1,\ldots,d\}\) _and_ \(t\geq 0\)_._ _Then \((M_{n},A_{n},B_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(M,A,B)\) holds, where \(M\) is a continuous square integrable martingale with respect to the (enlarged) filtration, with predictable quadratic variation process \(A\). Moreover, \(M^{i}=W^{i}\circ A^{ii}\), \(i\in\{1,\ldots,d\}\), holds, with \(W\) a standard Brownian motion which is independent of \(B\) and \(A\)._ There is no need for \(A_{n}\) to converge in probability, nor for the nested filtrations required for stable convergence (Jacod and Shiryaev, 2003, Section VIII.5c), the usual way to characterize the law of \(M\) uniquely. The proof is relegated to Appendix A.3. **Remark 2.1**.: An historically important prototype of Theorem 2.1 is Rebolledo's landmark CLT for local martingales, when restricted to sequences of square integrable martingales (Rebolledo, 1980) satisfying an asymptotic rarefaction of jumps condition. Functional CLTs involving limiting mixtures with non-deterministic \(A\) go back to Rebolledo (1979, p. 92-93) for processes converging to a diffusion in \(\mathbb{R}^{d}\) and Johansson (1994), for point process martingales. Additional references on the early successes in the discrete case can be found in Hall and Heyde (1980) and in the continuous case in Jacod and Shiryaev (2003, Section VIII.5). More recently, Merlevede et al. (2019) display the current state of the art of the functional CLT for rescaled non-stationary sequences and arrays of dependent random variables when the limit is continuous -- either a Brownian motion or the stochastic integral of a continuous function with respect to a Brownian motion. The proof of the following useful proposition, given in Appendix A.4, relies on the tightness induced by the convergence of the quadratic variation process. **Proposition 2.2**.: _Suppose \((\xi_{j})_{j\geq 1}\) is a sequence of iid random variables with mean \(0\) and variance \(1\), independent of a continuous stochastic process \(\sigma\) defined on \([0,\infty)\). Set \(M_{n}(t)=n^{-\frac{1}{2}}\;\sum_{j=1}^{\lfloor nt\rfloor}\sigma\left(\frac{j-1}{n}\right)\xi_{j}\), \(V_{n}(t)=n^{-1}\sum_{j=1}^{\lfloor nt\rfloor}\sigma^{2}\left(\frac{j-1}{n}\right)\), and define \(V(t)=\int_{0}^{t}\sigma^{2}(s)ds\). Then \((M_{n},V_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(M,V)\), where \(M=W\circ V\) and \(W\) is a Brownian motion independent of \(V\). In fact, setting \(B_{n}(t)=n^{-\frac{1}{2}}\;\sum_{j=1}^{\lfloor nt\rfloor}\xi_{j}\), then \((M_{n},V_{n},B_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(M,V,B)\), where \(B\) is a Brownian motion independent of \(\sigma\) and \(M\) can also be written as a stochastic integral with respect to \(B\), viz. \(M(t)=\int_{0}^{t}\sigma(s)dB_{s}\)._ 
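Proposition 2.2 lends itself to a direct simulation check; the following sketch (our illustration only, with \(\sigma\) taken as the exponential of a Brownian path purely for definiteness) compares the law of \(M_{n}(1)\) with that of \(W\circ V(1)\):

```python
import numpy as np

# Simulation sketch of Proposition 2.2: M_n(t) = n^{-1/2} sum sigma((j-1)/n) xi_j
# should match W(V(1)) in law, with V(1) = int_0^1 sigma^2(s) ds and W _|_ sigma.
rng = np.random.default_rng(1)
n, reps = 2_000, 5_000
M1, WV1 = np.empty(reps), np.empty(reps)
for r in range(reps):
    # sigma: a continuous process independent of the xi's (assumption: exp of a BM)
    incr = rng.normal(scale=np.sqrt(1.0 / n), size=n)
    sigma = np.exp(np.concatenate(([0.0], np.cumsum(incr)))[:-1])  # sigma((j-1)/n)
    xi = rng.normal(size=n)                  # iid, mean 0, variance 1
    M1[r] = (sigma * xi).sum() / np.sqrt(n)  # M_n(1)
    V1 = (sigma**2).sum() / n                # V_n(1), close to V(1)
    WV1[r] = np.sqrt(V1) * rng.normal()      # W(V(1)) with W independent of sigma
# the two samples should have (approximately) the same distribution
print(np.quantile(M1, [0.1, 0.5, 0.9]))
print(np.quantile(WV1, [0.1, 0.5, 0.9]))
```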
## 3. Examples of application to occupation times ### Occupation times for Brownian motion Let \(B\) denote a Brownian motion, \(V\) continuous with compact support, and \(\mu_{V}=\int_{-\infty}^{\infty}V(y)dy=0\) but \(V\) is not identically \(0\). Set \(F(x)=-2\int_{-\infty}^{x}V(y)dy\) and \(G(x)=\int_{-\infty}^{x}F(y)dy\) and consider the martingale \(M_{1}(t)=G(B_{t})+\int_{0}^{t}V(B_{s})ds\) and the occupation time \(\int_{0}^{t}V(B_{s})ds\). Setting \(B_{n}(t)=n^{-\frac{1}{2}}B_{nt}\), then the continuous martingale \(M_{n}(t)=n^{-\frac{1}{4}}\int_{0}^{nt}F(B_{s})dB_{s}=n^{\frac{1}{4}}\int_{0}^{t}F\left(n^{\frac{1}{2}}B_{n}(s)\right)dB_{n}(s)\) has the same asymptotic behavior as \(n^{-\frac{1}{4}}\int_{0}^{nt}V(B_{s})ds\) since \(G\) is bounded. 
Using the scaling property of Brownian motion, it follows that \((M_{n},[M_{n}],B_{n})\stackrel{{ Law}}{{=}}\left(\tilde{M}_{n},[\tilde{M}_{n}],\tilde{B}\right)\), where \(\tilde{B}\) is another Brownian motion, \(\tilde{M}_{n}(t)=n^{\frac{1}{4}}\int_{0}^{t}F(\sqrt{n}\tilde{B}_{u})d\tilde{B}_{u}\), and \([\tilde{M}_{n}]_{t}=n^{\frac{1}{2}}\int_{0}^{t}F^{2}(\sqrt{n}\tilde{B}_{u})du\). Hence, as \(n\to\infty\), \(A_{n}(t)=[\tilde{M}_{n}]_{t}=\sqrt{n}\int_{\mathbb{R}}F^{2}(\sqrt{n}x)\ell_{t}(x)dx\stackrel{{ a.s.}}{{\to}}\|F\|^{2}\ell_{t}(0)=A(t)\), uniformly over compact time sets, where \(\ell\) is the local time of Brownian motion \(\tilde{B}\), and \(\|F\|^{2}=\int_{\mathbb{R}}F^{2}(x)dx=-2\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}|y-z|V(y)V(z)dzdy\). As a result, \((A_{n},\tilde{B})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(A,\tilde{B})\), and \(A\) is clearly \(\mathbb{F}_{\tilde{B}}\)-measurable. Using Theorem 2.1, both \(M_{n}\) and \(n^{-\frac{1}{4}}\int_{0}^{n\cdot}V(B_{s})ds\) converge weakly to \(W\circ A\), where \(W\) is a Brownian motion independent of \(A\) and \(\tilde{B}\). In fact, the full consequence of Theorem 2.1 states that \((M_{n},A_{n},\tilde{B})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(M,A,\tilde{B})\) holds. The result for \(M_{n}\) was first proven in Papanicolaou et al. (1977). The proof given here is much easier and is similar to the one in Ikeda and Watanabe (1989). The argument above for handling \(A_{n}\) under \(\mu_{V}=0\) carries through to yield \(n^{-\frac{1}{2}}\int_{0}^{n\cdot}V(B_{s})ds\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}\mu_{V}\ell_{\cdot}(0)\) when \(\mu_{V}\neq 0\). **Remark 3.1**.: Recall that \(\ell_{t}(0)/\sqrt{t}\stackrel{{ Law}}{{=}}|B_{1}|\), which has the Mittag-Leffler distribution with parameter \(\frac{1}{2}\). Next, the inverse local time \(\tau\) is known to be a Levy process with density \(\frac{t}{\sqrt{2\pi}}\,y^{-\frac{3}{2}}e^{-\frac{t^{2}}{2y}}\), \(y>0\), and Laplace transform \(E\left[e^{-\lambda\tau_{t}}\right]=e^{-t\sqrt{2\lambda}}\), \(\lambda\geq 0\) (Borodin and Salminen, 2002). Hence \(\tau_{t}\) has a positive stable distribution with index \(\frac{1}{2}\). ### Occupation times for random walks Let \(S_{n}\) be the symmetric simple random walk on the integers \(\mathbb{Z}\), \(N_{n}(x)=\sum_{k=1}^{n}\mathbb{I}(S_{k}=x)\) the number of its visits to \(x\in\mathbb{Z}\) up to time \(n\) and \(V\) a real-valued function on \(\mathbb{Z}\) with compact support but \(V\not\equiv 0\). Setting \(\mu_{V}:=\sum_{x\in\mathbb{Z}}V(x)\), Dobrusin (1955) proved that if \(\mu_{V}\neq 0\), then \(n^{-\frac{1}{2}}\sum_{k=1}^{n}V(S_{k})\stackrel{{ Law}}{{\to}}\mu_{V}\mathcal{V}\), where \(\mathcal{V}\stackrel{{ Law}}{{=}}|Z|\) with \(Z\sim N(0,1)\); while if \(\mu_{V}=0\), then \(n^{-\frac{1}{4}}\sum_{k=1}^{n}V(S_{k})\stackrel{{ Law}}{{\to}}\sqrt{\mu_{v}\mathcal{V}}\,Z\), where \(Z\sim N(0,1)\) is independent of \(\mathcal{V}\), and \(\mu_{v}=2c_{V}^{2}-\sum_{x\in\mathbb{Z}}V^{2}(x)\), where \(c_{V}^{2}=-\sum_{y,z\in\mathbb{Z}}|y-z|V(y)V(z)=2\sum_{z\in\mathbb{Z}}\left\{\sum_{y<z}V(y)\right\}^{2}\). Note that \(\mu_{v}\) corresponds to expression \(\|V\|^{2}\) in Lee and Remillard (1994). 
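Dobrusin's result in the case \(\mu_{V}=0\) is easy to probe by simulation. The sketch below uses the (arbitrary) choice \(V(0)=-1\), \(V(2)=1\), for which \(\mu_{v}=4|a|-2=6\) by the computation later in this subsection, so that \(E\left[\left(n^{-\frac{1}{4}}\sum_{k=1}^{n}V(S_{k})\right)^{2}\right]\to\mu_{v}E(\mathcal{V})=6\sqrt{2/\pi}\):

```python
import numpy as np

# Simulation sketch of Dobrusin's result when mu_V = 0, with the (arbitrary)
# choice V(0) = -1, V(2) = +1, for which mu_v = 4|a| - 2 = 6 (see below).
# The limit is sqrt(mu_v * V) * Z with V =law |N(0,1)| independent of Z, so
# the second moment converges to mu_v * E(V) = 6 * sqrt(2/pi) ~ 4.79.
rng = np.random.default_rng(2)
n, reps = 20_000, 2_000
vals = np.empty(reps)
for r in range(reps):
    S = np.cumsum(rng.choice([-1, 1], size=n))             # S_1, ..., S_n
    vals[r] = (np.sum(S == 2) - np.sum(S == 0)) / n**0.25  # n^{-1/4} sum V(S_k)
print(np.mean(vals**2), "vs", 6 * np.sqrt(2 / np.pi))
```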
Just as in Section 3.1, we prove that \((V_{n},B_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(M,B)\) when \(\mu_{V}=0\), where \(V_{n}(t):=n^{-\frac{1}{4}}\sum_{k=1}^{\lfloor nt\rfloor}V(S_{k})\), \(B_{n}(t):=n^{-\frac{1}{2}}S_{\lfloor nt\rfloor}\), \(B\) is a Brownian motion, \(A(t)=\mu_{v}\ell_{t}(0)\) with \(\ell\) the local time for \(B\), and \(M=W\circ A\), where \(W\) is a Brownian motion independent of \(A\) and \(B\). We first build the pair \((M_{n},A_{n})\). To this end, set \(G(x)=-\sum_{y\in\mathbb{Z}}|x-y|V(y)\). Then \[TG(x) = \frac{G(x+1)+G(x-1)}{2}=-\sum_{y}V(y)\left\{\frac{|y-x-1|+|y-x+1|}{2}\right\}\] \[= -V(x)-\sum_{y>x}V(y)(y-x)-\sum_{y<x}V(y)(x-y)=-V(x)+G(x),\] with \(G\) constant outside the support of \(V\): in fact, if \(V\equiv 0\) on \([a,b]^{\complement}\), then \(G(x)=\sum_{y\in[a,b]}yV(y)=c\) if \(x>b\), while if \(x<a\), then \(G(x)=-c\). Consequently, \(M_{n}(t):=n^{-\frac{1}{4}}\sum_{k=1}^{\lfloor nt\rfloor}\{G(S_{k})-TG(S_{k-1})\}=n^{-\frac{1}{4}}\left\{G(S_{\lfloor nt\rfloor})-G(0)\right\}+n^{-\frac{1}{4}}\sum_{k=1}^{\lfloor nt\rfloor}V(S_{k-1})\) is a martingale with \[A_{n}(t)=\langle M_{n}\rangle_{t} = n^{-\frac{1}{2}}\sum_{k=1}^{\lfloor nt\rfloor}E\left[\left\{G(S_{k})-TG(S_{k-1})\right\}^{2}|\mathcal{F}_{k-1}\right]\] \[= n^{-\frac{1}{2}}\sum_{k=1}^{\lfloor nt\rfloor}\left[TG^{2}(S_{k-1})-\left\{TG(S_{k-1})\right\}^{2}\right]=n^{-\frac{1}{2}}\sum_{k=1}^{\lfloor nt\rfloor}v(S_{k-1}).\] Now, since \(TG=G-V\), it follows that \(v=TG^{2}-(TG)^{2}=2VG-V^{2}+TG^{2}-G^{2}\), which has compact support since \(G^{2}\) is constant outside \([a,b]\). As a result, \[A_{n}(t) = n^{-\frac{1}{2}}\sum_{k=1}^{\lfloor nt\rfloor}v(S_{k-1})=n^{-\frac{1}{2}}\sum_{x\in\mathbb{Z}}v(x)N_{\lfloor nt\rfloor-1}(x)\] \[= n^{-\frac{1}{2}}\sum_{x\in\mathbb{Z}}v(x)\left\{N_{\lfloor nt\rfloor-1}(x)-N_{\lfloor nt\rfloor-1}(0)\right\}+\mu_{v}n^{-\frac{1}{2}}N_{\lfloor nt\rfloor-1}(0)\] \[= n^{-\frac{1}{4}}O_{P}(1)+\mu_{v}n^{-\frac{1}{2}}N_{\lfloor nt\rfloor}(0),\] using Dobrushin's results and the fact that \(v\) has compact support, where \[\mu_{v}=\sum_{x\in\mathbb{Z}}v(x)=-2\sum_{y,x\in\mathbb{Z}}|y-x|V(y)V(x)-\sum_{x\in\mathbb{Z}}V^{2}(x)=\sum_{x\in\mathbb{Z}}\left\{V(x)+2H(x)\right\}^{2},\] and \(H(x)=\sum_{y<x}V(y)\). These expressions are proven in Appendix A. In particular, if \(V(a)=1\), \(a\neq 0\), and \(V(0)=-1\), then \(\mu_{v}=4|a|-2\). It then follows that \((A_{n},B_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(A,B)\), where \(B\) is a Brownian motion and \(A(t)=\mu_{v}\ell_{t}(0)\), with \(\ell\) the local time of the Brownian motion \(B\) at \(0\). Next, setting \(f(x)=x\), one gets that \[\langle M_{n},B_{n}\rangle_{t} = n^{-\frac{3}{4}}\sum_{k=1}^{\lfloor nt\rfloor}\{T(fG)(S_{k-1})-Tf(S_{k-1})TG(S_{k-1})\}=n^{-\frac{3}{4}}\sum_{k=1}^{\lfloor nt\rfloor}g(S_{k-1}),\] where \(g=T(fG)-TfTG\). Since \(Tf=f\), \(G(x)=c\), \(x>b\) and \(G(x)=-c\), \(x<a\), it follows that \(g\) also has compact support and, from the previous calculations and Dobrushin's result, that for any \(t\geq 0\), \(\langle M_{n},B_{n}\rangle_{t}=O_{P}\left(n^{-\frac{1}{4}}\right)\). Therefore, using Theorem 2.1, \((M_{n},A_{n},B_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(W\circ A,A,B)\), where \(W\) is a Brownian motion independent of \(A\) and \(B\); hence \((V_{n},A_{n},B_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(W\circ A,A,B)\) as well, since the above calculations yield also \(\sup_{t\in[0,T]}|V_{n}(t)-M_{n}(t)|=O\left(n^{-\frac{1}{4}}\right)\) for any \(T>0\). 
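The expressions for \(\mu_{v}\) and \(c_{V}^{2}\), proven in Appendix A.1, can also be verified numerically; a small sketch with a randomly drawn integer-valued \(V\) summing to \(0\):

```python
import numpy as np

# Numerical check (sketch) of the identities of Appendix A.1: for V with compact
# support and sum(V) = 0, with H(x) = sum_{y<x} V(y),
#   mu_v = -2 sum |y-x| V(y)V(x) - sum V^2 = sum (V + 2H)^2,  c_V^2 = 2 sum H^2.
rng = np.random.default_rng(3)
V = rng.integers(-3, 4, size=7).astype(float)
V[-1] -= V.sum()                                # force sum(V) = 0
x = np.arange(len(V))
s = np.sum(np.abs(x[:, None] - x[None, :]) * np.outer(V, V))
H = np.concatenate(([0.0], np.cumsum(V)[:-1]))  # H(x) = sum_{y<x} V(y)
print(-2 * s - np.sum(V**2), np.sum((V + 2 * H)**2))  # mu_v, computed two ways
print(-s, 2 * np.sum(H**2))                           # c_V^2, computed two ways
```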
The corresponding result \(n^{-\frac{1}{2}}V_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}\mu_{V}\ell\) when \(\mu_{V}\neq 0\) ensues from the definition of the local time for Brownian motion, as in the proof of \(A_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}A\) in the case \(\mu_{V}=0\). ### Scaling limit for the random comb Let \((\xi_{n},\zeta_{n})\) and \((0,\psi_{n})\) be two iid sequences independent of each other, with \(P(\psi_{n}=\pm 1)=\frac{1}{2}\), and \(P\{(\xi_{n},\zeta_{n})=(\pm 1,0)\}=P\{(\xi_{n},\zeta_{n})=(0,\pm 1)\}=\frac{1}{4}\). The comb process \((C_{1},C_{2})\) (Bertacchi, 2006; Csaki et al., 2009) is a martingale with values on the integer lattice \(\mathbb{Z}^{2}\), as well as a Markov chain, started at \((0,0)\) and defined by \[C_{1}(n+1) = C_{1}(n)+\xi_{n+1}\mathbb{I}_{\{C_{2}(n)=0\}},\] \[C_{2}(n+1) = C_{2}(n)+\psi_{n+1}\mathbb{I}_{\{C_{2}(n)\neq 0\}}+\zeta_{n+1}\mathbb{I}_{\{C_{2}(n)=0\}}.\] Note that \(C_{2}\) is also a Markov chain on its own, while \(C_{1}\) is not. For all \(n\geq 0\), \[E\left[\{C_{1}(n+1)-C_{1}(n)\}^{2}|\mathcal{F}_{n}\right] = \frac{1}{2}\mathbb{I}_{\{C_{2}(n)=0\}},\] \[E\left[\{C_{2}(n+1)-C_{2}(n)\}^{2}|\mathcal{F}_{n}\right] = 1-\frac{1}{2}\mathbb{I}_{\{C_{2}(n)=0\}},\] \[E\left[\{C_{1}(n+1)-C_{1}(n)\}\{C_{2}(n+1)-C_{2}(n)\}|\mathcal{F}_{n}\right] = 0.\] Set \(A_{1}(n)=\sum_{k=1}^{n}\mathbb{I}_{\{C_{2}(k)=0\}}\) and \(\tau_{k}=\inf\{n\geq 1;A_{1}(n)\geq k\}\), the time of the \(k\)-th visit to \(0\) by \(C_{2}\) after the initial departure, with defaults \(A_{1}(0)=0\) and \(\tau_{0}=0\). Since the increments \(\sigma_{n}=\tau_{n}-\tau_{n-1}\) are iid and \(n^{-\frac{1}{2}}C_{2}([n\cdot])\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}W_{2}\), a Brownian motion, there ensues \(n^{-\frac{1}{2}}2^{-1}A_{1}([n\cdot])\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}\eta_{2}\), the local time at \(0\) of \(W_{2}\). Build the \(\mathbb{R}^{2}\)-valued martingale \(\Xi_{n}(t)=\left(n^{-\frac{1}{4}}C_{1}(\lfloor nt\rfloor),\,n^{-\frac{1}{2}}C_{2}(\lfloor nt\rfloor)\right)\), with predictable quadratic variations \(n^{-\frac{1}{2}}2^{-1}A_{1}(\lfloor nt\rfloor)\) and \(n^{-1}\lfloor nt\rfloor-(2n)^{-1}A_{1}(\lfloor nt\rfloor)\) componentwise. Since \(J_{T}(\Xi_{n})\leq n^{-\frac{1}{4}}\to 0\) as \(n\to\infty\) almost surely, the only possible weak limits of \(\Xi_{n}\) have continuous trajectories. Further, \(\left\{n^{-\frac{1}{2}}2^{-1}A_{1}([n\cdot]),n^{-1}[n\cdot]-(2n)^{-1}A_{1}([n\cdot])\right\}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}\{t\mapsto(\eta_{2}(t),t)\}\), a process with continuous trajectories. Since \(\Xi_{n}\) has uncorrelated components, all the conditions of Theorem 2.1 are met and \(\Xi_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(W_{1}\circ\eta_{2},W_{2})\), with \(W_{1}\) a Brownian motion independent of \(\eta_{2}\) and \(W_{2}\). Notice that if we write \(X_{n}=C_{1}(\tau_{n})-C_{1}(\tau_{n-1})\), then \((X_{n},\sigma_{n})_{n\geq 1}\) are iid. ## 4. Volatility modeling and estimation The financial return \(R(t)\) at time \(t\) for some investment under consideration is modelled as \(R(t)=Z\circ\tau(t)\), with \(Z(s):=\gamma B(s)+\beta s\), where \(\tau\) is a strictly increasing process, independent of the Brownian motion \(B\). This model was first proposed by Ane and Geman (2000). Time scale \(\tau\) is meant to reflect business cycles and other features known collectively in economics as business time; \(Z\) is thus the financial return adjusted accordingly. Constant \(\gamma\) is a scaling parameter and constant \(\beta\) the trend after the correction for business time. 
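A simulation sketch of this model follows (the exponential activity increments used to build the business clock \(\tau\) below are an assumption made here for illustration only, not a choice from Ane and Geman):

```python
import numpy as np

# Sketch of R(t) = Z(tau(t)) with Z(s) = gamma*B(s) + beta*s and an increasing
# business clock tau independent of B. The iid exponential increments for tau
# are an illustrative assumption.
rng = np.random.default_rng(4)
n, gamma, beta, reps = 1_000, 1.5, 0.1, 2_000
dt = 1.0 / n
R_T = np.empty(reps)
for r in range(reps):
    dtau = rng.exponential(scale=dt, size=n)   # tau increments, E[tau(1)] = 1
    dZ = gamma * rng.normal(scale=np.sqrt(dtau)) + beta * dtau
    R_T[r] = dZ.sum()                          # R(1) = Z(tau(1))
print(np.mean(R_T), "vs trend", beta)          # E[R(1)] = beta * E[tau(1)] = beta
```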
Note that a different model with the same distributional features is \(R(t)=\int_{0}^{t}\sigma(s)dW(s)+\beta\tau(t)\), where \(\sigma\) is a continuous process independent of the Brownian motion \(W\) and \(\tau(t)=\int_{0}^{t}\sigma^{2}(s)ds\). The latter was considered by Barndorff-Nielsen and Shephard (2002). Rigorous treatment of these models requires some technical results concerning \(D\)-valued square integrable \(\mathbb{F}\)-martingales \(M\) and their compensator \(A=\langle M\rangle\). We gather these next. In what follows, we consider the partitions \(s_{k,\delta}=k\delta\) independent of \(t\), where \(K(t,\delta)=\left\lfloor\frac{t}{\delta}\right\rfloor\). For each \(\delta>0\), construct \[V_{p,\delta}(t):=\delta^{1-\frac{p}{2}}\sum_{1\leq k\leq K(t,\delta)}|M(s_{k,\delta})-M(s_{k-1,\delta})|^{p}\] and \[U_{p,\delta}(t):=\delta^{1-\frac{p}{2}}\sum_{1\leq k\leq K(t,\delta)}\{A(s_{k,\delta})-A(s_{k-1,\delta})\}^{\frac{p}{2}}.\] Before stating the next result, proven in Appendix A.5, set \(\mu_{p}=E(|Z|^{p})=2^{\frac{p}{2}}\frac{\Gamma\left(\frac{p+1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)}\), where \(Z\sim N(0,1)\). So \(\mu_{2n}=\prod_{k=1}^{n}(2k-1)\) and \(\mu_{2n+1}=\sqrt{\frac{2}{\pi}}\cdot\prod_{k=1}^{n}(2k)\). Typical useful values are \(\mu_{2}=1\), \(\mu_{4}=3\) and \(\mu_{8}=105\). **Lemma 4.1**.: _Given is a real-valued martingale \(M\) started at \(M_{0}=0\) with finite \(p^{th}\) moment for some integer \(p\geq 2\) and compensator \(A\), where \(A_{t}=\int_{0}^{t}a_{s}ds\), for some non-negative and continuous stochastic process \(a\). Assume the existence of a Brownian motion \(B\) such that \(M=B\circ A\), with \(A\) independent of \(B\). For \(0\leq s\leq t<\infty\), there holds_ \[E\left\{|M_{t}-M_{s}|^{p}|\mathcal{F}_{s}\right\}=\mu_{p}E\left\{(A_{t}-A_{s})^{\frac{p}{2}}|\mathcal{F}_{s}\right\}. \tag{4.1}\] _Then, \(\lim_{\delta\downarrow 0}U_{p,\delta}=\mathcal{U}_{p}\), where \(\mathcal{U}_{p}(t)=\int_{0}^{t}a_{s}^{\frac{p}{2}}ds\). Furthermore, under the additional assumption that \(M\) has finite \((2p)^{th}\) moment, \(V_{p,\delta}\stackrel{{ Pr}}{{\longrightarrow}}\mathcal{V}_{p}=\mu_{p}\mathcal{U}_{p}\) as \(\delta\downarrow 0\). In particular, if \(M_{t}=\int_{0}^{t}\sigma_{s}dW_{s}\), for some Brownian motion \(W\) and continuous non-negative process \(\sigma\) independent of \(W\), then \(\mathcal{V}_{p}(t)=\mu_{p}\int_{0}^{t}\sigma_{s}^{p}ds\)._ **Remark 4.1**.: This result is an extension of Barndorff-Nielsen and Shephard (2003). They only prove their result for a fixed \(t\) with convergence in probability but claimed it could also hold as a process. The case \(p=2\) is just the definition of quadratic variation \([M]\) -- see the proof of Ethier and Kurtz (1986, Proposition 2.3.4), where the existence of \(\mathcal{V}_{2}:=\lim_{\delta\downarrow 0}V_{2,\delta}\) is shown to hold for any right continuous local martingale and without any additional restriction, neither on \(A\) nor on the filtration. Note that, when \(p>2\), the limit \(\mathcal{V}_{p}\) does not exist if \(M\) is not continuous everywhere. The asymptotics for \(V_{p,\delta}\) under \(p\in(0,2)\) are covered extensively by Jacod (2007) and Jacod (2008), for a large class of processes with jumps, based on equally spaced observations. The cases \(p\geq 3\) are also examined in Jacod (2008, Theorem 2.11(i)) -- the presence of jumps in the limit yields a CLT with \(\delta^{-\frac{1}{2}}\) instead of \(\delta^{1-\frac{p}{2}}\). 
For our continuous limits, the larger fluctuations observed there disappear. See his comments in Remarks 2.14, 2.15 and 2.16. We can now state a first consistency result for an estimator of the realized volatility error in investment returns, when data is collected at regular intervals. This is an extension of the results presented in Barndorff-Nielsen and Shephard (2002), where convergence was limited to a fixed \(t\), while here we obtain the convergence of the whole process. Before stating the first theorem, define \[X_{n}(t) := n^{\frac{1}{2}}\sum_{j=1}^{\lfloor nt\rfloor}\left[\left\{\Delta_{n}M\left(\frac{j}{n}\right)\right\}^{2}-\Delta_{n}A\left(\frac{j}{n}\right)\right],\] \[\langle X_{n}\rangle_{t} := 2n\sum_{j=1}^{\lfloor nt\rfloor}E\left[\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{2}\,\Big{|}\,\mathcal{F}_{\frac{j-1}{n}}\right],\] \[V_{n}(t) := n\sum_{j=1}^{\lfloor nt\rfloor}\left\{\Delta_{n}M\left(\frac{j}{n}\right)\right\}^{4},\] where \(\Delta_{n}f(s)=f(s)-f(s-1/n)\). The proof of the following theorem is given in Appendix A.6. **Theorem 4.2** (Numerical scheme).: _Assume that both \(A(t)\to\infty\) and \(\mathcal{V}_{4}(t)\to\infty\) as \(t\to\infty\). Under all the conditions of Lemma 4.1 with \(p=4\), including \(E\{|M_{t}|^{8}\}<\infty\) for any \(t\geq 0\), there is a standard Brownian motion \(W\) independent of \(\mathcal{V}_{4}\) such that \((X_{n},\langle X_{n}\rangle,V_{n})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(W\circ\mathcal{A},\mathcal{A},\mathcal{V}_{4})\), where \(\mathcal{A}=2\mathcal{U}_{4}=\frac{2}{3}\mathcal{V}_{4}\). Furthermore, for any adapted \(D\)-valued process \(N\) such that \(n^{\frac{1}{2}}\sum_{j=1}^{\lfloor nt\rfloor}\left\{\Delta_{n}N\left(\frac{j}{n}\right)\right\}^{2}\stackrel{{ Pr}}{{\to}}0\) and \(n\sum_{j=1}^{\lfloor nt\rfloor}\left\{\Delta_{n}N\left(\frac{j}{n}\right)\right\}^{2}\Delta_{n}A\left(\frac{j}{n}\right)\stackrel{{ Pr}}{{\to}}0\) both hold for all \(t>0\), there also comes \(Y_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}W\circ\mathcal{A}\), where_ \[Y_{n}(t):=n^{\frac{1}{2}}\sum_{j=1}^{\lfloor nt\rfloor}\left[\left\{\Delta_{n}(M+N)\left(\frac{j-1}{n}\right)\right\}^{2}-\Delta_{n}A\left(\frac{j-1}{n}\right)\right].\] A frequent choice for the perturbation process \(N\) is a linear function of the volatility. Recall the modulus of continuity of \(A\), defined by \[\omega_{\mathcal{C}}(A,\delta,T)=\sup_{0\leq t_{1}<t_{2}\leq T,\ t_{2}-t_{1}<\delta}\|A(t_{2})-A(t_{1})\|.\] **Corollary 4.3**.: _Suppose that \(N(t)=\mu t+\beta A(t)\), for some constants \(\mu\in\mathbb{R}\) and \(\beta\geq 0\), and that the modulus of continuity of \(A\) satisfies \(\delta^{-\alpha}\omega_{\mathcal{C}}(A,\delta,t)\stackrel{{ Pr}}{{\rightarrow}}0\) as \(\delta\to 0\), for some \(\alpha>\frac{3}{4}\) and all \(t>0\). Then the conclusions of Theorem 4.2 hold._ Proof.: For the terms in \(A\), the two conditions on \(N\) are a consequence of \[n^{\frac{1}{2}}\sum_{j=1}^{\lfloor nt\rfloor}\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{2}+n\sum_{j=1}^{\lfloor nt\rfloor}\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{3}\\ \leq tn^{3/2}\omega_{\mathcal{C}}^{2}(A,1/n,t)+tn^{2}\omega_{\mathcal{C}}^{3}(A,1/n,t).\] The terms in \(\mu\) are treated similarly. This numerical scheme extends readily to higher powers. We state it without proof, leaving the details to the reader. 
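Before turning to higher powers, here is a simulation sketch of the scheme of Theorem 4.2, under the assumed (deterministic, purely illustrative) spot volatility \(\sigma(s)=1+\frac{1}{2}\sin(4\pi s)\), for which \(X_{n}(1)\) should be approximately \(N(0,2\,\mathcal{U}_{4}(1))\):

```python
import numpy as np

# Sketch of Theorem 4.2 with the assumed deterministic spot volatility
# sigma(s) = 1 + 0.5*sin(4*pi*s): X_n(1) is close in law to N(0, 2*U_4(1)),
# where U_4(1) = int_0^1 sigma^4(s) ds.
rng = np.random.default_rng(5)
n, reps = 500, 4_000
s = (np.arange(n) + 0.5) / n
sig2 = (1 + 0.5 * np.sin(4 * np.pi * s))**2
dA = sig2 / n                                  # Delta_n A(j/n)
Xn1 = np.empty(reps)
for r in range(reps):
    dM = rng.normal(scale=np.sqrt(dA))         # Delta_n M(j/n), since M = B o A
    Xn1[r] = np.sqrt(n) * np.sum(dM**2 - dA)   # X_n(1)
print(np.var(Xn1), "vs", 2 * np.sum(sig2**2) / n)   # ~ 2 * U_4(1)
```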
First, define \[X_{n,p}(t) := n^{\frac{p-2}{4}}\sum_{j=1}^{\lfloor nt\rfloor}\left[\left|\Delta_{n}M\left(\frac{j}{n}\right)\right|^{\frac{p}{2}}-\mu_{\frac{p}{2}}\{\Delta_{n}A\left(\frac{j}{n}\right)\}^{p/4}\right],\] \[\langle X_{n,p}\rangle_{t} := \left(\mu_{p}-\mu_{\frac{p}{2}}^{2}\right)n^{\frac{p}{2}-1}\sum_{j=1}^{\lfloor nt\rfloor}E\left[\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{\frac{p}{2}}|\mathcal{F}_{\frac{j-1}{n}}\right],\] \[V_{n,p}(t) := n^{\frac{p}{2}-1}\sum_{j=1}^{\lfloor nt\rfloor}\left|\Delta_{n}M\left(\frac{j}{n}\right)\right|^{p}.\] **Theorem 4.4** (New numerical scheme).: _Assume that \(A(t)\rightarrow\infty\) and \(\mathcal{V}_{p}(t)\rightarrow\infty\) as \(t\rightarrow\infty\), with \(\mathcal{V}_{p}\) continuous, for some \(p>4\). Under all the conditions of Lemma 4.1, including the finiteness of the \((2p)^{th}\) moment of \(M\), there is a standard Brownian motion \(W\) independent of \(\mathcal{V}_{p}\) such that \((X_{n,p},\langle X_{n,p}\rangle,V_{n,p})\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}(W\circ\mathcal{A}_{p},\mathcal{A}_{p},\mathcal{V}_{p})\), where \(\mathcal{A}_{p}=\dfrac{\left(\mu_{p}-\mu_{\frac{p}{2}}^{2}\right)}{\mu_{p}}\mathcal{V}_{p}.\)_ **Remark 4.2**.: In their analysis of the asymptotic properties of realized volatility error in investment returns, Barndorff-Nielsen and Shephard (2002) assumed that \(M_{t}=\int_{0}^{t}\sigma_{u}dW_{u}\), with square integrable \(\sigma\) independent of Brownian motion \(W\) -- \(\sigma_{u}^{2}\) is known as the spot volatility or instantaneous volatility at time \(u\). Since \(A_{t}=\int_{0}^{t}\sigma_{u}^{2}du\) is continuous, it follows from the proof of Proposition 2.2 that \(M\) can also be written as \(M=B\circ A\), where \(B\) is a Brownian motion independent of \(A\). They also said that one could consider \(N(t)=\mu t+\beta A(t)\) but mentioned that it could be difficult to obtain \(p\)-variation convergence, especially if \(\beta\neq 0\). From our Corollary 4.3, we see that having this extra term does not influence the value of the limit \(W\left(\dfrac{2}{3}\mathcal{V}_{4}\right)\) for \(Y_{n}\), confirming that these terms can be ignored when estimating realized volatility, at least below the order of third moments. Note that their setting is a modification of the models introduced in Ane and Geman (2000) by adding the drift term \(\mu\), while setting business time to \(\tau=A\) and scaling parameter to \(\gamma=1\). Note also that from Theorem 4.2, Barndorff-Nielsen and Shephard (2002, Theorem 1) can be restated as follows: as \(n\to\infty\), \(\dfrac{X_{n}(1)}{\left\{2n\sum_{j=1}^{n}\{\Delta_{n}A\left(\frac{j}{n}\right)\}^{2}\right\}^{\frac{1}{2}}}\overset{{ Law}}{\to}N(0,1)\) and \(n\sum_{j=1}^{n}\{\Delta_{n}A\left(\frac{j}{n}\right)\}^{2}\overset{{ a.s.}}{\to}\int_{0}^{1}\sigma^{4}(s)ds:=\mathcal{U}_{4}(1)\). Furthermore, \(2n\sum_{j=1}^{\lfloor nt\rfloor}\{\Delta_{n}A\left(\frac{j}{n}\right)\}^{2}-\langle X_{n}\rangle_{t}\) is a martingale that converges to \(0\), due to the continuity of \(A\) and the fact that \(\sigma^{4}\) is locally integrable. As a result, \(\mathcal{V}_{4}=3\mathcal{U}_{4}\). Their result is therefore an upshot of Theorem 4.2. More generally, Barndorff-Nielsen et al. 
(2006) prove a CLT for sequences of continuous \(\mathbb{R}^{d}\)-valued semimartingales \(Y\) of the following general form -- we stick to the case \(d=1\) here for the sake of simplicity: \[Y(t)=Y(0)+\int_{0}^{t}a_{s}ds+\int_{0}^{t}\sigma_{s-}dW(s),\] with \(W\) a standard Brownian motion, \(a\) bounded predictable and \(\sigma\) a \(D\)-valued process. For any pair \(G\) and \(H\) of continuous real-valued functions with at most polynomial growth, the sequence of approximations \[X_{n}(G,H)_{t}=n^{-1}\sum_{j=1}^{\lfloor nt\rfloor}G\left\{n^{\frac{1}{2}}\Delta_{n}Y\left(\frac{j}{n}\right)\right\}\cdot H\left\{n^{\frac{1}{2}}\Delta_{n}Y\left(\frac{j+1}{n}\right)\right\}\] is first shown to obey a law of large numbers: \[X_{n}(G,H)_{t}\overset{{ Pr}}{\to}X(G,H)_{t}:=\int_{0}^{t}\rho_{\sigma_{s}}(G)\rho_{\sigma_{s}}(H)ds,\] where \(\rho_{\sigma_{s}}(G):=E\{G(Z)\}\) with \(Z\sim N(0,\sigma_{s}^{2})\). Under some additional restrictions on both the stochastic structure of \(\sigma_{s}\) and the smoothness of \(G\) and \(H\), a CLT also ensues -- actually in the sense of stable convergence: \[n^{\frac{1}{2}}\{X_{n}(G,H)_{t}-X(G,H)_{t}\}\stackrel{{ Law}}{{\to}}U(G,H)_{t},\] where \(U(G,H)_{t}\) is a stochastic integral with respect to another Brownian motion independent of the ambient filtration. A functional version of this result should ensue from our Theorem 4.4 through the same type of arguments, when \(G\) and \(H\) have at most polynomial growth. We do not pursue this here. ## 5. Conclusion We have shown that under weak conditions involving compensators, one can get a CLT for general martingales, and the limiting process is a mixture of a Brownian motion with the limiting compensator of the sequence of martingales. These conditions are easy to verify and are general enough to be applicable to a wide range of situations. ## Appendix A Auxiliary results and main proofs ### Expression for \(\mu_{v}\) Since \(\sum_{a\leq x\leq b}\{TG^{2}(x)-G^{2}(x)\}=0\), it follows that \[\mu_{v}=\sum_{x\in\mathbb{Z}}v(x)=\sum_{x\in\mathbb{Z}}\left\{2V(x)G(x)-V^{2}(x)\right\}.\] One can check that \(\sum_{y}\sum_{z}|z-y|V(y)V(z)=4\sum_{y}yV(y)H(y)+2\sum_{y}yV^{2}(y)\), where \(H(x)=\sum_{y<x}V(y)\). Note that \(H\equiv 0\) outside \([a,b]\). Now, \(G(x)=-2xH(x)+2\sum_{y<x}yV(y)-c\). As a result, \[\mu_{v} = -4\sum_{x}xV(x)H(x)+4\sum_{x}\sum_{y<x}yV(y)V(x)-\sum_{x}V^{2}(x)\] \[= -8\sum_{x}xV(x)H(x)-4\sum_{x}xV^{2}(x)-\sum_{x}V^{2}(x),\] which is the expression of Equation 10 in Dobrusin (1955). Furthermore, since \(0=\sum_{x}V^{2}(x)+2\sum_{x}\sum_{y<x}V(y)V(x)=\sum_{x}V^{2}(x)+2\sum_{x}V(x)H(x)\), one also gets \[2\sum_{x}H^{2}(x)=2\sum_{a\leq x\leq b}\sum_{a\leq y<x}\sum_{a\leq z<x}V(y)V(z)\\ =2\sum_{a\leq y\leq b}\sum_{a\leq z\leq b}V(y)V(z)\sum_{b\geq x>\max(y,z)}1=2\sum_{a\leq y\leq b}\sum_{a\leq z\leq b}V(y)V(z)\{b-\max(y,z)\}\\ =4\sum_{z}\sum_{y<z}V(y)V(z)(b-z)+2\sum_{y}V^{2}(y)(b-y)=-4\sum_{z}zH(z)V(z)-2\sum_{y}yV^{2}(y),\] proving the expressions \(c_{V}^{2}=2\sum_{x}H^{2}(x)\) and \(2c_{V}^{2}=\mu_{v}+\sum_{x}V^{2}(x)\). Note that \(\mu_{v}=2B_{V}\), where \(B_{V}=\frac{1}{2}\sum_{x\in\mathbb{Z}}\left\{2H(x)+V(x)\right\}^{2}\) (Remillard, 1990). ### Some useful results **Lemma A.1** (Lenglart's inequality).: _Let \(X\) be an \(\mathbb{F}\)-adapted \(D\)-valued process. Suppose that \(Y\) is optional, non-decreasing, and that, for any bounded stopping time \(\tau\), \(E|X(\tau)|\leq E\{Y(\tau)\}\). 
Then for any stopping time \(\tau\) and all \(\varepsilon,\eta>0\),_ * _if_ \(Y\) _is predictable,_ (A.1) \[P(\sup_{s\leq\tau}|X(s)|\geq\varepsilon)\leq\frac{\eta}{\varepsilon}+P(Y(\tau)\geq\eta).\] * _if_ \(Y\) _is adapted,_ (A.2) \[P(\sup_{s\leq\tau}|X(s)|\geq\varepsilon)\leq\frac{1}{\varepsilon}\left[\eta+E\left\{J_{\tau}(Y)\right\}\right]+P(Y(\tau)\geq\eta).\] Proof.: See Jacod and Shiryaev (2003, Lemma I.3.30). Proving \(\mathcal{J}_{1}\)-tightness generally involves the following lemma. **Lemma A.2** (Aldous's criterion).: _Let \(\{X_{n}\}_{n\geq 1}\) be a sequence of \(D\)-valued processes. Suppose that for any sequence of bounded discrete stopping times \(\left\{\tau_{n}\right\}_{n\geq 1}\) and for any sequence \(\left\{\delta_{n}\right\}_{n\geq 1}\) in \([0,1]\) converging to \(0\), the following condition holds, for every \(T>0\): (A) \(X_{n}((\tau_{n}+\delta_{n})\wedge T)-X_{n}(\tau_{n})\overset{{ Law}}{\to}0\). Then, \(\left\{X_{n}\right\}_{n\geq 1}\) is \(\mathcal{J}_{1}\)-tight, if either of the two following conditions holds:_ 1. \(\{X_{n}(0)\}_{n\geq 1}\) _and_ \((J_{T}(X_{n}))_{n\geq 1}\) _are tight;_ 2. \(\{X_{n}(t)\}_{n\geq 1}\) _is tight for any_ \(t\in[0,T]\)_._
Without the continuity assumption on \(M_{n}\), this equivalence no longer holds. Note also that c) follows from (Jacod and Shiryaev, 2003, Theorem 4.13). Proof.: The idea of the proof is to show \(\mathcal{J}_{1}\)-tightness, and then use Remark A.1. For statement a), suppose first that \(M_{n}\) is \(\mathcal{C}\)-tight. Set \(X_{n}(t)=[M_{n}]_{t+\tau_{n}}-[M_{n}]_{\tau_{n}}\) and \(Y_{n}(t)=\sup_{0\leq s\leq t}\{M_{n}(s+\tau_{n})-M_{n}(\tau_{n})\}^{2}\), where \(\tau_{n}\) is a stopping time uniformly bounded by \(T\) for any \(n\). Then, for any bounded stopping time \(\tau\), \(E\{X_{n}(\tau)\}\leq E\{Y_{n}(\tau)\}\). Let \(\delta\) be a bounded stopping time. By (A.2) of Lemma A.1, we have, for any \(\varepsilon,\eta>0\), (A.4) \[P([M_{n}]_{\tau_{n}+\delta_{n}}-[M_{n}]_{\tau_{n}}\geq\varepsilon)\\ \leq\frac{\eta}{\varepsilon}+\frac{1}{\varepsilon}E\{J_{\delta_{ n}}(Y_{n})\}+P(Y_{n}(\delta_{n}))\geq\eta)\\ \leq\frac{\eta}{\varepsilon}+\frac{1}{\varepsilon}E\left\{J_{T+1 }^{2}(M_{n})\right\}+P\left\{\omega_{\mathcal{C}}(M_{n},\delta_{n},T+1)> \sqrt{\eta}\right\}.\] Since \(M_{n}\) is \(\mathcal{C}\)-tight, it follows that \(P\left\{\omega_{\mathcal{C}}(M_{n},\delta_{n},T+1)>\sqrt{\eta}\right\}\to 0\) as \(n\to\infty\). Set \(\eta=\varepsilon^{2}\). Then \(\limsup_{n\to\infty}P\left\{[M_{n}]_{\tau_{n}+\delta_{n}}-[M_{n}]_{\tau_{n}} \geq\varepsilon\right\}\leq\varepsilon\), showing that \([M_{n}]\) meets both conditions (A) and (1) of Lemma A.2, since \(J_{T}([M_{n}])=J_{T}^{2}(M_{n})\). Hence \([M_{n}]\) is \(\mathcal{J}_{1}\)-tight. The fact that \([M_{n}]\) is \(\mathcal{C}\)-tight follows from Remark A.1 and \(J_{T}([M_{n}])\xrightarrow{Law}0\). To complete the proof of a), assume now that \([M_{n}]\) is \(\mathcal{C}\)-tight. Using Lemma A.1 yields \[P\left\{|M_{n}(\tau_{n}+\delta_{n})-M_{n}(\tau_{n})|\geq\varepsilon\right\}\\ \leq\frac{\eta}{\varepsilon^{2}}+\frac{1}{\varepsilon^{2}}E\{J_{ \delta_{n}}([M_{n}]_{\tau_{n}+.}-[M_{n}]_{\tau_{n}})\}+P\left\{[M_{n}]_{\tau_{ n}+\delta_{n}}-[M_{n}]_{\tau_{n}}>\eta\right\}\\ \leq\frac{\eta}{\varepsilon^{2}}+\frac{1}{\varepsilon^{2}}E\{J_{ T+1}([M_{n}])\}+P\left\{\omega_{\mathcal{C}}([M_{n}],\delta_{n},T+1)>\eta \right\}.\] Choosing \(\eta=\varepsilon^{3}\), both conditions (A) and (1) of Lemma A.2 are met and \(M_{n}\) is \(\mathcal{J}_{1}\)-tight. Since \(J_{T}(M_{n})\xrightarrow{Law}0\), it follows from Remark A.1 that \(M_{n}\) is \(\mathcal{C}\)-tight. For statement b), (A.4) becomes \[P(A_{n}(\tau_{n}+\delta_{n})-A_{n}(\tau_{n})\geq\varepsilon)\\ \leq\frac{\eta}{\varepsilon}+\frac{1}{\varepsilon}E\{J_{T+1}^{2}( M_{n})\}+P\left\{\omega_{\mathcal{C}}([M_{n}],\delta_{n},T+1)>\eta\right\},\] showing that \(A_{n}\) meets both conditions (A) and (1) of Lemma A.2. As a result, \(A_{n}\) is \(\mathcal{J}_{1}\)-tight. Since \(J_{T}(A_{n})\xrightarrow{Law}0\), \(A_{n}\) is also \(\mathcal{C}\)-tight, using Remark A.1. Finally, for statement c), use (A.1) of Lemma A.1 instead to prove that each of \(M_{n}\) and \([M_{n}]\) meets both conditions (A) and (1) of Lemma A.2. As a result, both \(M_{n}\) and are \(\mathcal{J}_{1}\)-tight. Since \(J_{T}(M_{n})\stackrel{{Law}}{{\longrightarrow}}0\), \(M_{n}\) is \(\mathcal{C}\)-tight by Remark A.1, and so is \([M_{n}]\) by a). **Proposition A.4**.: _Let \(D\)-valued non-decreasing processes \(A_{n}\) and some continuous process \(A\) be such that \(A_{n}\stackrel{{ f.d.d.}}{{\rightsquigarrow}}A\). 
Then \(A_{n}\) is \(\mathcal{C}\)-tight and \(A_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}A\)._ Proof.: For any fixed \(T>0\) and \(\delta>0\) there holds, with \(i\) running only through the non-negative integers, \[\limsup_{n\to\infty}P(\omega_{\mathcal{C}}(A_{n},\delta,T)\geq\epsilon)\\ \leq\limsup_{n\to\infty}P\left(3\max_{0\leq i\leq T/\delta}\sup_{i\delta\leq s\leq T\wedge(i+1)\delta}|A_{n}(s)-A_{n}(i\delta)|\geq\epsilon\right)\\ \leq\limsup_{n\to\infty}P\left(\bigcup_{0\leq i\leq T/\delta}\left\{|A_{n}(T\wedge(i+1)\delta)-A_{n}(i\delta)|\geq\epsilon/3\right\}\right)\\ \leq P\left(\bigcup_{0\leq i\leq T/\delta}\left\{|A(T\wedge(i+1)\delta)-A(i\delta)|\geq\epsilon/3\right\}\right)\\ \leq P(\omega_{\mathcal{C}}(A,\delta,T)\geq\epsilon/3),\] using convergence in law. Hence \(A_{n}\) is \(\mathcal{C}\)-tight since \(A\) is continuous. ### Proof of Theorem 2.1 By Proposition A.4, the continuity of \(A\) implies that \(A_{n}^{ii}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}A^{ii}\) holds for every \(i\). By Jacod and Shiryaev (2003, Theorem I.4.2), one concludes \((A_{n}^{ij}(t)-A_{n}^{ij}(s))^{2}\leq(A_{n}^{ii}(t)-A_{n}^{ii}(s))(A_{n}^{jj}(t)-A_{n}^{jj}(s))\) almost surely, for every choice of \(i\) and \(j\), so each \(A_{n}^{ij}\) is \(\mathcal{C}\)-tight and hence \(A_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}A\) as well. Next, set \(X_{n}^{i}(t)=[M_{n}^{i}]_{\tau_{n}+t}-[M_{n}^{i}]_{\tau_{n}}\) and \(Y_{n}^{i}(t)=A_{n}^{ii}(\tau_{n}+t)-A_{n}^{ii}(\tau_{n})\), where \(\tau_{n}\) is a sequence of bounded stopping times. The filtration of interest here is \(\{\mathcal{F}_{\tau_{n}+t}\}_{t\geq 0}\). For any \(\epsilon>0\), \(\eta>0\) and sequence \(\delta_{n}\in(0,1)\to 0\), \(EX_{n}^{i}(\delta_{n})=EY_{n}^{i}(\delta_{n})\) implies (A.5) \[P([M_{n}^{i}]_{\tau_{n}+\delta_{n}}-[M_{n}^{i}]_{\tau_{n}}\geq\varepsilon)\leq\frac{1}{\varepsilon}\left[\eta+E\left\{J_{\delta_{n}}(Y_{n}^{i})\right\}\right]+P(Y_{n}^{i}(\delta_{n})\geq\eta),\] using (A.2) of Lemma A.1, with \(J_{\delta_{n}}(Y_{n}^{i})\leq Y_{n}^{i}(\delta_{n})\leq\omega_{\mathcal{C}}(A_{n}^{ii},\delta_{n},T)\), since \(A_{n}^{ii}\) is nondecreasing. Therefore, the continuity of the limit \(A^{ii}\) implies \(Y_{n}^{i}(\delta_{n})\stackrel{{ Law}}{{\to}}0\), while \(E\left\{Y_{n}^{i}(\delta_{n})\right\}\to 0\) follows from Hypothesis 2.1.b(ii) by the dominated convergence theorem (Ethier and Kurtz, 1986, Proposition App1.2). By Theorem A.3, \([M_{n}^{i}]\) is \(\mathcal{J}_{1}\)-tight. Since \(J_{t}([M_{n}^{i}])=J_{t}^{2}(M_{n}^{i})\stackrel{{ Law}}{{\to}}0\) by assumption, \([M_{n}^{i}]\) is actually \(\mathcal{C}\)-tight and hence so are \(M_{n}^{i}\) and \((M_{n}^{i},A_{n}^{ii},[M_{n}^{i}])\) successively. For every choice of \(i\) and \(j\), inequality \(([M_{n}^{i},M_{n}^{j}]_{t}-[M_{n}^{i},M_{n}^{j}]_{s})^{2}\leq([M_{n}^{i}]_{t}-[M_{n}^{i}]_{s})([M_{n}^{j}]_{t}-[M_{n}^{j}]_{s})\) ensures the individual \(\mathcal{C}\)-tightness of each \([M_{n}^{i},M_{n}^{j}]\). The \(\mathcal{C}\)-tightness of the matrix \([M_{n}]\) follows, implying that of \(M_{n}\) and \((M_{n},A_{n},[M_{n}])\). 
By Skorohod's theorem, e.g., Ethier and Kurtz (1986, Theorem 3.1.8), there exists a subsequence \(\{n_{k}\}\), a probability space \((\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})\), and \(D\)-valued processes \(Z^{\prime}_{n_{k}}:=(M^{\prime}_{n_{k}},B^{\prime}_{n_{k}},A^{\prime}_{n_{k}})\) and \(Z^{\prime}:=(M^{\prime},B^{\prime},A^{\prime})\) defined on \((\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})\) which are such that \(Z^{\prime}_{n_{k}}\) and \(Z_{n_{k}}:=(M_{n_{k}},B_{n_{k}},\langle M_{n_{k}}\rangle)\) are identical in law for all \(k\geq 1\), and \(Z^{\prime}_{n_{k}}\) converges almost surely to \(Z^{\prime}\) uniformly on compact time sets. We next prove that, for any such limit point \(Z^{\prime}=(M^{\prime},B^{\prime},A^{\prime})\), \(M^{\prime}\) and \(B^{\prime}\) are both martingales with respect to the natural filtration \(\mathcal{F}^{\prime}_{t}=\sigma\{M^{\prime}(s),B^{\prime}(s),A^{\prime}(s);s\leq t\}\), such that \(\langle M^{\prime}\rangle_{t}=A^{\prime}_{t}\), \(\langle B^{\prime}\rangle_{t}=t\), and \(\langle M^{\prime},B^{\prime}\rangle_{t}\equiv 0\). Next, for every \(\ell\geq 1\), every bounded open set \(O\subset\mathbb{R}^{\ell(d+d+d^{2})}\) and each selection of \(0\leq s_{1}\leq\cdots\leq s_{\ell}\leq s\leq t\), write \(E_{n_{k}}:=\left\{(Z_{n_{k}}(s_{1}),\ldots,Z_{n_{k}}(s_{\ell}))\in O\right\}\), \(E^{\prime}_{n_{k}}:=\left\{\left(Z^{\prime}_{n_{k}}(s_{1}),\ldots,Z^{\prime}_{n_{k}}(s_{\ell})\right)\in O\right\}\), and \(E^{\prime}:=\left\{(Z^{\prime}(s_{1}),\ldots,Z^{\prime}(s_{\ell}))\in O\right\}\). For \(M^{\prime}\) to be an \(\mathcal{F}^{\prime}\)-martingale, it suffices to prove \(E\left\{(M^{\prime}(t)-M^{\prime}(s))\,\mathbb{I}_{E^{\prime}}\right\}=0\). Without loss of generality, we do so coordinatewise (hence \(d=1\)) and drop the superscript \(i\) for the rest of the proof, in order to keep notation to a minimum and avoid writing things like \((M^{\prime}_{n_{k}})^{i}(t)\). Note that \(\left(M^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)\right)\mathbb{I}_{E^{\prime}_{n_{k}}}\) converges almost surely to \((M^{\prime}(t)-M^{\prime}(s))\,\mathbb{I}_{E^{\prime}}\) and it is uniformly integrable since \(E\left\{\left(M^{\prime}_{n_{k}}(t)\right)^{2}\right\}=EA^{\prime}_{n_{k}}(t)=EA_{n_{k}}(t)\to EA(t)\). Since \(A_{n_{k}}(u)\) is \(\mathcal{F}_{n_{k}}(s)\)-measurable for all \(0\leq u\leq s\), \(E_{n_{k}}\) is \(\mathcal{F}_{n_{k}}(s)\)-measurable and \[0=E\left\{\left(M_{n_{k}}(t)-M_{n_{k}}(s)\right)\mathbb{I}_{E_{n_{k}}}\right\} = E\left\{\left(M^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)\right)\mathbb{I}_{E^{\prime}_{n_{k}}}\right\}\] \[\stackrel{{ n_{k}\to\infty}}{{\to}}E\left\{\left(M^{\prime}(t)-M^{\prime}(s)\right)\mathbb{I}_{E^{\prime}}\right\}.\] Moreover, using Ethier and Kurtz (1986, Proposition App2.3) and Hypothesis 2.1.b, the sequence \(\{M^{\prime}_{n_{k}}(t)\}^{2}\) is uniformly integrable for each \(t\geq 0\). Next, for all \(0\leq s\leq t\), (A.6) \[E\left\{\left\{M^{\prime}(t)-M^{\prime}(s)\right\}^{2}\right\} = \lim_{k\to\infty}E\left\{\left\{M^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)\right\}^{2}\right\}\] \[= \lim_{k\to\infty}E\left\{A^{\prime}_{n_{k}}(t)-A^{\prime}_{n_{k}}(s)\right\}=E\left\{A(t)-A(s)\right\}.\] Therefore \(M^{\prime}\) is a square integrable martingale with respect to the filtration \(\{\mathcal{F}^{\prime}(t)\}_{t\geq 0}\) because the cylinder sets \(E^{\prime}\) defined above generate the \(\sigma\)-algebra \(\mathcal{F}^{\prime}(s)\). The same argument also applies to \(B^{\prime}\). 
Next, \(\left\{\left(M^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)\right)^{2}-A^{\prime}_{n_{k}}(t)+A^{\prime}_{n_{k}}(s)\right\}\mathbb{I}_{E^{\prime}_{n_{k}}}\) converges almost surely to \(\left\{\left(M^{\prime}(t)-M^{\prime}(s)\right)^{2}-A^{\prime}(t)+A^{\prime}(s)\right\}\mathbb{I}_{E^{\prime}}\) and its absolute value is bounded by \(\left(M^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)\right)^{2}+A^{\prime}_{n_{k}}(t)\), which converges almost surely to \(\left(M^{\prime}(t)-M^{\prime}(s)\right)^{2}+A^{\prime}(t)\). Using (A.6), \[E\left\{\left(M^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)\right)^{2}+A^{\prime}_{n_{k}}(t)\right\}=2EA^{\prime}_{n_{k}}(t)-EA^{\prime}_{n_{k}}(s)\] converges to \(E\left\{\left(M^{\prime}(t)-M^{\prime}(s)\right)^{2}+A^{\prime}(t)\right\}\). By dominated convergence (Ethier and Kurtz, 1986, Proposition App1.2), one can conclude that \[0 = E\left\{\left\{\left(M^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)\right)^{2}-A^{\prime}_{n_{k}}(t)+A^{\prime}_{n_{k}}(s)\right\}\mathbb{I}_{E^{\prime}_{n_{k}}}\right\}\] \[\rightarrow E\left\{\left\{\left(M^{\prime}(t)-M^{\prime}(s)\right)^{2}-A^{\prime}(t)+A^{\prime}(s)\right\}\mathbb{I}_{E^{\prime}}\right\}.\] It follows that \(A^{\prime}\) is the quadratic variation process of the martingale \(M^{\prime}\) since, by construction, \(A^{\prime}(t)\) is \(\mathcal{F}^{\prime}(t)\)-measurable. Again, the same argument holds true for \(B^{\prime}\), with quadratic variation \(\langle B^{\prime}\rangle_{t}=t\), \(t\geq 0\). Finally, both the within and cross off-diagonal terms in \(\langle M_{n_{k}},B_{n_{k}}\rangle\) are taken care of coordinatewise, as in the preceding argumentation, setting \(d=1\). For instance, the copy of the cross term \[\left\{M^{\prime}_{n_{k}}(t)B^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)B^{\prime}_{n_{k}}(s)-\langle M^{\prime}_{n_{k}},B^{\prime}_{n_{k}}\rangle_{t}+\langle M^{\prime}_{n_{k}},B^{\prime}_{n_{k}}\rangle_{s}\right\}\mathbb{I}_{E^{\prime}_{n_{k}}}\] converges almost surely to \(\left\{M^{\prime}(t)B^{\prime}(t)-M^{\prime}(s)B^{\prime}(s)\right\}\mathbb{I}_{E^{\prime}}\) and its absolute value, using Kunita-Watanabe's inequality, is bounded by \[g_{n_{k}}=\frac{1}{2}\left(M^{\prime}_{n_{k}}(t)\right)^{2}+\frac{1}{2}\left(M^{\prime}_{n_{k}}(s)\right)^{2}+\frac{1}{2}\left(B^{\prime}_{n_{k}}(t)\right)^{2}+\frac{1}{2}\left(B^{\prime}_{n_{k}}(s)\right)^{2}+\frac{1}{2}A^{\prime}_{n_{k}}(t)+\frac{1}{2}\langle B^{\prime}_{n_{k}}\rangle_{t},\] which converges almost surely to \[g=\frac{1}{2}\left(M^{\prime}(t)\right)^{2}+\frac{1}{2}\left(M^{\prime}(s)\right)^{2}+\frac{1}{2}\left(B^{\prime}(t)\right)^{2}+\frac{1}{2}\left(B^{\prime}(s)\right)^{2}+\frac{1}{2}t+\frac{1}{2}A^{\prime}(t).\] Hypothesis 2.1.b implies \(E(g_{n_{k}})\to E(g)\), so \[E\left\{\left\{M^{\prime}_{n_{k}}(t)B^{\prime}_{n_{k}}(t)-M^{\prime}_{n_{k}}(s)B^{\prime}_{n_{k}}(s)-\langle M^{\prime}_{n_{k}},B^{\prime}_{n_{k}}\rangle_{t}+\langle M^{\prime}_{n_{k}},B^{\prime}_{n_{k}}\rangle_{s}\right\}\mathbb{I}_{E^{\prime}_{n_{k}}}\right\}\equiv 0\] converges to \(E\left\{\left\{M^{\prime}(t)B^{\prime}(t)-M^{\prime}(s)B^{\prime}(s)\right\}\mathbb{I}_{E^{\prime}}\right\}\) and hence \(\langle M^{\prime},B^{\prime}\rangle=0\). 
Thus any limit point \(Z^{\prime}=(M^{\prime},B^{\prime},A^{\prime})\) has the property that \(M^{\prime}\), \(B^{\prime}\), and \(M^{\prime}B^{\prime}\) are martingales with respect to the natural filtration \(\mathcal{F}^{\prime}_{t}\), with \(\langle M^{\prime}\rangle=A^{\prime}\), \(B^{\prime}\) is a Brownian motion, and most importantly that \(\langle M^{\prime},B^{\prime}\rangle_{t}\equiv 0\). Since we already know that the trajectories of \(M^{\prime}\) and \(B^{\prime}\) are continuous, it follows from Ikeda and Watanabe (1989, Theorem II.7.3) that \(M^{\prime}=W^{\prime}\circ A^{\prime}\) with independent Brownian motions \(W^{\prime}\) and \(B^{\prime}\) on probability space \((\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})\). Since \(A^{\prime}\) is \(\mathbb{F}_{B^{\prime}}\)-measurable by hypothesis, \(W^{\prime}\) is also independent of \(A^{\prime}\). Therefore all limit points \((M^{\prime},B^{\prime},A^{\prime})\) have the same law, since the law of \(A^{\prime}\) is the same as that of \(A\). ### Proof of Proposition 2.2 Let \(\mathcal{F}_{n,t}=\sigma\left\{\xi_{j},\sigma\left(\frac{j}{n}\right);j\leq\lfloor nt\rfloor\right\}\). Then \(M_{n}\) is a \(\mathcal{F}_{n,t}\)-martingale with \(\langle M_{n}\rangle(t)=n^{-1}\sum_{j=1}^{\lfloor nt\rfloor}\sigma^{2}\left(\frac{j-1}{n}\right)=V_{n}(t)\). From the continuity of \(\sigma\), one gets that \(V_{n}\to V\) and \(V\) is continuous. Also, if \(B_{n}(t)=n^{-\frac{1}{2}}\ \sum_{j=1}^{\lfloor nt\rfloor}\xi_{j}\), then \((B_{n},V_{n})\xrightarrow{\mathcal{C}}(B,V)\), where \(B\) is a Brownian motion independent of \(\sigma\) and \(V\). Since \(J_{t}(M_{n})\xrightarrow{Law}0\) holds, Theorem A.3 shows that \((M_{n},V_{n},B_{n})\) is \(\mathcal{C}\)-tight. To complete the proof, let \(W\) be a Brownian motion independent of \(V\). It is sufficient to show that, for any \(0=t_{0}<t_{1}<\cdots<t_{m}\), the variables \(M_{n}(t_{1}),\ldots,M_{n}(t_{m})\), \(V_{n}(t_{1}),\ldots,V_{n}(t_{m})\), and \(B_{n}(t_{1}),\ldots,B_{n}(t_{m})\) converge jointly in law to \(W\circ V(t_{1}),\ldots,W\circ V(t_{m})\), \(V(t_{1}),\ldots,V(t_{m})\), and \(B(t_{1}),\ldots,B(t_{m})\). To this end, take \(\theta_{1},\eta_{1},\lambda_{1},\ldots,\theta_{m},\eta_{m},\lambda_{m}\in\mathbb{R}\), and set \(\varphi(s)=E\left(e^{is\xi_{j}}\right)\). Next, setting \(G_{n}=e^{i\sum_{j=1}^{m}\eta_{j}\{V_{n}(t_{j})-V_{n}(t_{j-1})\}}\), \(I_{n}(t)=n^{-1}\sum_{j=1}^{\lfloor nt\rfloor}\sigma\left(\frac{j-1}{n}\right)\) and \(I(t)=\int_{0}^{t}\sigma(s)ds\), and using the standard proof of the CLT, one gets \[E\left[e^{i\sum_{k=1}^{m}[\lambda_{k}\{M_{n}(t_{k})-M_{n}(t_{k-1})\}+\theta_{k}\{B_{n}(t_{k})-B_{n}(t_{k-1})\}+\eta_{k}\{V_{n}(t_{k})-V_{n}(t_{k-1})\}]}\right]\\ =E\left[G_{n}\prod_{k=1}^{m}\prod_{j=\lfloor nt_{k-1}\rfloor+1}^{\lfloor nt_{k}\rfloor}\varphi\left\{n^{-\frac{1}{2}}\ \theta_{k}+n^{-\frac{1}{2}}\ \lambda_{k}\sigma\left(\frac{j-1}{n}\right)\right\}\right]\\ =E\left[G_{n}e^{-\frac{1}{2}\sum_{k=1}^{m}\lambda_{k}^{2}\{V_{n}(t_{k})-V_{n}(t_{k-1})\}-\sum_{k=1}^{m}\lambda_{k}\theta_{k}\{I_{n}(t_{k})-I_{n}(t_{k-1})\}-\frac{1}{2}\sum_{k=1}^{m}\theta_{k}^{2}(t_{k}-t_{k-1})}\right]+o(1)\\ \to E\left[e^{-\frac{1}{2}\sum_{k=1}^{m}\lambda_{k}^{2}\{V(t_{k})-V(t_{k-1})\}-\sum_{k=1}^{m}\lambda_{k}\theta_{k}\{I(t_{k})-I(t_{k-1})\}-\frac{1}{2}\sum_{k=1}^{m}\theta_{k}^{2}(t_{k}-t_{k-1})+i\sum_{k=1}^{m}\eta_{k}\{V(t_{k})-V(t_{k-1})\}}\right]\\ =E\left[e^{i\sum_{k=1}^{m}\left[\lambda_{k}\int_{t_{k-1}}^{t_{k}}\sigma(s)dB_{s}+\eta_{k}\{V(t_{k})-V(t_{k-1})\}+\theta_{k}\{B(t_{k})-B(t_{k-1})\}\right]}\right],\] where the cross terms in \(\lambda_{k}\theta_{k}\) account for the covariance \(\int_{t_{k-1}}^{t_{k}}\sigma(s)ds\) between the increments of \(\int\sigma\,dB\) and \(B\), given \(\sigma\). Taking \(\theta_{1}=\cdots=\theta_{m}=0\), one gets that \(M\) also has the same distribution as \(W\circ V\), for a Brownian motion \(W\) independent of \(V\). 
### Proof of Lemma 4.1 Equation (4.1) proceeds from \(E\left\{|B\circ A_{t}-B\circ A_{s}|^{p}|\sigma\{A\}\right\}=\mu_{p}(A_{t}-A_{s})^{\frac{p}{2}}\), which itself ensues from the independence of \(A\) and \(B\), plus the moments of a standard normal distribution. When \(p\geq 2\), note that \(\mathcal{U}_{p}(t)=\int_{0}^{t}a_{s}^{\frac{p}{2}}ds\geq U_{p,\delta}(t)\) and \(\delta^{1-\frac{p}{2}}\{A(s_{k,\delta})-A(s_{k-1,\delta})\}^{\frac{p}{2}}=\delta\{a(w_{k,\delta})\}^{\frac{p}{2}}\leq\int_{s_{k-1,\delta}}^{s_{k,\delta}}a_{s}^{\frac{p}{2}}ds\), for some \(w_{k,\delta}\in[s_{k-1,\delta},s_{k,\delta}]\). As a result, \(\lim_{\delta\downarrow 0}U_{p,\delta}=\mathcal{U}_{p}\) since \(a\) is continuous and \(\lim_{\delta\downarrow 0}\sum_{k=1}^{K(t,\delta)}\int_{s_{k-1,\delta}}^{s_{k,\delta}}\left[a_{u}^{\frac{p}{2}}-\{a(w_{k,\delta})\}^{\frac{p}{2}}\right]du=0\). It remains to show that the family of martingales \(Z_{\delta}:=V_{p,\delta}-\mu_{p}U_{p,\delta}\) converges in probability to \(0\), uniformly on compact time sets -- the martingale property proceeds at once, just as in the proof of Equation (4.1). Assuming \(E\{|M_{t}|^{2p}\}<\infty\) for any \(t\geq 0\), the representation \(M=B\circ A\) yields \[E\left\{|M(s_{k,\delta})-M(s_{k-1,\delta})|^{p}\{A(s_{k,\delta})-A(s_{k-1,\delta})\}^{\frac{p}{2}}\Big{|}\mathcal{F}_{s_{k-1,\delta}}\right\}\\ =\mu_{p}E\left\{\{A(s_{k,\delta})-A(s_{k-1,\delta})\}^{p}\Big{|}\mathcal{F}_{s_{k-1,\delta}}\right\}.\] Expanding the square in \[E\left\{\Big{(}|M(s_{k,\delta})-M(s_{k-1,\delta})|^{p}-\mu_{p}\{A(s_{k,\delta})-A(s_{k-1,\delta})\}^{\frac{p}{2}}\Big{)}^{2}\,\Big{|}\mathcal{F}_{s_{k-1,\delta}}\right\}\\ =(\mu_{2p}-\mu_{p}^{2})E\left\{\{A(s_{k,\delta})-A(s_{k-1,\delta})\}^{p}\Big{|}\mathcal{F}_{s_{k-1,\delta}}\right\}\] implies that \(Z_{\delta}\) is square integrable with compensator \(\langle Z_{\delta}\rangle\) given by \[\langle Z_{\delta}\rangle_{t}=\delta^{2-p}(\mu_{2p}-\mu_{p}^{2})\sum_{1\leq k\leq K(t,\delta)}E\left\{\{A(s_{k,\delta})-A(s_{k-1,\delta})\}^{p}\Big{|}\mathcal{F}_{s_{k-1,\delta}}\right\},\] so there ensues \(E\{\langle Z_{\delta}\rangle_{t}\}=\delta(\mu_{2p}-\mu_{p}^{2})E\{U_{2p,\delta}(t)\}\), which goes to \(0\) with \(\delta\). By Lemma A.1, \(\sup_{0\leq s\leq t}|Z_{\delta}(s)|\) converges in probability to \(0\) with \(\delta\) as well. ### Proof of Theorem 4.2 By Lemma 4.1, \(V_{n}\) converges to \(\mathcal{V}_{4}\) in probability and hence \(V_{n}\stackrel{{\mathcal{C}}}{{\rightsquigarrow}}\mathcal{V}_{4}\) holds, by Proposition A.4. 
The expression for \(\langle X_{n}\rangle\) is a direct consequence of Lemma 4.1, which entails \(E\left[\left\{\Delta_{n}M\left(\frac{j}{n}\right)\right\}^{4}|\mathcal{F}_{\frac{j-1}{n}}\right]=3E\left[\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{2}|\mathcal{F}_{\frac{j-1}{n}}\right]\), while the representation \(M=B\circ A\) yields \[E\left[\left\{\Delta_{n}M\left(\frac{j}{n}\right)\right\}^{2}\Delta_{n}A\left(\frac{j}{n}\right)|\mathcal{F}_{\frac{j-1}{n}}\right]=E\left[\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{2}|\mathcal{F}_{\frac{j-1}{n}}\right].\] Also, \(V_{n}=\frac{3}{2}\langle X_{n}\rangle+\mathcal{Z}_{n}\), where \[\mathcal{Z}_{n}(t)=n\sum_{j=1}^{\lfloor nt\rfloor}\left[\left\{\Delta_{n}M\left(\frac{j}{n}\right)\right\}^{4}-E\left[\left\{\Delta_{n}M\left(\frac{j}{n}\right)\right\}^{4}|\mathcal{F}_{\frac{j-1}{n}}\right]\right]\] is a martingale with \[\langle\mathcal{Z}_{n}\rangle_{t}=\mu_{8}n^{2}\sum_{j=1}^{\lfloor nt\rfloor}E\left[\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{4}|\mathcal{F}_{\frac{j-1}{n}}\right]-\mu_{4}^{2}n^{2}\sum_{j=1}^{\lfloor nt\rfloor}E^{2}\left[\left\{\Delta_{n}A\left(\frac{j}{n}\right)\right\}^{2}|\mathcal{F}_{\frac{j-1}{n}}\right].\] By Lemma 4.1 with \(p=8\), \(nE\{\langle\mathcal{Z}_{n}\rangle_{t}\}\) is bounded above by \(\mu_{8}E\{U_{8,1/n}(t)\}\), which converges to \(\mu_{8}E\{\mathcal{U}_{8}(t)\}\); hence \(\langle\mathcal{Z}_{n}\rangle\overset{{\mathcal{C}}}{\rightsquigarrow}0\) holds, by Proposition A.4. By Lemma A.1, \(\sup_{0\leq t\leq T}|\mathcal{Z}_{n}(t)|\) converges in probability to \(0\). This implies \(\langle X_{n}\rangle\overset{{\mathcal{C}}}{\rightsquigarrow}2\mathcal{U}_{4}=\frac{2}{3}\mathcal{V}_{4}\). Next, let \(Z_{j}\) be iid standard Gaussian random variables independent of \(A\), set \(\mathbb{B}_{n}(t)=(2n)^{-\frac{1}{2}}\sum_{j=1}^{\lfloor nt\rfloor}(Z_{j}^{2}-1)\); further set \(a_{n,j}=A\left(\frac{j}{n}\right)-A\left(\frac{j-1}{n}\right)\). It is then clear that \((A_{n},\mathbb{B}_{n})\overset{{\mathcal{C}}}{\rightsquigarrow}(A,\mathbb{B})\), where \(\mathbb{B}\) is a Brownian motion independent of \(a\). For any \(0=t_{0}<t_{1}<\cdots<t_{m}\), and \(\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}\), since \(M=B\circ A\), where \(B\) is a Brownian motion independent of \(A\), one has \[E\left[\exp\left[i\sum_{k=1}^{m}\lambda_{k}\{X_{n}(t_{k})-X_{n}(t_{k-1})\}\right]\right]\\ =E\left[\exp\left[in^{\frac{1}{2}}\sum_{k=1}^{m}\lambda_{k}\sum_{j=\lfloor nt_{k-1}\rfloor+1}^{\lfloor nt_{k}\rfloor}a_{n,j}\left(Z_{j}^{2}-1\right)\right]\right]\\ =E\left[\exp\left[i2^{\frac{1}{2}}n^{-\frac{1}{2}}\sum_{k=1}^{m}\lambda_{k}\sum_{j=\lfloor nt_{k-1}\rfloor+1}^{\lfloor nt_{k}\rfloor}a\left(\frac{j-1}{n}\right)\left(\frac{Z_{j}^{2}-1}{2^{\frac{1}{2}}}\right)\right]\right]+o(1).\] One can then use Proposition 2.2 to conclude that \((X_{n},\langle X_{n}\rangle,V_{n},\mathbb{B}_{n})\overset{{\mathcal{C}}}{\rightsquigarrow}(X,\mathcal{A},\mathcal{V}_{4},\mathbb{B})\), where \(\mathbb{B}\) is a Brownian motion independent of \(\mathcal{A}=2\mathcal{U}_{4}\) and \(X_{t}=2^{\frac{1}{2}}\int_{0}^{t}a(s)d\mathbb{B}(s)\). Furthermore, \(X\) can be written as \(X=W\circ\mathcal{A}\), where \(W\) is a Brownian motion independent of \(\mathcal{A}\). 
Finally, writing \(Z_{n}(s):=n^{\frac{1}{2}}\Delta_{n}N\left(\frac{j-1}{n}\right)\) for \(s\in\left(\frac{j-1}{n},\frac{j}{n}\right]\) yields the square integrable martingale \(\int_{0}^{t}Z_{n}(s)dM(s)=n^{\frac{1}{2}}\sum_{j=1}^{\lfloor nt\rfloor}\Delta_{n}M\left(\frac{j}{n}\right)\Delta_{n}N\left(\frac{j-1}{n}\right)\), with \(\int_{0}^{t}Z_{n}^{2}(s)ds=\sum_{j=1}^{\lfloor nt\rfloor}\left\{\Delta_{n}N\left(\frac{j-1}{n}\right)\right\}^{2}\) and quadratic variation \[\left\langle\int_{0}^{t}Z_{n}(s)dM(s)\right\rangle=\int_{0}^{t}Z_{n}^{2}(s)d\langle M\rangle_{s}=n\sum_{j=1}^{\lfloor nt\rfloor}\left\{\Delta_{n}N\left(\frac{j-1}{n}\right)\right\}^{2}\Delta_{n}A\left(\frac{j-1}{n}\right).\] Therefore \(Y_{n}(t)-X_{n}(t)=n^{\frac{1}{2}}\int_{0}^{t}Z_{n}^{2}(s)ds+2\int_{0}^{t}Z_{n}(s)dM(s)\stackrel{Pr}{\to}0\) holds for every \(t>0\), since \(\int_{0}^{t}Z_{n}^{2}(s)d\langle M\rangle_{s}\stackrel{Pr}{\to}0\). Hence \(Y_{n}-X_{n}\stackrel{\mathcal{C}}{\rightsquigarrow}0\) holds as well, yielding the last statement for \(Y_{n}\) via Lemma A.1, Theorem A.3 and Proposition A.4.
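The convergence in Lemma 4.1 is easy to check numerically. The sketch below is our illustration, not part of the original argument; it assumes a deterministic local variance \(a(s)=1+s\) and \(M=B\circ A\), so that \(V_{4,1/n}(1)=n\sum_{j}|\Delta_{n}M(j/n)|^{4}\) should approach \(\mu_{4}\,\mathcal{U}_{4}(1)=3\int_{0}^{1}(1+s)^{2}ds=7\).

```python
# Numerical sanity check of Lemma 4.1 with p = 4 (a deterministic example).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
s = np.linspace(0.0, 1.0, n + 1)
dA = np.diff(s + 0.5 * s**2)          # increments of A(t) = t + t^2/2, a(s) = 1 + s
dM = rng.normal(0.0, np.sqrt(dA))     # given A, the increments of M = B∘A are N(0, ΔA)
V4 = n * np.sum(dM**4)                # V_{4,1/n}(1)
print(V4, 7.0)                        # both close to 3 * ∫_0^1 (1+s)^2 ds = 7
```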
2305.15394
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning
Decision trees are interpretable models that are well-suited to non-linear learning problems. Much work has been done on extending decision tree learning algorithms with differential privacy, a system that guarantees the privacy of samples within the training data. However, current state-of-the-art algorithms for this purpose sacrifice much utility for a small privacy benefit. These solutions create random decision nodes that reduce decision tree accuracy or spend an excessive share of the privacy budget on labeling leaves. Moreover, many works do not support continuous features or leak information about them. We propose a new method called PrivaTree based on private histograms that chooses good splits while consuming a small privacy budget. The resulting trees provide a significantly better privacy-utility trade-off and accept mixed numerical and categorical data without leaking information about numerical features. Finally, while it is notoriously hard to give robustness guarantees against data poisoning attacks, we demonstrate bounds for the expected accuracy and success rates of backdoor attacks against differentially-private learners. By leveraging the better privacy-utility trade-off of PrivaTree we are able to train decision trees with significantly better robustness against backdoor attacks compared to regular decision trees and with meaningful theoretical guarantees.
Daniël Vos, Jelle Vos, Tianyu Li, Zekeriya Erkin, Sicco Verwer
2023-05-24T17:56:18Z
http://arxiv.org/abs/2305.15394v2
# Differentially-Private Decision Trees with Probabilistic Robustness to Data Poisoning

###### Abstract

Decision trees are interpretable models that are well-suited to non-linear learning problems. Much work has been done on extending decision tree learning algorithms with differential privacy, a framework that guarantees the privacy of samples within the training data. However, current state-of-the-art algorithms for this purpose sacrifice much utility for a small privacy benefit. These solutions create random decision nodes that reduce decision tree accuracy or spend an excessive share of the privacy budget on labeling leaves. Moreover, many works either do not support continuous features or leak information about their values. We propose a new method called PrivaTree based on private histograms that chooses good splits while consuming a small privacy budget. The resulting trees provide a significantly better privacy-utility trade-off and accept mixed numerical and categorical data without leaking additional information. Finally, while it is notoriously hard to give robustness guarantees against data poisoning attacks, we prove bounds for the expected success rates of backdoor attacks against differentially-private learners. Our experimental results show that PrivaTree consistently outperforms previous works on predictive accuracy and significantly improves robustness against backdoor attacks compared to regular decision trees.

## 1 Introduction

Machine learning has achieved widespread success with neural networks and ensemble methods, but it is almost impossible for humans to understand the decisions such models make [30]. Fortunately, much work has been done on training machine learning models that are directly interpretable by humans [38]. Size-limited decision trees [6; 36] in particular are successful for their interpretability combined with their ability to predict non-linear data. While decision trees can offer interpretability, they reveal information about the data they were trained on. This is a detrimental property when models are trained on private data that contains sensitive information, such as in fraud detection and medical applications. Differentially-private machine learning models solve this problem by introducing carefully crafted randomness into the way the models are trained [1]. For differentially-private decision trees, the entire model, consisting of decision node splits and leaf labels, can be made public, and by extension, the predictions made by the model. This is not only useful for training interpretable private models; decision trees are also vital primitives for building tree ensembles [5; 17; 7; 26]. The key problem in training differentially-private models is efficiently spending the so-called privacy budget \(\epsilon\) to achieve high utility. In this work, we propose such an algorithm for training decision trees with a better privacy-utility trade-off. Many previous works have already proposed ways to generate differentially-private decision trees, but they have shortcomings. There are two main categories of algorithms here. The first category [3; 14; 24; 25] chooses splits completely at random and allocates the entire privacy budget to labeling the leaves. The second category [2; 4; 13; 16] extends the greedy splitting procedure of regular decision trees, where splits are selected by optimizing a splitting score.
These works guarantee differential privacy by incorporating noise from a probability distribution weighted by this score while consuming a part of the user-defined privacy budget. However, naive approaches compute many scores and therefore consume privacy budget frequently, which means that a large budget is needed to select good decision nodes. The remaining budget is spent on labeling leaves. In this work, we propose a method called PrivaTree to train differentially-private decision trees, which uses the privacy budget much more efficiently when choosing splits. We also propose a strategy for distributing the privacy budget that works well for both small and large datasets. The result is a practical method for training private trees with significantly better utility. PrivaTree also prevents leakage from the location of splits on numerical features. Moreover, we prove a bound on the robustness of differentially-private machine learning models against data poisoning attacks, where an adversary manipulates the training data. Our experiments on the MNIST 0 vs 1 dataset show that PrivaTrees with small privacy budgets indeed already resist a trigger-based backdoor attack.

## 2 Preliminaries

### Decision trees

Decision trees are simple models that consist of nodes, which apply logical rules to a sample, and leaves, which hold a prediction. By following a path through the rules, each sample reaches a leaf and that leaf's value is predicted. Decision trees have become a popular choice of model due to their straightforward interpretation when the size of the tree is limited [30] and their success in more complex ensembles [5; 17; 7; 26; 21]. While it is debatable what exact tree size maintains interpretability, we choose to train trees up to a depth of 4, resulting in at most 16 leaves. The most popular algorithms for learning decision trees are based on CART [6] and ID3 [36]. These algorithms recursively create decision nodes that minimize Gini impurity or maximize information gain, and create leaves labeled with the majority label of the samples that reach them. While these methods are greedy heuristics and thus offer no guarantees [27], they perform well in practice. Due to the success of decision trees in gradient boosting, much effort has gone into implementing efficient algorithms using histograms [7; 26]. We base PrivaTree on such histogram-based learners, where instead of aiming for efficiency, we use them to achieve a better trade-off between privacy and accuracy.

### Differential privacy

Differential privacy [10; 11; 12] provides strong privacy guarantees for algorithms over aggregate datasets: it implies that the presence or absence of any single record in the dataset changes the output probability by at most a factor \(e^{\epsilon}\). This property prevents membership attacks with high probability if \(\epsilon\) is chosen small enough.

Figure 1: Mean accuracy scores of depth 4 trees when varying the privacy budget \(\epsilon\) from more private to less private, averaged over 50 repetitions. For very small privacy budgets DiffPrivLib performs best, but for higher budgets PrivaTree achieves a significantly better trade-off between accuracy and privacy.
**Definition 1** (Differential privacy).: _A randomized algorithm \(\mathcal{A}\) satisfies \(\epsilon\)-differential privacy if for all datasets \(\mathcal{D},\mathcal{D}^{\prime}\) differing in one element, drawn from \(\mathcal{X}\), and any \(\mathcal{S}\subseteq\mathrm{Range}(\mathcal{A})\), it holds that:_ \[\Pr[\mathcal{A}(\mathcal{D})\in\mathcal{S}]\leq e^{\epsilon}\Pr[\mathcal{A}(\mathcal{D}^{\prime})\in\mathcal{S}]. \tag{1}\] Differential privacy has several composition properties [32]. Consider a series of mechanisms \(\mathcal{A}_{1},\dots,\mathcal{A}_{k}\), where each \(\mathcal{A}_{i}\) is \(\epsilon_{i}\)-differentially private for \(i\in[k]\). The sequential composition of these mechanisms is guaranteed to provide \((\sum_{i}\epsilon_{i})\)-differential privacy. Parallel composition provides \((\max_{i}\epsilon_{i})\)-differential privacy when the mechanisms are applied to disjoint subsets of the input. Some common mechanisms include the Laplace [11] and the geometric [19] mechanisms. These algorithms sample noise from a probability distribution shaped by \(\epsilon\) and perturb the original data by this noise. Other algorithms, like the exponential mechanism [33] and permute-and-flip [31], randomly choose a value from a set of options, weighted by a utility score and the privacy parameter \(\epsilon\).

## 3 Related work

Many previous works already propose algorithms for training differentially-private decision trees. These algorithms address the privacy leakage in regular trees by replacing the splitting and labeling operations with differentially-private alternatives. This paper only considers algorithms for single decision trees. Fletcher and Islam [15] wrote a survey on this topic that also examines ensembles. Table 1 provides a summary of existing algorithms for training private decision trees. In the 'features' column, we indicate whether the algorithm considers categorical and numerical features. We remark that algorithms for numerical splits also support categories using a one-hot encoding, and algorithms for categories support numerical features by applying binning. Note that unless computed using differential privacy, the resulting bins reveal information about the training data. In that case, the model only guarantees differential privacy for the leaves, which is equivalent to labelDP [18]. There are two main categories of algorithms for training differentially-private trees. The first category trains random decision trees, which replaces choosing splits with splitting uniformly at random from the domain of possible feature values. A benefit of doing so is that splitting does not consume any privacy budget, so labeling can be performed with the full privacy budget. However, random splits do not necessarily produce good leaves, as worse splits lead to leaves that may contain many samples of both classes. As a result, accurate labels are not guaranteed to produce accurate models. For certain datasets, the poor quality of random splits strongly affects the performance of the resulting tree. For this reason, random decision trees are almost exclusively used in ensembles. Examples of such algorithms are Private-RDT [25], dpRFMV/dpRFTA [3], and Smooth Random Trees [14].
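To make the noise-addition mechanisms above concrete, here is a minimal sketch (our illustration, not code from any cited implementation) of the Laplace mechanism for a counting query, together with how its budget composes:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(true_count, epsilon):
    # A count has sensitivity 1: adding or removing one record changes it
    # by at most 1, so Laplace noise of scale 1/epsilon gives epsilon-DP.
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Sequential composition: answering the same query twice at eps = 0.05
# each consumes 0.1 of the total budget.
two_answers = [laplace_count(137, 0.05) for _ in range(2)]

# Parallel composition: noising the counts of disjoint classes at
# eps = 0.05 each still consumes only 0.05 overall.
per_class_counts = [laplace_count(c, 0.05) for c in (137, 89)]
```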
\begin{table}
\begin{tabular}{l c c|c c|c c} \hline \hline \multicolumn{3}{c|}{**Method**} & \multicolumn{2}{c|}{**Features**} & \multicolumn{2}{c}{**Mechanism**} \\ Name & Year & Ref & Categorical & Numerical & Splitting & Labeling \\ \hline SuLQ ID3 & 2005 & [2] & & & \(\bigcirc\) & \(\mathcal{M}_{\text{Gaussian}}\) & \(\mathcal{M}_{\text{Gaussian}}\) \\ Private-RDT & 2009 & [25] & & & - & \(\mathcal{M}_{\text{Laplace}}\) \\ SuLQ-based ID3 & 2010 & [16] & & & \(\bigcirc\) & \(\mathcal{M}_{\text{Laplace}}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ DiffPID3 & 2010 & [16] & & & \(\bigcirc\) & \(\mathcal{M}_{EM}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ DiffGen & 2011 & [34] & & & \(\bigcirc\) & \(\mathcal{M}_{EM}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ DT-Diff & 2013 & [43] & & & \(\bigcirc\) & \(\mathcal{M}_{EM}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ dpRFMV/dpRFTA & 2014 & [3] & & & - & \(\mathcal{M}_{\text{Laplace}}\) \\ DPDF & 2015 & [13] & & \(\bigcirc\) & \(\mathcal{M}_{EM}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ Rana et al. & 2015 & [37] & & & \(\bigcirc\) & \(\mathcal{M}_{\text{Laplace}}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ Smooth Random Trees & 2017 & [14] & & & - & \(\mathcal{M}_{EM}\)* \\ ADiffP & 2018 & [4] & & & \(\bigcirc\) & \(\mathcal{M}_{EM}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ DPGDF & 2019 & [42] & & & \(\bigcirc\) & \(\mathcal{M}_{EM}\) & \(\mathcal{M}_{EM}\)* \\ BDPT & 2020 & [23] & & & \(\bigcirc\) & \(\mathcal{M}_{EM}\)* & \(\mathcal{M}_{\text{Laplace}}\) \\ TrainSingleTree & 2020 & [28] & & & \(\bigcirc\) & \(\mathcal{M}_{EM}\) & \(\mathcal{M}_{\text{Laplace}}\) \\ DiffPrivLib & 2021 & [24] & & & - & \(\mathcal{M}_{PF}\) \\ PrivaTree & _This work_ & & & & \(\mathcal{M}_{\text{Geometric}}\) & \(\mathcal{M}_{PF}\) \\ \hline \hline \end{tabular}
\end{table} Table 1: Overview of methods for training differentially private decision trees; algorithms marked with * use smooth sensitivity. Most methods use the exponential mechanism \(\mathcal{M}_{EM}\) for splitting and \(\mathcal{M}_{\text{Laplace}}\) for labeling leaves. Methods without a splitting mechanism use random trees.

The second category consists of algorithms that train a greedy tree by probabilistically choosing a split weighted by a scoring function such as the information gain or the Gini impurity. SuLQ ID3 [2] and SuLQ-based ID3 [16] do so by adding Gaussian or Laplace noise to the scores themselves, while works like DiffPID3 [16], DiffGen [34], DT-Diff [43] and TrainSingleTree [28] instead use the exponential mechanism, so the privacy budget does not have to be divided over so many queries. DPDF [13] further increases the utility of the queries by bounding the sensitivity of the Gini impurity, while ADiffP [4] dynamically allocates the privacy budget. We compare against three of the latest algorithms for training private trees. BDPT [23] is a greedy tree algorithm that uses the exponential mechanism for splitting but with smooth sensitivity, allowing for a higher utility per query compared to previous works. DPGDF [42] is a similar algorithm that uses smooth sensitivity for creating the leaves rather than the splits. This algorithm only supports categorical features. Finally, DiffPrivLib [24] offers a recent implementation of random trees. For the leaves it uses the permute-and-flip mechanism, which performs better in practice than the exponential mechanism [31]. This algorithm only supports numerical features.
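Since permute-and-flip recurs throughout this paper, the following is a minimal sketch (ours, not taken from DiffPrivLib or the PrivaTree code) of how it selects one option from utility-scored candidates; for leaf labeling, the utilities would be the per-class sample counts, which have sensitivity 1:

```python
import numpy as np

def permute_and_flip(utilities, epsilon, sensitivity=1.0, rng=None):
    """Select an index via permute-and-flip [31].

    Candidates are visited in random order; candidate r is accepted with
    probability exp(eps * (u_r - u_max) / (2 * sensitivity)). A maximizer
    is accepted with probability 1, so a single pass always terminates.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = np.asarray(utilities, dtype=float)
    u_max = u.max()
    for r in rng.permutation(len(u)):
        if rng.random() <= np.exp(epsilon * (u[r] - u_max) / (2 * sensitivity)):
            return r

# Labeling a leaf that holds 40 samples of class 0 and 9 of class 1:
label = permute_and_flip([40, 9], epsilon=0.1)
```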
## 4 Improving differentially private decision trees: PrivaTree

In this section, we propose PrivaTree, an algorithm for training differentially-private decision trees with high utility. PrivaTree incorporates three techniques to improve performance:

* Private histograms to find good splits with the Gini impurity using little privacy budget.
* The permute-and-flip mechanism instead of the exponential mechanism for leaf labeling.
* A better distribution of the privacy budget, based on a bound for labeling leaves accurately.

Additionally, PrivaTree uses a pre-processing technique based on private quantiles, which enables it to train on both numerical and categorical features. PrivaTree assumes that the range of numerical features, the set of categorical values, the set of class labels, and the dataset size are publicly known. We provide pseudocode in Algorithm 1, which we describe in more detail in the rest of this section.

### Scoring splits using private histograms

Classification tree learning algorithms such as CART create decision nodes by choosing a split among all feature values that minimizes the weighted Gini impurity. Previous differentially-private decision trees have also used such approaches but often leak information about numerical feature values. Friedman and Schuster [16] show how to privately split on numerical features using the continuous exponential mechanism, but this requires more privacy budget because each feature needs to be considered separately. Another work, BDPT, permits some leakage, providing a weak form of privacy by averaging every 5 numerical feature values and then splitting using the exponential mechanism.

**Numerical features.** To find high-quality splits while protecting feature value information and efficiently using the privacy budget, we use private histograms. Splitting according to histograms has been a successful progression in decision tree learning for gradient boosting ensembles [7; 26], where it is used for its runtime efficiency. Instead, we rely on them because they only require noise to be added once per bin (parallel composition), allowing us to add noise only once per feature for each node. Specifically, we add noise with the geometric mechanism with sensitivity 1 and privacy budget \(\epsilon_{\text{node,num}}\), which we define later. We notice that decision trees perform well on most datasets at very low numbers of bins, which supports this choice. In the rest of this paper, we fix the number of bins for numerical features to 10. To find a split, the PrivaTree algorithm computes a private histogram for each feature, computes the Gini impurity of the split between every pair of consecutive bins, and selects the minimum. This results in decision rules of the kind 'value of feature \(i\leq\) threshold'.

**Categorical features.** Previous decision tree learning algorithms support categorical features by only splitting off one category at a time. While this method is sound, it often requires deep trees to make enough splits for categorical features to be useful, and this harms interpretability. In the PrivaTree algorithm, we find a locally-optimal partition of the categories instead. Such rules are of the kind 'categories \(L\) go left, categories \(R\) go right'. By a result of Coppersmith et al. [8], this partition is efficiently identified by first sorting the categories by their ratio of class 0 versus class 1 members, followed by the typical splitting procedure. To guarantee differential privacy, we perform the sort operation based on the private histogram, with privacy budget \(\epsilon_{\text{node,cat}}\) and sensitivity 1.
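The numerical case can be sketched as follows. This is our illustration under the assumptions above (binary labels, geometric noise of sensitivity 1), and `private_best_threshold` is a hypothetical helper name, not a function from the PrivaTree code:

```python
import numpy as np

def private_best_threshold(bin_counts, epsilon, rng):
    """bin_counts: (n_bins, 2) array of true per-bin class counts for one feature."""
    # The difference of two iid geometric variables yields two-sided
    # geometric noise, i.e. the geometric mechanism with sensitivity 1.
    p = 1.0 - np.exp(-epsilon)
    noise = rng.geometric(p, bin_counts.shape) - rng.geometric(p, bin_counts.shape)
    hist = np.maximum(bin_counts + noise, 0)   # clipping is free post-processing

    best_score, best_bin = np.inf, None
    for b in range(1, hist.shape[0]):          # candidate split between bins b-1 and b
        left, right = hist[:b].sum(axis=0), hist[b:].sum(axis=0)
        # Weighted Gini impurity computed on noisy counts.
        score = sum(part.sum() * (1 - ((part / max(part.sum(), 1)) ** 2).sum())
                    for part in (left, right))
        if score < best_score:
            best_score, best_bin = score, b
    return best_bin
```

Because the noisy histogram is released only once per feature, scoring all candidate thresholds afterwards is post-processing and consumes no further privacy budget.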
### Pre-processing by private quantiles

Before the splitting procedure, PrivaTree needs to select the boundaries of the bins of the private histograms. A natural choice is to create equal-width bins of numerical features based on the public knowledge of each feature's range, but there is a problem with this approach: features with long tails would cause data to be concentrated in a few bins, resulting in a large loss of information. Instead, we bin numerical features using quantiles so that the samples are evenly divided over the bins. Since regular quantiles leak information about the dataset, we resort to differentially-private quantiles. We use the _jointexp_ algorithm [20] for this, which improves performance over optimal statistical estimation techniques [39] when computing multiple quantiles on the same data. This algorithm runs with privacy budget \(\epsilon_{\text{quantile}}\), which we define later. Categorical variables require no pre-processing. Instead, we encode them as integers and the split-finding operation handles these values natively. Since we do not pre-process categories, we do not spend privacy budget on them, leaving more budget for subsequent private operations.

### Leaf labeling according to majority votes

Once the learning algorithm has produced a series of decision nodes and reaches the stopping criterion, it creates a leaf containing a prediction. To maximize accuracy, the prediction is normally chosen as the majority of the class labels of the samples that reach that leaf. However, this leaks private information. Previous works have used the Laplace or (smooth) exponential mechanisms, but like modern implementations such as DiffPrivLib [24] we opt for the permute-and-flip mechanism [31]. Permute-and-flip provably performs at least as well as the exponential mechanism and outperforms it in practice. To label a leaf, we therefore count the number of samples of each class and apply permute-and-flip with privacy budget \(\epsilon_{\text{leaf}}\) and sensitivity 1.

### Distributing the privacy budget

The composability property of differential privacy allows modular algorithm design by breaking up the algorithm into differentially-private primitives. However, it is generally not obvious how to distribute the privacy budget \(\epsilon\) over the primitives to maximize the expected utility of the outcomes. Previous works have distributed \(\epsilon\) equally over each private operation or distributed it 50-50 between node and leaf operations. For large values of \(\epsilon\) this results in excess budget being spent on leaf labeling where it could have improved node selection, as demonstrated in Figure 2. By noticing that we can bound the expected error incurred by labeling leaves for a given privacy budget, we propose a budget distribution scheme that scales well for varying values of \(\epsilon\). When the privacy budget is low compared to the dataset size, we set \(\epsilon_{\text{leaf}}=\frac{\epsilon}{2}\). When the budget is relatively high, we set \(\epsilon_{\text{leaf}}\) such that the maximum expected labeling error \(\mathbb{E}[\mathcal{E}(\mathcal{M},\vec{N})]\) is at most equal to the user-specified labeling error limit \(\mathcal{E}_{max}\).

Figure 2: Training accuracy scores when varying the distribution of the privacy budget \(\epsilon=0.1\) over different parts of the PrivaTree algorithm with depth 3. Scores were averaged over 50 executions. On small datasets such as _vote_, the best trees allocate more budget to the leaves, while large datasets such as _pol_ benefit from more budget for splitting.
We distribute the remaining budget \(\epsilon-\epsilon_{\text{leaf}}\) uniformly over quantile and node operations to improve the algorithm's utility for higher values of \(\epsilon\).

**Theorem 1**.: _For \(K\) classes, \(n\) samples and depth \(d\) trees, the amount of privacy budget \(\epsilon^{\prime}_{\text{leaf}}\) needed for labeling leaves with expected error \(\mathbb{E}[\mathcal{E}(\mathcal{M}_{PF},\vec{N})]\) of at most \(\mathcal{E}_{max}\) is:_ \[\epsilon^{\prime}_{\text{leaf}}=\frac{2^{d}\max_{p}2\log(\frac{1}{p})\left(1-\frac{1-(1-p)^{K}}{Kp}\right)}{n\,\mathcal{E}_{max}}, \tag{2}\]

Proof.: This result is a straightforward application of the bound proved for \(\mathcal{M}_{PF}\) in the permute-and-flip paper [31]. A complete proof is given in the appendix.

Next, we show that this distribution actually provides differential privacy.

**Theorem 2**.: _PrivaTree, which is given in Algorithm 1, provides \(\epsilon\)-differential privacy._

Proof.: For numerical attributes in \(X\), PrivaTree first computes the quantiles with \(\epsilon_{\text{quantiles}}\). After that, the algorithm chooses splits. Since the maximum depth is \(d\), the privacy parameter is bounded by \(d\cdot\epsilon_{\text{node,num}}\) through sequential composition. Finally, leaf labeling consumes \(\epsilon_{\text{leaf}}\) of the privacy budget. By sequential composition, the overall privacy parameter for numerical attributes is: \[\epsilon_{n}=\epsilon_{\text{quantiles}}+d\cdot\epsilon_{\text{node,num}}+\epsilon_{\text{leaf}}=(1+d)\cdot(\epsilon-\epsilon_{\text{leaf}})\cdot\frac{1}{1+d}+\epsilon_{\text{leaf}}=\epsilon\.\] For categorical attributes in \(X\), there is no need to calculate quantiles. Choosing the splits of nodes consumes a privacy budget of \(d\cdot\epsilon_{\text{node,cat}}\), and leaf labeling consumes \(\epsilon_{\text{leaf}}\). The overall privacy parameter for categorical attributes follows similarly: \(\epsilon_{c}=d\cdot\epsilon_{\text{node,cat}}+\epsilon_{\text{leaf}}=\epsilon\). Numerical and categorical attributes are disjoint subsets of the input dataset \(X\). By parallel composition, Algorithm 1 provides \(\max(\epsilon_{n},\epsilon_{c})=\epsilon\)-differential privacy.
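A minimal sketch (ours; the helper names are our own) of this budget split, evaluating Equation 2 by a grid search over \(p\):

```python
import numpy as np

def leaf_budget(n, d, K, err_max):
    # epsilon'_leaf from Equation 2: the budget at which the expected
    # permute-and-flip labeling error over 2^d leaves stays <= err_max.
    p = np.linspace(1e-6, 1.0 - 1e-6, 100_000)
    g = 2 * np.log(1 / p) * (1 - (1 - (1 - p) ** K) / (K * p))
    return (2 ** d) * g.max() / (n * err_max)

def split_budget(eps, n, d, K=2, err_max=0.01):
    eps_leaf = min(eps / 2, leaf_budget(n, d, K, err_max))
    eps_quantiles = eps_node_num = (eps - eps_leaf) / (1 + d)
    eps_node_cat = (eps - eps_leaf) / d
    return eps_leaf, eps_quantiles, eps_node_num, eps_node_cat

# For K = 2 the maximum of g is 1/e, so with n = 30,000, d = 4 and
# err_max = 0.01 the formula gives 16/e/300 ≈ 0.020, below eps/2 for eps = 0.1.
print(split_budget(eps=0.1, n=30_000, d=4))
```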
```
0: dataset \(X\) (\(n\times m\)), labels \(y\), privacy budget \(\epsilon\), maximum leaf error \(\mathcal{E}_{max}\), maximum depth \(d\)
1: \(\epsilon_{\text{leaf}}=\min(\frac{\epsilon}{2},\epsilon^{\prime}_{\text{leaf}})\), where \(\epsilon^{\prime}_{\text{leaf}}\) is computed with Equation 2
2: \(\epsilon_{\text{node,num}}=\epsilon_{\text{quantiles}}=(\epsilon-\epsilon_{\text{leaf}})\cdot\frac{1}{1+d}\), \(\epsilon_{\text{node,cat}}=(\epsilon-\epsilon_{\text{leaf}})\cdot\frac{1}{d}\)
3: for numerical features \(j_{\text{num}}\) do
4:   bin values \(X_{i,j_{\text{num}}}\) into bins \(B_{j}\) using \(\mathcal{M}_{\text{Quantiles}}(X_{i,j_{\text{num}}}:i=1...n)\) \(\triangleright\) with budget \(\epsilon_{\text{quantiles}}\)
5: end for
6: procedure FitTree(\(X^{\prime},y^{\prime},d^{\prime}\))
7:   if \(d^{\prime}=0\) or \(|X^{\prime}|\leq 1\) or \(y^{\prime}\) contains a single class then
8:     return Leaf(\(\mathcal{M}_{PF}(\langle N_{0},N_{1},...,N_{k}\rangle)\)) \(\triangleright\) with budget \(\epsilon_{\text{leaf}}\)
9:   else
10:     \(\forall j,k,b\in B_{j}:H_{j,b,k}\leftarrow\mathcal{M}_{\text{Geometric}}(\sum_{i}[X^{\prime}_{i,j}=b\wedge y^{\prime}_{i}=k])\) \(\triangleright\) with \(\epsilon_{\text{node,num}}\) or \(\epsilon_{\text{node,cat}}\)
11:     find the split (\(j^{*},b^{*}\)) that minimizes the Gini impurity w.r.t. \(H_{j,b,k}\)
12:     the split (\(j^{*},b^{*}\)) partitions \(X^{\prime}\) into \((X^{\text{left}},X^{\text{right}})\) and \(y^{\prime}\) into \((y^{\text{left}},y^{\text{right}})\)
13:     \(\mathcal{T}_{\text{left}}\leftarrow\) FitTree(\(X^{\text{left}},y^{\text{left}},d^{\prime}-1\)), \(\mathcal{T}_{\text{right}}\leftarrow\) FitTree(\(X^{\text{right}},y^{\text{right}},d^{\prime}-1\))
14:     return Node(\(j^{*},b^{*},\mathcal{T}_{\text{left}},\mathcal{T}_{\text{right}}\))
15:   end if
16: end procedure
17: return FitTree(\(X,y,d\))
```

**Algorithm 1** Train PrivaTree with \(\epsilon\) differential privacy

## 5 Poisoning Robustness Bounds

When using machine learning models trained on crowd-sourced data, such as in federated learning scenarios, one has to consider malicious user behavior. One such threat is data poisoning, in which users insert \(x\) data points into the training dataset to confuse the classifier or introduce a backdoor. Many defenses have been proposed, such as using learning behavior to ignore backdoor data [29] or post-processing based on adversarial robustness to remove backdoors [41], but such methods work heuristically and offer no guarantees. We notice that by using machine learning algorithms that offer \(\epsilon\)-differential privacy guarantees, one can theoretically bound the expected success rates of such poisoning attacks.

**Theorem 3**.: _A machine learning algorithm \(\mathcal{M}_{ML}\) satisfying \(\epsilon\)-differential privacy guarantees that the expected backdoor attack success rate (ASR), against \(x\) poisoned samples, of the classifiers \(C\) that it produces, \(\mathbb{E}_{C}[ASR(\mathcal{M}_{ML}(X_{x}))]\), is bounded by:_ \[\mathbb{E}_{C}[ASR(\mathcal{M}_{ML}(X_{x}))]\leq 1-(1-\mathbb{E}_{C}[ASR(\mathcal{M}_{ML}(X_{0}))])e^{-x\epsilon}\;.\]

Proof.: It follows from Definition 1 that given two datasets \(D\) and \(D^{\prime}\) differing in \(x\) elements, any \(\epsilon\)-differentially private mechanism \(\mathcal{M}\) that produces outcomes \(C\in\mathcal{C}\) satisfies [10]: \[Pr[\mathcal{M}(D)=C]\leq e^{x\epsilon}\cdot Pr[\mathcal{M}(D^{\prime})=C]\;. \tag{3}\]
To increase the probability mass of classifiers \(C\) with high \(ASR(C)\), the adversary must remove probability mass from classifiers with low \(ASR(C)\). Our result relies on the amount of probability mass that the adversary can remove; we bound this value by assuming that the adversary can always add any removed probability mass to a classifier \(C^{*}=\text{argmax}_{C}(ASR(C))\) that maximizes the attack success rate. Since \(ASR:\mathcal{C}\rightarrow[0,1]\), the attack success rate of \(C^{*}\) is bounded by \(ASR(C^{*})=1\): \[\mathbb{E}_{C}[ASR(\mathcal{M}_{ML}(X_{x}))] =\sum_{C\in\mathcal{C}}\Pr[\mathcal{M}_{ML}(X_{x})=C]\cdot ASR(C)\;,\] \[\leq\sum_{C\in\mathcal{C}}e^{-x\epsilon}\Pr[\mathcal{M}_{ML}(X_{0})=C]ASR(C)+\sum_{C\in\mathcal{C}}(1-e^{-x\epsilon})\Pr[\mathcal{M}_{ML}(X_{0})=C]\underbrace{\max_{C^{\prime}}(ASR(C^{\prime}))}_{=1}\;,\] \[=e^{-x\epsilon}\mathbb{E}_{C}[ASR(\mathcal{M}_{ML}(X_{0}))]+(1-e^{-x\epsilon})\underbrace{\sum_{C\in\mathcal{C}}\Pr[\mathcal{M}_{ML}(X_{0})=C]}_{=1}\;,\] \[=1-e^{-x\epsilon}(1-\mathbb{E}_{C}[ASR(\mathcal{M}_{ML}(X_{0}))])\;.\qed\]

## 6 Results

We compare the performance of PrivaTree with regular decision trees from Scikit-learn [35] and three existing works: DiffPrivLib [24], BDPT [23] and DPGDF [42]. DiffPrivLib is a widely used Python library for differential privacy and implements several private machine learning models. Their decision tree implementation creates random decision nodes and spends the entire privacy budget on labeling leaves using the permute-and-flip mechanism. Since DiffPrivLib and Scikit-learn do not natively support categorical features, we encode these into integers. BDPT and DPGDF did not share their implementations, so we implemented these using DiffPrivLib. Since DPGDF only supports categorical variables, we run experiments as in the work of Borhan [4] and remove numerical features. BDPT only heuristically protects numerical feature values, so we compare it against PrivaTree*, a variant of PrivaTree where we compute quantiles non-privately and set \(\epsilon_{\text{quantile}}=0\). All experiments ran on a computer with 16GB of RAM and a 2 GHz Intel i5 processor with 4 cores. Our open-source Scikit-learn compatible implementation of all algorithms can be found online1.

Footnote 1: [https://github.com/tudelft-cda-lab/PrivaTree](https://github.com/tudelft-cda-lab/PrivaTree)

### Predictive performance

To compare PrivaTree to existing works, we evaluated performance on two well-known benchmarks. First, we experimented on 6 datasets from the UCI repository [9] that previous works tested on. However, these datasets are often small and thus too hard (_diabetes_) or too easy to predict (_nursery_), or imbalanced (_adult_), which skews performance numbers. To complement this, we therefore also run experiments on the tabular data benchmark [21]. These datasets were chosen to be real-world, balanced, not too small, and not too simple. We removed rows with missing values and computed the public feature ranges based on the datasets; the categorical values are supplied by OpenML [40]. In Table 2 we present the accuracy scores on both benchmarks for trees of depth 4, computed with 5-fold stratified cross-validation at a privacy budget of \(\epsilon=0.1\). Results for other budgets are given in the appendix. Since all private algorithms are based on the greedy algorithm for decision trees, the goal is to score similarly to the non-private trees.
On almost all datasets, PrivaTree outperforms BDPT, DPGDF, and DiffPrivLib or performs similarly. On _breast-w_ and _vote_, however, DiffPrivLib sometimes performs better. This is because, on such small datasets, it is better to avoid spending the privacy budget on good decision nodes and instead spend all budget on labeling leaves correctly. On most datasets, there is only a difference in score of a few percentage points between PrivaTree and PrivaTree*. BDPT has previously only been tested on numerical features with few unique values and fails to train accurate trees on the numerical tabular benchmark. In Figure 1 we visualize the average accuracy over 50 runs when varying the total privacy budget \(\epsilon\) for depth 4 trees on the _adult_ and _compas-two-years_ datasets. Again, for very small \(\epsilon\) values DiffPrivLib outperforms the other methods. However, as soon as there is enough privacy budget to see value from choosing better decision nodes (around \(\epsilon{=}0.005\) and \(\epsilon{=}0.05\) in the figure, respectively), PrivaTree dominates the rest of the methods.

\begin{table}
\begin{tabular}{l|c|c c|c c c} \hline \hline **OpenML dataset** & **decision tree** & **BDPT** & **PrivaTree*** & **DPGDF** & **DiffPrivLib** & **PrivaTree** \\ & no privacy & \multicolumn{2}{c|}{leaking numerical splits} & \multicolumn{3}{c}{differential privacy} \\ \hline \multicolumn{7}{c}{Numerical data} \\ \hline Bioresponse &.701 \(\pm\).009 &.502 \(\pm\).001 & **.574**\(\pm\).015 & - &.519 \(\pm\).013 & **.564**\(\pm\).017 \\ Diabetes130US &.606 \(\pm\).002 &.511 \(\pm\).005 & **.601**\(\pm\).001 & - &.507 \(\pm\).003 & **.600**\(\pm\).001 \\ Higgs &.657 \(\pm\).001 & timeout & **.658**\(\pm\).001 & - &.513 \(\pm\).010 & **.659**\(\pm\).001 \\ MagicTelescope &.783 \(\pm\).008 &.500 \(\pm\).000 & **.756**\(\pm\).003 & - &.553 \(\pm\).022 & **.740**\(\pm\).005 \\ MiniBooNE &.872 \(\pm\).001 &.500 \(\pm\).000 & **.859**\(\pm\).002 & - &.509 \(\pm\).006 & **.858**\(\pm\).002 \\ bank-marketing &.768 \(\pm\).004 &.501 \(\pm\).001 & **.735**\(\pm\).008 & - &.538 \(\pm\).013 & **.747**\(\pm\).008 \\ california &.783 \(\pm\).002 &.500 \(\pm\).000 & **.754**\(\pm\).009 & - &.512 \(\pm\).005 & **.756**\(\pm\).006 \\ covertype &.741 \(\pm\).001 &.502 \(\pm\).001 & **.745**\(\pm\).002 & - &.569 \(\pm\).035 & **.748**\(\pm\).001 \\ credit &.748 \(\pm\).001 &.500 \(\pm\).000 & **.739**\(\pm\).003 & - &.516 \(\pm\).009 & **.720**\(\pm\).014 \\
default-of-credit. &.700 \(\pm\).006 &.500 \(\pm\).000 & **.679**\(\pm\).008 & - &.549 \(\pm\).019 & **.684**\(\pm\).003 \\ electricity &.731 \(\pm\).001 &.500 \(\pm\).000 & **.738**\(\pm\).003 & - &.555 \(\pm\).003 & **.736**\(\pm\).006 \\ eye\_movements &.571 \(\pm\).010 &.500 \(\pm\).000 & **.532**\(\pm\).007 & - & **.518**\(\pm\).007 &.514 \(\pm\).009 \\ heloc &.702 \(\pm\).004 &.518 \(\pm\).009 & **.696**\(\pm\).002 & - &.532 \(\pm\).015 & **.677**\(\pm\).011 \\ house\_16H &.819 \(\pm\).003 &.500 \(\pm\).000 & **.765**\(\pm\).010 & - &.555 \(\pm\).009 & **.769**\(\pm\).007 \\ jannis &.718 \(\pm\).001 &.500 \(\pm\).000 & **.707**\(\pm\).003 & - &.564 \(\pm\).032 & **.702**\(\pm\).003 \\ pol &.930 \(\pm\).001 &.509 \(\pm\).014 & **.871**\(\pm\).011 & - &.580 \(\pm\).016 & **.836**\(\pm\).015 \\ \hline \multicolumn{7}{c}{Numerical \& categorical data} \\ \hline albert &.641 \(\pm\).002 &.501 \(\pm\).001 & **.632**\(\pm\).002 &.511 \(\pm\).008 &.546 \(\pm\).017 & **.631**\(\pm\).003 \\ compas-two-years &.664 \(\pm\).005 &.562 \(\pm\).013 & **.639**\(\pm\).007 &.571 \(\pm\).004 &.569 \(\pm\).005 & **.595**\(\pm\).013 \\ covertype &.756 \(\pm\).001 &.614 \(\pm\).001 & **.755**\(\pm\).001 &.537 \(\pm\).011 &.539 \(\pm\).018 & **.755**\(\pm\).001 \\ default-of-credit. &.704 \(\pm\).005 &.500 \(\pm\).001 & **.691**\(\pm\).005 &.528 \(\pm\).004 &.535 \(\pm\).015 & **.689**\(\pm\).005 \\ electricity &.732 \(\pm\).002 &.500 \(\pm\).000 & **.740**\(\pm\).004 &.518 \(\pm\).004 &.527 \(\pm\).014 & **.734**\(\pm\).004 \\ eye\_movements &.570 \(\pm\).003 &.500 \(\pm\).001 & **.514**\(\pm\).007 &.523 \(\pm\).004 &.524 \(\pm\).009 & **.533**\(\pm\).005 \\ road-safety &.728 \(\pm\).001 &.685 \(\pm\).002 & **.710**\(\pm\).003 &.685 \(\pm\).002 &.555 \(\pm\).033 & **.721**\(\pm\).001 \\ \hline \multicolumn{7}{c}{UCI datasets (numerical \& categorical)} \\ \hline adult &.840 \(\pm\).001 &.752 \(\pm\).000 & **.815**\(\pm\).001 &.753 \(\pm\).002 &.757 \(\pm\).003 & **.820**\(\pm\).003 \\ breast-w &.950 \(\pm\).007 &.614 \(\pm\).025 & **.909**\(\pm\).025 & - &.842 \(\pm\).037 & **.886**\(\pm\).019 \\ diabetes &.723 \(\pm\).012 &.624 \(\pm\).017 & **.659**\(\pm\).011 & - &.667 \(\pm\).010 & **.673**\(\pm\).019 \\ mushroom &.977 \(\pm\).005 &.880 \(\pm\).013 & **.973**\(\pm\).011 &.761 \(\pm\).024 &.749 \(\pm\).048 & **.985**\(\pm\).002 \\ \hline \hline \end{tabular}
\end{table} Table 2: Mean accuracy scores of depth 4 trees at privacy budget \(\epsilon=0.1\), computed with 5-fold stratified cross-validation.

### Backdoor robustness on MNIST

To demonstrate the effectiveness of differentially private decision tree learners at mitigating data poisoning attacks, we have evaluated backdoor attacks on the MNIST 0 vs 1 dataset. Specifically, we repeat the experiment from Badnets [22] in which the adversary adds a fixed trigger pattern to the bottom right corner of the image in an attempt to force zeros to be classified as ones. To achieve this, the adversary copies \(x\) zeros, adds the trigger pattern to these images, and adds the copies to the training set with label 1. An example of a zero with a trigger pattern is shown in Figure 3. To measure the robustness of models against the backdoor, we compute the Attack Success Rate (ASR), which is the percentage of test samples with label 0 that are predicted as 1 when the trigger pattern is added. In Figure 3, we plot the ASR of a regular decision tree and PrivaTrees with privacy budgets 0.1 and 0.01 against a varying number of poisoned samples, ranging between 0% and 1% of the dataset. All trees were trained 50 times and had a depth of 4. With only 0.01% of the train set poisoned, regular decision trees already suffer from an ASR of almost 100%, whereas PrivaTrees on average stay under an ASR of 20% for the entire range. While the bound for \(\epsilon=0.01\) is much tighter than the bound for \(\epsilon=0.1\), PrivaTrees perform well in practice in both settings.
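The Theorem 3 bound can be evaluated directly for this setting; the arithmetic below is our illustration, assuming a clean-data ASR of 0:

```python
import numpy as np

def asr_bound(x, eps, asr_clean=0.0):
    # Theorem 3: upper bound on the expected backdoor attack success
    # rate after x poisoned samples for an eps-DP learner.
    return 1 - (1 - asr_clean) * np.exp(-x * eps)

# 0.1% of the 11,200 training samples corresponds to x = 11 poisoned copies.
for eps in (0.01, 0.1):
    print(eps, asr_bound(x=11, eps=eps))
# eps = 0.01 bounds the expected ASR by about 0.10, while for eps = 0.1
# the bound (about 0.67) is much looser, matching Figure 3.
```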
## 7 Discussion

**Limitations.** In our experiments, we compared the performance of various differentially-private decision tree learners on UCI data and the tabular benchmark. While some UCI datasets are too easy, the tabular benchmark was specifically curated so that decision trees alone score poorly, which could lead to the algorithms performing differently on other datasets. As is typical, we assume that the range of numerical features, the set of possible categorical values, and the set of class labels are public knowledge. Additionally, we use the number of samples in the training set to select an efficient value for \(\epsilon_{\text{leaf}}\), which leaks information on the dataset size. Some other works protect this value.

**Conclusion.** In this paper, we proposed a new algorithm for training differentially-private decision trees called PrivaTree. PrivaTree uses private histograms for node selection, the permute-and-flip mechanism for leaf labeling, and a more efficient privacy budget distribution method to improve the privacy-accuracy trade-off. Our experiments on two benchmarks demonstrate that PrivaTree scores similarly to or better than existing works on accuracy at a fixed privacy budget. Moreover, we proved a general bound on poisoning robustness for differentially-private learners and applied it to the setting of backdoor attacks. On the MNIST 0 vs 1 task, differentially-private decision trees reduce the attack success rate fivefold compared to decision trees trained without privacy. While our work focused on interpretable machine learning, follow-up work may trade off interpretability for performance and use PrivaTree as a primitive for training private tree ensembles.

**Broader Impact.** Privacy and robustness in machine learning are important topics as models trained on user data are continuously deployed in the world. Differential privacy is a promising technique for this, and we improve the performance of decision trees at high differential privacy levels. Furthermore, we warn against over-optimism, as differential privacy is not a silver bullet for AI security. Engineers must consider the context in which models are deployed to decide what constitutes an acceptable privacy risk, and must consider which attributes are not protected by our method.

Figure 3: An adversary injects zeros with a trigger pattern to create a backdoor for class 1. (left) Poisoned sample of a 0 labeled as 1 with a trigger in the bottom-right. (right) The attack success rate when varying the number of poisoned samples in MNIST out of 11,200 train samples. \(\epsilon{=}0.01\) offers a tighter bound than \(\epsilon{=}0.1\) but in practice, both values defend well against the backdoor attack.
2310.11344
The effect of stemming and lemmatization on Portuguese fake news text classification
With the popularization of the internet, smartphones and social media, information is being spread quickly and easily, which implies a greater flow of information in the world, but there is a problem that is harming society: the dissemination of fake news. With this greater flow of information, some people try to spread deceptive information and fake news. The automatic detection of fake news is a challenging task, because obtaining good results requires dealing with linguistic problems, especially for languages that have not been comprehensively studied yet; moreover, some techniques can help to reach good results when dealing with text data. The motivation for detecting this deceptive information lies in the fact that people need to know which information is true and trustworthy and which is not. In this work, we present the effect that pre-processing methods such as lemmatization and stemming have on fake news classification; to this end, we designed several classifier models applying different pre-processing techniques. The results show that the pre-processing step is important for obtaining better results, and that stemming and lemmatization are interesting techniques that need to be studied further, so that methods focused on the Portuguese language can be developed and better results reached.
Lucca de Freitas Santos, Murilo Varges da Silva
2023-10-17T15:26:40Z
http://arxiv.org/abs/2310.11344v1
# The effect of stemming and lemmatization on Portuguese fake news text classification

###### Abstract

With the popularization of the internet, smartphones and social media, information is being spread quickly and easily, which implies a greater flow of information in the world, but there is a problem that is harming society: the dissemination of fake news. With this greater flow of information, some people try to spread deceptive information and fake news. The automatic detection of fake news is a challenging task, because obtaining good results requires dealing with linguistic problems, especially for languages that have not been comprehensively studied yet; moreover, some techniques can help to reach good results when dealing with text data. The motivation for detecting this deceptive information lies in the fact that people need to know which information is true and trustworthy and which is not. In this work, we present the effect that pre-processing methods such as lemmatization and stemming have on fake news classification; to this end, we designed several classifier models applying different pre-processing techniques. The results show that the pre-processing step is important for obtaining better results, and that stemming and lemmatization are interesting techniques that need to be studied further, so that methods focused on the Portuguese language can be developed and better results reached.

Keywords: Fake news classification, NLP, Stemming, Lemmatization

## 1 Introduction

The popularity of smartphones and social media is causing a great problem nowadays: the spreading of fake news. This kind of news can deceive thousands of people in a short time and harm not only the population but also companies and society in general [21]. According to [5], fraud or deception is a type of information that is intentionally produced and shared with the goal of creating a false impression or conclusion about a subject. Recently, fake news has become one of the most dangerous sources of deception, because this kind of information tries to mimic the content of the official press. However, it is important to say that fake news is different from news whose source is uncertain or on which no deep research was performed; this kind of information is called misinformation [11]. Consequently, such information can be misleading and even harmful, especially when it is disconnected from its origins and original contexts [18]. According to [8], the number of stored digital texts is growing fast, and many end up forgotten because no one can read and understand all of them at once. According to [6], Natural Language Processing (NLP) is an area that explores how computers can be used to understand and manipulate human language, being capable of performing useful everyday tasks. In Computer Science, NLP is not an easy task because of the ambiguity of natural language; this ambiguity makes NLP different from the processing of programming languages, for example, which are defined precisely to avoid ambiguity. NLP research can focus on five levels of analysis (phonetic or phonological, morphological, syntactic, semantic and pragmatic); all levels have unique features and their own associated difficulties, but each NLP application can focus on a subset of these levels [25].
Text mining applications impose hard restrictions on the usual NLP tools, as they involve large volumes of textual data and do not allow the integration of complex treatments (which generate exponential, and therefore intractable, algorithms). Besides that, semantic models for the applications are rarely available, implying strong limitations on the identification of the semantic and pragmatic levels of the linguistic models [16]. According to [9], the automatic detection of deceptive information is highly important for two reasons: i. systems can be more objective than humans in their verdicts, because humans can be tendentious or subject to bias; and ii. the person judging can be overloaded, delayed, or even make a mistake in the verdict. Applications based on NLP seek to use linguistic patterns that can help in the detection of fake news, as well as general deceptive information. However, NLP research is challenging because it depends on the specific language under study, making it necessary to restructure or develop new techniques, and even to create language-specific corpora [21]. In [19], the authors classified deceptive information into three major types: i. deception for humoristic purposes, using sarcasm and irony to develop parody and satire; ii. deceptive content created to fool the population and spread misinformation; and iii. unconfirmed information that is publicly accepted. In [2], a study is proposed that analyzes the number of verbs, adjectives, and adverbs; the text complexity (average sentence length and average word length); pauses (punctuation occurrence rate); uncertainty (number of modal verbs and passive voices in the text); and expressiveness (number of adverbs and adjectives relative to the number of nouns and verbs), in order to recognize patterns that can help in the automatic detection of deceptive information. According to [2], it is possible to find indications of falsification, exaggeration, omission, and fraud in social media texts by evaluating aspects such as lies, contradictions, distortions, superlatives, half-truths, phrase modification, irrelevant information, and misconceptions. By doing so, it is possible to automatically detect misleading information. A recent survey [14] describes the challenges involved in fake news detection using NLP and describes related tasks. In [17], a new set of features was presented, along with measurements of the prediction performance of current approaches and features for the automatic detection of fake news. In [7], the authors introduce three new steps into the pre-processing phase of text classification pipelines to improve effectiveness while reducing the associated costs. Text pre-processing and normalization are techniques that seek to reduce and standardize textual elements to facilitate information retrieval [3]. Among the existing techniques in the literature, this paper focuses on lemmatization and stemming; both aim to reduce each word to its root in order to decrease the dictionary size. Stemming has been the most widely applied morphological technique for information retrieval; it reduces the total number of index entries, but causes query expansion by bringing in word variants and derivations [10].
Lemmatization is similar to stemming and has the same benefits; however, when this technique is applied, the lemma of the word is extracted, and this lemma is intended to be a real word that exists in the language under study [10, 3]. A lemmatization problem studied in [12] occurs when the technique encounters a compound word: often the algorithm cannot distinguish between the compound word and its individual components, a problem that is particularly important when the goal of the research is text information retrieval. In [10], the authors suggest that lemmatization tends to give better results: stemming techniques have higher recall and lower precision than lemmatization, and although recall carries an important weight in effectiveness statistics, lemmatization shows itself superior because its metrics are balanced. In [20], the authors proposed several methods for detecting fake news. The first method is knowledge-based, which means that the data used to train the artificial intelligence model is labeled as true or fake news; another method is style-based, which tries to detect a pattern of writing that seeks to fool people. The study in [13] found that true news items contain more words than fake ones, a pattern that can be analyzed in detail. In [22], SVM, Logistic Regression and Random Forest are shown to be the best algorithms for fake news classification; however, the authors state that no method was strictly superior, only that some obtained better results than others. The goal of this paper is to present a study of the effect of lemmatization and stemming on the classification of fake news. To reach this objective, we created several classification models, which were analyzed to determine which technique obtained the best result and to explore the reasons for it. Besides this introductory section, this paper is organized as follows. Section 2 presents the lemmatization and stemming methods used in this work. Section 3 presents the experiments and the discussion of the results obtained by applying the previously mentioned text normalization methods. Section 4 presents the conclusion of the work.

## 2 Proposed Method

In order to classify texts into the fake or true news class, we propose a method composed of five main steps, as follows:

1. Reading input news texts (fake or true);
2. Applying preprocessing methods (stop-words removal, stemming and lemmatization normalization);
3. Text vectorization using TF-IDF (Term Frequency-Inverse Document Frequency);
4. Dictionary normalization using the Sum of Squares;
5. Text classification using classification algorithms.

This proposed method was tested on a database of news in Portuguese labeled as fake or true. Figure 1 shows the main steps of the proposed method.

Figure 1: Illustration of the main steps of the proposed method.

The preprocessing step (b) in Figure 1 consists of removing the stop-words and applying the text normalization techniques (stemming and lemmatization). It is important to say that we applied these techniques separately; we will discuss this in the experiments section. The stop-words were removed by the authors of the database [13]. For the stemming and lemmatization normalization techniques, we used two Python libraries focused on NLP problems, NLTK [4] and spaCy [24], chosen because they provide methods for the Portuguese language.
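To illustrate the difference between the two normalizations, the following minimal sketch (ours, not the authors' code) contrasts them on a Portuguese sentence. It assumes the NLTK `rslp` resource and the spaCy Portuguese model `pt_core_news_sm` are installed:

```python
import nltk
import spacy
from nltk.stem import RSLPStemmer

nltk.download("rslp", quiet=True)       # data for NLTK's Portuguese RSLP stemmer
stemmer = RSLPStemmer()
nlp = spacy.load("pt_core_news_sm")     # small Portuguese spaCy pipeline

text = "as noticias falsas foram compartilhadas rapidamente"

stems = [stemmer.stem(t) for t in text.split()]   # crude suffix stripping
lemmas = [tok.lemma_ for tok in nlp(text)]        # dictionary-form words

print(stems)    # stems need not be real words, e.g. 'notici', 'compartilh'
print(lemmas)   # lemmas should be real words, e.g. 'noticia', 'compartilhar'
```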
Text vectorization is the process of converting text into a numerical representation [23]. The technique for text vectorization (step (c) in Figure 1) used in our proposed method is TF-IDF (Term Frequency-Inverse Document Frequency), a statistical measure that indicates the importance of a word in a document relative to a set of documents or a linguistic corpus, as shown in Equation 1: \[TFIDF=TF\times IDF \tag{1}\] \(TF\) can be calculated as: \[TF=\frac{N(t)}{T(t)} \tag{2}\] where \(N(t)\) is the number of times that a term \(t\) appears in a document and \(T(t)\) is the total number of terms in the document. We can then compute the \(IDF\) as follows: \[IDF=\log\frac{T(n)}{N(t)} \tag{3}\] where \(T(n)\) is the total number of documents and \(N(t)\) is the number of documents containing the term \(t\). The TF-IDF value of a word increases proportionally with the number of its occurrences in a document; however, this value is balanced by the frequency of the word in the corpus, which helps to deal with stop-words and words that are more common than others [1]. The dictionary normalization step (d) in Figure 1 consists of applying normalization techniques to facilitate the analysis and application of methods. To normalize the dictionary, we used the sum-of-squares (L2) normalization available in the Scikit-Learn library [15]. For the classification, step (e) in Figure 1, we used three common classifiers: SVM (Support Vector Machine), KNN (K-Nearest Neighbors) and DT (Decision Tree). The SVM classifier used a linear kernel, KNN was set to three neighbors, and the Decision Tree was limited to a maximum of three leaves.

## 3 Experiments and Results

To apply the proposed method, we used the Google Colab platform, on which Google makes infrastructure with Python and Jupyter notebooks available for free; this platform is widely used for studies, research and development in the data mining and machine learning fields. The database used is the Fake.Br Corpus, which comprises news written in Portuguese, mostly political [13]. It also contains news on other subjects, totaling 7200 items; the database is balanced, i.e., half of the items (3600) are labeled fake news and the other half (3600) are labeled true news. The Fake.Br Corpus was pre-processed by its authors, who removed stop-words as well as accents and diacritics. The headlines, titles, videos and images that news items can contain were not considered for detection in this paper; we used only the body text of each news item. Figure 2 shows the most frequent words present in the database and Figure 3 displays an example of a news item included in the dataset.

Figure 2: Word clouds representing the relative frequency of the tokens, available in [13].

Figure 3: Example of a news item included in the dataset (without stop-words).

For the train and test step, the database was split: 30% of the data was used to test the models and 70% was used to train the classifier algorithms. Table 1 shows the setup and steps that were used to perform the experiments. To perform the tests, we applied both text normalization methods (lemmatization and stemming) to the data; after that, we built several different dictionaries,
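One experiment configuration can be sketched as follows (our illustration, not the authors' code): stemming normalization, a 5000-word TF-IDF dictionary with L2 (sum-of-squares) normalization, and a linear SVM. Here `texts` and `labels` are assumed to hold the Fake.Br news bodies and their fake/true labels:

```python
import nltk
from nltk.stem import RSLPStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

nltk.download("rslp", quiet=True)
stemmer = RSLPStemmer()
# texts, labels: assumed inputs (news bodies and fake/true labels)
stemmed = [" ".join(stemmer.stem(w) for w in t.split()) for t in texts]

vec = TfidfVectorizer(max_features=5000, norm="l2")   # 5000-word dictionary
X = vec.fit_transform(stemmed)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```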
but following the same methodology. The first list of dictionaries comprised all the words that the vectorization algorithm judged relevant. The second list consisted of dictionaries limited to 500 words; this value was chosen to reduce the dimension of the model and to analyze the impact of this limitation on the classification. The third list of dictionaries was designed similarly to the second one, but limited to 5000 words; again, this value was chosen to analyze the impact of a medium-sized dictionary on the classification. The idea behind varying the number of words in the dictionary is that we may be able to reduce the complexity and dimension of the models while still obtaining good results, or at least the same results as more complex models. Table 2 presents the number of words in the first list of dictionaries created by each method in the study.

\begin{table}
\begin{tabular}{|p{113.8pt}|p{227.6pt}|} \hline **Experiment name** & **Experiment setup description** \\ \hline Stop-Words Removal & The collected database was used without applying either normalization method. \\ \hline Stop-Words Removal + Dictionary of 500 words & The dictionary created was limited to 500 words. \\ \hline Stop-Words Removal + Dictionary of 5000 words & The dictionary created was limited to 5000 words. \\ \hline Lemmatization Technique & The lemmatization technique was applied. \\ \hline Lemmatization Technique + Dictionary of 500 words & Dictionary limited to 500 words for the lemmatization technique. \\ \hline Lemmatization Technique + Dictionary of 5000 words & Dictionary limited to 5000 words for the lemmatization technique. \\ \hline Stemming Technique & The stemming technique was applied. \\ \hline Stemming Technique + Dictionary of 500 words & Dictionary limited to 500 words for the stemming technique. \\ \hline Stemming Technique + Dictionary of 5000 words & Dictionary limited to 5000 words for the stemming technique. \\ \hline \end{tabular}
\end{table} Table 1: Setup and steps used to perform the experiments.

\begin{table}
\begin{tabular}{|p{113.8pt}|p{227.6pt}|} \hline **Method** & **Dictionary size** \\ \hline Stop-Word Removal & 68562 words \\ \hline Lemmatizing & 57086 words \\ \hline Stemming & 31485 words \\ \hline \end{tabular}
\end{table} Table 2: Size of the dictionaries built under each experiment setup.

For each list of dictionaries, we applied the classifier algorithms quoted previously. Table 3 shows the results of the experiments. As we can see, the best results were obtained using the SVM classifier. Figure 4 shows the confusion matrices that represent the best results for the stemming and lemmatization techniques. Analyzing Table 3, we can see that for lemmatization the best result was obtained using a dictionary of 5000 words; Figure 4(a) shows the corresponding confusion matrix.

Figure 4: Confusion matrices of the best results (5000-word dictionaries). (a) Lemmatization technique and (b) stemming technique.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline **Experiment name** & **SVM** & **KNN** & **DT** \\ \hline Stop-Words Removal & 96.11 & 71.06 & 85.79 \\ \hline Stop-Words Removal + Dictionary of 500 words & 93.75 & **74.95** & 85.79 \\ \hline Stop-Words Removal + Dictionary of 5000 words & **96.20** & 72.31 & 85.79 \\ \hline Lemmatization Technique & 95.25 & 69.72 & 83.80 \\ \hline Lemmatization Technique + Dictionary of 500 words & 94.16 & 73.19 & 83.80 \\ \hline Lemmatization Technique + Dictionary of 5000 words & 95.87 & 70.83 & 83.80 \\ \hline Stemming Technique & 95.69 & 70.83 & 86.06 \\ \hline Stemming Technique + Dictionary of 500 words & 93.70 & 71.62 & 86.06 \\ \hline Stemming Technique + Dictionary of 5000 words & 96.11 & 70.79 & 86.06 \\ \hline \end{tabular}
\end{table} Table 3: Accuracy rates (%) for fake news classification on the Fake.Br Corpus dataset, using the SVM, KNN and Decision Tree (DT) classifiers.

In the same way, we can see that the stemming technique also achieved its best result when using the 5000-word dictionary; Figure 4(b) shows the confusion matrix for this method. The best method overall reached 96.20% accuracy and was obtained using only stop-words removal together with a dictionary of 5000 words. As we can notice, all three methods (stop-words removal, lemmatization and stemming) achieved their best results when using a 5000-word dictionary. This is an interesting result, considering that this dictionary is at least 6 times smaller than the one built without limitations. Figure 5(a) shows the confusion matrix for this method.

To conclude the discussion, we note that for the 500-word dictionary the best result was obtained by the lemmatization technique. This is a noteworthy result because, for the other dictionary sizes, simple stop-words removal achieved the better results; it shows that the lemmatization technique can indeed perform well, especially when our results are compared to other state-of-the-art works, which report that the lemmatization technique produced the best overall results. Figure 5(b) shows the confusion matrix obtained by this method.

## 4 Conclusion

The spread of fake news is a common problem these days, and fighting it is extremely important for society, because people need to make decisions and take actions based on true facts and situations. The development of methods that can automatically and reliably detect fake news is therefore very important, since individuals often cannot, or at least do not try to, check the veracity of the information they receive.

Figure 5: Confusion matrices obtained by the SVM model: (a) stop-words removal only (5000-word dictionary) and (b) lemmatization technique (500-word dictionary).

This paper presented a new study on the treatment of text data using stemming and lemmatization methods and techniques to improve the results of automatic fake news detection in the Portuguese language. According to the obtained results, we can observe that dictionaries with a large number of words provide better results than dictionaries with fewer words. The stemming and lemmatization methods used showed themselves promising for the solution of the problem; however, their results do not differ much from the results obtained without applying any normalization method. It is worth mentioning that the stemming method halved the size of the dictionary and still achieved good results, which is a positive accomplishment of this experiment.
Most text normalization techniques (stemming and lemmatization) were developed for the English language and therefore do not perform as well when applied to texts written in Portuguese; the text normalization area needs more research directed at the Portuguese language. Finally, the obtained results are satisfactory, especially when compared to related works, with respect to which we can notice improvements in some methods and results.
2308.13719
The Monge-Ampere system: convex integration with improved regularity in dimension two and arbitrary codimension
We prove a convex integration result for the Monge-Ampere system in dimension $d=2$ and arbitrary codimension $k\geq 1$. We achieve flexibility up to the Holder regularity $\mathcal{C}^{1,\frac{1}{1+ 4/k}}$, improving hence the previous $\mathcal{C}^{1,\frac{1}{1+ 6/k}}$ regularity that followed from flexibility up to $\mathcal{C}^{1,\frac{1}{1+d(d+1)/k}}$ in our previous work, valid for any $d,k\geq 1$. The present result agrees with flexibility up to $\mathcal{C}^{1,\frac{1}{5}}$ for $d=2, k=1$ obtained by Conti, Delellis, Szekelyhidi, as well as with the $\mathcal{C}^{1,\alpha}$ result where $\alpha\to 1$ as $k\to\infty$, due to Kallen.
Marta Lewicka
2023-08-26T00:52:35Z
http://arxiv.org/abs/2308.13719v1
# The Monge-Ampere system: convex integration with improved regularity in dimension two and arbitrary codimension

###### Abstract.

We prove a convex integration result for the Monge-Ampere system in dimension \(d=2\) and arbitrary codimension \(k\geq 1\). We achieve flexibility up to the Holder regularity \(\mathcal{C}^{1,\frac{1}{1+4/k}}\), improving hence the previous \(\mathcal{C}^{1,\frac{1}{1+6/k}}\) regularity that followed from flexibility up to \(\mathcal{C}^{1,\frac{1}{1+d(d+1)/k}}\) in [9], valid for any \(d,k\geq 1\). Our result agrees with flexibility up to \(\mathcal{C}^{1,\frac{1}{5}}\) for \(d=2,k=1\) obtained in [1], as well as with the \(\mathcal{C}^{1,\alpha}\) result in [7] where \(\alpha\to 1\) as \(k\to\infty\).

M.L. was partially supported by NSF grant DMS-2006439. AMS classification: 35Q74, 53C42, 35J96, 53A35.

The closely related problem is that of isometric immersions of a given Riemannian metric \(g\): \[\begin{split}&(\nabla u)^{T}\nabla u=g\quad\text{ in }\ \omega,\\ &\text{for }\ u:\omega\to\mathbb{R}^{2+k},\end{split} \tag{1.2}\] which reduces to (1.1) upon taking the family of metrics \(\{g_{\epsilon}=\operatorname{Id}_{2}+\epsilon A\}_{\epsilon\to 0}\), each a small perturbation of \(\operatorname{Id}_{2}\), making an ansatz \(u_{\epsilon}=\operatorname{id}_{2}+\epsilon v+\epsilon^{2}w\), and gathering the lowest order terms in the \(\epsilon\)-expansions. This leads to the following system: \[\begin{split}&\frac{1}{2}(\nabla v)^{T}\nabla v+\operatorname{sym}\nabla w=A\quad\text{ in }\ \omega,\\ &\text{for }\ v:\omega\to\mathbb{R}^{k},\quad w:\omega\to\mathbb{R}^{2}. \end{split} \tag{1.3}\] On simply connected \(\omega\), the above system is then equivalent to \(\mathfrak{Det}\,\nabla^{2}v=-\operatorname{curl}\operatorname{curl}A\), reflecting the agreement of the Gaussian curvatures of \(g_{\epsilon}\) and of the surface \(u_{\epsilon}(\omega)\) at their lowest order terms in \(\epsilon\), and bringing us back to (1.1). The special case of \(k=1\) finds application in the theory of elasticity, where the left hand side of (1.3) represents the von Karman stretching content \(\frac{1}{2}\nabla v\otimes\nabla v+\operatorname{sym}\nabla w\), written in terms of the scalar out-of-plane displacement \(v\) and the in-plane displacement \(w\) of the mid-plane \(\omega\) in a thin film. Then, (1.1) reduces to the scalar Monge-Ampere equation \(\det\nabla^{2}v=-\operatorname{curl}\operatorname{curl}A=f\), studied in our previous work [10]. We refer the reader to [9] for a complete discussion of the relation among (1.1), (1.2) and (1.3), in the general case of arbitrary \(d\) and \(k\).

Our main result states that any \(\mathcal{C}^{1}\)-regular pair \((v,w)\) which is a subsolution of (1.3), can be uniformly approximated by a sequence of solutions \(\{(v_{n},w_{n})\}_{n=1}^{\infty}\) of regularity \(\mathcal{C}^{1,\alpha}\), as follows:

**Theorem 1.1**.: _Let \(\omega\subset\mathbb{R}^{2}\) be an open, bounded domain. Given \(v\in\mathcal{C}^{1}(\bar{\omega},\mathbb{R}^{k})\), \(w\in\mathcal{C}^{1}(\bar{\omega},\mathbb{R}^{2})\) and \(A\in\mathcal{C}^{0,\beta}(\bar{\omega},\mathbb{R}^{2\times 2}_{\operatorname{sym}})\) for some \(\beta\in(0,2)\), assume that:_ \[A-\big{(}\frac{1}{2}(\nabla v)^{T}\nabla v+\operatorname{sym}\nabla w\big{)}>c\operatorname{Id}_{2}\ \ \text{on}\ \ \bar{\omega}, \tag{1.4}\] _for some \(c>0\), in the sense of matrix inequalities._
Fix \(\epsilon>0\) and let:_ \[0<\alpha<\min\Big{\{}\frac{\beta}{2},\frac{1}{1+\frac{4}{k}}\Big{\}}.\] _Then, there exists \(\tilde{v}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{k})\) and \(\tilde{w}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{2})\) such that the following holds:_ \[\|\tilde{v}-v\|_{0}\leq\epsilon,\quad\|\tilde{w}-w\|_{0}\leq\epsilon, \tag{1.5}\] \[A-\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{v}+ \operatorname{sym}\nabla\tilde{w}\big{)}=0\quad\text{ in }\ \bar{\omega}. \tag{1.5}\] The above result implies, as in [9]: **Corollary 1.2**.: _For any \(f\in L^{\infty}(\omega,\mathbb{R})\) on an open, bounded, simply connected domain \(\omega\subset\mathbb{R}^{2}\), the following holds. Fix \(k\geq 1\) and fix an exponent:_ \[0<\alpha<\frac{1}{1+\frac{4}{k}}.\] _Then the set of \(\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{k})\) weak solutions to (1.1) is dense in \(\mathcal{C}^{0}(\bar{\omega},\mathbb{R}^{k})\). Namely, every \(v\in\mathcal{C}^{0}(\bar{\omega},\mathbb{R}^{k})\) is the uniform limit of some sequence \(\{v_{n}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{k})\}_{n=1}^{\infty}\), such that:_ \[\mathfrak{Det}\,\nabla^{2}v_{n}=f\quad\text{ on }\ \omega\ \text{ for all }\ n=1\ldots\infty.\] For the isometric immersion problem (1.2) with \(d\geq 1\), \(k=1\), corresponding to codimension \(1\) immersions \(u:\omega\to\mathbb{R}^{d+1}\), a version of Theorem 1.1 has been shown in [2, Theorem 1.1] with flexibility up to \(\mathcal{C}^{1,\frac{1}{1+d(d+1)}}\). The case \(d=2\) is special and flexibility (still in codimension \(1\)) of (1.1) was proved to hold up to \(\mathcal{C}^{1,\frac{1}{5}}\) in [3] using the conformal equivalence of two-dimensional metrics to the Euclidean metric, which is the fact whose linear counterpart we utilize in the present work as well. On the other hand, a result in [7, Theorem 1.1] yields flexibility for (1.2) up to \(\mathcal{C}^{1,\alpha}\) for \(\alpha\) arbitrarily close to \(1\) as \(k\to\infty\), in agreement with our Theorem 1.1 and [9, Theorem 1.1]. We point out that the dependence of regularity on \(k\) has not been quantified in [7], and that having \(k\geq d(d+1)\) was essential even for the local version of that result. ### Overview of the paper: sections 2 and 3 We state all the intermediary results for general dimensions and codimensions \(d,k\geq 1\), and specify to \(d=2\) only when necessary. In section 2 we gather preliminary estimates and building blocks for the proof of Theorem 1.1. First, we recall the single "step" construction of the convex integration algorithm from [9]. Since now it is essential to achieve cancellations of the one-dimensional primitive deficits with least error possible, the previous definition of perturbation fields from [10] would not work for this purpose. Second, we recall the convolution and commutator estimates from [2]. Third, we present a matrix decomposition result in Lemma 2.3, specific to the present dimension \(d=2\). This is essentially a reformulation of a result in [1], which allows us to make a conjecture for \(d\geq 3\) and the flexibility exponent that could be achieved this way. Fourth, we recall the first step in the Nash-Kuiper iteration from [9] which decreases the positive-definite deficit arbitrarily, in particular permitting the application of Theorem 1.3 below. In section 3 we carry out the "stage" construction, that is the first main contribution of this paper and a technical ingredient towards the flexibility range stated in Theorem 1.1. 
Namely:

**Theorem 1.3**.: _Given an open, bounded, smooth domain \(\omega\subset\mathbb{R}^{2}\), there exists \(l_{0}\in(0,1)\) such that the following holds for every \(l\in(0,l_{0})\). Fix \(v\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{2l}(0),\mathbb{R}^{k})\), \(w\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{2l}(0),\mathbb{R}^{2})\) and \(A\in\mathcal{C}^{0,\beta}(\bar{\omega}+\bar{B}_{2l}(0),\mathbb{R}^{2\times 2}_{\mathrm{sym}})\) defined on the closed \(2l\)-neighbourhood of \(\omega\). Further, fix an exponent \(\gamma\) and constants \(\lambda,M\) satisfying:_ \[\gamma\in(0,1),\quad\lambda>\frac{1}{l},\quad M\geq\max\{\|v\|_{2},\|w\|_{2},1\}.\] _Then, there exist \(\tilde{v}\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{k})\), \(\tilde{w}\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{2})\) defined on the closed \(l\)-neighbourhood of \(\omega\), such that, denoting the defects:_ \[\mathcal{D}=A-\big{(}\frac{1}{2}(\nabla v)^{T}\nabla v+\mathrm{sym}\nabla w\big{)},\quad\tilde{\mathcal{D}}=A-\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{v}+\mathrm{sym}\nabla\tilde{w}\big{)},\] _the following bounds hold:_ \[\|\tilde{v}-v\|_{1}\leq C\lambda^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}, \tag{1.6}\] \[\|\tilde{w}-w\|_{1}\leq C\lambda^{\gamma}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\big{(}1+\|\mathcal{D}\|_{0}^{1/2}+lM+\|\nabla v\|_{0}\big{)},\] \[\|\nabla^{2}\tilde{v}\|_{0}\leq C\frac{(\lambda l)^{J}}{l}\lambda^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)},\] \[\|\nabla^{2}\tilde{w}\|_{0}\leq C\frac{(\lambda l)^{J}}{l}\lambda^{\gamma}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\big{(}1+\|\mathcal{D}\|_{0}^{1/2}+lM+\|\nabla v\|_{0}\big{)},\] \[\|\tilde{\mathcal{D}}\|_{0}\leq C\Big{(}l^{\beta}\|A\|_{0,\beta}+\frac{1}{(\lambda l)^{S}}\lambda^{\gamma}\big{(}\|\mathcal{D}\|_{0}+(lM)^{2}\big{)}\Big{)}. \tag{1.6}\] _Above, the norms related to functions \(v,w,A,\mathcal{D}\) and \(\tilde{v},\tilde{w},\tilde{\mathcal{D}}\) are taken on the respective domains of their definition. The constants \(C\) depend only on \(\omega,k,\gamma\). The exponents \(S,J\) are given through the least common multiple of the dimension \(2\) and the codimension \(k\) in:_ \[lcm(2,k)=2S=kJ. \tag{1.7}\]

We outline how Theorem 1.3 differs from [10, 9]. In [10] as in [2], a "stage" consisted of: \[d_{*}=\frac{d(d+1)}{2}=\dim\mathbb{R}_{\mathrm{sym}}^{d\times d}\] number of "steps", each cancelling one of the rank-one "primitive" deficits in the decomposition of \(\mathcal{D}\). The initially chosen frequency of the corresponding one-dimensional perturbations was multiplied by a factor \(\lambda l\) at each step, leading to the increase of the second derivative by \((\lambda l)^{d_{*}}\), while the remaining error in \(\mathcal{D}\) was of order \(\frac{1}{\lambda l}\). Presently, in agreement with [3] and [1], we first observe that by Lemma 2.3 any positive definite deficit may be replaced by a positive multiple of \(\mathrm{Id}_{2}\) modulo a symmetric gradient of an in-plane field with controlled Holder norms. This reduces the number of primitive deficits from \(2_{*}=3\) to \(2\). Second, it is possible to cancel \(k\) such deficits at once, by using \(k\) linearly independent codimension directions.
Since there are \(2\) primitive deficits, then after cancelling these, one may proceed to cancelling the second order deficits obtained as in the rank-one decomposition of the error between the original and the decreased \(\mathcal{D}\); the corresponding frequencies must be then increased by the factor \((\lambda l)^{1/2}\), precisely due to the decrease of \(\mathcal{D}\) by \(\frac{1}{\lambda l}\), as before. We inductively proceed in this fashion (see Figure 1), cancelling even higher order deficits, and adding \(k\)-tuples of single codimension perturbations, for a total of \(N=lcm(2,k)\) steps. The frequencies increase by the factor of \(\lambda l\) over each multiple of \(k\), leading to the total increase of second derivatives by \((\lambda l)^{J}\) in (1.6)\({}_{2}\), and by the factor of \((\lambda l)^{1/2}\) over each multiple of \(2\) (i.e. at even steps), implying the total decrease of the deficit by the factor of \(\frac{1}{(\lambda l)^{S}}\) in (1.6)\({}_{3}\). Third, each application of Lemma 2.3 at even steps, yields bounds in terms of the Holder norms with a necessarily positive \(\gamma\) (due to Schauder's estimates in that proof); hence we need to interpolate between the previously controlled norms and higher order norms, leading to the new factor \(\lambda^{\gamma}\) in all estimates (1.6)\({}_{1}\)-(1.6)\({}_{3}\). ### Overview of the paper: sections 4 to 6 Even though Theorem 1.3 was specific to dimension \(d=2\) due to Lemma 2.3, the Nash-Kuiper scheme involving induction on stages may be stated more generally. Section 4 presents the proof of: **Theorem 1.4**.: _Let \(\omega\subset\mathbb{R}^{d}\) be an open, bounded, smooth domain and let \(k\geq 1\), \(l_{0}\in(0,1)\) be such that the statement of Theorem 1.3 holds true with some given exponents \(S,J\geq 1\) (not necessarily satisfying condition (1.7)). Then we have the following. For every \(v\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{2l_{0}}(0),\mathbb{R}^{k})\), \(w\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{2l_{0}}(0),\mathbb{R}^{d})\) and \(A\in\mathcal{C}^{0,\beta}(\bar{\omega}+\bar{B}_{2l_{0}}(0),\mathbb{R}^{d\times d }_{\mathrm{sym}})\), such that:_ \[\mathcal{D}=A-\big{(}\frac{1}{2}(\nabla v)^{T}\nabla v+\mathrm{sym}\nabla w \big{)}\quad\text{ satisfies }\quad 0<\|\mathcal{D}\|_{0}\leq 1,\] _and for every \(\alpha\) in the range:_ \[0<\alpha<\min\Big{\{}\frac{\beta}{2},\frac{S}{S+2J}\Big{\}}, \tag{1.8}\] _there exist \(\tilde{v}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{k})\) and \(\tilde{w}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{d})\) with the following properties:_ \[\|\tilde{v}-v\|_{1}\leq C\big{(}1+\|\nabla v\|_{0}\big{)}^{2}\| \mathcal{D}_{0}\|_{0}^{1/4},\quad\|\tilde{w}-w\|_{1}\leq C(1+\|\nabla v\|_{0})^ {3}\|\mathcal{D}\|_{0}^{1/4}, \tag{1.9}\] \[A-\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{v}+ \mathrm{sym}\nabla\tilde{w}\big{)}=0\quad\text{ in }\ \bar{\omega}. \tag{1.9}\] _The norms in the left hand side of (1.9)\({}_{1}\) are taken on \(\bar{\omega}\), and in the right hand side on \(\bar{\omega}+\bar{B}_{2l_{0}}(0)\). The constants \(C\) depend only on \(\omega,k,A\) and \(\alpha\)._ The first bound in (1.9)\({}_{1}\) is actually valid with any power smaller than \(\frac{1}{2}\) in \(\|\mathcal{D}_{0}\|_{0}\), and any power larger than \(1\) or \(2\) in \(1+\|\nabla v_{0}\|_{0}\), in \(\|\tilde{v}-v\|_{1}\) or \(\|\tilde{w}-w\|_{1}\), respectively. This is consistent with [9, Theorem 4.1], however the presented bounds are enough for our purpose. 
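Note that under the relation (1.7) of Theorem 1.3, the range (1.8) coincides with the one in Theorem 1.1: since \(lcm(2,k)=2S=kJ\) gives \(\frac{J}{S}=\frac{2}{k}\), there holds:
\[\frac{S}{S+2J}=\frac{1}{1+2J/S}=\frac{1}{1+\frac{4}{k}}.\]
For instance, \(k=1\) gives \(S=1\), \(J=2\) and the exponent \(\frac{1}{5}\), in agreement with the codimension one results recalled above, while \(k=3\) gives \(S=3\), \(J=2\) and the exponent \(\frac{3}{7}\); as \(k\to\infty\), the exponent approaches \(1\).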
The proof of Theorem 1.4 is quite technical. It involves iterating Theorem 1.3, where the key challenge is to choose the right progression of parameters \(l\to 0\), \(\lambda\to\infty\) and \(M\to\infty\), not only consistent with the inductive procedure assumptions and yielding \(\|\mathcal{D}\|_{0}\to 0\), but also guaranteeing that the rate of blow-up of \(\|v\|_{2},\|w\|_{2}\) can be compensated by the control on \(\|v\|_{1},\|w\|_{1}\), thus admitting the control of \(\|v\|_{1,\alpha},\|w\|_{1,\alpha}\) through the interpolation inequality. We show how these choices follow naturally, separately in the two cases of \(\frac{\beta}{2}>\frac{S}{S+2J}\) and \(\frac{\beta}{2}\leq\frac{S}{S+2J}\), and that they may be achieved with a sufficiently small positive \(\gamma\). As in the iteration scheme for (1.2) in [3] and for (1.3) in [1], both valid for \(d=2,k=1\), we use the "double exponential" ansatz; a technical idea borrowed from the iteration scheme in [4], where the double exponential decay was used to produce Holder solutions to the Euler equations. The fact that we separate the estimates of Theorem 1.4 from those of Theorem 1.3 provides a cleaner "modular" proof, ready to tackle the dimension \(d>2\), should a version of Conjecture 2.4 become available. In section 5, we finally prove Theorem 1.1, which at this point becomes quite straightforward. In the last section 6 we present an application of Theorem 1.1 to deriving bounds on the scaling laws of the so-called prestrained elastic energies of thin films, in the context of the quantitative isometric immersion problem. We only recall the related setup and state the result, since the proof is exactly the same as in [9, Theorem 7.1].

### Notation

By \(\mathbb{R}^{d\times d}_{\mathrm{sym}}\) we denote the space of symmetric \(d\times d\) matrices. The space of Holder continuous vector fields \(\mathcal{C}^{m,\alpha}(\bar{\omega},\mathbb{R}^{k})\) consists of restrictions of all \(f\in\mathcal{C}^{m,\alpha}(\mathbb{R}^{d},\mathbb{R}^{k})\) to the closure of an open, bounded domain \(\omega\subset\mathbb{R}^{d}\). Then, the \(\mathcal{C}^{m}(\bar{\omega},\mathbb{R}^{k})\) norm of such a restriction is denoted by \(\|f\|_{m}\), while its Holder norm in \(\mathcal{C}^{m,\alpha}(\bar{\omega},\mathbb{R}^{k})\) is \(\|f\|_{m,\alpha}\). By \(C>0\) we denote a universal constant which may change from line to line, but which is bigger than \(1\) and independent of all parameters, unless indicated otherwise.

## 2. Convex integration: the basic "step" and preparatory statements

The following single "step" construction, see [9, Lemma 2.1, Corollary 2.2], is a building block of the convex integration algorithm in this paper. We recall that a similar calculation in [10], based on [2], had \(\bar{\Gamma}=0\) in the formula below, resulting in the presence of the extra term \(-\frac{2}{\lambda}a\bar{\bar{\Gamma}}(\lambda t_{\eta})\mathrm{sym}(\nabla a\otimes\eta)\) in the right hand side of (2.2). With that term, achieving the error bounds in Theorem 1.3 would not be possible. Namely, we have:

**Lemma 2.1**.: _Let \(v\in\mathcal{C}^{2}(\mathbb{R}^{d},\mathbb{R}^{k})\), \(w\in\mathcal{C}^{1}(\mathbb{R}^{d},\mathbb{R}^{d})\) and \(a\in\mathcal{C}^{2}(\mathbb{R}^{d},\mathbb{R})\) be given._
Denote:_ \[\Gamma(t)=2\sin t,\quad\bar{\Gamma}(t)=-\frac{1}{2}\cos(2t),\quad\bar{\bar{ \Gamma}}(t)=-\frac{1}{2}\sin(2t),\] _and for two unit vectors \(\eta\in\mathbb{R}^{d}\), \(E\in\mathbb{R}^{k}\) and a frequency \(\lambda>0\), define:_ \[\begin{split}&\tilde{v}(x)=v(x)+\frac{1}{\lambda}a(x)\Gamma( \lambda t_{\eta})E\\ &\tilde{w}(x)=w(x)-\frac{1}{\lambda}a(x)\Gamma(\lambda t_{\eta}) \nabla\langle v(x),E\rangle-\frac{1}{\lambda^{2}}a(x)\bar{\Gamma}(\lambda t_{ \eta})\nabla a(x)+\frac{1}{\lambda}a(x)^{2}\bar{\bar{\Gamma}}(\lambda t_{\eta}) \eta.\end{split} \tag{2.1}\] _where \(t_{\eta}=\langle x,\eta\rangle\). Then, the following identity is valid on \(\mathbb{R}^{d}\):_ \[\begin{split}&\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{ v}+\mathrm{sym}\nabla\tilde{w}\big{)}-\big{(}\frac{1}{2}(\nabla v)^{T}\nabla v+ \mathrm{sym}\nabla w\big{)}-a^{2}\eta\otimes\eta\\ &=-\frac{1}{\lambda}a\Gamma(\lambda t_{\eta})\nabla^{2}\langle v,E\rangle+\frac{1}{\lambda^{2}}\big{(}\frac{1}{2}\Gamma(\lambda t_{\eta})^{2}- \bar{\Gamma}(\lambda t_{\eta})\big{)}\nabla a\otimes\nabla a-\frac{1}{\lambda ^{2}}a\bar{\Gamma}(\lambda t_{\eta})\nabla^{2}a.\end{split} \tag{2.2}\] As pointed out in [9], taking several perturbations in \(\tilde{v}\) of the form \(\frac{1}{\lambda}a_{i}\Gamma(\lambda t_{\eta_{i}})E_{i}\) corresponding to the mutually orthogonal directions \(\{E_{i}\}_{i=1}k\) and matching them with perturbations in \(\tilde{w}\) as in (2.1), achieves cancellation of \(k\) nonnegative primitive deficits of the form \(a_{i}^{2}\eta_{i}\otimes\eta_{i}\) while the errors in (2.2) accumulate in a linear fashion. This is how we use the larger codimension \(k\) to increase the Holder regularity in Theorem 1.1. We will frequently call on the convolution and commutator estimates [2, Lemma 2.1]: **Lemma 2.2**.: _Let \(\phi\in\mathcal{C}_{c}^{\infty}(\mathbb{R}^{d},\mathbb{R})\) be a standard mollifier that is nonnegative, radially symmetric, supported on the unit ball \(B(0,1)\subset\mathbb{R}^{d}\) and such that \(\int_{\mathbb{R}^{d}}\phi\ \mathrm{d}x=1\). Denote:_ \[\phi_{l}(x)=\frac{1}{l^{d}}\phi(\frac{x}{l})\quad\text{ for all }\ l\in(0,1],\ x\in\mathbb{R}^{d}.\] _Then, for every \(f,g\in\mathcal{C}^{0}(\mathbb{R}^{d},\mathbb{R})\) and every \(m\geq 0\), \(\beta\in(0,1]\), there holds:_ \[\|\nabla^{(m)}(f*\phi_{l})\|_{0}\leq\frac{C}{l^{m}}\|f\|_{0}, \tag{2.3}\] \[\|f-f*\phi_{l}\|_{0}\leq C\min\big{\{}l^{2}\|\nabla^{2}f\|_{0},l \|\nabla f\|_{0},l^{\beta}\|f\|_{0,\beta}\big{\}},\] (2.3) \[\|\nabla^{(m)}\big{(}(fg)*\phi_{l}-(f*\phi_{l})(g*\phi_{l})\big{)} \|_{0}\leq Cl^{2-m}\|\nabla f\|_{0}\|\nabla g\|_{0}, \tag{2.3}\] _with a constant \(C>0\) depending only on the differentiability exponent \(m\)._ The next auxiliary result is specific to dimension \(d=2\). We reformulate [1, Proposition 3.1]: **Lemma 2.3**.: _Let \(\omega\subset\mathbb{R}^{2}\) be an open, bounded and Lipschitz set. 
There exist maps:_ \[\bar{\Psi}:L^{2}(\omega,\mathbb{R}^{2\times 2}_{\mathrm{sym}})\to W^{1,2}(\omega,\mathbb{R}^{2}),\qquad\bar{a}:L^{2}(\omega,\mathbb{R}^{2\times 2}_{\mathrm{sym}})\to L^{2}(\omega,\mathbb{R}),\] _which are linear, continuous, and such that:_
* _for all_ \(D\in L^{2}(\omega,\mathbb{R}^{2\times 2}_{\mathrm{sym}})\) _there holds:_ \(D+\mathrm{sym}\nabla\big{(}\bar{\Psi}(D)\big{)}=\bar{a}(D)\mathrm{Id}_{2}\)_,_
* \(\bar{\Psi}(\mathrm{Id}_{2})\equiv 0\) _and_ \(\bar{a}(\mathrm{Id}_{2})\equiv 1\) _in_ \(\omega\)_,_
* _for all_ \(m\geq 0\) _and_ \(\gamma\in(0,1]\)_, if_ \(\omega\) _is_ \(\mathcal{C}^{m+2,\gamma}\) _regular then the maps_ \(\bar{\Psi}\) _and_ \(\bar{a}\) _are continuous from_ \(\mathcal{C}^{m,\gamma}(\bar{\omega},\mathbb{R}^{2\times 2}_{\mathrm{sym}})\) _to_ \(\mathcal{C}^{m+1,\gamma}(\bar{\omega},\mathbb{R}^{2})\) _and to_ \(\mathcal{C}^{m,\gamma}(\bar{\omega},\mathbb{R})\)_, respectively, so that:_ \[\|\bar{\Psi}(D)\|_{m+1,\gamma}\leq C\|D\|_{m,\gamma}\text{ and }\ \|\bar{a}(D)\|_{m,\gamma}\leq C\|D\|_{m,\gamma}\quad\text{ for all }\ D\in L^{2}(\omega,\mathbb{R}^{2\times 2}_{\mathrm{sym}}).\] (2.4) _The constants \(C\) above depend on \(\omega\), \(m,\gamma\) but not on \(D\). Also, there exists \(l_{0}>0\) depending only on \(\omega\), such that (2.4) are uniform on the closed \(l\)-neighbourhoods \(\{\bar{\omega}+\bar{B}_{l}(0)\}_{l\in(0,l_{0})}\) of \(\omega\)._

Proof.: Given \(D\in L^{2}(\omega,\mathbb{R}^{2\times 2}_{\mathrm{sym}})\), we define: \[\bar{\Psi}(D)=\big{(}-\partial_{1}\psi_{1}-\partial_{2}\psi_{2},\partial_{2}\psi_{1}-\partial_{1}\psi_{2}\big{)},\qquad\bar{a}(D)=D_{11}+\partial_{1}\bar{\Psi}^{1}(D),\] where \(\psi_{1},\psi_{2}\) are solutions to the following two Dirichlet problems on \(\omega\): \[\left\{\begin{array}{ll}\Delta\psi_{1}=D_{11}-D_{22}&\text{ in }\omega\\ \psi_{1}=0&\text{ on }\partial\omega,\end{array}\right.\qquad\qquad\left\{\begin{array}{ll}\Delta\psi_{2}=2D_{12}&\text{ in }\omega\\ \psi_{2}=0&\text{ on }\partial\omega.\end{array}\right. \tag{2.5}\] It is clear that the maps \(\bar{\Psi}\) and \(\bar{a}\) are linear and satisfy (ii) and (iii). To check condition (i), we calculate the components of the symmetric matrix field \(\bar{a}\mathrm{Id}_{2}-\mathrm{sym}\nabla\bar{\Psi}\): \[\bar{a}-\partial_{1}\bar{\Psi}^{1} =D_{11},\] \[\bar{a}-\partial_{2}\bar{\Psi}^{2} =D_{11}+\partial_{1}\bar{\Psi}^{1}-\partial_{2}\bar{\Psi}^{2}=D_{11}+(-\partial_{11}\psi_{1}-\partial_{12}\psi_{2})-(\partial_{22}\psi_{1}-\partial_{12}\psi_{2})\] \[=D_{11}-\Delta\psi_{1}=D_{22},\] \[-\frac{1}{2}(\partial_{1}\bar{\Psi}^{2}+\partial_{2}\bar{\Psi}^{1})=-\frac{1}{2}\big{(}\partial_{12}\psi_{1}-\partial_{11}\psi_{2}-\partial_{12}\psi_{1}-\partial_{22}\psi_{2}\big{)}=\frac{1}{2}\Delta\psi_{2}=D_{12}.\] This completes the proof of (i). The uniformity of the bounds in (2.4) follows from the uniformity of the classical Schauder estimates for solutions to (2.5). The proof is done.

We remark that for a general dimension \(d\geq 2\), carrying out the same approach as presented in this paper would necessitate validating the following:

**Conjecture 2.4**.: _Let \(\omega\subset\mathbb{R}^{d}\) be an open, bounded, sufficiently regular set._
Then, there exist a linear proper subspace \(E_{d}\varsubsetneq\mathbb{R}^{d\times d}_{\mathrm{sym}}\) and linear maps:_ \[\bar{\Psi}:\mathcal{C}^{m,\gamma}(\bar{\omega},\mathbb{R}^{d\times d}_{ \mathrm{sym}})\to\mathcal{C}^{m+1,\gamma}(\bar{\omega},\mathbb{R}^{d}),\qquad \bar{A}:\mathcal{C}^{m,\gamma}(\bar{\omega},\mathbb{R}^{d\times d}_{\mathrm{ sym}})\to\mathcal{C}^{m,\gamma}(\bar{\omega},E_{d}),\] _continuous for all \(m\geq 0\) and \(\gamma\in(0,1]\), and such that:_ * _for all_ \(D\in\mathcal{C}^{m,\gamma}(\bar{\omega},\mathbb{R}^{d\times d}_{\mathrm{sym}})\) _there holds:_ \(D+\mathrm{sym}\nabla\big{(}\bar{\Psi}(D)\big{)}=\bar{A}(D)\)_,_ * \(\bar{\Psi}(\mathrm{Id}_{d})\equiv 0\) _and_ \(\bar{A}(\mathrm{Id}_{d})\equiv\mathrm{Id}_{d}\) _in_ \(\omega\)_._ Indeed, Conjecture 2.4 would imply flexibility of (1.3) and of the Monge-Ampere system (1.1) up to regularity \(\mathcal{C}^{1,\frac{1}{1+2(\dim E_{d})/k}}\). Lemma 2.3 validates Conjecture 2.4 for \(d=2\) and with: \[E_{2}=\big{\{}\alpha\mathrm{Id}_{2};\ \alpha\in\mathbb{R}\big{\}},\] reflecting the fact that every \(2\)-dimensional Riemann metric is conformally equivalent to the Euclidean metric. For \(d=3\), it is natural to ask if Conjecture 2.4 holds with the space: \[E_{3}=\big{\{}\sum_{i=1}^{3}\alpha_{i}e_{i}\otimes e_{i};\ \alpha_{1},\alpha_{2}, \alpha_{3}\in\mathbb{R}\big{\}}\] consisting of diagonal matrices, which is motivated by and in agreement with the fact that every \(3\)-dimensional metric is locally diagonalizable. This result, without the Holder norms estimates, may be proved in the analytic class by an application of the Cartan-Kahler theorem, and in the smooth class by a direct inspection. For \(d\geq 4\), one expects the optimal dimension: \[\dim E_{d}=\dim\mathbb{R}^{d\times d}_{\mathrm{sym}}-d=\frac{d(d-1)}{2}.\] With the above, Conjecture 2.4 would imply flexibility up to regularity \(\mathcal{C}^{1,\frac{1}{1+d(d-1)/k}}\), while we recall that the best exponent known at present, from [9], is \(\mathcal{C}^{1,\frac{1}{1+d(d+1)/k}}\). As the final preparatory result, we recall the "first step" in the Nash-Kuiper iteration, allowing to bring the sup-norm of the given positive definite deficit, below a threshold needed for an application of Theorem 1.4. This result is independent from the Holder continuity estimates; its proof only necessitates the decomposition of symmetric positive definite matrices which are close to \(\mathrm{Id}_{d}\), into "primitive metrics" [2, Lemma 5.2]. Namely, from [9, Theorem 5.2] we quote: **Lemma 2.5**.: _Let \(\omega\subset\mathbb{R}^{d}\) be an open, bounded set. Given \(v\in\mathcal{C}^{\infty}(\bar{\omega},\mathbb{R}^{k})\), \(w\in\mathcal{C}^{\infty}(\bar{\omega},\mathbb{R}^{d})\) and \(A\in\mathcal{C}^{\infty}(\bar{\omega},\mathbb{R}^{d\times d}_{\rm sym})\), assume that:_ \[\mathcal{D}=A-\big{(}\frac{1}{2}(\nabla v)^{T}\nabla v+{\rm sym}\nabla w\big{)} \quad\text{ satisfies }\quad\mathcal{D}>c\,{\rm Id}_{d}\ \ \text{on}\ \ \bar{\omega}\] _for some \(c>0\), in the sense of matrix inequalities. Fix \(\epsilon>0\). 
Then, there exist \(\tilde{v}\in\mathcal{C}^{\infty}(\bar{\omega},\mathbb{R}^{k})\), \(\tilde{w}\in\mathcal{C}^{\infty}(\bar{\omega},\mathbb{R}^{d})\) such that the following holds with constants \(C\) depending on \(d,k\) and \(\omega\):_ \[\|\tilde{v}-v\|_{0}\leq\epsilon,\quad\|\tilde{w}-w\|_{0}\leq\epsilon, \tag{2.6}\] \[\|\nabla(\tilde{v}-v)\|_{0}\leq C\|\mathcal{D}\|_{0}^{1/2},\quad \|\nabla(\tilde{w}-w)\|_{0}\leq C\|\mathcal{D}\|_{0}^{1/2}\big{(}\|\mathcal{D }\|_{0}^{1/2}+\|\nabla v\|_{0}\big{)},\] (2.6) \[\|A-\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{v}+{\rm sym }\nabla\tilde{w}\big{)}\|_{0}\leq\epsilon. \tag{2.6}\] ## 3. The "stage" for the \(\mathcal{C}^{1,\alpha}\) approximations: a proof of Theorem 1.3 The proof consists of several steps. The inductive construction below is a refinement of [9, Theorem 1.2] in view of Lemma 2.3, allowing to decrease the number of primitive metrics in the deficit decomposition from \(2_{*}=3\) to \(2\). Recall that all constants \(C\), which may change from line to line of a calculation, are assumed to be larger than \(1\), and they depend only on \(\omega\), \(k\), \(\gamma\) and the differentiability exponent \(m\), whenever present. **Proof of Theorem 1.3** **1. (Preparing the data)** Let \(l_{0}\) be as in Lemma 2.3 and fix \(l<l_{0}\). Taking \(\phi_{l}\) as in Lemma 2.2, we define the following smoothed data functions on the \(l\)-thickened set \(\bar{\omega}+\bar{B}_{l}(0)\): \[v_{0}=v*\phi_{l},\quad w_{0}=w*\phi_{l},\quad A_{0}=A*\phi_{l},\quad\mathcal{D }_{0}=\big{(}\frac{1}{2}(\nabla v_{0})^{T}\nabla v_{0}+{\rm sym}\nabla w_{0} \big{)}-A_{0}.\] From Lemma 2.2, one deduces the initial bounds: \[\|v_{0}-v\|_{1}+\|w_{0}-w\|_{1}\leq ClM, \tag{3.1}\] \[\|A_{0}-A\|_{0}\leq Cl^{\beta}\|A\|_{0,\beta},\] (3.1) \[\|\nabla^{(m+1)}v_{0}\|_{0}+\|\nabla^{(m+1)}w_{0}\|_{0}\leq\frac{ C}{l^{m}}lM\quad\text{ for all }\ m\geq 1,\] (3.1) \[\|\nabla^{(m)}\mathcal{D}_{0}\|_{0}\leq\frac{C}{l^{m}}\big{(}\| \mathcal{D}\|_{0}+(lM)^{2}\big{)}\quad\text{ for all }\ m\geq 0. \tag{3.1}\] Indeed, (3.1)\({}_{1}\), (3.1)\({}_{2}\) follow from (2.3)\({}_{2}\) and in view of the lower bound on \(M\). Similarly, (3.1)\({}_{3}\) follows by applying (2.3)\({}_{1}\) to \(\nabla^{2}v\) and \(\nabla^{2}w\) with the differentiability exponent \(m-1\). Since: \[\mathcal{D}_{0}=\frac{1}{2}\big{(}(\nabla v_{0})^{T}\nabla v_{0}-((\nabla v)^{ T}\nabla v)*\phi_{l}\big{)}-\mathcal{D}*\phi_{l},\] we get (3.1)\({}_{4}\) by applying (2.3)\({}_{1}\) to \(\mathcal{D}\), and (2.3)\({}_{3}\) to \(\nabla v\). **2. (Induction definition: frequencies)** We now inductively define the main coefficients, frequencies and corrections in the construction of \((\tilde{v},\tilde{w})\) from \((v,w)\). First, recall that: \[N\doteq lcm(2,k)=2S=kJ,\qquad S,J\geq 1. \tag{3.2}\] We set the initial perturbation frequencies as: \[\lambda_{0}=\frac{1}{l},\qquad\lambda_{1}=\lambda,\] while for \(i=2\dots N\) we define, for \(j=0\dots J-1\) and \(s=0\dots S-1\): \[\lambda_{i}l=(\lambda l)^{1+j+s/2}\quad\text{ for all }\ i\in(kj,k(j+1)]\cap(2s,2(s+1)]. \tag{3.3}\] **3. 
(Induction definition: decomposition of deficits)** For \(s=0\dots S-1\) we define constants \(\tilde{C}_{s}\), perturbation amplitudes \(a_{s}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R})\) and correction fields \(\Psi_{s}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{2})\), by applying Lemma 2.3 to the already derived deficit \(\mathcal{D}_{s}\) on the set \(\bar{\omega}+\bar{B}_{l}(0)\): \[\tilde{C}_{s}=\frac{2}{r_{0}}\Big{(}\|\mathcal{D}_{s}\|_{0,\gamma }+\frac{(\lambda_{0}\lambda_{2}\dots\lambda_{2s})^{\gamma}}{(\lambda l)^{s}}( \|\mathcal{D}\|_{0}+(lM)^{2})\Big{)},\] \[a_{s}=\big{(}\tilde{C}_{s}-\bar{a}(\mathcal{D}_{s})\big{)}^{1/2 },\qquad\Psi_{s}=\tilde{C}_{s}id_{2}-\bar{\Psi}(\mathcal{D}_{s}).\] Above, \(r_{0}=r_{0}(\gamma)>0\) is given through the requirement: \[\bar{a}(D)>\frac{1}{2}\ \ \text{on}\ \ \bar{\omega}+\bar{B}_{l}(0)\quad\text{ whenever }\quad\|D-\text{Id}_{2}\|_{0,\gamma}<r_{0},\] whose validity is justified by Lemma 2.3. Note that our definition of \(a_{s}\) is correctly posed, because \(\tilde{C}_{s}-\bar{a}(\mathcal{D}_{s})=\tilde{C}_{s}\bar{a}\big{(}\text{Id}_{ 2}-\frac{1}{\tilde{C}_{s}}\mathcal{D}_{s}\big{)}>0\) in view of \(\|\text{Id}_{2}-(\text{Id}_{2}-\frac{1}{\tilde{C}_{s}}\mathcal{D}_{s})\|_{0, \gamma}<r_{0}\). Further: \[\mathcal{D}_{s}=\text{sym}\nabla\Psi_{s}-a_{s}^{2}\text{Id}_{2}\quad\text{ and }\quad a_{s}>\Big{(}\frac{\tilde{C}_{s}}{2}\Big{)}^{1/2}\ \ \text{in}\ \ \bar{\omega}+\bar{B}_{l}(0). \tag{3.4}\] We also obtain, directly from Lemma 2.3: \[\|\Psi_{s}\|_{m+1}\leq C\big{(}\tilde{C}_{s}+\|\mathcal{D}_{s}\|_ {m,\gamma}\big{)}\quad\text{ for all }\ m\geq 0, \tag{3.5}\] \[\|a_{s}\|_{0}\leq C\|\tilde{C}_{s}\text{Id}_{d}-\mathcal{D}_{s} \|_{0,\gamma}^{1/2}\leq C\tilde{C}_{s}^{1/2}.\] For the future estimate of derivatives of \(a_{s}\) of order \(m\geq 1\), we use Faa di Bruno formula's in: \[\|\nabla^{(m)}a_{s}\|_{0} \leq C\Big{\|}\sum_{p_{1}+2p_{2}+\dots mp_{m}=m}a_{s}^{2(1/2-p_{1 }-\dots-p_{m})}\prod_{t=1}^{m}\big{|}\nabla^{(t)}a_{s}^{2}\big{|}^{p_{t}}\Big{\|} _{0} \tag{3.6}\] \[\leq C\sum_{p_{1}+2p_{2}+\dots mp_{m}=m}\frac{1}{\tilde{C}_{s}^{( p_{1}+\dots+p_{m})-1/2}}\prod_{t=1}^{m}\big{(}\tilde{C}_{s}+\|\mathcal{D}_{s} \|_{t,\gamma}\big{)}^{p_{t}}\] \[\leq C\tilde{C}_{s}^{1/2}\sum_{p_{1}+2p_{2}+\dots mp_{m}=m}\prod_ {t=1}^{m}\Big{(}1+\frac{\|\mathcal{D}_{s}\|_{t,\gamma}}{\tilde{C}_{s}}\Big{)} ^{p_{t}},\] in virtue of the lower bound in (3.4). Figure 1. Progression of frequencies \(\lambda_{i}\) and other intermediary quantities defined at integers \(i=1\dots N\), where \(N=lcm(2,k)\). **4. 
(Induction definition: perturbations)** For each \(i=1\dots N\) we uniquely write: \[\begin{split} i=kj+\gamma=2s+\delta\quad\text{ with }& j=0\dots J-1,\quad\gamma=1\dots k,\\ & s=0\dots S-1,\quad\delta=1,2.\end{split} \tag{3.7}\] Define \(v_{i}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{k})\) and \(w_{i}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{2})\) according to the "step" construction in Lemma 2.1, involving the periodic profile functions \(\Gamma,\bar{\Gamma},\bar{\bar{\Gamma}}\) and the notation \(t_{\eta}=\langle x,\eta\rangle\): \[\begin{split} v_{i}(x)=v_{i-1}(x)+\frac{1}{\lambda_{i}}a_{s}(x) \Gamma(\lambda_{i}t_{e_{\delta}})e_{\gamma},\\ w_{i}(x)=w_{i-1}(x)-\frac{1}{\lambda_{i}}a_{s}(x)\Gamma(\lambda _{i}t_{e_{\delta}})\nabla v_{i-1}^{\gamma}-\frac{1}{\lambda_{i}^{2}}a_{s}(x) \bar{\Gamma}(\lambda_{i}t_{e_{\delta}})\nabla a_{s}+\frac{1}{\lambda_{i}}a_{ s}(x)^{2}\bar{\bar{\Gamma}}(\lambda_{i}t_{e_{\delta}})e_{\delta}.\end{split}\] We observe that by construction of \(v_{i}\), the second term in \(w_{i}\) can be rewritten as follows: \[\frac{1}{\lambda_{i}}a_{s}(x)\Gamma(\lambda_{i}t_{e_{\delta}})\nabla v_{i-1}^{ \gamma}=\frac{1}{\lambda_{i}}a_{s}(x)\Gamma(\lambda_{i}t_{e_{\delta}})\nabla v _{jk}^{\gamma}. \tag{3.8}\] We eventually set: \[\tilde{v}=v_{N},\qquad\tilde{w}=w_{N}-\sum_{s=0}^{S-1}\Psi_{s}. \tag{3.9}\] **5. (Induction definition: deficits)** For each \(i=1\dots N\), we define the partial deficit: \[V_{i}=\big{(}\frac{1}{2}(\nabla v_{i})^{T}\nabla v_{i}+\text{sym}\nabla w_{i} \big{)}-\big{(}\frac{1}{2}(\nabla v_{i-1})^{T}\nabla v_{i-1}+\text{sym}\nabla w _{i-1}\big{)},\] and for each \(s=1\dots S\) we set the combined deficit: \(\mathcal{D}_{s}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R} _{\text{sym}}^{2\times 2})\) in: \[\begin{split}\mathcal{D}_{s}&=\big{(}\frac{1}{2}( \nabla v_{2s})^{T}\nabla v_{2s}+\text{sym}\nabla w_{2s}\big{)}-\big{(}\frac{1 }{2}(\nabla v_{2(s-1)})^{T}\nabla v_{2(s-1)}+\text{sym}\nabla w_{2(s-1)}\big{)} -a_{s-1}^{2}\text{Id}_{2}\\ &=\sum_{i=2s-1}^{2s}\Big{(}V_{i}-a_{s-1}^{2}e_{\delta}\otimes e_{ \delta}\Big{)}=V_{2s-1}+V_{2s}-a_{s-1}^{2}\text{Id}_{2}.\end{split}\] Above, components of the last sum we used the convention (3.7), where \(\delta=\delta(i)=1,2\). By Lemma 2.1 and (3.8), and setting \(j=0\dots J-1\) again according to (3.7), we get: \[\begin{split} V_{i}-a_{s-1}^{2}e_{\delta}\otimes e_{\delta}=& -\frac{1}{\lambda_{i}}a_{s-1}\Gamma(\lambda_{i}t_{e_{\delta}})\nabla^{2}v_{ jk}^{\gamma}-\frac{1}{\lambda_{i}^{2}}a_{s-1}\bar{\Gamma}(\lambda_{i}t_{e_{ \delta}})\nabla^{2}a_{s-1}\\ &+\frac{1}{\lambda_{i}^{2}}\big{(}\frac{1}{2}\Gamma(\lambda_{i}t_ {e_{\delta}})^{2}-\bar{\Gamma}(\lambda_{i}t_{e_{\delta}})\big{)}\nabla a_{s-1} \otimes\nabla a_{s-1}.\end{split} \tag{3.10}\] We right away note that, by (3.4) there holds: \[\begin{split}\tilde{\mathcal{D}}&=(A-A_{0})- \mathcal{D}_{0}-\Big{(}\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{v} +\text{sym}\nabla\tilde{w}\big{)}-\big{(}\frac{1}{2}(\nabla v_{0})^{T}\nabla v _{0}+\text{sym}\nabla w_{0}\big{)}\Big{)}\\ &=(A-A_{0})-\mathcal{D}_{0}+\sum_{s=0}^{S-1}\text{sym}\nabla\Psi_{ s}-\sum_{s=1}^{S}\sum_{i=2s-1}^{2s}V_{i}\\ &=(A-A_{0})+\sum_{s=0}^{S-1}\text{sym}\nabla\Psi_{s}-\sum_{s=0}^{S }\mathcal{D}_{s}-\sum_{s=1}^{S}a_{s-1}^{2}\text{Id}_{2}=(A-A_{0})-\mathcal{D} _{S}.\end{split} \tag{3.11}\] **6. 
(Inductive estimates)** In steps 7-8 below we will show the following estimates, valid for all \(m\geq-1\) and \(i=1\dots N\), and where \(s=s(i)\) is given according to (3.7): \[\left.\begin{array}{l}\|\nabla^{(m+1)}(v_{i}-v_{i-1})\|_{0}\leq C \frac{\lambda_{i}^{m}}{(\lambda l)^{s/2}}\big{(}\lambda_{0}\lambda_{2}\dots \lambda_{2s}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)},\\ \|\nabla^{(m+1)}(w_{i}-w_{i-1})\|_{0}\leq C\frac{\lambda_{i}^{m}}{( \lambda l)^{s/2}}\big{(}\lambda_{0}\lambda_{2}\dots\lambda_{2s}\big{)}^{ \gamma}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\times\\ \times\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM+\|\nabla v\|_{0}\big{)},\end{array} \right\}\] Also, for all \(m\geq 0\) and \(s=0\dots S\) we will prove that: \[\|\mathcal{D}_{s}\|_{m}\leq C\frac{\lambda_{2s}^{m}}{(\lambda l)^{s}}\big{(} \lambda_{0}\lambda_{2}\dots\lambda_{2(s-1)}\big{)}^{\gamma}\big{(}\|\mathcal{ D}\|_{0}+(lM)^{2}\big{)}.\] Note that the bound (3.12)\({}_{2}\) at its lowest counter value \(s=0\), follows in view of (3.1)\({}_{4}\), and since \(\lambda_{0}=\frac{1}{l}\). We further observe that, using interpolation and the preparatory bound (3.6), the estimate (3.12)\({}_{2}\) easily implies for all \(m\geq 0\) and \(s=0\dots S-1\): \[\left.\begin{array}{l}\tilde{C}_{s}\leq C\frac{1}{(\lambda l)^{s}}\big{(} \lambda_{0}\lambda_{2}\dots\lambda_{2s}\big{)}^{\gamma}\big{(}\|\mathcal{D} \|_{0}+(lM)^{2}\big{)},\\ \|\Psi_{s}\|_{m+1}\leq C\frac{\lambda_{2s}^{m}}{(\lambda l)^{s}}\big{(}\lambda _{0}\lambda_{2}\dots\lambda_{2s}\big{)}^{\gamma}\big{(}\|\mathcal{D}\|_{0}+( lM)^{2}\big{)},\\ \|a_{s}\|_{m}\leq C\frac{\lambda_{2s}^{m}}{(\lambda l)^{s/2}}\big{(}\lambda _{0}\lambda_{2}\dots\lambda_{2s}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^ {1/2}+lM\big{)}.\end{array}\right\}\] **7. (Proof of estimate (3.12)\({}_{1}\))** With \(s,j,\delta,\gamma\) as in (3.7), definition of \(v_{i}\) in step 4 yields: \[\|\nabla^{(m+1)}(v_{i}-v_{i-1})\|_{0} \leq C\sum_{p+q=m+1}\lambda_{i}^{p-1}\|\nabla^{(q)}a_{s}\|_{0}\] \[\leq C\lambda_{i}^{m}\sum_{q=0}^{m+1}\frac{1}{\lambda_{i}^{q}} \frac{\lambda_{2s}^{q}}{(\lambda l)^{s/2}}\big{(}\lambda_{0}\lambda_{2}\dots \lambda_{2s}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)},\] where we used the induction assumption (3.12)\({}_{3}\). The first bound in (3.12)\({}_{1}\) then follows, because \(\lambda_{2s}\leq\lambda_{i}\), due to \(2s<i\). 
For bounding the \(w\)-increment we write, recalling (3.8): \[\|\nabla^{(m+1)}(w_{i}-w_{i-1})\|_{0}\leq C\sum_{p+q+t=m+1}\lambda _{i}^{p-1}\|\nabla^{(q)}a_{s}\|_{0}\|\nabla^{(t+1)}v_{jk}\|_{0} \tag{3.13}\] \[\qquad+C\sum_{p+q+t=m+1}\Big{(}\lambda_{i}^{p-2}\|\nabla^{(q)}a_{ s}\|_{0}\|\nabla^{(t+1)}a_{s}\|_{0}+\lambda_{i}^{p-1}\|\nabla^{(q)}a_{s}\|_{0} \|\nabla^{(t)}a_{s}\|_{0}\Big{)}\] We split the first term in the right hand side above, according to whether \(t=0\) or \(t\geq 1\): \[\sum_{p+q+t=m+1}\lambda_{i}^{p-1}\|\nabla^{(q)}a_{s}\|_{0}\|\nabla^ {(t+1)}v_{jk}\|_{0} \tag{3.14}\] \[\qquad=\sum_{p+q=m+1}\lambda_{i}^{p-1}\|\nabla^{(q)}a_{s}\|_{0}\| \nabla v_{jk}\|_{0}+\sum_{p+q+t=m}\lambda_{i}^{p-1}\|\nabla^{(q)}a_{s}\|_{0}\| \nabla^{(t+2)}v_{jk}\|_{0}\] \[\qquad\leq C\lambda_{i}^{m}\sum_{q=0}^{m+1}\frac{1}{\lambda_{i}^ {q}}\frac{\lambda_{2s}^{q}}{(\lambda l)^{s/2}}\big{(}\lambda_{0}\lambda_{2} \ldots\lambda_{2s}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)} \|\nabla v_{jk}\|_{0}\] \[\qquad\qquad+C\lambda_{i}^{m}\sum_{q+t=0\ldots m}\frac{1}{ \lambda_{i}^{q+t+1}}\frac{\lambda_{2s}^{q}}{(\lambda l)^{s/2}}\big{(}\lambda _{0}\lambda_{2}\ldots\lambda_{2s}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0 }^{1/2}+lM\big{)}\|\nabla^{(t+2)}v_{jk}\|_{0}\] \[\qquad\leq C\frac{\lambda_{i}^{m}}{(\lambda l)^{s/2}}\big{(} \lambda_{0}\lambda_{2}\ldots\lambda_{2s}\big{)}^{\gamma/2}\big{(}\|\mathcal{D }\|_{0}^{1/2}+lM\big{)}\Big{(}\|\nabla v_{jk}\|_{0}+\sum_{t=0}^{m}\frac{1}{ \lambda_{i}^{t+1}}\|\nabla^{(t+2)}v_{jk}\|_{0}\Big{)}\] For every \(t=0\ldots m+1\), the inductive assumption (3.12)\({}_{1}\) gives: \[\|\nabla^{(t+1)}v_{jk}\|_{0} \leq\|\nabla^{(t+1)}v_{0}\|_{0}+\sum_{q=1}^{jk}\|\nabla^{(t+1)}(v _{q}-v_{q-1})\|_{0} \tag{3.15}\] \[\leq\|\nabla^{(t+1)}v_{0}\|_{0}+C\sum_{q=1}^{jk}\frac{\lambda_{q} ^{t}}{(\lambda l)^{s(q)/2}}\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{2s(q)} \big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\] Hence, for the case \(t=0\) in (3.14), in virtue of (3.1)\({}_{1}\) and since \(jk<i\), we get directly: \[\|\nabla v_{jk}\|_{0} \leq ClM+\|\nabla v\|_{0}+C\sum_{q=1}^{jk}\frac{1}{(\lambda l)^{s (q)/2}}\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{2s(q)}\big{)}^{\gamma/2} \big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\] \[\leq ClM+\|\nabla v\|_{0}+C\big{(}\lambda_{0}\lambda_{2}\ldots \lambda_{2s(jk)}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\] \[\leq C\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{2s(i)}\big{)}^ {\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM+\|\nabla v\|_{0}\big{)}.\] The same bound for \(t=1\ldots m+1\), in view of (3.1)\({}_{3}\), implies that: \[\sum_{t=0}^{m}\frac{1}{\lambda_{i}^{t+1}}\|\nabla^{(t+2)}v_{jk}\| _{0} \leq C\sum_{t=0}^{m}\bigg{(}\frac{lM}{(\lambda_{i}l)^{t+1}}+\sum_ {q=1}^{jk}\frac{1}{(\lambda l)^{s(q)/2}}\big{(}\lambda_{0}\lambda_{2}\ldots \lambda_{2s(q)}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)} \bigg{)}\] \[\leq C\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{2s(i)}\big{)}^ {\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}.\] Thus, by (3.14) we see that the first term in the right hand side of (3.13) is bounded by: \[C\frac{\lambda_{i}^{m}}{(\lambda l)^{s/2}}\big{(}\lambda_{0}\lambda_{2}\ldots \lambda_{2s}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\big{(} \|\mathcal{D}\|_{0}^{1/2}+lM+\|\nabla v\|_{0}\big{)}\] On the other hand, the second term in the right hand side of (3.13) is likewise bounded by: \[C\lambda_{m}^{i}\sum_{q+t=0\ldots m+1}\bigg{(}\frac{\lambda_{2s}^ 
{q+t+1}}{\lambda_{i}^{q+t+1}}+\frac{\lambda_{2s}^{q+t}}{\lambda_{i}^{q+t}} \bigg{)}\frac{1}{(\lambda l)^{s}}\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{2 s}\big{)}^{\gamma}\big{(}\|\mathcal{D}\|_{0}+(lM)^{2}\big{)}\] \[\leq C\frac{\lambda_{i}^{m}}{(\lambda l)^{s}}\big{(}\lambda_{0} \lambda_{2}\ldots\lambda_{2s}\big{)}^{\gamma}\big{(}\|\mathcal{D}\|_{0}+(lM)^{ 2}\big{)},\] by (3.12)\({}_{3}\) and since \(\lambda_{2s}\leq\lambda_{i}\). This completes the proof of the second estimate in (3.12)\({}_{1}\). **8. (Proof of estimate (3.12)\({}_{2}\))** Let \(i\in(kj,k(j+1)]\cap(2(s-1),2s]\) with \(j=0\ldots J-1\), \(s=1\ldots S\), and denote \(\delta=i-2(s-1)\). From (3.10) we see that for all \(m\geq 0\): \[\begin{split}&\big{\|}\nabla^{(m)}\big{(}V_{i}-a_{s-1}^{2}e_{ \delta}\otimes e_{\delta}\big{)}\big{\|}_{0}\leq C\sum_{p+q+t=m}\lambda_{i}^{p- 1}\|\nabla^{(q)}a_{s-1}\|_{0}\|\nabla^{(t+2)}v_{jk}\|_{0}\\ &\qquad+C\sum_{p+q+t=m}\lambda_{i}^{p-2}\Big{(}\|\nabla^{(q+1)}a_ {s-1}\|_{0}\|\nabla^{(t+1)}a_{s-1}\|_{0}+\|\nabla^{(q)}a_{s-1}\|_{0}\|\nabla^{ (t+2)}a_{s-1}\|_{0}\Big{)}.\end{split} \tag{3.16}\] By (3.12)\({}_{3}\), (3.15), (3.1)\({}_{3}\) and the fact that \(\lambda_{i}\leq\lambda_{2s}\) we get: \[\begin{split}&\sum_{p+q+t=m}\lambda_{i}^{p-1}\|\nabla^{(q)}a_{s- 1}\|_{0}\|\nabla^{(t+2)}v_{jk}\|_{0}\\ &\qquad\leq C\lambda_{i}^{m}\sum_{q+t=0\ldots m}\frac{1}{\lambda_ {i}^{q+t+1}}\frac{\lambda_{2(s-1)}^{q}}{(\lambda l)^{(s-1)/2}}\big{(}\lambda_ {0}\lambda_{2}\ldots\lambda_{2(s-1)}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_ {0}^{1/2}+lM\big{)}\times\\ &\qquad\qquad\qquad\qquad\times\bigg{(}\frac{lM}{l^{t+1}}+\sum_{r =1}^{jk}\frac{\lambda_{r}^{t+1}}{(\lambda l)^{s(r)/2}}\big{(}\lambda_{0} \lambda_{2}\ldots\lambda_{2(s-1)}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0} ^{1/2}+lM\big{)}\bigg{)}\\ &\qquad\leq C\frac{\lambda_{i}^{m}}{(\lambda l)^{(s-1)/2}}\big{(} \lambda_{0}\lambda_{2}\ldots\lambda_{2(s-1)}\big{)}^{\gamma}\big{(}\|\mathcal{ D}\|_{0}+(lM)^{2}\big{)}\sum_{t=0}^{m}\bigg{(}\frac{1}{(\lambda_{i}l)^{t+1}}+ \sum_{r=1}^{jk}\frac{\lambda_{r}^{t+1}}{\lambda_{i}^{t+1}(\lambda l)^{s(r)/2} }\bigg{)}\\ &\qquad\leq C\lambda_{2s}^{m}\big{(}\lambda_{0}\lambda_{2}\ldots \lambda_{2(s-1)}\big{)}^{\gamma}\big{(}\|\mathcal{D}\|_{0}+(lM)^{2}\big{)} \bigg{(}\frac{1}{(\lambda_{i}l)(\lambda l)^{(s-1)/2}}+\sum_{r=1}^{jk}\frac{ \lambda_{r}}{\lambda_{i}(\lambda l)^{s(r)/2}(\lambda l)^{(s-1)/2}}\bigg{)}. 
\end{split}\] Recalling (3.3) and noting that \(j(jk)\leq j(i)-1\), we check the following: \[\begin{split}&\frac{1}{(\lambda_{i}l)(\lambda l)^{(s-1)/2}}= \frac{1}{(\lambda l)^{(s-1)/2+1+j(i)}(\lambda l)^{(s-1)/2}}\leq\frac{1}{( \lambda l)^{s}},\\ &\sum_{r=1}^{jk}\frac{\lambda_{r}}{\lambda_{i}(\lambda l)^{s(r)/2 }(\lambda l)^{(s-1)/2}}=\sum_{r=1}^{jk}\frac{(\lambda l)^{1+j(r)+s(r)/2}}{( \lambda l)^{1+j(i)+s(i)/2}(\lambda l)^{s(r)/2}(\lambda l)^{(s-1)/2}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\leq C\frac{(\lambda l)^{j( jk)}}{(\lambda l)^{j(i)}(\lambda l)^{s-1}}\leq\frac{C}{(\lambda l)^{s}}.\end{split}\] Inserting the above into the previous estimate, we see that the first term in the right hand side of (3.16) is bounded by: \[C\frac{\lambda_{2s}^{m}}{(\lambda l)^{s}}\big{(}\lambda_{0}\lambda_{2}\ldots \lambda_{2(s-1)}\big{)}^{\gamma}\big{(}\|\mathcal{D}\|_{0}+(lM)^{2}\big{)}.\] On the other hand, for the second term in the right hand side of (3.16) we have: \[\begin{split}& C\lambda_{i}^{m}\sum_{q+t=0\ldots m}\frac{1}{ \lambda_{i}^{q+t+2}}\frac{\lambda_{2(s-1)}^{q+t+2}}{(\lambda l)^{s-1}}\big{(} \lambda_{0}\lambda_{2}\ldots\lambda_{2(s-1)}\big{)}^{\gamma}\big{(}\|\mathcal{ D}\|_{0}+(lM)^{2}\big{)}\\ &\qquad\leq C\frac{\lambda_{2s}^{m}}{(\lambda l)^{s}}\big{(} \lambda_{0}\lambda_{2}\ldots\lambda_{2(s-1)}\big{)}^{\gamma}\big{(}\|\mathcal{D} \|_{0}+(lM)^{2}\big{)},\end{split}\] because: \[\frac{\lambda_{2(s-1)}}{\lambda_{i}}\leq\frac{\lambda_{2(s-1)}}{\lambda_{2(s- 1)+1}}=\frac{1}{(\lambda l)^{1/2}}.\] This ends the proof of (3.12)\({}_{2}\) and the proof of our inductive estimates. **9. (End of proof)** We now show that (3.12)\({}_{1}\)-(3.12)\({}_{3}\) imply (1.6)\({}_{1}\)-(1.6)\({}_{3}\). First, taking \(m=-1,0\) and in view of (3.9), (3.1)\({}_{1}\) we conclude a preliminary version of (1.6)\({}_{1}\): \[\|\tilde{v}-v\|_{1} \leq\|v_{0}-v\|_{1}+\sum_{i=1}^{N}\|v_{i}-v_{i-1}\|_{1}\leq C \big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{N}\big{)}^{\gamma/2}\big{(}\| \mathcal{D}\|_{0}^{1/2}+lM\big{)},\] \[\|\tilde{w}-w\|_{1} \leq\|w_{0}-w\|_{1}+\sum_{i=1}^{N}\|w_{i}-w_{i-1}\|_{1}+\sum_{s=0} ^{S-1}\|\Psi_{s}\|_{1}\] \[\leq C\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{N}\big{)}^{ \gamma}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\big{(}1+\|\mathcal{D}\|_{0}^ {1/2}+lM+\|\nabla v\|_{0}\big{)}.\] Taking \(m=1\), by (3.1)\({}_{3}\) and since \(1+j(N)=J\), we get a version of the first bound in (1.6)\({}_{2}\): \[\|\nabla^{2}\tilde{v}\|_{0} =\|\nabla^{2}v_{N}\|_{0}\leq\|\nabla^{2}v_{0}\|_{0}+\sum_{i=1}^{N }\|\nabla^{2}(v_{i}-v_{i-1})\|_{0}\] \[\leq C\Big{(}\frac{1}{l}+\sum_{i=1}^{N}\frac{\lambda_{i}}{( \lambda l)^{s(i)/2}}\Big{)}\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{N} \big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\] \[=C\frac{(\lambda l)^{J}}{l}\big{(}\lambda_{0}\lambda_{2}\ldots \lambda_{N}\big{)}^{\gamma/2}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}.\] Similarly, we also get the second bound, recalling again (3.3): \[\|\nabla^{2}\tilde{w}\|_{0} =\|\nabla^{2}w_{N}-\sum_{s=0}^{S-1}\nabla^{2}\Psi_{s}\|_{0}\leq \|\nabla^{2}w_{0}\|_{0}+\sum_{i=1}^{N}\|\nabla^{2}(w_{i}-w_{i-1})\|_{0}+\sum_{ s=0}^{S-1}\|\Psi_{s}\|_{2}\] \[\leq C\Big{(}\frac{1}{l}+\sum_{i=1}^{N}\frac{\lambda_{i}}{( \lambda l)^{s(i)/2}}+\sum_{s=0}^{S-1}\frac{\lambda_{2s}}{(\lambda l)^{s}} \Big{)}\big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{N}\big{)}^{\gamma}\times\] \[\qquad\times\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)}\big{(}1+\| \mathcal{D}\|_{0}^{1/2}+lM+\|\nabla v\|_{0}\big{)}\] \[\leq C\frac{(\lambda 
l)^{J}}{l}\big{(}\lambda_{0}\lambda_{2} \ldots\lambda_{N}\big{)}^{\gamma}\big{(}\|\mathcal{D}\|_{0}^{1/2}+lM\big{)} \big{(}1+\|\mathcal{D}\|_{0}^{1/2}+lM+\|\nabla v\|_{0}\big{)}.\] Finally, (3.11), (3.1)\({}_{2}\), and (3.12)\({}_{2}\) applied with \(m=0\) yield a version of (1.6)\({}_{3}\): \[\|\tilde{\mathcal{D}}\|_{0} =\|(A-A_{0})-\mathcal{D}_{S}\|_{0}\leq\|A-A_{0}\|_{0}+\|\mathcal{ D}_{S}\|_{0}\] \[\leq C\Big{(}l^{\beta}\|A\|_{0,\beta}+\frac{1}{(\lambda l)^{S}} \big{(}\lambda_{0}\lambda_{2}\ldots\lambda_{2(S-1)}\big{)}^{\gamma}\big{(}\| \mathcal{D}\|_{0}+(lM)^{2}\big{)}\Big{)}\] We conclude the final estimates by a straightforward calculation in: \[\lambda_{0}\lambda_{2}\ldots\lambda_{N}=\frac{\prod_{p=1}^{N/2}( \lambda l)^{1+j(2p)+(p-1)/2}}{l^{N/2+1}}=\left\{\begin{array}{ll}\frac{( \lambda l)^{(k^{2}+6k)/16}}{l^{k/2+1}}&\mbox{ for $k$ even},\\ \frac{(\lambda l)^{(k^{2}+5k+2)/4}}{l^{k+1}}&\mbox{ for $k$ odd},\end{array}\right.\] which implies that: \[\lambda_{0}\lambda_{2}\ldots\lambda_{N}\leq\frac{(\lambda l)^{(k^{2}+5k+2)/4} }{l^{k+1}}\leq\lambda^{(k^{2}+5k+2)/4}.\] Thus, we achieve \((\ref{1.6})_{1}\)-\((\ref{1.6})_{3}\) with the auxiliary exponent \(\frac{k^{2}+5k+2}{4}\gamma\), rather than \(\gamma\). These yield the claimed bounds as well, by a simple re-parametrisation. The proof is done. ## 4. The Nash-Kuiper scheme in \(\mathcal{C}^{1,\alpha}\): a proof of Theorem 1.4 Before giving the proof, we note that taking \(S,J\) as in Theorem 1.3, for which (1.7) implies: \[\frac{S}{S+2J}=\frac{1}{1+4/k},\] Theorem 1.4 automatically yields the following result, particular to dimension \(d=2\): **Corollary 4.1**.: _Given an open, bounded, smooth domain \(\omega\subset\mathbb{R}^{2}\), there exists \(l_{0}\in(0,1)\) such that the following holds for every \(l\in(0,l_{0})\). For every \(v\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{2l}(0),\mathbb{R}^{k})\), \(w\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{2l}(0),\mathbb{R}^{2})\) and \(A\in\mathcal{C}^{0,\beta}(\bar{\omega}+\bar{B}_{2l}(0),\mathbb{R}^{2\times 2}_{ \mathrm{sym}})\), such that:_ \[\mathcal{D}=A-\big{(}\frac{1}{2}(\nabla v)^{T}\nabla v+\mathrm{sym}\nabla w \big{)}\quad\text{ satisfies }\quad 0<\|\mathcal{D}\|_{0}\leq 1,\] _and for every \(\alpha\) in the range:_ \[0<\alpha<\min\Big{\{}\frac{\beta}{2},\frac{1}{1+4/k}\Big{\}}, \tag{4.1}\] _there exist \(\tilde{v}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{k})\) and \(\tilde{w}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{2})\) with the following properties:_ \[\|\tilde{v}-v\|_{1}\leq C(1+\|\nabla v\|_{0})^{2}\|\mathcal{D}\|_ {0}^{1/4},\quad\|\tilde{w}-w\|_{1}\leq C(1+\|\nabla v\|_{0})^{3}\|\mathcal{D} \|_{0}^{1/4}, \tag{4.2}\] \[A-\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{v}+\mathrm{ sym}\nabla\tilde{w}\big{)}=0\quad\text{ in }\,\,\bar{\omega}. \tag{4.2}\] _The norms in the left hand side of (4.2)\({}_{1}\) are taken on \(\bar{\omega}\), and in the right hand side on \(\bar{\omega}+\bar{B}_{2l}(0)\). The constants \(C\) depend only on \(\omega,k,A\) and \(\alpha\)._ The remaining part of this section will be devoted to: **Proof of Theorem 1.4** **1.** We set \(v_{0}=v,w_{0}=w,\mathcal{D}_{0}=\mathcal{D}\). 
Then, for each \(i\geq 1\) we will define: \[v_{i}\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{l_{i}}(0),\mathbb{R}^{k}),\quad w _{i}\in\mathcal{C}^{2}(\bar{\omega}+\bar{B}_{l_{i}}(0),\mathbb{R}^{d}),\quad \mathcal{D}_{i}=A-\big{(}\frac{1}{2}(\nabla v_{i})^{T}\nabla v_{i}+\mathrm{ sym}\nabla w_{i}\big{)},\] by applying Theorem 1.3 to \(v_{i-1}\), \(w_{i-1}\), \(A\), with specific parameters \(\gamma,l_{i-1},\lambda_{i-1},M_{i-1}\). To this end, we will define \(\gamma\in(0,1)\), \(\{l_{i}\}_{i=1}^{\infty}\), \(\big{\{}\lambda_{i},M_{i}\}_{i=0}^{\infty}\) satisfying, as below, the bounds for all \(i\geq 0\) and convergences as \(i\to\infty\). We may also decrease \(l_{0}\) if needed. Namely, we will require: \[\begin{split}& l_{i+1}\leq\frac{l_{i}}{2},\quad l_{i}\lambda_{i}>1, \quad M_{i}\geq\max\{\|v_{i}\|_{2},\|w_{i}\|_{2},1\},\quad M_{i}\nearrow\infty, \\ &\|\mathcal{D}_{i}\|_{0}\leq(l_{i}M_{i})^{2},\quad l_{i}M_{i}\to 0.\end{split} \tag{4.3}\] From (1.6)-(1.6)\({}_{3}\) we then get for all \(i\geq 0\): \[\begin{split}&\|v_{i+1}-v_{i}\|_{1}\leq C\lambda_{i}^{\gamma}l_{i}M_ {i},\qquad\|w_{i+1}-w_{i}\|_{1}\leq C\lambda_{i}^{\gamma}l_{i}M_{i}\big{(}1+l_{ i}M_{i}+\|\nabla v_{i}\|_{0}\big{)},\\ &\|v_{i+1}\|_{2}\leq C(\lambda_{i}l_{i})^{J}\lambda_{i}^{\gamma}M_ {i},\qquad\|w_{i+1}\|_{2}\leq C(\lambda_{i}l_{i})^{J}\lambda_{i}^{\gamma}M_{i} \big{(}1+l_{i}M_{i}+\|\nabla v_{i}\|_{0}\big{)},\\ &\|\mathcal{D}_{i+1}\|_{0}\leq C\Big{(}l_{i}^{\beta}\|A\|_{0, \beta}+\frac{1}{(\lambda_{i}l_{i})^{S}}\lambda_{i}^{\gamma}(l_{i}M_{i})^{2} \Big{)}.\end{split} \tag{4.4}\] The above bound on \(\|v_{i+1}\|_{2}\) follows by: \[\|v_{i+1}\|_{2} \leq\|\nabla^{2}v_{i+1}\|_{0}+\|v_{i+1}-v_{i}\|_{1}+\|v_{i}\|_{1}\] \[\leq C(\lambda_{i}l_{i})^{J}\lambda^{\gamma}M_{i}+C\lambda_{i}^{ \gamma}l_{i}M_{i}+M_{i}\leq C(\lambda_{i}l_{i})^{J}\lambda^{\gamma}M_{i},\] in view of (1.6)\({}_{2}\), (1.6)\({}_{1}\) and (4.3). The bound on \(\|w_{i+1}\|_{2}\) is obtained similarly. We also recall our convention that all constants denoted by \(C\) are bigger than \(1\) and may change from line to line, but depend only on \(\omega\), \(k\), \(\gamma\) (and \(S,J\geq 1\) in the present proof). **2.** To show the validity of (4.3), we make the following ansatz: \[\lambda_{i}=\frac{b}{l_{i}^{a}}\quad\text{ with some }\ a\in(1,2)\ \ \text{and}\ \ b>1. \tag{4.5}\] Anticipating the details of the proof, it is convenient to keep in mind that we assign: sufficiently large \(b\), \(l_{0}\) small and \((a-1)\) small, and \(\gamma\) small. 
The requirements in (4.3) are then implied by: \[\begin{split}& l_{i+1}\leq\frac{l_{i}}{2},\quad M_{i}\geq\max\{\|v_{i}\|_{2},\|w_{i}\|_{2},1\},\quad M_{i}\nearrow\infty,\\ & b^{\gamma}\sum_{i=0}^{\infty}l_{i}^{1-a\gamma}M_{i}\leq C\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}\|\mathcal{D}_{0}\|_{0}^{1/2},\\ &\|\mathcal{D}_{i}\|_{0}\leq(l_{i}M_{i})^{2},\end{split} \tag{4.6}\] whereas the bounds in (4.4) may be rewritten as: \[\begin{split}&\|v_{i+1}-v_{i}\|_{1}\leq Cb^{\gamma}l_{i}^{1-a\gamma}M_{i},\qquad\|w_{i+1}-w_{i}\|_{1}\leq Cb^{\gamma}l_{i}^{1-a\gamma}\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}M_{i}\big{(}1+\|\nabla v_{0}\|_{0}\big{)},\\ &\|v_{i+1}\|_{2}\leq C\frac{b^{J+\gamma}}{l_{i}^{(a-1)J+a\gamma}}M_{i},\qquad\|w_{i+1}\|_{2}\leq C\frac{b^{J+\gamma}}{l_{i}^{(a-1)J+a\gamma}}\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}M_{i}\big{(}1+\|\nabla v_{0}\|_{0}\big{)},\\ &\|\mathcal{D}_{i+1}\|_{0}\leq C\Big{(}l_{i}^{\beta}\|A\|_{0,\beta}+\frac{l_{i}^{2+(a-1)S-a\gamma}}{b^{S-\gamma}}M_{i}^{2}\Big{)}.\end{split} \tag{4.7}\] The middle two bounds above imply that: \[\max\{\|v_{i+1}\|_{2},\|w_{i+1}\|_{2},1\}\leq C\frac{b^{J+(S+2J+1)\gamma}}{l_{i}^{(a-1)J+3\gamma a}}M_{i}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}.\] Consequently, and splitting the last bound in (4.7) between the two terms in its right hand side, we see that the requirements in (4.6) are implied by the satisfaction of: \[\begin{split}& l_{i+1}\leq\frac{l_{i}}{2},\quad\|\mathcal{D}_{0}\|_{0}=(l_{0}M_{0})^{2},\\ & b^{\gamma}\sum_{i=0}^{\infty}l_{i}^{1-a\gamma}M_{i}\leq C\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}\|\mathcal{D}_{0}\|_{0}^{1/2},\\ &\Big{(}\frac{M_{i+1}}{M_{i}}\Big{)}^{2}\geq\max\Big{\{}\frac{2C}{b^{S-\gamma}}\frac{l_{i}^{2+(a-1)S-a\gamma}}{l_{i+1}^{2}},\frac{Cb^{2J+(S+2J+1)\gamma}}{l_{i}^{2(a-1)J+6\gamma a}}\Big{\}}\cdot\big{(}1+\|\nabla v_{0}\|_{0}\big{)}^{2},\\ & M_{i+1}^{2}\geq\frac{2Cl_{i}^{\beta}}{l_{i+1}^{2}}\|A\|_{0,\beta}.\end{split} \tag{4.8}\] In the right hand side of the \(M_{i+1}/M_{i}\) estimate above, its first term prevails provided that: \[l_{i+1}^{2}\leq\frac{l_{i}^{2+(a-1)(S+2J)+5a\gamma}}{Cb^{S+2J+(2S+4J+1)\gamma}}.\] Consequently, we define: \[\begin{split} l_{i}=B^{\frac{q^{i}-1}{q-1}}l_{0}^{q^{i}}&\quad\text{ where }\quad\frac{1}{B}=Cb^{\frac{S}{2}+J+(S+2J+\frac{1}{2})\gamma}\\ &\quad\text{ and }\quad q=1+(a-1)\big{(}\frac{S}{2}+J\big{)}+\frac{5a\gamma}{2}\end{split} \tag{4.9}\] and note that the estimates in (4.8) are then guaranteed by: \[b^{\gamma}\sum_{i=0}^{\infty}B^{\frac{(q^{i}-1)(1-a\gamma)}{q-1}}l_{0}^{q^{i}(1-a\gamma)}M_{i}\leq C\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}\|\mathcal{D}_{0}\|_{0}^{1/2}, \tag{4.10}\] \[\frac{M_{i+1}^{2}}{M_{i}^{2}}\geq\frac{2C}{b^{S-\gamma}}\frac{1}{B^{\frac{(a-1)S-a\gamma}{q-1}}}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(2J(a-1)+6a\gamma)}}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}, \tag{4.10}\] \[M_{i+1}^{2}\geq 2C\|A\|_{0,\beta}\frac{B^{\frac{2-\beta}{q-1}}}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(2q-\beta)}}. \tag{4.10}\] We will assume the initial normalisation: \[\|\mathcal{D}_{0}\|_{0}=(l_{0}M_{0})^{2},\] and show (4.10)\({}_{1}\)-(4.10)\({}_{3}\) for all \(i\geq 0\), by separating our construction into two cases below.
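Note also that the sequence defined in (4.9) satisfies the recursion \(l_{i+1}=Bl_{i}^{q}\); indeed: \[Bl_{i}^{q}=B\cdot B^{\frac{q(q^{i}-1)}{q-1}}l_{0}^{q^{i+1}}=B^{\frac{q^{i+1}-1}{q-1}}l_{0}^{q^{i+1}}=l_{i+1},\] so that \(l_{i+1}^{2}=B^{2}l_{i}^{2q}\leq\frac{l_{i}^{2+(a-1)(S+2J)+5a\gamma}}{Cb^{S+2J+(2S+4J+1)\gamma}}\), as required, in view of the choice of \(B\) in (4.9), of \(C>1\), and of \(2q=2+(a-1)(S+2J)+5a\gamma\).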
**3.** We start by observing that condition (4.10)\({}_{2}\) holds if we set: \[M_{i+1}^{2}=M_{0}^{2}\Big{(}\frac{2C}{b^{S-\gamma}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\Big{)}^{i+1}\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{\frac{2J(a-1)+6a\gamma}{q-1}}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i+1}\frac{2J(a-1)+6a\gamma}{q-1}}}, \tag{4.11}\] for all \(i\geq 0\). On the other hand, (4.10)\({}_{3}\) follows directly by assigning: \[M_{i+1}^{2}=\big{(}M_{0}(1+\|\nabla v_{0}\|_{0})\big{)}^{2(i+1)}l_{0}^{2-\beta}\frac{B^{\frac{2-\beta}{q-1}}}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(2q-\beta)}}, \tag{4.12}\] and taking \(M_{0}^{2}l_{0}^{2-\beta}\geq 8C\|A\|_{0,\beta}\), which is guaranteed by assigning \(l_{0}\) small enough to have: \[2C\|A\|_{0,\beta}l_{0}^{\beta}\leq\|\mathcal{D}_{0}\|_{0}. \tag{4.13}\] We now choose the larger one of the definitions (4.11), (4.12), asymptotically as \(i\to\infty\), which in view of \(l_{0},B<1\) reduces to choosing the larger exponent in the power of \(\frac{1}{B^{\frac{1}{q-1}}l_{0}}\). There holds: \[\frac{\beta}{2}>\frac{S}{S+2J}\implies 2q-\beta<2\frac{J+\frac{3a}{a-1}\gamma}{\big{(}\frac{S}{2}+J\big{)}+\frac{5a}{2(a-1)}\gamma}=\frac{2}{q-1}\big{(}J(a-1)+3a\gamma\big{)}<\frac{2q}{q-1}\big{(}J(a-1)+3a\gamma\big{)},\] provided that \(a-1\) and \(\gamma\) are small. In that case we proceed with (4.11). Further: \[\frac{\beta}{2}\leq\frac{S}{S+2J}\implies\frac{\beta}{2}<q\frac{S-\frac{a}{a-1}\gamma}{\big{(}S+2J\big{)}+\frac{5a}{a-1}\gamma}\implies 2q-\beta>\frac{2q}{q-1}\big{(}J(a-1)+3a\gamma\big{)},\] by assigning first \(a\) and then a small \(\gamma\), in the same order as before. In that case we will adopt (4.12).

**4. (Case \(\dfrac{\boldsymbol{\beta}}{\boldsymbol{2}}>\dfrac{\mathbf{S}}{\mathbf{S}+2\mathbf{J}}\), definition (4.11))** Below, we show that (4.10)\({}_{1}\) and (4.10)\({}_{3}\) may be achieved by assigning \(b,l_{0},a,\gamma\) appropriately. We first consider the bound (4.10)\({}_{3}\), which is implied by the following estimate: \[M_{0}^{2}\bigg{(}\frac{2C}{b^{S-\gamma}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\bigg{)}^{i+1}\frac{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{\frac{2J(a-1)+6a\gamma}{q-1}}}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}\big{(}\frac{2J(a-1)+6a\gamma}{q-1}-(2q-\beta)\big{)}}}\geq 2C\|A\|_{0,\beta}B^{\frac{2-\beta}{q-1}}.\] Multiplying both sides by \(l_{0}^{2}\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{\beta-2q}\) and recalling \(M_{0}^{2}l_{0}^{2}=\|\mathcal{D}_{0}\|_{0}\), we equivalently write: \[\bigg{(}\frac{2C}{b^{S-\gamma}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\bigg{)}^{i+1}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{(q^{i}-1)\big{(}\frac{2J(a-1)+6a\gamma}{q-1}-(2q-\beta)\big{)}}}\geq\frac{2C\|A\|_{0,\beta}}{\|\mathcal{D}_{0}\|_{0}}B^{-2}l_{0}^{\beta-2(q-1)}.\] Since \(q^{i}-1\geq(q-1)i\), the above is implied by: \[\bigg{(}\frac{2C}{b^{S-\gamma}}\frac{1}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\bigg{)}^{i+1}\bigg{(}\frac{1}{B^{\frac{2J(a-1)+6a\gamma}{q-1}-(2q-\beta)}}\bigg{)}^{i}\geq\frac{2C\|A\|_{0,\beta}}{\|\mathcal{D}_{0}\|_{0}}B^{-2}l_{0}^{\beta-2(q-1)},\] and further by: \[\frac{1}{b^{S-\gamma}B^{\frac{S(a-1)-a\gamma}{q-1}}}\frac{B^{2}}{l_{0}^{\beta-2(q-1)}}\bigg{(}\frac{2C}{b^{S-\gamma}B^{\beta-2(q-1)}}\bigg{)}^{i}\geq\frac{2C\|A\|_{0,\beta}}{\|\mathcal{D}_{0}\|_{0}}.
\tag{4.14}\] We now observe that the base power quantity in the left hand side above can be written as: \[\frac{2C}{b^{S-\gamma}B^{\beta-2(q-1)}}=Cb^{\big{(}\frac{S}{2}+J+(S+2J+\frac{1}{2})\gamma\big{)}(\beta-2(q-1))-S+\gamma}\geq 1,\] as the exponent there is positive, by the first implication in step 3. Thus, (4.14) follows from: \[\frac{1}{b^{S-\gamma+(S+2J+(2S+4J+1)\gamma)\frac{2J+\frac{6a}{a-1}\gamma}{S+2J+\frac{5a}{a-1}\gamma}}}\frac{1}{l_{0}^{\beta-2(q-1)}}\geq\frac{2C\|A\|_{0,\beta}}{\|\mathcal{D}_{0}\|_{0}},\] implied, if only \(a-1\) and \(\gamma\) are small, by: \[\frac{1}{b^{S+4J}}\frac{1}{l_{0}^{\beta-2(q-1)}}\geq\frac{2C\|A\|_{0,\beta}}{\|\mathcal{D}_{0}\|_{0}}. \tag{4.15}\] We will show the validity of this and other requirements in step 6 below. We now consider the estimate in (4.10)\({}_{1}\), namely: \[b^{\gamma}\sum_{i=0}^{\infty}B^{\frac{(q^{i}-1)(1-a\gamma)}{q-1}}l_{0}^{q^{i}(1-a\gamma)}M_{0}\Big{(}\frac{2C}{b^{S-\gamma}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\Big{)}^{i/2}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{(q^{i}-1)\frac{J(a-1)+3a\gamma}{q-1}}}\leq C\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}^{2}\|\mathcal{D}_{0}\|_{0}^{1/2}.\] In view of \(M_{0}^{2}l_{0}^{2}=\|\mathcal{D}_{0}\|_{0}\), the left hand side above can be equivalently written and estimated by: \[\begin{split}& b^{\gamma}\frac{\|\mathcal{D}_{0}\|_{0}^{1/2}}{l_{0}^{a\gamma}}\sum_{i=0}^{\infty}\Big{(}\frac{2C}{b^{S-\gamma}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\Big{)}^{i/2}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{(q^{i}-1)\big{(}\frac{J(a-1)+3a\gamma}{q-1}-1+a\gamma\big{)}}}\\ &\leq b^{\gamma}\frac{\|\mathcal{D}_{0}\|_{0}^{1/2}}{l_{0}^{a\gamma}}\sum_{i=0}^{\infty}\big{(}C(1+\|\nabla v_{0}\|_{0})\big{)}^{i}\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{(q^{i}-1)\big{(}\frac{S-\frac{a}{a-1}\gamma}{S+2J+\frac{3a}{a-1}\gamma}-a\gamma\big{)}}\\ &\leq\|\mathcal{D}_{0}\|_{0}^{1/2}\Big{(}\frac{b}{l_{0}^{a}}\Big{)}^{\gamma}\sum_{i=0}^{\infty}\Big{(}C(1+\|\nabla v_{0}\|_{0})B^{\frac{S-\frac{a}{a-1}\gamma}{S+2J+\frac{3a}{a-1}\gamma}-a\gamma}\Big{)}^{i}\\ &\leq\|\mathcal{D}_{0}\|_{0}^{1/2}\Big{(}\frac{b}{l_{0}^{a}}\Big{)}^{\gamma}\sum_{i=0}^{\infty}\Bigg{(}\frac{C(1+\|\nabla v_{0}\|_{0})}{b^{\big{(}\frac{S}{2}+J+(S+2J+\frac{1}{2})\gamma\big{)}\big{(}\frac{S-\frac{a}{a-1}\gamma}{S+2J+\frac{3a}{a-1}\gamma}-a\gamma\big{)}}}\Bigg{)}^{i},\end{split}\] where we again used \(q^{i}-1\geq(q-1)i\) for all \(i\geq 0\). The bound in (4.10)\({}_{1}\) will follow, in particular, by assuring that the series in the right hand side above sums to less than \(2\), through: \[2C(1+\|\nabla v_{0}\|_{0})\leq b^{\big{(}\frac{S}{2}+J+(S+2J+\frac{1}{2})\gamma\big{)}\big{(}\frac{S-\frac{a}{a-1}\gamma}{S+2J+\frac{3a}{a-1}\gamma}-a\gamma\big{)}}. \tag{4.16}\]

**5. (Case \(\dfrac{\boldsymbol{\beta}}{2}\leq\dfrac{\mathbf{S}}{\mathbf{S}+2\mathbf{J}}\), definition (4.12))** In this second case, we show that (4.10)\({}_{1}\) and (4.10)\({}_{2}\) are valid with appropriate \(b,l_{0},a,\gamma\).
We first consider (4.10)\({}_{2}\) at \(i=0\), which is: \[\frac{1}{B^{2}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{l_{0}^{(S+2J)(a-1)+5a\gamma}}=\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{(B^{\frac{1}{q-1}}l_{0})^{2(q-1)}}=\frac{M_{1}^{2}}{M_{0}^{2}}\geq\frac{2C}{b^{S-\gamma}}\frac{1}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{2J(a-1)+6a\gamma}}=\frac{2C}{b^{S-\gamma}}\frac{1}{B^{2}}\frac{(1+\|\nabla v_{0}\|_{0})^{2}}{l_{0}^{2J(a-1)+6a\gamma}}.\] Thus, for the validity of the above requirement, we need: \[2Cl_{0}^{S(a-1)-a\gamma}\leq b^{S-\gamma}, \tag{4.17}\] which is achieved with \(b\) large. To complete the analysis of (4.10)\({}_{2}\), we will show for all \(i\geq 0\): \[\frac{M_{0}^{2}}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(q-1)(2q-\beta)}}\geq\frac{2C}{b^{S-\gamma}}\frac{1}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(2J(a-1)+6a\gamma)}},\] which is equivalent to: \[M_{0}\geq\frac{2C}{b^{S-\gamma}}\frac{1}{B^{\frac{S(a-1)-a\gamma}{q-1}}}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}\big{(}2J(a-1)+6a\gamma-(q-1)(2q-\beta)\big{)}}}. \tag{4.18}\] By the second implication in step 3, we note the sign of the exponent: \[2J(a-1)+6a\gamma-(q-1)(2q-\beta)<\big{(}2J(a-1)+3a\gamma\big{)}(1-q)<0.\] Hence, and recalling that \((l_{0}M_{0})^{2}=\|\mathcal{D}_{0}\|_{0}\), it follows that (4.18) is implied by: \[\frac{\|\mathcal{D}_{0}\|_{0}^{1/2}}{l_{0}}\geq\frac{2C}{b^{S-\gamma}}\frac{1}{B^{\frac{S(a-1)-a\gamma}{q-1}}}=\frac{2C}{b^{S-\gamma-(S+2J+(2S+4J+1)\gamma)\frac{S-\frac{a}{a-1}\gamma}{S+2J+\frac{3a}{a-1}\gamma}}}.\] Since the power in the exponent above is positive, we see that it is enough to assure that: \[2Cl_{0}\leq\|\mathcal{D}_{0}\|_{0}^{1/2}. \tag{4.19}\] We will show the validity of this and other requirements in step 7 below.
We now validate the estimate in (4.10)\({}_{1}\), namely: \[b^{\gamma}\bigg{(}l_{0}^{1-a\gamma}M_{0}+\sum_{i=0}^{\infty}B^{\frac{(q^{i+1}-1)(1-a\gamma)}{q-1}}l_{0}^{q^{i+1}(1-a\gamma)}\Big{(}M_{0}(1+\|\nabla v_{0}\|_{0})\Big{)}^{i+1}l_{0}^{1-\beta/2}\frac{B^{\frac{1-\beta/2}{q-1}}}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(q-\beta/2)}}\bigg{)}\leq C\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}(1+\|\nabla v_{0}\|_{0})\|\mathcal{D}_{0}\|_{0}^{1/2}.\] The left hand side above may be rewritten and estimated by: \[\begin{split}& b^{\gamma}\bigg{(}\frac{\|\mathcal{D}_{0}\|_{0}^{1/2}}{l_{0}^{a\gamma}}+\sum_{i=0}^{\infty}\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(\beta/2-aq\gamma)}\frac{M_{0}^{i+1}(1+\|\nabla v_{0}\|_{0})^{i+1}l_{0}^{1-\beta/2}}{B^{\frac{\beta/2-a\gamma}{q-1}}}\bigg{)}\\ &\leq b^{\gamma}\bigg{(}\frac{\|\mathcal{D}_{0}\|_{0}^{1/2}}{l_{0}^{a\gamma}}+M_{0}(1+\|\nabla v_{0}\|_{0})\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{\beta/2-aq\gamma}\frac{l_{0}^{1-\beta/2}}{B^{\frac{\beta/2-aq\gamma}{q-1}}}\times\\ &\qquad\qquad\qquad\qquad\times\sum_{i=0}^{\infty}\Big{(}\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{(q-1)(\beta/2-a\gamma)}M_{0}(1+\|\nabla v_{0}\|_{0})\Big{)}^{i}\bigg{)}\\ &\leq\frac{b^{\gamma}\|\mathcal{D}_{0}\|_{0}^{1/2}}{l_{0}^{aq\gamma}}\bigg{(}1+\frac{1+\|\nabla v_{0}\|_{0}}{B^{a\gamma}}\sum_{i=0}^{\infty}\Big{(}B^{\beta/2-aq\gamma}\frac{(1+\|\nabla v_{0}\|_{0})\|\mathcal{D}_{0}\|_{0}^{1/2}}{l_{0}}\Big{)}^{i}\bigg{)}\\ &\leq C\frac{b^{\gamma+(\frac{S}{2}+J+(S+2J+\frac{1}{2})\gamma)a\gamma}}{l_{0}^{aq\gamma}}(1+\|\nabla v_{0}\|_{0})\|\mathcal{D}_{0}\|_{0}^{1/2},\end{split}\] where we used the fact that \(q^{i}\geq(q-1)i+1\) for all \(i\geq 0\) and the requirement that the ratio in the geometric series above is less than \(\frac{1}{2}\), implied by: \[2(1+\|\nabla v_{0}\|_{0})\leq l_{0}b^{S\beta/6}, \tag{4.20}\] which we note automatically implies (4.17).

**6. (Case \(\dfrac{\boldsymbol{\beta}}{2}>\dfrac{\mathbf{S}}{\mathbf{S}+2\mathbf{J}}\), viability of assumptions in step 4 and \(\mathcal{C}^{1}\) convergence)** We now examine (4.15), (4.16). Under the usual assumption \(a-1,\gamma\ll 1\), these are implied by: \[b^{S/4}\geq C(1+\|\nabla v_{0}\|_{0}),\qquad l_{0}^{\beta}\leq\frac{\|\mathcal{D}_{0}\|_{0}}{C\|A\|_{0,\beta}b^{S+4J}}.
\tag{4.21}\] Hence we define: \[b^{S/4}=C(1+\|\nabla v_{0}\|_{0}),\qquad l_{0}^{\beta}=\frac{\| \mathcal{D}_{0}\|_{0}}{C\|A\|_{0,\beta}b^{S+4J}}=\frac{\|\mathcal{D}_{0}\|_{0 }}{C\|A\|_{0,\beta}(1+\|\nabla v_{0}\|_{0})^{\frac{4}{S}(S+4J)}}.\] Consequently, the right hand side of the bound in (4.10)\({}_{1}\) becomes, if only \(\gamma\ll 1\): \[\begin{split} C\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}& \big{(}1+\|\nabla v_{0}\|_{0}\big{)}\|\mathcal{D}_{0}\|_{0}^{1/2}\\ &=C\bigg{(}\frac{(1+\|\nabla v_{0}\|_{0})^{\frac{4}{S}\big{(}S+2J +\frac{2a}{\beta}(S+4J)\big{)}}\|A\|_{0,\beta}^{2a/\beta}}{\|\mathcal{D}_{0}\| _{0}^{2a/\beta}}\bigg{)}^{\gamma}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}\| \mathcal{D}_{0}\|_{0}^{1/2}\\ &\leq C\big{(}1+\|A\|_{0,\beta}\big{)}^{2a\gamma/\beta}\big{(}1+ \|\nabla v_{0}\|_{0}\big{)}^{2}\|\mathcal{D}_{0}\|_{0}^{1/2-2a\gamma/\beta} \end{split}\] and we likewise observe that: \[C\bigg{(}\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}\big{(}1+\|\nabla v_{0}\|_{ 0}\big{)}\bigg{)}^{2}\|\mathcal{D}_{0}\|_{0}^{1/2}\leq C\big{(}1+\|A\|_{0, \beta}\big{)}^{4a\gamma/\beta}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}^{3}\| \mathcal{D}_{0}\|_{0}^{1/2-4a\gamma/\beta}.\] In particular, if \(\gamma\leq\frac{\beta}{32}\) so that \(\frac{4a\gamma}{\beta}\leq\frac{1}{4}\), the exponents on the deficits are greater than \(\frac{1}{4}\) and: \[\big{(}1+\|A\|_{0,\beta}\big{)}^{4a\gamma/\beta}\leq\big{(}1+\|A\|_{0,\beta} \big{)}^{1/4},\qquad\|\mathcal{D}_{0}\|_{0}^{1/2-4a\gamma/\beta}\leq\| \mathcal{D}_{0}\|_{0}^{1/4}.\] Recalling (4.7) it now follows that: \[\begin{split}&\sum_{i=0}^{\infty}\|v_{i+1}-v_{i}\|_{1}\leq C \big{(}1+\|A\|_{0,\beta}\big{)}^{1/8}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}^{2} \|\mathcal{D}_{0}\|_{0}^{3/8},\\ &\sum_{i=0}^{\infty}\|w_{i+1}-w_{i}\|_{1}\leq C\big{(}1+\|A\|_{0, \beta}\big{)}^{1/4}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}^{3}\|\mathcal{D}_{0} \|_{0}^{1/4},\end{split} \tag{4.22}\] hence the sequences \(\{v_{i}\}_{i=0}^{\infty}\), \(\{w_{i}\}_{i=0}^{\infty}\) converge in \(\mathcal{C}^{1}(\bar{\omega})\) to some limiting fields \(\tilde{v}\in\mathcal{C}^{1}(\bar{\omega},\mathbb{R}^{k})\), \(\tilde{w}\in\mathcal{C}^{1}(\bar{\omega},\mathbb{R}^{d})\) that satisfy (1.9)\({}_{1}\). The validity of (1.9)\({}_{2}\) is clear by the last assertion in (4.3). **7. (Case \(\frac{\boldsymbol{\beta}}{\boldsymbol{2}}\leq\frac{\mathbf{S}}{\mathbf{S}+ \mathbf{2J}}\), viability of assumptions in step 5 and \(\mathcal{C}^{1}\) convergence)** We now examine (4.13), (4.19), (4.20). Under the usual assumption \(a-1,\gamma\ll 1\), these follow from: \[C\big{(}1+\|A\|_{0,\beta}\big{)}l_{0}^{\beta}\leq\|\mathcal{D}_{0}\|_{0}, \qquad 2(1+\|\nabla v_{0}\|_{0})\leq l_{0}b^{S\beta/6}. 
\tag{4.23}\] Hence we define: \[l_{0}^{\beta}=\frac{\|\mathcal{D}_{0}\|_{0}}{C\big{(}1+\|A\|_{0,\beta}\big{)}},\qquad b^{S\beta/6}=\frac{2(1+\|\nabla v_{0}\|_{0})}{l_{0}}=\frac{C(1+\|\nabla v_{0}\|_{0})\big{(}1+\|A\|_{0,\beta}\big{)}^{1/\beta}}{\|\mathcal{D}_{0}\|_{0}^{1/\beta}}.\] Consequently, the right hand side of the bound in (4.10)\({}_{1}\) becomes, for \(\gamma\ll 1\): \[\begin{split} C\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}&\big{(}1+\|\nabla v_{0}\|_{0}\big{)}\|\mathcal{D}_{0}\|_{0}^{1/2}\\ &=C\bigg{(}\frac{(1+\|\nabla v_{0}\|_{0})^{\frac{6}{S\beta^{2}}(S+2J)}\big{(}1+\|A\|_{0,\beta}\big{)}^{\frac{6}{S\beta^{2}}(S+2J)+\frac{2a}{\beta}}}{\|\mathcal{D}_{0}\|_{0}^{\frac{6}{S\beta^{2}}(S+2J)+\frac{2a}{\beta}}}\bigg{)}^{\gamma}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}\|\mathcal{D}_{0}\|_{0}^{1/2}\\ &\leq C\big{(}1+\|A\|_{0,\beta}\big{)}^{\big{(}\frac{6}{S\beta^{2}}(S+2J)+\frac{2a}{\beta}\big{)}\gamma}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}^{2}\|\mathcal{D}_{0}\|_{0}^{1/2-\big{(}\frac{6}{S\beta^{2}}(S+2J)+\frac{2a}{\beta}\big{)}\gamma}.\end{split}\] As in the previous step, taking \(\gamma\) small enough to have \(\big{(}\frac{6}{S\beta^{2}}(S+2J)+\frac{2a}{\beta}\big{)}\gamma\leq\frac{1}{8}\), the above is further estimated by: \[C\big{(}1+\|A\|_{0,\beta}\big{)}^{1/8}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}^{2}\|\mathcal{D}_{0}\|_{0}^{3/8},\] eventually leading to (4.22), as well as to the same convergence conclusions (1.9)\({}_{1}\), (1.9)\({}_{2}\).

**8. (Convergence in \(\mathcal{C}^{1,\alpha}\))** To conclude the proof, it remains to show the improved regularity of the limiting fields, namely: \(\tilde{v}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{k})\), \(\tilde{w}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{d})\). The interpolation inequality \(\|\cdot\|_{1,\alpha}\leq C\|\cdot\|_{1}^{1-\alpha}\|\cdot\|_{2}^{\alpha}\) and (4.7) imply that for all \(i\geq 0\) there holds on \(\bar{\omega}\): \[\begin{split}\|v_{i+2}-& v_{i+1}\|_{1,\alpha}+\|w_{i+2}-w_{i+1}\|_{1,\alpha}\\ &\leq Cb^{\alpha J+\gamma}l_{i+1}^{1-\alpha-\alpha J(a-1)-a\gamma}M_{i+1}\frac{b^{(S+2J)\gamma}}{l_{0}^{2a\gamma}}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}\\ &=C\frac{b^{\alpha J+(S+2J+1)\gamma}}{l_{0}^{2a\gamma}}\Big{(}B^{\frac{q^{i+1}-1}{q-1}}l_{0}^{q^{i+1}}\Big{)}^{1-\alpha-\alpha J(a-1)-a\gamma}M_{i+1}\big{(}1+\|\nabla v_{0}\|_{0}\big{)}.\end{split} \tag{4.24}\] We will argue that the sequences \(\{v_{i}\}_{i=0}^{\infty}\), \(\{w_{i}\}_{i=0}^{\infty}\) are Cauchy in \(\mathcal{C}^{1,\alpha}(\bar{\omega})\), by comparing the right hand side above with terms of a converging power series. Recall that the first case in step 4 is determined by \(\frac{\beta}{2}>\frac{S}{S+2J}\), which directly implies that: \[0<\alpha<\frac{S}{S+2J}.\] There, the quantities \(M_{i+1}\) are given according to (4.11).
We gather only those terms from the right hand side of (4.24) that involve the counter \(i\), so that \(\|v_{i+2}-v_{i+1}\|_{1,\alpha}+\|w_{i+2}-w_{i+1}\|_{1,\alpha}\) is bounded, up to a multiplier independent of \(i\), by: \[\begin{split}&\Big{(}B^{\frac{q^{i+1}-1}{q-1}}l_{0}^{q^{i+1}-1}\Big{)}^{1-\alpha-\alpha J(a-1)-a\gamma}\bigg{(}\frac{C(1+\|\nabla v_{0}\|_{0})^{2}}{b^{S-\gamma}B^{\frac{S(a-1)-a\gamma}{q-1}}}\bigg{)}^{\frac{i+1}{2}}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{(q^{i+1}-1)\frac{J(a-1)+3a\gamma}{q-1}}}\\ &\quad=\Big{(}B^{\frac{1}{q-1}}l_{0}\Big{)}^{(q^{i+1}-1)\big{(}1-\frac{J(a-1)+3a\gamma}{q-1}-\alpha-\alpha J(a-1)-a\gamma\big{)}}\bigg{(}\frac{C(1+\|\nabla v_{0}\|_{0})^{2}}{b^{S-\gamma}B^{\frac{S(a-1)-a\gamma}{q-1}}}\bigg{)}^{\frac{i+1}{2}}\\ &\leq\Bigg{(}CB^{\frac{S-\frac{a}{a-1}\gamma}{S+2J+\frac{5a}{a-1}\gamma}-\alpha-\alpha J(a-1)-a\gamma}\frac{1+\|\nabla v_{0}\|_{0}}{\big{(}b^{S-\gamma}B^{\frac{S(a-1)-a\gamma}{q-1}}\big{)}^{1/2}}\Bigg{)}^{i+1},\end{split} \tag{4.25}\] where we used that \(q^{i+1}-1\geq(q-1)(i+1)\). Observing now the calculation: \[b^{S-\gamma}B^{\frac{S(a-1)-a\gamma}{q-1}}=Cb^{S-(S-\frac{a}{a-1}\gamma)\frac{S+2J+(2S+4J+1)\gamma}{S+2J+\frac{5a\gamma}{a-1}}-\gamma}=Cb^{\mathcal{O}(\gamma)},\] and denoting \(\delta=\frac{S}{S+2J}-\alpha>0\), we hence see that for \((a-1),\gamma\ll 1\), the right hand side of (4.25) is further bounded by: \[\bigg{(}CB^{\delta/3}\frac{1+\|\nabla v_{0}\|_{0}}{\big{(}b^{S-\gamma}B^{\frac{S(a-1)-a\gamma}{q-1}}\big{)}^{1/2}}\bigg{)}^{i+1}\leq\bigg{(}\frac{C(1+\|\nabla v_{0}\|_{0})}{b^{\frac{\delta}{2}\big{(}\frac{S}{2}+J+(S+2J+\frac{1}{2})\gamma\big{)}}}\bigg{)}^{i+1}.\] Consequently, the asserted comparison with the converging power series is achieved provided that the ratio above is less than \(1\), which is implied by: \[b^{\delta S/4}\geq C(1+\|\nabla v_{0}\|_{0}),\] and which is consistent with the defining requirements for \(b,l_{0}\) in (4.21). The second case, in step 5, is determined by \(\frac{\beta}{2}\leq\frac{S}{S+2J}\), which implies: \[0<\alpha<\frac{\beta}{2}.\] There, the quantities \(M_{i+1}\) are given according to (4.12), so the terms in the right hand side of (4.24) that involve the counter \(i\) are: \[\Big{(}B^{\frac{q^{i+1}-1}{q-1}}l_{0}^{q^{i+1}}\Big{)}^{1-\alpha-\alpha J(a-1)-a\gamma}\Big{(}\frac{1+\|\nabla v_{0}\|_{0}}{l_{0}}\Big{)}^{i}\frac{1}{\big{(}B^{\frac{1}{q-1}}l_{0}\big{)}^{q^{i}(q-\frac{\beta}{2})}}\] \[\qquad=\Big{(}B^{\frac{1}{q-1}}l_{0}\Big{)}^{q^{i}\big{(}\frac{\beta}{2}-q\alpha-q\alpha J(a-1)-qa\gamma\big{)}}\Big{(}\frac{1+\|\nabla v_{0}\|_{0}}{l_{0}}\Big{)}^{i}.\] Using the bound \(q^{i}\geq(q-1)i+1\) valid for all \(i\geq 0\), and denoting \(\delta=\frac{\beta}{2}-\alpha>0\), we estimate the above displayed quantity, up to a multiplier independent of \(i\), by: \[\Bigg{(}\frac{B^{\frac{\beta}{2}-q\alpha-q\alpha J(a-1)-qa\gamma}(1+\|\nabla v_{0}\|_{0})}{l_{0}}\Bigg{)}^{i}\leq\bigg{(}\frac{B^{\delta/2}(1+\|\nabla v_{0}\|_{0})}{l_{0}}\bigg{)}^{i}\leq\bigg{(}\frac{1+\|\nabla v_{0}\|_{0}}{l_{0}b^{\frac{\delta}{2}\big{(}\frac{S}{2}+J+(S+2J+\frac{1}{2})\gamma\big{)}}}\bigg{)}^{i}.\] We see that the ratio of the related power series is less than \(1\) provided that: \[l_{0}b^{\delta S/4}>1+\|\nabla v_{0}\|_{0},\] which is consistent with the requirements in (4.23) in step 7. This ends the proof of the \(\mathcal{C}^{1,\alpha}\) convergences and completes the proof of Theorem 1.4.

## 5. A proof of Theorem 1.1
We first replace \(\omega\) by its smooth superset, on which \(v,w,A\) are defined and (1.4) holds. Without loss of generality, the same is true on its closed \(2l\)-neighbourhood \(\bar{\omega}+\bar{B}_{2l}(0)\), for some \(0<l<l_{0}\) that allows for the application of Corollary 4.1. Fix \(\epsilon\ll 1\), small as indicated below. First, we let \(v_{1}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{k})\), \(w_{1}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{2})\), \(A_{1}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{2\times 2}_{\text{sym}})\) with: \[\|v_{1}-v\|_{1}\leq\epsilon^{5},\quad\|w_{1}-w\|_{1}\leq\epsilon^{5},\quad\|A_{1}-A\|_{0}\leq\epsilon^{5},\] \[\mathcal{D}_{1}=A_{1}-\big{(}\frac{1}{2}(\nabla v_{1})^{T}\nabla v_{1}+\text{sym}\nabla w_{1}\big{)}>c_{1}\text{Id}_{2}\quad\text{ on }\bar{\omega}+\bar{B}_{l}(0)\ \text{ for some }c_{1}>0.\] The last property above follows from: \[\begin{split}\|\mathcal{D}_{1}-\mathcal{D}\|_{0}&\leq\|A_{1}-A\|_{0}+\|\nabla(w_{1}-w)\|_{0}+\frac{1}{2}\|\nabla(v_{1}-v)\|_{0}\big{(}\|\nabla v_{1}\|_{0}+\|\nabla v\|_{0}\big{)}\\ &\leq 3\epsilon^{5}(1+\|\nabla v\|_{0}).\end{split} \tag{5.1}\] Second, we use Lemma 2.5 to get \(v_{2}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{k})\), \(w_{2}\in\mathcal{C}^{\infty}(\bar{\omega}+\bar{B}_{l}(0),\mathbb{R}^{2})\) satisfying: \[\|v_{2}-v_{1}\|_{0}\leq\epsilon^{5},\quad\|w_{2}-w_{1}\|_{0}\leq\epsilon^{5},\] \[\|\nabla(v_{2}-v_{1})\|_{0}\leq C\|\mathcal{D}_{1}\|_{0}^{1/2}\leq C\big{(}\|\mathcal{D}\|_{0}^{1/2}+\epsilon^{5/2}+\|\nabla v\|_{0}^{1/2}\big{)},\] \[\mathcal{D}_{2}=A_{1}-\big{(}\frac{1}{2}(\nabla v_{2})^{T}\nabla v_{2}+\text{sym}\nabla w_{2}\big{)}\quad\text{ satisfies }\ \|\mathcal{D}_{2}\|_{0}\leq\epsilon^{5},\] where we applied (5.1) in the gradient increment bound of \(v\). If the deficit \(\mathcal{D}_{3}\), defined on \(\bar{\omega}+\bar{B}_{l}(0)\) by: \[\mathcal{D}_{3}=A-\big{(}\frac{1}{2}(\nabla v_{2})^{T}\nabla v_{2}+\text{sym}\nabla w_{2}\big{)},\] is identically zero, then we may simply take \(\tilde{v}=v_{2}\) and \(\tilde{w}=w_{2}\) to satisfy the claim of the Theorem. Otherwise, we apply Corollary 4.1 to \(v_{2}\), \(w_{2}\) and \(A\), which is possible since: \[0<\|\mathcal{D}_{3}\|_{0}\leq\|A-A_{1}\|_{0}+\|\mathcal{D}_{2}\|_{0}\leq 2\epsilon^{5}\leq 1,\] and consequently obtain \(\tilde{v}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{k})\), \(\tilde{w}\in\mathcal{C}^{1,\alpha}(\bar{\omega},\mathbb{R}^{2})\) with the properties: \[\|\tilde{v}-v_{2}\|_{0}\leq C(1+\|\nabla v_{2}\|_{0})^{2}\|\mathcal{D}_{3}\|_{0}^{1/4}\leq C\big{(}1+\|\nabla v_{0}\|_{0}+\|\mathcal{D}\|_{0}\big{)}\epsilon^{5/4},\] \[\|\tilde{w}-w_{2}\|_{0}\leq C(1+\|\nabla v_{2}\|_{0})^{3}\|\mathcal{D}_{3}\|_{0}^{1/4}\leq C\big{(}1+\|\nabla v_{0}\|_{0}^{3/2}+\|\mathcal{D}\|_{0}^{3/2}\big{)}\epsilon^{5/4},\] \[A-\big{(}\frac{1}{2}(\nabla\tilde{v})^{T}\nabla\tilde{v}+\mathrm{sym}\nabla\tilde{w}\big{)}=0\quad\text{ in }\;\bar{\omega}.\] It now suffices to take \(\epsilon\) sufficiently small (as a function of \(\|\mathcal{D}\|_{0}\), \(\|\nabla v\|_{0}\) and of the constants \(C\), which depend only on \(\omega,k,A\) and \(\alpha\)) so that the right hand sides of both bounds above are replaced by \(\epsilon^{6/5}\).
Thus: \[\|\tilde{v}-v\|_{0}\leq\|\tilde{v}-v_{2}\|_{0}+\|v_{2}-v_{1}\|_{0}+\|v_{1}-v\|_{0}\leq 3\epsilon^{6/5}\leq\epsilon,\] \[\|\tilde{w}-w\|_{0}\leq\|\tilde{w}-w_{2}\|_{0}+\|w_{2}-w_{1}\|_{0}+\|w_{1}-w\|_{0}\leq 3\epsilon^{6/5}\leq\epsilon,\] for \(\epsilon\ll 1\). The proof is done.

## 6. Application: energy scaling bound for thin films

In this section, we present an application of Theorem 1.1 towards obtaining an energy scaling bound for a multidimensional non-Euclidean elasticity functional on thin films with two-dimensional midplate. More precisely, given \(\omega\subset\mathbb{R}^{2}\) we consider the family of domains: \[\Omega^{h}=\big{\{}(x,z);\ x\in\omega,\ z\in B(0,h)\subset\mathbb{R}^{k}\big{\}}\subset\mathbb{R}^{2+k},\] parametrised by \(h\ll 1\), and the family of Riemannian metrics on \(\Omega^{h}\) of the form: \[g^{h}=\mathrm{Id}_{2+k}+2h^{\gamma/2}S,\quad\text{ where }\gamma>0\text{ and }S\in\mathcal{C}^{\infty}(\bar{\omega},\mathbb{R}^{(2+k)\times(2+k)}_{\rm sym}).\] We then pose the problem of minimizing the following functionals, as \(h\to 0\): \[\mathcal{E}^{h}(u)=\fint_{\Omega^{h}}W\big{(}(\nabla u)(g^{h})^{-1/2}\big{)}\;\mathrm{d}(x,z)\qquad\text{ for all }\;u\in H^{1}(\Omega^{h},\mathbb{R}^{2+k}).\] The density function \(W:\mathbb{R}^{(2+k)\times(2+k)}\to[0,\infty]\) is assumed to be \(\mathcal{C}^{2}\)-regular in the vicinity of the special orthogonal group of rotations \(\mathrm{SO}(2+k)\), to be equal to \(0\) at \(\mathrm{Id}_{2+k}\), and to be frame-invariant in the sense that \(W(RF)=W(F)\) for all \(R\in\mathrm{SO}(2+k)\). The value \(\mathcal{E}^{h}(u)\) may be interpreted as the averaged pointwise deficit of \(u\) from being an orientation-preserving isometric immersion of \(g^{h}\) on \(\Omega^{h}\). When \(k=1\), \(\mathcal{E}^{h}(u)\) models the elastic energy (per unit thickness) of the deformation \(u\) of a thin three-dimensional film with midplate \(\omega\) and thickness \(2h\), prestrained by \(g^{h}\). Questions on the asymptotics of minimizing configurations to \(\mathcal{E}^{h}\) as \(h\to 0\), as a function of the scaling exponent \(\beta\) in \(\inf\mathcal{E}^{h}\sim Ch^{\beta}\), have received a lot of attention, particularly via techniques of dimension reduction and \(\Gamma\)-convergence, starting with the seminal paper [5]; see also [8] and references therein. Extending the analysis for \(d=2\), \(k=1\) in [6, Theorem 1.4] and following verbatim the general proof of [9, Theorem 7.1] (valid with arbitrary \(d,k\geq 1\)), we get:

**Theorem 6.1**.: _Assume that \(\omega\subset\mathbb{R}^{2}\) is an open, bounded domain and let \(k\geq 1\). Denote \(s=\frac{4}{k}\). Then, there holds:_

* _if_ \(\gamma\geq 4\)_, then_ \(\inf\mathcal{E}^{h}\leq Ch^{\beta}\)_, for every_ \(\beta<2+\frac{\gamma}{2}\)_,_
* _if_ \(\gamma\in\big{[}\frac{4k}{3k+4},4\big{)}\)_, then_ \(\inf\mathcal{E}^{h}\leq Ch^{\beta}\) _for every_ \(\beta<\frac{4k+\gamma(k+4)}{2k+4}\)_,_
* _if_ \(\gamma\in\big{(}0,\frac{4k}{3k+4}\big{)}\)_, then_ \(\inf\mathcal{E}^{h}\leq Ch^{2\gamma}\)
2303.09433
Classification of semi-weight representations of reduced stated skein algebras
We classify the finite dimensional semi-weight representations of the reduced stated skein algebras at odd roots of unity of connected marked surfaces which either have a boundary component with at least two boundary edges or which do not have any unmarked boundary component. We deduce computations of the PI-degrees and Azumaya loci of unreduced stated skein algebras of essential surfaces having at most one boundary arc per boundary component and of the unrestricted quantum moduli algebras of lattice gauge field theory.
H. Karuo, J. Korinman
2023-03-16T16:09:00Z
http://arxiv.org/abs/2303.09433v3
# Classification of semi-weight representations of reduced stated skein algebras

###### Abstract

We classify the finite dimensional semi-weight representations of the reduced stated skein algebras at odd roots of unity of connected essential marked surfaces which either have a boundary component with at least two boundary arcs or which do not have any unmarked boundary component.

Key words and phrases: Stated skein algebras, quantum cluster algebras, quantum groups, TQFTs. 2020 Mathematics Subject Classification: 57B56, 57M25.

## 1. Introduction

### Background on reduced stated skein algebras and their representations

Let \(\Sigma\) be an oriented compact surface and \(\mathcal{A}\) a finite set of disjoint arcs embedded in \(\partial\Sigma\). The pair \(\mathbf{\Sigma}:=(\Sigma,\mathcal{A})\) will be called a _marked surface_. For \(A\in\mathbb{C}^{*}\) a root of unity of odd order \(N\), the _reduced stated skein algebra_ \(\overline{\mathcal{S}}_{A}(\mathbf{\Sigma})\) was introduced in [10] as the quotient of the stated skein algebra by the kernel of the Bonahon-Wong quantum trace. In particular, when \(\mathcal{A}=\emptyset\), \(\overline{\mathcal{S}}_{A}(\mathbf{\Sigma})\) is the usual Kauffman-bracket skein algebra. Its representations appear in quantum hyperbolic geometry and are conjectured to form the building blocks of some \(\mathrm{SL}_{2}\)-HQFT (see [1, 2, 3]). Let \(Z_{\mathbf{\Sigma}}\) denote the center of \(\overline{\mathcal{S}}_{A}(\mathbf{\Sigma})\). By [11] (after the original work in [1]), there exists a _Frobenius morphism_ \[Fr_{\mathbf{\Sigma}}:\overline{\mathcal{S}}_{+1}(\mathbf{\Sigma})\hookrightarrow Z_{\mathbf{\Sigma}}\] which embeds the commutative algebra \(\overline{\mathcal{S}}_{+1}(\mathbf{\Sigma})\) at \(A=+1\) into \(Z_{\mathbf{\Sigma}}\). Let \(Z_{\mathbf{\Sigma}}^{0}\subset Z_{\mathbf{\Sigma}}\) denote the image of the Frobenius morphism. A representation \(r:\overline{\mathcal{S}}_{A}(\mathbf{\Sigma})\to\mathrm{End}(V)\) is a _weight representation_ if \(V\) is semi-simple as a module over \(Z_{\mathbf{\Sigma}}\) and is called a _semi-weight representation_ if \(V\) is semi-simple as a \(Z_{\mathbf{\Sigma}}^{0}\) module. The purpose of this paper is to make progress towards the following problem.

**Problem 1.1**.: Classify all finite dimensional weight and semi-weight representations of \(\overline{\mathcal{S}}_{A}(\mathbf{\Sigma})\).

We will solve Problem 1.1 in the case where \(\mathbf{\Sigma}=(\Sigma,\mathcal{A})\) is a connected marked surface which either has a boundary component with at least two boundary arcs or which has at least one boundary arc and no unmarked boundary component. The center \(Z_{\mathbf{\Sigma}}\) was computed in [11] and is described as follows. Partition the set of boundary components of \(\Sigma\) into two subsets \(\pi_{0}(\partial\Sigma)=\hat{\mathcal{P}}\bigsqcup\Gamma^{\partial}\) where \(\hat{\mathcal{P}}\) is the subset of boundary components which do not intersect \(\mathcal{A}\) and \(\Gamma^{\partial}\) the boundary components which contain some boundary arcs. For each \(p\in\hat{\mathcal{P}}\), there is a central element \(\gamma_{p}\in Z_{\mathbf{\Sigma}}\) and for each \(\partial\in\Gamma^{\partial}\) there is an invertible central element \(\alpha_{\partial}\in Z_{\mathbf{\Sigma}}\) such that \(Z_{\mathbf{\Sigma}}\) is generated by the image of the Frobenius morphism together with the elements \(\gamma_{p}\) and \(\alpha_{\partial}^{\pm 1}\).
Let \(\widehat{X}(\mathbf{\Sigma}):=\mathrm{MaxSpec}(Z_{\mathbf{\Sigma}})\), \(X(\mathbf{\Sigma}):=\mathrm{MaxSpec}(Z_{\mathbf{\Sigma}}^{0})\) and \(p:\widehat{X}(\mathbf{\Sigma})\to X(\mathbf{\Sigma})\) the map defined by the inclusion \(Z_{\mathbf{\Sigma}}^{0}\subset Z_{\mathbf{\Sigma}}\). For a point \(\widehat{x}\in\widehat{X}(\mathbf{\Sigma})\) (i.e. a maximal ideal in \(Z_{\mathbf{\Sigma}}\)), we denote by \(\chi_{\widehat{x}}:Z_{\mathbf{\Sigma}}\to\mathbb{C}\) the corresponding character with kernel \(\widehat{x}\) and use similar notations for \(X(\mathbf{\Sigma})\). The variety
2307.05926
Filling time-series gaps using image techniques: Multidimensional context autoencoder approach for building energy data imputation
Building energy prediction and management has become increasingly important in recent decades, driven by the growth of Internet of Things (IoT) devices and the availability of more energy data. However, energy data is often collected from multiple sources and can be incomplete or inconsistent, which can hinder accurate predictions and management of energy systems and limit the usefulness of the data for decision-making and research. To address this issue, past studies have focused on imputing missing gaps in energy data, including random and continuous gaps. One of the main challenges in this area is the lack of validation on a benchmark dataset with various building and meter types, making it difficult to accurately evaluate the performance of different imputation methods. Another challenge is the lack of application of state-of-the-art imputation methods for missing gaps in energy data. Contemporary image-inpainting methods, such as Partial Convolution (PConv), have been widely used in the computer vision domain and have demonstrated their effectiveness in dealing with complex missing patterns. To study whether energy data imputation can benefit from the image-based deep learning method, this study compared PConv, Convolutional neural networks (CNNs), and weekly persistence method using one of the biggest publicly available whole building energy datasets, consisting of 1479 power meters worldwide, as the benchmark. The results show that, compared to the CNN with the raw time series (1D-CNN) and the weekly persistence method, neural network models with reshaped energy data with two dimensions reduced the Mean Squared Error (MSE) by 10% to 30%. The advanced deep learning method, Partial convolution (PConv), has further reduced the MSE by 20-30% than 2D-CNN and stands out among all models.
Chun Fu, Matias Quintana, Zoltan Nagy, Clayton Miller
2023-07-12T05:46:37Z
http://arxiv.org/abs/2307.05926v2
# Filling time-series gaps using image techniques: Multidimensional context autoencoder approach for building energy data imputation

###### Abstract

Building energy prediction and management has become increasingly important in recent decades, driven by the growth of Internet of Things (IoT) devices and the availability of more energy data. However, energy data is often collected from multiple sources and can be incomplete or inconsistent, which can hinder accurate predictions and management of energy systems and limit the usefulness of the data for decision-making and research. To address this issue, past studies have focused on imputing missing gaps in energy data, including random and continuous gaps. One of the main challenges in this area is the lack of validation on a benchmark dataset with various building and meter types, making it difficult to accurately evaluate the performance of different imputation methods. Another challenge is the lack of application of state-of-the-art imputation methods for missing gaps in energy data. Contemporary image-inpainting methods, such as Partial Convolution (PConv), have been widely used in the computer vision domain and have demonstrated their effectiveness in dealing with complex missing patterns. Given that energy data often exhibits regular daily or weekly patterns, such methods could be leveraged to exploit the regularity of the data to learn underlying patterns and generate more accurate predictions for missing values. To study whether energy data imputation can benefit from image-based deep learning methods, this study compared PConv, convolutional neural networks (CNNs), and the weekly persistence method using one of the biggest publicly available whole building energy datasets, consisting of 1479 power meters worldwide, as the benchmark. The results show that, compared to the CNN with the raw time series (1D-CNN) and the weekly persistence method, neural network models using energy data reshaped into two dimensions reduced the Mean Squared Error (MSE) by 10% to 30%. The advanced deep learning method, Partial Convolution (PConv), further reduced the MSE by 20-30% compared to 2D-CNN and stands out among all models. Based on these results, this study demonstrates the potential applicability of time-series imaging in imputing energy data. The proposed imputation model has also been tested on a benchmark dataset with a range of meter types and sources, demonstrating its generalizability to additional accessible energy datasets. This offers a scalable and effective solution for filling in missing energy data in both academic and industrial contexts.

keywords: Missing data, Data reconstruction, Data preprocessing, Deep learning, Computer vision

+ Footnote †: journal: Applied Thermal Engineering

## 1 Introduction

Machine learning (ML) has recently demonstrated great promise for energy forecasting by providing accurate and reliable predictions of future energy demand and supply. As detailed by Kazmi et al. [1], the field is evolving, driven by increased computing power, advancements in data handling, and innovative algorithmic methods, which together have significant potential to transform energy demand forecasting at both building and urban scales. ML algorithms have shown remarkable potential in predicting energy consumption in buildings [2; 3; 4], forecasting renewable energy output [5; 6; 7], and estimating grid electricity demand [8; 9; 10].
By outperforming traditional statistical methods, ML models have demonstrated significant improvements in terms of accuracy and reliability [11; 12]. Moreover, ML has also been applied in building operations and energy management, such as cooling load forecasting-based optimization for chiller control [13] and ML-based methods for optimally managing energy sources with power grids [14]. However, despite these advancements brought by ML, practical applications often face challenges due to the unstable quality of building energy consumption data, which can degrade forecasting performance. Among data quality issues, missing data is a primary one that hinders the efficacy of ML models. Missing data can originate from a spectrum of causes, ranging from terminal equipment and operator error to sensor malfunctions [15; 16]. Missing data can have cascading consequences, negatively impacting stability, performance, and failure prevention [17]. In certain commercial buildings, it can result in energy waste ranging from 15 to 30 percent [18; 19]. Given the criticality of the issue, our study seeks to utilize state-of-the-art deep learning methods to bridge the missing data gaps, thereby contributing a new perspective to the building energy data imputation field.

### Data mining and modeling techniques for addressing data quality issues

The presence of abnormal data in energy forecasting is a significant issue, as emphasized in the ASHRAE Great Energy Predictor III (GEPIII) competition on Kaggle [20]. Additionally, abnormal energy-consumption behavior may also limit the predictability of ML-based energy forecasting [21]. To address these issues, initiatives such as the Large-scale Energy Anomaly Detection (LEAD) competition provide a benchmark dataset for energy anomaly detection in commercial buildings [22], and a tree-based ensemble model was proposed for scalable anomaly detection [23]. Notable advancements in research include the development of a model for multivariate time series anomaly detection in energy consumption based on a graph attention mechanism [24] and an enhanced fault detection method for building energy systems in the presence of missing data using expectation-maximization and Bayesian networks [25]. Concurrently, unsupervised techniques have also been extensively utilized to discern patterns in energy usage through clustering and to identify anomalies in daily profiles of energy data [26; 27; 28].

The applications of ML in the building life cycle are primarily focused on the operation and maintenance phase [29]. In particular, some use cases require more data, in both quantity and diversity, and have thus opted for data-driven methods to meet this need. As for data quantity, generative models, specifically Generative Adversarial Networks (GANs), have been used for scenarios without sufficient historical data, requiring synthetic data. These GAN-based models were first proposed as a way to learn the distribution of the input data and generate synthetic data that resembles the input data [30]. In the built environment, these models have been used in power demand prediction [31], building load profiling [32], fault detection and diagnosis (FDD) [33], and indoor thermal comfort [34] to generate synthetic complementary data for model training. Another approach is to augment energy time-series data using masking noise injection for more efficient imputation of missing values [35].
Several data augmentation methods that manipulate time series in terms of magnitude and temporal order have also been discussed [36; 37]. These papers emphasize the importance of data augmentation in time series analysis and energy modeling, especially in the absence of sufficient data. Other approaches circumvent the lack of available and diverse data by incorporating external data streams to aid in the data mining task. Fu et al. [38] used Google searches, in the form of Google Trends, as additional input data for energy consumption forecasting. Moreover, Sun et al. [39] used building facade images to complement energy consumption and building metadata for energy efficiency rating prediction. These models showcase the incorporation of external data streams of different sensing modalities and their potential benefits. However, while these approaches increase the performance of the downstream task, they do not address the data availability issues. Incorporating new data streams may potentially introduce additional availability issues.

### Imputation models in energy and building fields

To enhance data availability, a plethora of methods have been proposed for data imputation, ranging from traditional statistical methods and regression models to deep learning. The types of missing data can be categorized as short-term missing (or random missing) and long-term missing (or continuous missing). Short-term missing refers to missing data points that cover a relatively small amount of time (e.g., hourly or daily gaps), while long-term missing refers to data gaps that cover a longer period of time (e.g., weekly or monthly gaps in time series). Traditional data reconstruction methods can be traced back to linear or polynomial regression methods, which have proven effective and efficient in data imputation, especially for short-term missing data [40; 41]. Other supervised-learning regression methods based on lag features have often been used to impute missing energy data gaps. For example, linear regression, weighted K-nearest neighbors (kNN), support vector machines (SVM), and mean imputation were implemented for random missing values with missing rates varying from 5% to 20%, demonstrating superior performance to zero or mean imputation [40]. Another comparative study by Wang et al. stated that both statistical methods and machine learning regression models exhibit good performance for imputing short-term missing gaps within a one-day period in time series data [41]. However, these methods have demonstrated limited performance in imputing long-term missing data, which is commonly encountered in real-world scenarios. This limitation can be attributed to the reliance of statistical and regression models on nearby values, which are unavailable for long-term missing data.

To address the long-term missing data mentioned above, neural networks have been utilized for data imputation due to their advantage of learning context in unstructured datasets. For example, a method using an improved Convolutional Neural Network (CNN) that considers the correlation of power data across the dimensions of time and space was proposed for filling missing data [42]. Recurrent Neural Networks (RNNs) and the Bi-directional Long Short-Term Memory model (BI-LSTM) were also proposed for imputing missing energy data, leveraging its temporal structure [43; 44].
More recently, autoencoders have received growing attention among the various neural network frameworks because of their self-reconstruction and feature extraction capabilities. For example, a study applied an autoencoder to reconstruct missing gaps in indoor environmental quality data, showcasing improved performance compared to classic polynomial interpolations [45]. In another study, a 2D Convolutional Neural Network (2D-CNN) autoencoder was employed to address missing energy data considering the weekly periodicity of the data. The energy data was reshaped into a two-dimensional format, and the proposed autoencoder effectively imputed both random and continuous missing data [46]. The studies above demonstrate how autoencoders can be applied to data imputation in the built environment for both random and continuous missing. However, most of the studies have been conducted in the context of a limited number of buildings or a single power grid, leaving their applicability to a broader set of scenarios (e.g., unseen buildings or different sites) untested. Moreover, these neural networks belong to earlier or outdated frameworks, and some modern deep learning techniques that could yield even more accurate predictions have yet to be thoroughly investigated for data imputation in this field. In light of these observations, the development of modern deep learning frameworks considering generalizability in application holds great potential for advancing data imputation methods in the built environment.

### Image-based data imputation

Missing data is a ubiquitous problem in various fields, such as audio signal repair [47; 48] and image inpainting in the computer vision field [49; 50]. Modern frameworks developed within these fields to address missing data issues hold the potential to revolutionize the way missing data is handled in the field of building energy. One such framework is partial convolution, which has proven effective in image inpainting tasks involving regular (e.g., circles and rectangles) and irregular holes in images [51]. Another promising framework is the diffusion model, which employs iterative diffusion processes to fill in missing data while preserving the smoothness and continuity of the original signal [52]. However, these modern image-based techniques have yet to be widely applied to building energy data. There are a few reasons for this. The first reason is that energy data has traditionally been treated and imputed as time series. Consequently, past research has primarily relied on statistical methods for time series or RNNs to handle missing gaps. This means that the structured context and two-dimensional representation of energy data that inpainting methods can leverage have yet to be fully utilized. Another factor may be the lack of sufficient energy data for training. Past studies often utilized a limited number of meters or grids for developing and evaluating the imputation model. In contrast, fields like image or audio processing typically have access to larger datasets, allowing for more extensive training and evaluation to verify the generalizability and effectiveness of the proposed methods. For instance, benchmark image datasets often consist of thousands to millions of image samples [53], which far exceed the scale of typical energy datasets that may have, at most, thousands of energy meters.
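To make the partial-convolution operation referenced above concrete, the following is a minimal sketch of the masked, renormalized convolution from the image-inpainting literature [51]. This is a simplified single-channel-mask variant written for illustration only; it is not the exact layer configuration evaluated in this study.

```python
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias=None, padding=1):
    """One partial-convolution step: convolve only over valid pixels, rescale
    each window by its fraction of valid entries, and update the mask.
    x: (B, C, H, W) input with holes zeroed; mask: (B, 1, H, W), 1 = valid, 0 = hole."""
    kh, kw = weight.shape[2], weight.shape[3]
    ones = torch.ones(1, 1, kh, kw, dtype=mask.dtype, device=mask.device)
    valid = F.conv2d(mask, ones, padding=padding)        # count of valid pixels per window
    out = F.conv2d(x * mask, weight, padding=padding)    # convolution restricted to valid data
    out = out * (kh * kw) / valid.clamp(min=1.0)         # renormalize by window coverage
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    out = torch.where(valid > 0, out, torch.zeros_like(out))  # fully-missing windows stay zero
    new_mask = (valid > 0).to(mask.dtype)                # a window with any valid pixel becomes valid
    return out, new_mask

# Example on a 2D "energy image" with a rectangular hole (sizes are illustrative)
x = torch.rand(1, 1, 104, 168)
m = torch.ones(1, 1, 104, 168)
m[..., 40:60, 50:90] = 0
w = torch.randn(8, 1, 3, 3)
y, m2 = partial_conv2d(x * m, m, w)
```

Stacking such layers progressively shrinks the hole in the mask, which is what makes the operation well suited to large continuous gaps.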
In response to these challenges, this research aims to transform conventional one-dimensional time-series representation of meter data into two-dimensional images (i.e., time of week by week). We propose to implement an advanced image inpainting framework to impute missing gaps with assistance from the structured context inherent in energy data. In addition, this research will harness the largest publicly available building energy dataset, encompassing more than one thousand power meters, for comprehensive model training and validation. ### Research objectives and novelty To fill the aforementioned gaps, this study aims to investigate how the context of energy data affects data imputation performance. It achieves this through the use of a global dataset with diverse meters while setting different rates of both short-term and long-term missing data. Leveraging deep learning techniques from the field of computer vision, this research reshapes time-series energy data to harness the cyclical nature of such data and applies state-of-the-art image-inpainting models for energy data imputation (as illustrated in Figure 1). The research objectives of this study are to: 1. Present a novel viewpoint in imputing energy data by converting time series into images and investigating image-based algorithms. 2. Explore the use of modern deep learning frameworks, such as partial convolutions (PConv) and 2D-CNN, for imputing missing data in energy time series. 3. Evaluate these frameworks' performance in imputing both random and continuous missing data with varying missing rates and test their generalizability to different sites. 4. Benchmark the performance of different image-based frameworks (i.e., 2D-CNN and PConv) against baselines (i.e., 1D-CNN and weekly persistence method). This study's novelty lies in applying image-inpainting models, which have been widely used in the field of computer vision, to the imputation of energy data. This presents a new and innovative approach to the challenge of missing data in energy time series and offers the potential to further improve the accuracy of data imputation through the contextual learning of the underlying energy data structure. Consequently, it could enhance the data quality used for energy-related downstream tasks such as energy forecasting and building energy modeling. Furthermore, this study verifies the models' generalizability across various buildings and meter types and their effectiveness in addressing long-term missing data. By demonstrating the applicability and effectiveness of these models in the field of building energy, this study opens up new avenues for future research at the intersection of image-based techniques in the built environment. ## 2 Methodology The proposed methodology for this study consists of three phases, as illustrated in Figure 2. In the first phase, training models for imputation will be developed using energy data with synthetic missing values as input and the corresponding raw data as output. The objective is to train a model that can accurately impute the synthetic missing values and produce results that closely resemble the raw data. In the second phase, the performance of these trained models will be evaluated and quantified on a test dataset with varying settings of missing and various meter types. Finally, in the third phase, a comprehensive comparison and discussion of the results obtained from all models, including the baseline method, will be conducted to draw conclusions. 
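Before turning to the dataset, the following is a minimal sketch (ours, not taken from the paper) of the time-series-to-image reshaping described above, assuming hourly resolution and a series aligned to the start of a week; the exact orientation used in this study may differ.

```python
import numpy as np

def series_to_weekly_image(values, hours_per_week=168):
    """Reshape an hourly series into a 2D array: rows are weeks, columns are hour-of-week."""
    values = np.asarray(values, dtype=float)
    n_weeks = len(values) // hours_per_week
    return values[: n_weeks * hours_per_week].reshape(n_weeks, hours_per_week)

def weekly_image_to_series(image):
    """Inverse operation: flatten the weekly image back into a 1D series."""
    return np.asarray(image).reshape(-1)

# A two-year hourly series (as in BDG2) becomes roughly a 104 x 168 "image"
image = series_to_weekly_image(np.random.rand(2 * 8760))
print(image.shape)  # (104, 168)
```

In this representation, daily and weekly periodicity appear as vertical and horizontal structure, which is precisely the context that image-inpainting models can exploit.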
### Dataset: Building Data Genome 2.0 (BDG2) This study used the time-series hourly data from power meters in the Building Data Genome 2 (BDG2) project dataset for modeling. These data were also employed in the Great Energy Predictor III competition hosted on the Kaggle platform [54]. The Building Data Genome 2.0 (BDG2) is an open dataset containing hourly meter readings and metadata of 3,053 power meters over two years. Each building within the dataset is associated with metadata such as floor area, weather, and primary use type. Due to the variability of meter characteristics observed across thousands of meters worldwide, this dataset is an ideal benchmark dataset for comparing different machine learning algorithms while testing generalizability. Table 1 outlines the metadata variables available within the dataset. It's worth noting that only the historical energy consumption data was used, and other potential data sources, such as weather and calendar data, were not incorporated in the imputation process. That's because the focus of this research was to impute missing data in energy time series by leveraging nearby values of the data. ### Data preprocessing The energy dataset will be subjected to several preprocessing steps to prepare it for model training. These steps encompass data cleaning to remove inconsistencies or errors, data normalization to standardize the data across a common range, data splitting into training, validation, and test sets, and data augmentation, such as flipping and shifting the time series to expand the dataset. By following these commonly used preprocessing steps in the domain of building energy data, the resulting models are equipped to deliver accurate and reliable predictions. Figure 1: The one-dimensional time series of energy data can be reshaped into a two-dimensional heatmap image for testing different imputation methods. Comparisons are made between the one- and two-dimensional imputation, including baseline model, 1D- and 2D-CNN, and advanced deep learning techniques used for image inpainting. #### 2.2.1 Data cleaning Since this dataset is sourced from power meters installed in real-world buildings, it is expected to contain missing data resulting from system errors or equipment failures. Cleaning the dataset by removing anomalies or errors is necessary to ensure the quality of the data used for training the imputation model. To accomplish this, we utilized the data cleaning results provided by the winning team in the GEPIII Kaggle competition, which has also been used as a benchmark for detecting anomalies in prior research [55]. This process involves identifying and removing long streaks of constant values, large positive/negative spikes, and other anomalies determined through visual inspection. After cleaning the dataset, we chose the power meters with low missing rates (less than 5%) for our study, resulting in a total of 1479 meters (approximately 50% of the entire BDG2 dataset). #### 2.2.2 Data normalization To enhance the training process, the numerical values of the power meters were normalized to ensure consistent data distributions. Normalization is particularly essential when preparing the data for neural networks, as these models rely on the gradient descent algorithm to optimize their parameters. This algorithm can be affected by the scale of the data; thus, normalization helps achieve convergence. In this study, we applied min-max normalization, which scales the data to a range of \([0,1]\). 
This is achieved by subtracting the minimum value in each series from the data and dividing by the difference between the maximum and minimum values (as shown in Equation 1). \[X_{norm}=\frac{X-X_{min}}{X_{max}-X_{min}} \tag{1}\] #### 2.2.3 Data splitting This study employed a 5-fold cross-validation technique to divide the 1479 power meter data into training, validation, and testing sets. Each round of modeling utilized 60% of the data for training, 20% for validation to improve training efficiency, and 20% for testing to evaluate model performance. To ensure the model's generalizability, the data was divided by site ID to prevent the model from imputing missing data based on similar meters within the same site. This means the test dataset used to evaluate the model's performance came from sites unseen by the trained model. This allows us to check if the model is able to accurately impute data for meters from new sources, which is crucial in real-world scenarios. #### 2.2.4 Data augmentation Data augmentation was employed in this study due to the small dataset size (1479 time series or images) available for developing machine learning models for meter-wise imputation, particularly for image-based Convolutional Neural Networks (CNNs) and modern image inpainting methods. To address this issue, the study utilized two specific data augmentation techniques for time-series data: shifting and flipping, as illustrated in Figure 3. Shifting moves the time series forward by a certain number of steps. This can be beneficial for tasks where the sequence of data points is critical, as it enables the generation of new data points similar to the original data but with a unique temporal arrangement. Flipping, on the other hand, comes in two forms: horizontal and vertical. Horizontal flipping, which reverses the time-series sequence, is inappropriate for our study as the temporal order is of utmost importance. Hence, horizontal flipping has been excluded from our data augmentation strategies. Vertical flipping does not disrupt the chronological order; rather, it inverts the values, akin to reflecting the time series along the horizontal axis. This strategy can be pertinent in the context of building energy data. For example, chilled water and hot water meter readings are often inversely correlated due to their opposing operational nature; the flipped representation of such time series may provide additional context for model training. Therefore, our study used vertical flipping as a data augmentation method. Figure 3: Data augmentation: (a) Shifting: shifting time series by a certain number of timesteps (b) Flipping: flipping time series in the vertical direction.
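As a concrete illustration of these preprocessing steps, here is a minimal sketch (placeholder arrays and hypothetical site labels; not the study's code) of the min-max normalization in Equation 1, the site-wise 5-fold split, and the shift/flip augmentation:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

def min_max_normalize(series):
    # Equation 1: scale a meter's readings to the range [0, 1].
    return (series - series.min()) / (series.max() - series.min())

def augment(series, shift_steps=168):
    # Shifting (here by one week) and vertical flipping of a normalized series;
    # together with the original this expands the dataset fourfold.
    shifted = np.roll(series, shift_steps)
    return [series, shifted, 1.0 - series, 1.0 - shifted]

# Placeholder data: 1479 meters with one year of hourly readings each,
# plus hypothetical site labels used to group the folds.
meters = np.random.rand(1479, 8736)
site_ids = np.random.randint(0, 19, size=1479)

# 5-fold cross-validation grouped by site ID, so test meters come from unseen
# sites; each training fold can be split further into training and validation.
for train_idx, test_idx in GroupKFold(n_splits=5).split(meters, groups=site_ids):
    train_set = [min_max_normalize(m) for m in meters[train_idx]]
```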
Despite the existence of other data augmentation methods, such as those outlined in relevant work [36; 37], this study specifically opted for flipping and shifting as they effectively preserve the inherent structure and temporal sequence of the original data. By applying these techniques, the initial dataset was expanded fourfold, to over 5,000 time series. ### Missing masks Missing masks are synthetic filters used to train imputation models to predict missing gaps in energy data. There are two common types of missing gaps in energy data: random missing, resulting from occasional events, and continuous missing, caused by system shutdown or sensor failure (as shown in Figure 4). Both types of missing data are included in the imputation tasks of this study and are given a missing rate range of 5% to 50%. Figure 4: Missing type: (a) Random missing: randomly selected days of missing data (b) Continuous missing: consecutive days of missing data #### 2.3.1 Random missing Random missing (also known as short-term missing), which refers to gaps in time series data that are randomly scattered over time, has garnered attention in previous studies. The granularity of the missing data, such as hourly or daily intervals, can vary to assess a model's ability to handle different time scales. In this study, daily granularity is utilized for the missing gaps. Prior research has primarily focused on imputing hourly intervals, and their results indicate that this task is relatively simple [44; 45]. Hourly missing data can be accurately estimated from nearby values, and even classic statistical methods can achieve satisfactory results [56; 41]. In contrast, missing entire days at random poses a more significant challenge for an imputation model. To evaluate the performance of an imputation model under various missing levels, a missing rate from 5% to 50% is employed. #### 2.3.2 Continuous missing Continuous or consecutive missing (also known as long-term missing) refers to gaps in time series data that occur over a prolonged period, such as missing data that lasts for a week or a month. Imputing this type of missing data can be more challenging as there are fewer nearby data points for reference. Nonetheless, deep learning architectures, particularly those used in computer vision, have successfully addressed this issue by utilizing the data structure to impute the missing values. Similar to the setting for random missing data, a range of 5% to 50% continuous missing rate is chosen to evaluate the model's performance. ### Modeling This study explored several modeling approaches for imputing missing energy data, including a weekly persistence model as a naive baseline, CNN-based models, and PConv. Table 2 provides a summary of the models used and their respective settings. #### 2.4.1 Weekly persistence model The weekly persistence model, a naive method, was implemented in this study as a baseline for comparison with other imputation approaches [57; 58]. Unlike sophisticated algorithms like deep neural networks or regression methods, the weekly persistence model operates on the assumption that energy consumption patterns persist over time. Specifically, it predicts the
energy consumption at a given time in the current week by referencing the energy consumption at the same time in the previous week. For instance, the consumption value at 8:00 AM on a Wednesday is used to predict the energy usage for the subsequent Wednesday at 8:00 AM. Despite its simplicity, the weekly persistence model has been effectively used in energy modeling as a benchmark and has, in some scenarios, outperformed classical statistical methods [59]. Its widespread use and demonstrated effectiveness make it a valuable comparison point for evaluating the performance of more advanced imputation methods. #### 2.4.2 Convolutional Neural Networks (CNNs) Convolutional Neural Networks (CNNs) are a type of neural network that is particularly well-suited for processing data with a spatial structure, such as images or time series. Compared to Fully-Connected Neural Networks (FCNNs), CNNs differ in that they learn local patterns in their input feature space, while FCNNs learn global patterns. This means that after a CNN has successfully learned a pattern in one location, it can recognize it in another location with minimal effort. In contrast, an FCNN would have to learn the pattern again if it appeared in a different location. As a result, CNNs are more efficient when dealing with multidimensional data, as they require fewer training samples to learn representations with high generalization power. Thus, we used two CNN frameworks for data imputation: a one-dimensional CNN (1D-CNN) for time series data and a two-dimensional CNN (2D-CNN) for reshaped data. To fit energy data to a 2D-CNN, the data is reshaped with the time of the week and week number serving as the two dimensions. For example, suppose the energy data from a power meter consists of hourly readings over a year. In that case, it is reshaped into a matrix where rows represent the time of the week (e.g., hour 1, hour 2,..., hour 168) and columns represent the week number (e.g., week 1, week 2,..., week 52). The reshaped data with the dimensions (\(168\times 52\)) is then fed into the 2D-CNN for processing and imputation of missing values. The CNNs in this study were implemented using an autoencoder architecture, consisting of two components: an encoder and a decoder. The encoder converts the input data into a compressed representation with reduced dimensions. This compression process could capture and extract the essential features of the input data in a more compact form, enabling efficient representation learning. Subsequently, the decoder component receives the compressed representation generated by the encoder and reconstructs the original data from this compressed representation. The decoder's role is to transform the compressed representation back into its original format, facilitating the recovery of the complete information. During the training process, the autoencoder aims to minimize the difference between the original data and the reconstructed data, encouraging the model to learn an effective compression and reconstruction process. In the context of imputing missing values in energy data, the autoencoder is provided with data containing synthetic missing gaps, aiming to predict the original complete data as its output. #### 2.4.3 Partial Convolution (PConv) In addition to utilizing traditional CNN models for imputing missing energy data, an advanced deep learning framework called Partial Convolution (PConv) was also tested to explore the potential of a computer-vision inpainting technique in the energy domain. 
One of the main advantages of PConv is its automatic mask-updating mechanism, which allows the model to identify the areas that need to be repaired and use this information to achieve state-of-the-art results. The PConv model was implemented using the U-Net architecture, modified to use partial convolutions instead of regular convolutions. The final architecture is shown in Figure 5. Notably, PConv excels in handling irregularly shaped missing areas, as opposed to traditional image repair techniques that typically only deal with regular gaps (e.g., rectangular and circular shapes). This makes PConv particularly robust and well-suited for a wide range of missing data scenarios. Figure 5: U-Net architecture of PConv for imputing two-dimensional energy data. While there are other advanced deep learning frameworks for image inpainting, such as Generative Adversarial Networks (GANs) and diffusion models, they can be difficult to implement and train. GANs, in particular, can be challenging to train due to the need to balance the generator and discriminator and the instability of the minimax loss function [60; 61]. In contrast, PConv does not require a discriminator to be trained and has a more simplified model architecture, making it more suitable for small datasets like the building energy dataset used in this study. Regarding the input data for training PConv, the energy time series data was preprocessed into a two-dimensional matrix, identical to the input format for 2D-CNN models. #### 2.4.4 Model hyperparameter settings and training process All of the models compared in this study, except for the weekly persistence model, are based on neural networks. The performance of these models is significantly influenced by their hyperparameters, such as the number of layers and channels. For the 1D- and 2D-CNN models, the hyperparameters were based on the autoencoder example provided on the Keras official website1. Keras is a popular high-level API for deep learning, built on top of TensorFlow, one of the most widely used deep learning frameworks. To expand the model's capacity for learning, a fully-connected layer is added at the bottleneck, which enables the model to learn complex relationships between the input and output data by allowing information to flow freely between all neurons in the layer. The hyperparameters for the PConv model were adopted from the original paper by Liu et al. [51]. However, due to memory constraints in the Google Colab platform2 used in this study, the image size was adjusted to 192\(\times\)192 pixels from the original 256\(\times\)256 pixels; the reshaped 168\(\times\)52 time series were resized to 192\(\times\)192 using interpolation, a method that retains the key structure and trends of the data. It is worth noting that the resolution of 192\(\times\)192 pixels is sufficiently large to accommodate the original dimensions of 168\(\times\)52, thus ensuring that no significant loss of data or features occurs in the resizing process. An overview of the chosen hyperparameters for the models can be found in Table 2. Footnote 2: [https://colab.research.google.com/](https://colab.research.google.com/) \begin{table} \begin{tabular}{l l l} \hline \hline **Model** & **Hyperparameters** & **Dimensions of data (samples, dim1, dim2)** \\ \hline Weekly persistence & None & \((1479,8736,1)\) \\ 1D-CNN & encoder = 3 layers; decoder = 3 layers; fully-connected layer at bottleneck & \((1479,8736,1)\) \\ 2D-CNN & encoder = 2 layers; decoder = 2 layers; fully-connected layer at bottleneck & \((1479,168,52)\) \\ PConv & encoder = 4 layers; decoder = 4 layers & \((1479,168,52)\) \\ \hline \hline \end{tabular} \end{table} Table 2: Overview of models and naive baseline regarding hyperparameters and dimensions of data.
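Returning to the mechanism described in Section 2.4.3, the following is a minimal Keras sketch of a single partial-convolution step in the spirit of Liu et al. [51] (an illustrative reimplementation with a single-channel mask; the layer name and kernel sizes are assumptions, and the study's actual implementation may differ):

```python
import tensorflow as tf
from tensorflow.keras import layers

class PartialConv2D(layers.Layer):
    """One partial-convolution step: convolve only valid pixels,
    renormalize by the window's valid-pixel count, then update the mask."""

    def __init__(self, filters, kernel_size=3, **kwargs):
        super().__init__(**kwargs)
        self.kernel_size = kernel_size
        self.feature_conv = layers.Conv2D(filters, kernel_size,
                                          padding='same', use_bias=False)
        # Frozen all-ones kernel that counts valid pixels in each window.
        self.mask_conv = layers.Conv2D(1, kernel_size, padding='same',
                                       use_bias=False, trainable=False,
                                       kernel_initializer='ones')

    def call(self, x, mask):
        # Zero out the holes, then convolve the features.
        out = self.feature_conv(x * mask)
        # valid[i, j] = number of observed pixels under the kernel window.
        valid = self.mask_conv(mask)
        # Renormalize by coverage where any valid pixel exists; zero elsewhere.
        scale = float(self.kernel_size ** 2) / tf.maximum(valid, 1e-8)
        out = tf.where(valid > 0, out * scale, tf.zeros_like(out))
        # Mask update: any window touching a valid pixel becomes valid.
        return out, tf.cast(valid > 0, x.dtype)

# Usage on a reshaped meter image and its mask (1 = observed, 0 = missing).
x = tf.random.uniform((1, 168, 52, 1))
mask = tf.cast(tf.random.uniform((1, 168, 52, 1)) > 0.1, tf.float32)
features, new_mask = PartialConv2D(filters=32)(x, mask)
```

Stacking such layers in a U-Net encoder-decoder, with the mask propagated alongside the features, yields an inpainting architecture of the kind sketched in Figure 5.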
In this study, we employed several strategies to enhance model training and ensure the robustness of the results. To facilitate model training, both random and continuous synthetic missing masks with varying rates were randomly applied to the training dataset, along with additional irregular missing masks, as suggested in the original paper [51]. Furthermore, an early stopping mechanism was implemented in the PConv and CNN models to monitor the convergence of the model training process. This mechanism observes the model's performance on a validation set comprising 20% of the total dataset and stops the training process when the validation loss fails to improve over five consecutive epochs. The training process is allowed to continue for a total of 50 epochs unless halted earlier by the early stopping mechanism. Figure 6 provides an example of the loss over epochs for the PConv model, demonstrating the convergence of training and validation loss and the triggering of the early stop when the patience of five epochs was reached due to the lack of improvement in the validation loss. Figure 6: Training and validation loss over epochs for the PConv model. #### 2.4.5 Evaluation metrics To assess the model's performance in imputing missing gaps, we employed the Mean Squared Error (MSE) and R-squared metrics, both of which are widely accepted measures for assessing model performance in time series prediction tasks, particularly in energy prediction. MSE is a commonly used measure of prediction error that calculates the average squared difference between the predicted and actual values. This metric is used to quantify the prediction error between the predicted results and the ground truth on synthetic missing energy data. R-squared, also known as the coefficient of determination, quantifies the extent to which the trend of the predicted values aligns with the actual values. In energy time series data, it is crucial to predict trends and patterns accurately, and R-squared can help determine whether the model can capture these trends and patterns. Though Normalized Root Mean Squared Error (NRMSE) is another commonly used metric in this field, we opted not to incorporate it into our evaluation. This decision was due to the preprocessing step in which all energy meter data were normalized via min-max normalization prior to the application of the inpainting algorithm, thereby limiting the added value that the NRMSE metric could offer in this context. ### Experiment design Table 3 presents the settings for the experiment, which are based on the data preprocessing, missing masks, and modeling techniques described in previous sections. This experimental design allows us to evaluate the model's generalization performance and assess its ability to handle missing data of various degrees under different scenarios. \begin{table} \begin{tabular}{l l} \hline \hline & **Setting of experiments** \\ \hline Models and data dimensions & Weekly persistence model (1D); 1D-CNN (1D); 2D-CNN (2D); PConv (2D) \\ Missing masks & Random and continuous days with missing rates between 5 and 50\% \\ Cross-validation & Five folds split by site ID \\ Data augmentation & Shifting and vertical flipping \\ \hline \hline \end{tabular} \end{table} Table 3: Overview of modeling and experiment settings.
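As a sketch of the evaluation logic (placeholder arrays; not the study's code), the metrics below are computed only at the synthetically removed positions, and the Keras callback mirrors the early-stopping rule described above:

```python
import numpy as np
import tensorflow as tf

def evaluate_imputation(y_true, y_pred, missing_mask):
    # Score only where data was synthetically removed (missing_mask == 1).
    target = y_true[missing_mask == 1]
    imputed = y_pred[missing_mask == 1]
    mse = float(np.mean((target - imputed) ** 2))
    ss_res = np.sum((target - imputed) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return mse, 1.0 - ss_res / ss_tot  # (MSE, R-squared)

# Stop when the validation loss fails to improve for five consecutive epochs,
# within the overall budget of 50 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
# model.fit(x_masked, x_full, epochs=50,
#           validation_data=(val_masked, val_full), callbacks=[early_stop])
```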
## 3 Results In this section, we discuss and interpret our findings on the prediction errors of the models and explore how the type of meter (e.g., electricity and hot water meters) affects the predictability of missing values. To further demonstrate the effectiveness of image imputation in energy data, examples in the form of trend plots (1D) and heatmaps (2D) from meters will also be presented to show how context can assist in the data imputation process for each model. ### Random and continuous missing To investigate whether the extra dimension from reshaped time series impacts the imputation of missing values, the prediction errors of the models under different missing categories are comprehensively compared in Figure 7. Overall, continuous missing data poses a greater challenge for imputation than random missing data. This is because imputing random missing gaps can leverage more context from neighboring values. When comparing different models' performance in imputing continuous missing data, we observe that the 1D-CNN model significantly underperforms in comparison to the two-dimensional models (i.e., 2D-CNN and PConv) and the weekly persistence method. In contrast, PConv, as an image-inpainting framework, demonstrates superior results with an MSE of 0.017, significantly lower than the other models by 27 to 61%. As for the data imputation results for random missing data shown in the right boxplots, every model performs much better than for continuous missing data, and PConv overwhelmingly outperforms the other models, with an average MSE 50% lower than the next best model (i.e., 2D-CNN). Figure 7: Comparison between models in different imputation tasks with varying missing rates and types. The results show that two-dimensional methods with reshaped energy data considerably outperform the one-dimensional and naive methods. In particular, PConv performs better than 2D-CNN, showcasing the efficacy of image-based techniques in missing energy data imputation. However, it is essential to note that a higher missing rate for continuous data could significantly impact the performance of the models. For instance, when the missing rate for continuous data rises from 5% to 10%, the average R\({}^{2}\) of PConv significantly decreases from 0.75 to less than 0.7 (as shown in Figure 8). This highlights the fundamental difficulty of imputing data when the missing rate exceeds 10% of the annual data, roughly equivalent to a month-long absence of data. Figure 8: Zooming in on the R\({}^{2}\) comparison to observe the decline of imputation performance as the missing rate increases. ### Breakdown by meter types The effect of different meter types on imputation results is further explored and shown in Figure 9. Figure 9: Breakdown of model performance according to meter types. Generally, PConv offers the most accurate predictions, followed by 2D-CNN, the weekly persistence method, and finally, 1D-CNN. Interestingly, the weekly persistence method performs comparably to the 2D-CNN in predicting electricity meters. This could be attributed to the more regular patterns present in electricity meter data, which allows the weekly persistence method to predict values accurately by mimicking values from the previous week. In contrast, meters highly influenced by weather conditions, such as chilled water, steam, and hot water meters, present greater prediction difficulties for the weekly persistence method due to their inherent irregularity. Despite PConv's superior performance over the other models, it still struggles with predicting continuous missing data for weather-dependent meters.
For example, as demonstrated by the bottom-left boxplots, PConv's imputation on steam and hot water meters cannot even reach an R-squared of 0.6 for continuous missing data. This underscores the challenge that stems not only from the continuity of missing data and its rate but also from the dependency of the meters on weather conditions. ### Zoom-in of imputation results from visualized examples In the time-series plots of Figures 10 and 11, we look closely at a few examples and compare prediction results across models and missing settings. Figure 10: Time series plots of power meters for comparing imputation results from different methods for 10% random missing data. Figure 11: Time series plots of power meters for comparing imputation results from different methods for 10% continuous missing data. In the case of the random missing data shown in plots 10a and 10b, we found that PConv and 2D-CNN, both employing the two-dimensional context, perform remarkably well in the prediction of time-series profiles. Conversely, the 1D-CNN fails to capture energy profiles, and the weekly persistence method only offers moderate predictive outcomes. This discrepancy is particularly noticeable in the case of the hot water meter due to its weather dependence. In 11a and 11b, prediction results with more challenging continuous missing gaps are shown. Although both PConv and 2D-CNN predictions exhibit stable periodic patterns closely aligned with the original data, the 2D-CNN model struggles to capture sudden shifts in overall trends. In contrast, PConv and the weekly persistence method are able to account for such trend changes in their imputation predictions. As for the prediction of hot water usage with continuous missing data, both PConv and 2D-CNN outperform the other methods, benefiting from the two-dimensional context. The weekly persistence method and 1D-CNN perform poorly in predicting the weather-dependent hot water meter, owing to the irregularity in its usage patterns. Figure 12 illustrates the same results in the form of heatmaps, emphasizing the benefits of a two-dimensional context in the imputation process. Figure 12: Visual comparisons of models in 2D heatmaps. Each example from left to right: input image, PConv, and ground truth. All images have a 192 × 192 pixel dimension. Despite the promising results shown previously, there were instances where the predicted values deviated significantly from the actual values, particularly when the missing rate escalated to 30%, as illustrated in Figure 13. Figure 13: Some examples of failed prediction with a high continuous missing rate. In Figure 13a, 1D-CNN and the weekly persistence method fail to capture the weekly regularity of energy data, resulting in constant values appearing uniformly colored. Meanwhile, the predictions generated by the 2D-CNN appear noisy. Even though PConv outperforms the other models considerably, it still shows unclear areas, likely resulting from the lack of additional weather information when attempting to impute such long-term missing data. The difficulties become even more pronounced in Figure 13b, which features 30% continuous missing data in the hot water meter.
None of the models are able to generate results of satisfactory quality. The absence of weather information, which is particularly crucial for weather-dependent meters, significantly hampers the predictive capabilities of the models. These failures demonstrate the limitations of imputation models when confronted with long-term continuous missing gaps. Overall, the order of performance from best to worst is PConv, 2D-CNN, weekly persistence, and 1D-CNN. PConv, a modern framework for image inpainting with deep learning, consistently performs significantly better than the other models in every missing-data imputation task. Among the other models, 2D-CNN marginally outperforms the weekly persistence model, whereas 1D-CNN significantly lags behind. However, the effectiveness of 2D-CNN and weekly persistence can fluctuate depending on the specific circumstances. In situations where there is a trend shift within the missing gaps, weekly persistence might perform slightly better than 2D-CNN, as it can incorporate this trend shift into its predictions by replicating the values from the adjacent week. Furthermore, when the prediction target is to impute missing gaps for weather-dependent meters, such as hot water meters, 2D-CNN performs better than the weekly persistence model. The latter primarily depends on the regularity of the time series, which can be a limiting factor in such cases. ## 4 Discussion In this study, we explored the use of various machine learning models, including advanced image inpainting techniques, to impute missing data from energy meters at different missing rates in both random and continuous fashion. ### Evaluation and comparison of imputation models This study evaluated a range of imputation models, from a naive baseline (i.e., the weekly persistence model) to advanced ones (2D-CNN and PConv). Notably, this study introduced a unique aspect to the field by incorporating advanced image inpainting techniques, specifically PConv, for imputing missing data. Contrary to the majority of previous works that concentrated on imputing short-term missing data (e.g., hourly or daily), our study broadened its scope to cover longer-term periods, such as weekly or even monthly missing data. The relative ease of imputing short-term missing data based on nearby values was affirmed in this study, which differentiates our work through its long-term focus. Moreover, the superior performance of PConv over other methods underlined its efficacy, especially when faced with the challenging task of long-term imputation. Furthermore, most past studies were benchmarked on a restricted number of meters within specific buildings or sites, while the proposed method in this study was comprehensively validated on a benchmark dataset with thousands of meters worldwide. Consequently, the distinct nature of this study's tasks involving long-term missing data and the inherent complexities of the dataset make it less directly comparable to the imputation results of earlier studies. Nonetheless, to maintain
methodological continuity with past studies, our research incorporated some common deep learning frameworks, such as the CNN-based autoencoder and the naive weekly persistence model, as benchmarking references. The inclusion of these model frameworks, various missing periods, and our benchmark dataset ensures that our deep learning method based on reshaped data contributes to advancing data imputation in the field. ### Exploring the maximum time period for imputing continuously missing data This study assessed the feasibility and limitations of employing energy time series as images for imputing missing data. Our results demonstrate that, while the proposed PConv approach outperformed other models by learning context within images, it struggled with the prediction of extended periods of continuous missing values. Specifically, when the missing rate of continuous data exceeded 10%, corresponding to about a month of missing data, the average R\({}^{2}\) of PConv dropped below 0.7. Additionally, the imputation performance of PConv significantly deteriorated when applied to weather-dependent meters, primarily due to their less predictable regularity compared to electricity meters. These findings reveal that, even though image-based deep learning models may improve the performance of imputation, their capability to accurately forecast missing values over extended periods remains restricted. ### Challenges and future directions in imputing missing data Imputing continuous missing data, especially with increasing missing rates, poses significant challenges due to the lack of adjacent context values required for long-term prediction. This study demonstrates that advanced deep learning frameworks in computer vision, such as 2D-CNN and PConv, exhibit superior performance by learning context from time-series data represented as images. However, even with these advanced methods, predicting long-term continuous missing values remains problematic. This is evident in the performance of PConv, which noticeably deteriorates when the proportion of continuous missing data exceeds 10%. This research emphasizes the necessity of comprehensive context information, such as weather and occupant-related data, to enhance the performance of the imputation model. Without these contextual data, the model may struggle to generate accurate predictions for long-term missing values, given that such predictions are typically influenced by weather patterns and human activity. Encouragingly, future work in this area could benefit from strategies employed in image inpainting studies that integrated additional layers of data to improve accuracy. For instance, one study integrated a texture information layer into the imputation process, successfully enhancing performance by including detailed outlines of human faces or sketch drawings of buildings [62]. Applying the same concept to building energy data imputation, weather and occupant behavior data could serve as valuable additional data layers to improve predictions. As an example, the utility of Google Trends data as a proxy for human behavior to enhance energy prediction has been demonstrated [38].
Thus, incorporating such additional data layers, namely human behavior and weather data, into data imputation for building energy data presents a promising direction for future research. This approach could potentially address the current challenges in imputing long-term missing data and contribute to more accurate and reliable predictions. ## 5 Conclusion This study aimed to investigate whether the imputation of missing energy data could be improved by integrating an extra dimension through reshaping energy time series. The prediction performance was extensively evaluated by testing various model frameworks and diverse missing rates and types. The results revealed the beneficial role of the two-dimensional energy data structure in data imputation, particularly when used with advanced image-based deep learning models like PConv, leading to better imputation performance than classical CNN models and the weekly persistence method. Even when applied to subsets of different meter types, PConv was able to maintain consistently high prediction accuracy. However, a challenge arose when PConv attempted to accurately impute long-term continuous missing values beyond a 10% missing data rate. This situation caused the average R\({}^{2}\) of PConv to decrease below 0.7, thereby revealing a limitation of the proposed imputation model and the reshaped energy data. To the best of our knowledge, this is the first study to extensively verify this imputation method using more than one thousand power meters in the field of building energy. This research demonstrates the potential of employing two-dimensional reshaped energy data in conjunction with image-based deep learning techniques for imputing missing energy data. The proposed method, which has been thoroughly tested and validated on a wide-ranging benchmark dataset with diverse meter types and sources, is scalable and generalizable. With the future availability of more energy datasets, our approach holds promise as a potent tool for automatically imputing missing data on a significant scale. This could offer substantial benefits to the industry by facilitating enhanced energy predictions and management based on the imputed data. Furthermore, the proposed method could find wider application across other time series data in the built environment, such as HVAC, lighting, and appliance data, offering a holistic benefit to the building industry. ## 6 Limitations While this study yielded exciting results and made significant contributions, there are several limitations that should be acknowledged and addressed in future work. The first limitation is that our focus was on imputing continuous and random missing gaps in separate tasks, neglecting other forms of missing data patterns encountered in the real world. For example, multiple scattered continuous missing gaps or a combination of both random and continuous missing data were not specifically considered or assessed in this study. Secondly, the study did not consider the integration of weather data, despite its well-established impact on energy consumption. Its inclusion in the imputation models could provide valuable contextual information for more accurate predictions. Similarly, the integration of calendar data, which captures human events like national holidays or extended vacations that influence energy consumption, could further enhance the imputation process.
Investigating the integration of these contextual factors would contribute to more comprehensive and accurate imputation models. These limitations suggest that further research is needed to fully explore and leverage the potential of image-based techniques for imputing missing energy data. ## 7 Reproducibility This analysis can be reproduced using the data and code from the following GitHub repository: [https://github.com/buds-lab/Filling-time-series-gaps-using-image-techniques](https://github.com/buds-lab/Filling-time-series-gaps-using-image-techniques). ## CRediT author statement **CF**: Conceptualization, Methodology, Software, Formal Analysis, Investigation, Data Curation, Visualization, Writing - Original Draft; **MQ**: Methodology, Writing - Reviewing & Editing; **ZN**: Methodology, Writing - Reviewing & Editing; **CM**: Conceptualization, Methodology, Resources, Writing - Reviewing & Editing, Supervision, Project administration, Funding acquisition. ## Funding This research is funded by the NUS-based Singapore MOE Tier 1 Grant titled Ecological Momentary Assessment (EMA) for Built Environment Research (A-0008301-01-00).
2301.08154
KdV breathers on a cnoidal wave background
Using the Darboux transformation for the Korteweg-de Vries equation, we construct and analyze exact solutions describing the interaction of a solitary wave and a traveling cnoidal wave. Due to their unsteady, wavepacket-like character, these wave patterns are referred to as breathers. Both elevation (bright) and depression (dark) breather solutions are obtained. The nonlinear dispersion relations demonstrate that the bright (dark) breathers propagate faster (slower) than the background cnoidal wave. Two-soliton solutions are obtained in the limit of degeneration of the cnoidal wave. In the small amplitude regime, the dark breathers are accurately approximated by dark soliton solutions of the nonlinear Schr\"odinger equation. These results provide insight into recent experiments on soliton-dispersive shock wave interactions and soliton gases.
Mark A. Hoefer, Ana Mucalica, Dmitry E. Pelinovsky
2023-01-19T16:14:53Z
http://arxiv.org/abs/2301.08154v2
# KdV breathers on a cnoidal wave background ###### Abstract. Using the Darboux transformation for the Korteweg-de Vries equation, we construct and analyze exact solutions describing the interaction of a solitary wave and a traveling cnoidal wave. Due to their unsteady, wavepacket-like character, these wave patterns are referred to as breathers. Both elevation (bright) and depression (dark) breather solutions are obtained. The nonlinear dispersion relations demonstrate that the bright (dark) breathers propagate faster (slower) than the background cnoidal wave. Two-soliton solutions are obtained in the limit of degeneration of the cnoidal wave. In the small amplitude regime, the dark breathers are accurately approximated by dark soliton solutions of the nonlinear Schrodinger equation. These results provide insight into recent experiments on soliton-dispersive shock wave interactions and soliton gases. ## 1. Introduction The localized and periodic traveling wave solutions of the Korteweg-de Vries (KdV) equation are so ubiquitous and fundamental to nonlinear science that their names, "soliton" and "cnoidal wave," have achieved a much broader usage, representing localized and periodically extended traveling wave solutions across a wide range of nonlinear evolutionary equations. Consequently, it is natural and important to consider their interactions. While the traditional notion of linear superposition cannot be used, the complete integrability of the KdV equation implies a nonlinear superposition principle. For example, soliton interactions can be described by exact \(N\)-soliton solutions, which can be constructed by successive Darboux transformations [1]. By utilizing solutions of the spectral problem for the stationary Schrodinger equation and the temporal evolution equation whose compatibility is equivalent to solving the KdV equation, the Darboux transformation achieves a nonlinear superposition principle by effectively "adding" one soliton to the base solution. In the spectral problem, the soliton appears as an additional eigenvalue that is added to the spectrum of the base solution. Compared to soliton interactions, soliton-cnoidal wave interactions have not been explored in as much detail. The purpose of this paper is to apply the Darboux transformation to the cnoidal wave solution of the KdV equation in order to obtain the nonlinear superposition of a single soliton and a cnoidal wave. These exact solutions, expressed in terms of Jacobi theta functions and elliptic integrals, represent the interactions of a soliton and a cnoidal wave. The motivation for this study comes from recent experiments and analysis of the interaction of solitons and dispersive shock waves (DSWs) [2, 3, 4]. The DSWs can be viewed as modulated cnoidal waves [5, 6] so that soliton-DSW interaction is analogous to soliton-cnoidal wave interaction. Two different types of soliton-DSW interaction dynamics were observed in [2]. When a soliton completely passes through a DSW, the nature of the interaction gives rise to an elevation (bright) nonlinear wavepacket. When a soliton becomes embedded or trapped within a DSW, the trapped soliton resembles a depression (dark) nonlinear wavepacket. Similar transmission and trapping scenarios were analyzed for solitons interacting with rarefaction waves [7, 8]. Breathers are localized, unsteady solutions that exhibit two distinct time scales or velocities; one associated with propagation and the other with internal oscillations. 
A canonical model equation that admits breather solutions is the focusing modified Korteweg-de Vries (mKdV) equation. These solutions can be interpreted as bound states of two soliton solutions [9, 10]. It is in a similar spirit that we regard the soliton-cnoidal wave interactions considered here as breathers. Such wavepacket solutions are propagating, nonlinear solutions with internal oscillations. Among our main results, we find two distinct varieties of exact solutions of the KdV equation, corresponding to elevation (bright) or depression (dark) breathers interacting with the cnoidal wave background. These breathers are topological because they impart a phase shift to the cnoidal wave. We show that bright breathers propagate faster than the cnoidal wave, whereas dark breathers move slower. Furthermore, bright breathers of sufficiently small amplitude exhibit a negative phase shift, whereas bright breathers of sufficiently large amplitude exhibit a positive phase shift. On the other hand, dark breathers with the strongest localization have a positive phase shift. Small amplitude dark breathers can exhibit either a negative or positive phase shift. Each breather solution is characterized by its position and a spectral parameter, determining a nonlinear dispersion relation, which uniquely relates the breather velocity to the breather phase shift. Exact solutions representing soliton-cnoidal wave interactions have previously been constructed using other solution methods. The first result was developed in [11] within the context of the stability analysis of a cnoidal wave of the KdV equation. The authors used the Marchenko equation of the inverse scattering transform and obtained exact solutions for "dislocations" of the cnoidal wave. Further special solutions for soliton-cnoidal wave interactions were obtained in [12] by using the nonlocal symmetries of the KdV equation. These solutions are expressed in a closed form as integrals of Jacobi elliptic functions, but they do not represent the most general exact solutions for soliton-cnoidal wave interactions. Quasi-periodic (finite-gap) solutions and solitons on a quasi-periodic background have been obtained as exact solutions of the KdV equation by using algebro-geometric methods [13, 14]. In the limit of a single gap, such solutions describe interactions of solitons with a cnoidal wave. By using the degeneration of hyperelliptic curves and Sato Grassmannian theory, mixing between solitons and quasi-periodic solutions was obtained recently in [15] based on [16], not only for the KdV equation but also for the KP hierarchy of integrable equations. Finally, in a very recent preprint [17], inspired by recent works on soliton gases [18, 19], the degeneration of quasi-periodic solutions was used to construct multisoliton-cnoidal wave interaction solutions. Compared to previous work, which primarily involves Weierstrass functions with complex translation parameters, we give explicit solutions in terms of Jacobi elliptic functions with real-valued parameters. This approach allows us to clarify the nature of soliton-cnoidal wave interactions, plot their corresponding properties, and analyze the exact solutions in various limiting regimes. We also demonstrate that the Darboux transformation provides a more straightforward method for obtaining these complicated interaction solutions compared to the degeneration methods used in [15, 17]. The paper is organized as follows. The main results are formulated in Section 2 and illustrated graphically.
In Section 3, we introduce the normalized cnoidal wave solution with one parameter. Symmetries of the KdV equation are then introduced that can be used to generate the more general family of cnoidal waves with four arbitrary parameters. Eigenfunctions of the stationary Schrodinger equation with the normalized cnoidal wave potential are reviewed in Section 4. The time evolution of the eigenfunctions is obtained in Section 5. In Section 6, the Darboux transformation is used to generate breather solutions to the KdV equation. Properties of bright and dark breathers are explored in Sections 7 and 8, respectively. The paper concludes with Section 9. ## 2. Main results We take the Korteweg-de Vries (KdV) equation in the normalized form \[u_{t}+6uu_{x}+u_{xxx}=0, \tag{1}\] where \(t\) is the evolution time, \(x\) is the spatial coordinate for wave propagation, and \(u\) is the fluid velocity. As is well-known [20], every smooth solution \(u(x,t)\) of the KdV equation (1) is the compatibility condition of the stationary Schrodinger equation \[(-\partial_{x}^{2}-u)v=\lambda v \tag{2}\] and the time evolution problem \[v_{t}=(4\lambda-2u)v_{x}+u_{x}v, \tag{3}\] where \(\lambda\) is the \((x,t)\)-independent spectral parameter. The normalized traveling cnoidal wave of the KdV equation (1) is given by \[u(x,t)=\phi_{0}(x-c_{0}t),\qquad\phi_{0}(x):=2k^{2}\mathrm{cn}^{2}(x,k),\quad c _{0}:=4(2k^{2}-1), \tag{4}\] where \(\mathrm{cn}(x,k)\) is the Jacobi elliptic function, and \(k\in(0,1)\) is the elliptic modulus. Table 1 collects together elliptic integrals and Jacobi elliptic functions used in our work, see [21, 22, 23]. The main result of this work is the derivation and analysis of two solution families of the KdV equation (1) parametrized by \(\lambda\) and \(x_{0}\in\mathbb{R}\), where \(\lambda\) belongs to \((-\infty,-k^{2})\) for the first family and \((1-2k^{2},1-k^{2})\) for the second family. Both the solution families can be expressed in the form \[u(x,t)=2\left[k^{2}-1+\frac{E(k)}{K(k)}\right]+2\partial_{x}^{2}\log\tau(x,t), \tag{5}\] where the \(\tau\)-function for the first family is given by \[\tau(x,t):=\Theta(x-c_{0}t+\alpha_{b})e^{\kappa_{b}(x-c_{b}t+x_{0})}+\Theta(x- c_{0}t-\alpha_{b})e^{-\kappa_{b}(x-c_{b}t+x_{0})} \tag{6}\] with uniquely defined \(\kappa_{b}>0\), \(c_{b}>c_{0}\) and \(\alpha_{b}\in(0,K(k))\) and the \(\tau\)-function for the second family is given by \[\tau(x,t):=\Theta(x-c_{0}t+\alpha_{d})e^{-\kappa_{d}(x-c_{d}t+x_{0})}+\Theta(x- c_{0}t-\alpha_{d})e^{\kappa_{d}(x-c_{d}t+x_{0})} \tag{7}\] with uniquely defined \(\kappa_{d}>0\), \(c_{d}<c_{0}\), and \(\alpha_{d}\in(0,K(k))\). Figure 1 depicts the spatiotemporal evolution of a solution \(u(x,t)\) given by (5) and (6). This solution represents a bright breather on a cnoidal wave background (hereafter referred to as a bright breather) with speed \(c_{b}>c_{0}\) and inverse width \(\kappa_{b}\), where \(c_{0}\) is the speed of the background cnoidal wave. As a result of the bright soliton, the cnoidal wave background is spatially shifted by \(-2\alpha_{b}\). Figure 2 shows the spatiotemporal evolution of a solution \(u(x,t)\) given by (5) and (7). This solution is a dark breather on a cnoidal wave background (hereafter referred to as a dark breather), where the breather core exhibits the inverse spatial width \(\kappa_{d}\) and speed \(c_{d}<c_{0}\). The dark breather gives rise to the spatial shift \(2\alpha_{d}\) of the cnoidal background. Figure 1. 
Bright breather on a cnoidal wave with \(k=0.8\) for \(\lambda=-1.2\) and \(x_{0}=0\). Using properties of Jacobi elliptic functions, we obtain explicit expressions for the parameters of the \(\tau\)-functions (6) and (7) and their dependence on the parameter \(\lambda\) that characterizes the dynamical properties of bright and dark breathers. Although the analytical expressions (5) with either (6) or (7) are not novel and can be found in equivalent forms in [11, 15, 17], to the best of our knowledge, this is the first time that the dynamical properties of bright and dark breathers have been explicitly investigated for the KdV equation (1). We also obtain asymptotic expressions for bright and dark breathers in the limits when \(\lambda\) approaches the band edges or when the elliptic modulus \(k\) approaches the end points \(0\) and \(1\). ## 3. Traveling cnoidal wave A traveling wave solution \(u(x,t)=\phi(x-ct)\) to the KdV equation (1) satisfies the second-order differential equation after integration in \(x\): \[\phi^{\prime\prime}+3\phi^{2}-c\phi=b, \tag{8}\] where \(b\in\mathbb{R}\) is the integration constant and the single variable \(x\) stands for \(x-ct\). The second-order equation (8) is integrable with the first-order invariant \[(\phi^{\prime})^{2}+2\phi^{3}-c\phi^{2}-2b\phi=d, \tag{9}\] where \(d\in\mathbb{R}\) is another integration constant. The following proposition summarizes the existence of periodic solutions to system (8) and (9). **Proposition 1**.: _There exists a family of periodic solutions to system (8) and (9) for every \((b,c,d)\) satisfying \(c^{2}+12b>0\) and \(d\in(U(\phi_{+}),U(\phi_{-}))\), where \(U(\phi):=2\phi^{3}-c\phi^{2}-2b\phi\) and \(\phi_{\pm}\) are critical points of \(U\) given by \(\phi_{\pm}=(c\pm\sqrt{c^{2}+12b})/6\)._ Figure 2. Dark breather on a cnoidal wave with \(k=0.7\) for \(\lambda=0.265\) and \(x_{0}=0\). Proof.: If \(c^{2}+12b>0\), the mapping \(\phi\mapsto U(\phi)\) has two critical points \(\phi_{\pm}\). Since \(U^{\prime}(\phi_{\pm})=6\phi_{\pm}^{2}-2c\phi_{\pm}-2b=0\) and \(U^{\prime\prime}(\phi_{\pm})=12\phi_{\pm}-2c=\pm 2\sqrt{c^{2}+12b}\), \(\phi_{+}\) is the minimum of \(U\) and \(\phi_{-}\) is the maximum of \(U\). If \(d=U(\phi_{+})\), the only bounded solution of system (8) and (9) is a constant solution corresponding to the center point \((\phi_{+},0)\). If \(d=U(\phi_{-})\), the only bounded solution of system (8) and (9) is a homoclinic orbit from the saddle point \((\phi_{-},0)\) which surrounds the center point \((\phi_{+},0)\). The family of periodic orbits exists in a punctured neighbourhood around the center point enclosed by the homoclinic orbit, for \(d\in(U(\phi_{+}),U(\phi_{-}))\). If \(c^{2}+12b\leq 0\), the mapping \(\phi\mapsto U(\phi)\) is monotonically increasing. There exist no bounded solutions of system (8) and (9) with the exception of the constant solution \(\phi=c/6\) in the marginal case \(c^{2}+12b=0\). It follows from Proposition 1 that the most general periodic traveling wave solution has three parameters \((b,c,d)\), up to translations, that are defined in a subset of \(\mathbb{R}^{3}\) for which \(c^{2}+12b>0\) and \(d\in(U(\phi_{+}),U(\phi_{-}))\). For each \((b,c,d)\) in this subset of \(\mathbb{R}^{3}\), the translational parameter \(x_{0}\in\mathbb{R}\) generates the family of solutions \(\phi(x+x_{0})\) due to translation symmetry.
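As an illustrative numerical check of Proposition 1 (not part of the original analysis; the parameter values below are arbitrary choices with \(c^{2}+12b>0\)), one can integrate the traveling-wave equation (8) from initial data inside the homoclinic loop and confirm that the orbit is bounded and that the first-order invariant (9) is conserved:

```python
import numpy as np
from scipy.integrate import solve_ivp

b, c = 1.0, 2.0                               # illustrative: c**2 + 12*b > 0
phi_plus = (c + np.sqrt(c**2 + 12*b)) / 6     # center point (phi_+, 0)

# Equation (8) as a first-order system: phi'' = b + c*phi - 3*phi**2.
def rhs(x, y):
    phi, dphi = y
    return [dphi, b + c * phi - 3 * phi**2]

# Initial data near the center, inside the region enclosed by the homoclinic orbit.
sol = solve_ivp(rhs, [0.0, 40.0], [phi_plus + 0.2, 0.0], rtol=1e-10, atol=1e-10)

# The invariant (9) should be constant along the resulting periodic orbit.
phi, dphi = sol.y
d_vals = dphi**2 + 2*phi**3 - c*phi**2 - 2*b*phi
print(np.ptp(d_vals))   # ~0 up to integration tolerance: d conserved, phi bounded
```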
Two of the three parameters of the periodic solution family can be chosen arbitrarily due to the following two symmetries of the KdV equation (1): * Scaling transformation: if \(u(x,t)\) is a solution, so is \(\alpha^{2}u(\alpha x,\alpha^{3}t)\), \(\alpha>0\). * Galilean transformation: if \(u(x,t)\) is a solution, so is \(\beta+u(x-6\beta t,t)\), \(\beta\in\mathbb{R}\). Due to these symmetries, if \(\phi_{0}\) is a periodic solution to system (8) and (9) with \((b,c,d)=(b_{0},c_{0},d_{0})\), then \(\beta+\alpha^{2}\phi_{0}(\alpha x)\) is also a periodic solution to system (8) and (9) with \[(b,c,d)=(-3\beta^{2}-\alpha^{2}\beta c_{0}+\alpha^{4}b_{0},6\beta+\alpha^{2}c _{0},2\beta^{3}+\alpha^{2}\beta^{2}c_{0}-2\beta\alpha^{4}b_{0}+\alpha^{6}d_{0}),\] where \(\alpha>0\) and \(\beta\in\mathbb{R}\) are arbitrary parameters. Thus, without loss of generality, we can consider the normalized, 1-parameter family of periodic traveling waves \(\phi_{0}(x)=2k^{2}\mathrm{cn}^{2}(x,k)\) for which the values of \((b_{0},c_{0},d_{0})\) are determined in the following proposition. **Proposition 2**.: _The normalized cnoidal wave \(\phi_{0}(x)=2k^{2}\mathrm{cn}^{2}(x,k)\) is a periodic solution of system (8) and (9) with_ \[b_{0}:=4k^{2}(1-k^{2}),\quad c_{0}:=4(2k^{2}-1),\quad d_{0}=0,\] _where \(k\in(0,1)\) is an arbitrary parameter._ Proof.: Since \(\min\limits_{x\in\mathbb{R}}\phi_{0}(x)=0\), it follows from (9) that \(d_{0}=U(0)=0\). On the other hand, by using the following fundamental relations between Jacobi elliptic functions \[\mathrm{sn}^{2}(x,k)+\mathrm{cn}^{2}(x,k)=1,\quad\mathrm{dn}^{2}(x,k)+k^{2} \mathrm{sn}^{2}(x,k)=1 \tag{10}\] and their derivatives \[\frac{d}{dx}\left[\begin{array}{c}\mathrm{sn}(x,k)\\ \mathrm{cn}(x,k)\\ \mathrm{dn}(x,k)\end{array}\right]=\left[\begin{array}{c}\mathrm{cn}(x,k) \mathrm{dn}(x,k)\\ -\mathrm{sn}(x,k)\mathrm{dn}(x,k)\\ -k^{2}\mathrm{sn}(x,k)\mathrm{cn}(x,k)\end{array}\right], \tag{11}\] we obtain from (9) with \(d_{0}=0\) that \(b_{0}=4k^{2}(1-k^{2})\) and \(c_{0}=4(2k^{2}-1)\). ## 4. Lame equation as the spectral problem The spectral problem (2) with the normalized cnoidal wave (4) is known as the Lame equation [24, p.395]. It can be written in the form \[v^{\prime\prime}(x)-2k^{2}\mathrm{sn}^{2}(x,k)v(x)+\eta v(x)=0,\quad\eta:= \lambda+2k^{2}, \tag{12}\] where the single variable \(x\) stands for \(x-c_{0}t\). By using (10) and (11), we obtain the following three particular solutions \(v=v_{1,2,3}(x)\) of the Lame equation (12) with \(\lambda=\lambda_{1,2,3}(k)\): \[\lambda_{1}(k) :=-k^{2}, v_{1}(x) :=\mathrm{dn}(x,k),\] \[\lambda_{2}(k) :=1-2k^{2}, v_{2}(x) :=\mathrm{cn}(x,k),\] \[\lambda_{3}(k) :=1-k^{2}, v_{3}(x) :=\mathrm{sn}(x,k),\] which correspond to the three remarkable values of \(\eta\): \(\eta_{1}=k^{2}\), \(\eta_{2}=1\), and \(\eta_{3}=1+k^{2}\). For \(k\in(0,1)\), the three eigenvalues are sorted as \(\lambda_{1}(k)<\lambda_{2}(k)<\lambda_{3}(k)\). Figure 3 shows the Floquet spectrum of the Lame equation (12), which corresponds to the admissible values of \(\lambda\) for which \(v\in L^{\infty}(\mathbb{R})\). The bands are shaded and the band edges shown by the bold solid curves corresponding to \(\lambda=\lambda_{1,2,3}(k)\) for \(k\in(0,1)\). Figure 3. Floquet spectrum of the Lame equation (12) for different values of \(k\in(0,1)\). 
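As a quick numerical check of Proposition 2 (a sketch using SciPy, not part of the paper), one can verify that \(\phi_{0}(x)=2k^{2}\mathrm{cn}^{2}(x,k)\) satisfies the first-order invariant (9) with the stated \((b_{0},c_{0},d_{0})\), using the derivative formulas (11):

```python
import numpy as np
from scipy.special import ellipj

k = 0.8
x = np.linspace(-5.0, 5.0, 2001)
sn, cn, dn, _ = ellipj(x, k**2)        # SciPy's ellipj uses the parameter m = k**2

phi = 2 * k**2 * cn**2                 # normalized cnoidal wave (4)
dphi = -4 * k**2 * sn * cn * dn        # phi'(x) from the derivative formulas (11)

b0 = 4 * k**2 * (1 - k**2)
c0 = 4 * (2 * k**2 - 1)

# Invariant (9) with d0 = 0 should vanish identically.
residual = dphi**2 + 2 * phi**3 - c0 * phi**2 - 2 * b0 * phi
print(np.max(np.abs(residual)))        # zero to machine precision
```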
The cnoidal wave is the periodic potential with a single finite gap (the so-called _one-zone_ potential) [25] so that the Floquet spectrum consists of the single finite band \([\lambda_{1}(k),\lambda_{2}(k)]\) and the semi-infinite band \([\lambda_{3}(k),\infty)\). As is well-known (see [24, p. 395]), the two linearly independent solutions of the Lame equation (12) for \(\lambda\neq\lambda_{1,2,3}(k)\) are given by the functions \[v_{\pm}(x)=\frac{H(x\pm\alpha)}{\Theta(x)}e^{\mp xZ(\alpha)}, \tag{13}\] where \(\alpha\in\mathbb{C}\) is found from \(\lambda\in\mathbb{R}\) by using the characteristic equation \(\eta=k^{2}+\mathrm{dn}^{2}(\alpha,k)\) and the Jacobi zeta function is \(Z(\alpha):=\frac{\Theta^{\prime}(\alpha)}{\Theta(\alpha)}=Z(\varphi_{\alpha},k)\) with \(\varphi_{\alpha}=\mathrm{am}(\alpha,k)\)[21, 144.01], see Table 1. Since \(\eta=\lambda+2k^{2}\), the characteristic equation can be written in the form \[\lambda=1-2k^{2}+k^{2}\mathrm{cn}^{2}(\alpha,k). \tag{14}\] The following proposition clarifies how \(\alpha\) is defined from the characteristic equation (14) when \(\lambda\) is decreased from \(\lambda_{3}(k)\) to \(-\infty\). Figure 4 illustrates the path of \(\alpha\) in the complex plane. **Proposition 3**.: _Fix \(k\in(0,1)\). We have_ * \(\alpha=F(\varphi_{\alpha},k)\in[0,K(k)]\) _for_ \(\lambda\in[\lambda_{2}(k),\lambda_{3}(k)]\)_, where_ \(\varphi_{\alpha}\in[0,\frac{\pi}{2}]\) _is given by_ \[\sin\varphi_{\alpha}=\frac{\sqrt{1-k^{2}-\lambda}}{k}.\] (15) Figure 4. Left: Floquet spectrum with orange, blue, and green dots corresponding to \(\lambda_{3}(k)\), \(\lambda_{2}(k)\), and \(\lambda_{1}(k)\), respectively, for a fixed value of \(k\in(0,1)\). Right: The complex plane for the parameter \(\alpha\) indicating the path of \(\alpha\) corresponding to the path of \(\lambda\) in (14). * \(\alpha=K(k)+i\beta\) _with_ \(\beta=F(\varphi_{\beta},k^{\prime})\in[0,K^{\prime}(k)]\) _for_ \(\lambda\in[\lambda_{1}(k),\lambda_{2}(k)]\)_, where_ \(\varphi_{\beta}\in[0,\frac{\pi}{2}]\) _is given by_ \[\sin\varphi_{\beta}=\frac{\sqrt{1-2k^{2}-\lambda}}{\sqrt{(1-k^{2})(1-k^{2}- \lambda)}}.\] (16) * \(\alpha=K(k)+iK^{\prime}(k)+\gamma\) _with_ \(\gamma=F(\varphi_{\gamma},k)\in[0,K(k))\) _for_ \(\lambda\in(-\infty,\lambda_{1}(k)]\)_, where_ \(\varphi_{\gamma}\in[0,\frac{\pi}{2})\) _is given by_ \[\sin\varphi_{\gamma}=\frac{\sqrt{-k^{2}-\lambda}}{\sqrt{1-2k^{2}-\lambda}},\] (17) _where \(k^{\prime}=\sqrt{1-k^{2}}\) and \(K^{\prime}(k)=K(k^{\prime})\)._ Proof.: When \(\lambda\in[\lambda_{2}(k),\lambda_{3}(k)]\), it follows from (14) that \(\operatorname{cn}^{2}(\alpha,k)\in[0,1]\) and hence \(\alpha\in[0,K(k)]\bmod K(k)\). Solving (14) for \(\sin\varphi_{\alpha}=\operatorname{sn}(\alpha,k)\) using (10) yields (15). As \(\lambda\) is decreased from \(\lambda_{3}(k)\) to \(\lambda_{2}(k)\), \(\varphi_{\alpha}\) is monotonically increasing from \(0\) to \(\pi/2\) and so \(\alpha=F(\varphi_{\alpha},k)\) monotonically increases from \(0\) to \(K(k)\). See the orange and blue dots in Figure 4. When \(\lambda\in[\lambda_{1}(k),\lambda_{2}(k)]\), we use the special relations (see [22, 8.151 and 8.153]), \[\operatorname{cn}(K(k)+i\beta,k)=-k^{\prime}\frac{\operatorname{sn}(i\beta,k )}{\operatorname{dn}(i\beta,k)}=-ik^{\prime}\frac{\operatorname{sn}(\beta,k^{ \prime})}{\operatorname{dn}(\beta,k^{\prime})},\] where \(k^{\prime}:=\sqrt{1-k^{2}}\). 
The characteristic equation (14) is rewritten in the form \[\operatorname{sn}^{2}(\beta,k^{\prime})=\frac{1-2k^{2}-\lambda}{(1-k^{2})(1-k^{2}-\lambda)},\] from which it follows that \(\operatorname{sn}^{2}(\beta,k^{\prime})\in[0,1]\) and hence \(\beta\in[0,K(k^{\prime})]\bmod K(k^{\prime})\). Setting \(\sin\varphi_{\beta}=\operatorname{sn}(\beta,k^{\prime})\) yields (16). When \(\lambda\) is decreased from \(\lambda_{2}(k)\) to \(\lambda_{1}(k)\), then \(\varphi_{\beta}\) is monotone increasing and so is \(F(\varphi_{\beta},k^{\prime})\). Hence, \(\beta\) increases from \(0\) to \(K^{\prime}(k)\). See the blue and green dots in Figure 4. When \(\lambda\in(-\infty,\lambda_{1}(k)]\), we use the special relations (see [22, 8.151]), \[\operatorname{cn}(K(k)+iK^{\prime}(k)+\gamma,k)=-\frac{ik^{\prime}}{k\operatorname{cn}(\gamma,k)},\] and rewrite the characteristic equation (14) in the form \[\operatorname{cn}^{2}(\gamma,k)=\frac{1-k^{2}}{1-2k^{2}-\lambda},\] from which it follows that \(\operatorname{cn}^{2}(\gamma,k)\in[0,1]\) and hence \(\gamma\in[0,K(k))\bmod K(k)\). Setting \(\sin\varphi_{\gamma}=\operatorname{sn}(\gamma,k)\) and using (10) yields (17). When \(\lambda\) is decreased from \(\lambda_{1}(k)\) to \(-\infty\), then \(\varphi_{\gamma}\) is monotone increasing and so is \(F(\varphi_{\gamma},k)\). Hence, \(\gamma\) increases from \(0\) to \(K(k)\). See the green and black dots in Figure 4. ## 5. Time evolution of the eigenfunctions Let \(u(x,t)=\phi_{0}(x-c_{0}t)\) be the normalized cnoidal wave (4) and \(v(x,t)=v_{\pm}(x,t)\) be solutions of the system (2) and (3) such that \(v_{\pm}(x,0)=v_{\pm}(x)\) is given by (13). The time dependence of \(v_{\pm}(x,t)\) can be found by separation of variables: \[v_{\pm}(x,t)=\frac{H(x-c_{0}t\pm\alpha)}{\Theta(x-c_{0}t)}e^{\mp(x-c_{0}t)Z(\alpha)\mp t\omega(\alpha)}, \tag{18}\] where \(\omega(\alpha)\) is to be found. After substituting (18) into (3) and dividing by \(v_{\pm}(x,t)\), we obtain \[\omega(\alpha)=(c_{0}+4\lambda-2\phi_{0}(x))\left[Z(\alpha)\pm Z(x)\mp\frac{H^{\prime}(x\pm\alpha)}{H(x\pm\alpha)}\right]\mp\phi_{0}^{\prime}(x), \tag{19}\] where \(x\) stands again for \(x-c_{0}t\). Equation (19) holds for every \(x\in\mathbb{R}\) due to the compatibility of the system (2) and (3). Hence, we obtain \(\omega(\alpha)\) by substituting \(c_{0}=4(2k^{2}-1)\) and evaluating (19) at \(x=0\): \[\omega(\alpha)=4(\lambda+k^{2}-1)\left[\frac{\Theta^{\prime}(\alpha)}{\Theta(\alpha)}-\frac{H^{\prime}(\alpha)}{H(\alpha)}\right], \tag{20}\] where we have used the parity properties [22, 8.192]: \[H(-x)=-H(x)\qquad\text{and}\qquad\Theta(-x)=\Theta(x).\] The following proposition ensures that \(\omega(\alpha)\) is real when \(\lambda\) is taken either in the semi-infinite gap \((-\infty,\lambda_{1}(k))\) or in the finite gap \((\lambda_{2}(k),\lambda_{3}(k))\). **Proposition 4**.: _Fix \(k\in(0,1)\).
Then, \(\omega(\alpha)\in\mathbb{R}\) if \(\lambda\in(-\infty,\lambda_{1}(k))\cup(\lambda_{2}(k),\lambda_{3}(k))\) and \(\omega(\alpha)\in i\mathbb{R}\) if \(\lambda\in[\lambda_{1}(k),\lambda_{2}(k)]\)._ Proof.: We recall the logarithmic derivatives of the Jacobi theta functions [22, 8.199(3)]: \[\frac{H^{\prime}(x)}{H(x)}=\frac{\pi}{2K(k)}\left[\cot\left(\frac{\pi x}{2K(k)}\right)+4\sin\left(\frac{\pi x}{K(k)}\right)\sum_{n=1}^{\infty}\frac{q^{2n}}{1-2q^{2n}\cos\left(\frac{\pi x}{K(k)}\right)+q^{4n}}\right],\] \[\frac{H^{\prime}_{1}(x)}{H_{1}(x)}=-\frac{\pi}{2K(k)}\left[\tan\left(\frac{\pi x}{2K(k)}\right)+4\sin\left(\frac{\pi x}{K(k)}\right)\sum_{n=1}^{\infty}\frac{q^{2n}}{1+2q^{2n}\cos\left(\frac{\pi x}{K(k)}\right)+q^{4n}}\right],\] \[\frac{\Theta^{\prime}_{1}(x)}{\Theta_{1}(x)}=-\frac{2\pi}{K(k)}\sin\left(\frac{\pi x}{K(k)}\right)\sum_{n=1}^{\infty}\frac{q^{2n-1}}{1+2q^{2n}\cos\left(\frac{\pi x}{K(k)}\right)+q^{4n-2}},\] \[\frac{\Theta^{\prime}(x)}{\Theta(x)}=\frac{2\pi}{K(k)}\sin\left(\frac{\pi x}{K(k)}\right)\sum_{n=1}^{\infty}\frac{q^{2n-1}}{1-2q^{2n}\cos\left(\frac{\pi x}{K(k)}\right)+q^{4n-2}},\] where \(q:=e^{-\frac{\pi K^{\prime}(k)}{K(k)}}\) is the Jacobi nome, see Table 1. If \(\lambda\in[\lambda_{2}(k),\lambda_{3}(k)]\), then \(\alpha=F(\varphi_{\alpha},k)\in[0,K(k)]\) by Proposition 3 and (20) returns real \(\omega(\alpha)\), where both logarithmic derivatives of the Jacobi theta functions are positive. If \(\lambda\in[\lambda_{1}(k),\lambda_{2}(k)]\), then \(\alpha=K(k)+i\beta\) with \(\beta=F(\varphi_{\beta},k^{\prime})\in[0,K^{\prime}(k)]\) by Proposition 3. The half-period translations [22, 8.183] yield \[H(K(k)+i\beta) =H_{1}(i\beta),\] \[\Theta(K(k)+i\beta) =\Theta_{1}(i\beta),\] so that the logarithmic derivatives in (20) are purely imaginary and \(\omega(K(k)+i\beta)\in i\mathbb{R}\). If \(\lambda\in(-\infty,\lambda_{1}(k)]\), then \(\alpha=K(k)+iK^{\prime}(k)+\gamma\) with \(\gamma=F(\varphi_{\gamma},k)\in[0,K(k))\) by Proposition 3. The half-period translations [22, 8.183] yield \[H(K(k)+iK^{\prime}(k)+\gamma)=e^{\frac{\pi K^{\prime}(k)}{4K(k)}}e^{-\frac{i\pi\gamma}{2K(k)}}\Theta_{1}(\gamma),\] \[\Theta(K(k)+iK^{\prime}(k)+\gamma)=e^{\frac{\pi K^{\prime}(k)}{4K(k)}}e^{-\frac{i\pi\gamma}{2K(k)}}H_{1}(\gamma).\] The purely imaginary part of the logarithmic derivatives cancels in (20) after the transformation and we obtain the real quantity \[\omega(K(k)+iK^{\prime}(k)+\gamma)=4(\lambda+k^{2}-1)\left[\frac{H^{\prime}_{1}(\gamma)}{H_{1}(\gamma)}-\frac{\Theta^{\prime}_{1}(\gamma)}{\Theta_{1}(\gamma)}\right], \tag{21}\] where both logarithmic derivatives are negative.
However, if \(\lambda_{0}\in(\lambda_{2}(k),\lambda_{3}(k))\) is in the finite gap, Sturm's nodal theorem implies that \(v_{\pm}(x,t)\) have exactly one zero on the fundamental period of \(\phi_{0}\) for every \(t\in\mathbb{R}\). We will show that this technical obstacle can be overcome with the translation of the new solution \(\hat{u}(x,t)\) with respect to a half-period in the complex plane of \(x\). The following proposition gives an important relation between the Jacobi cnoidal function and the Jacobi theta function. **Proposition 5**.: _For every \(k\in(0,1)\), we have_ \[k^{2}\mathrm{cn}^{2}(x,k)=k^{2}-1+\frac{E(k)}{K(k)}+\partial_{x}^{2}\log\Theta(x). \tag{23}\] Proof.: From [23, 6.6.9] we have that \[P\left(\frac{x}{\sqrt{e_{1}-e_{3}}}\right)=c_{1}-\partial_{z}^{2}\log\theta_{1}\left(\frac{\pi x}{2K(k)}\right), \tag{24}\] where \(c_{1}\) is a specific constant to be determined and \(P(z)\) is Weierstrass' elliptic function that satisfies \[[P^{\prime}(z)]^{2}=4(P(z)-e_{1})(P(z)-e_{2})(P(z)-e_{3}),\] with three turning points \(e_{3}<e_{2}<e_{1}\) such that \(e_{1}+e_{2}+e_{3}=0\). As is well known (see [22, 8.169]), \(P(z)\) is related to the Jacobi elliptic functions by \[P\left(\frac{x}{\sqrt{e_{1}-e_{3}}}\right) =e_{3}+\frac{e_{1}-e_{3}}{\mathrm{sn}^{2}(x,k)}\] \[=e_{3}+(e_{1}-e_{3})k^{2}\mathrm{sn}^{2}(x+iK^{\prime}(k),k)\] \[=e_{3}+(e_{2}-e_{3})\mathrm{sn}^{2}(x+iK^{\prime}(k),k)\] \[=e_{2}-(e_{2}-e_{3})\mathrm{cn}^{2}(x+iK^{\prime}(k),k),\] where we have used the property \(\mathrm{sn}(x+iK^{\prime}(k),k)=\frac{1}{k\,\mathrm{sn}(x,k)}\) [22, 8.151], the definition \[k^{2}=\frac{e_{2}-e_{3}}{e_{1}-e_{3}},\] and the first relation in (10). Thus, we obtain, due to the relation (24) that \[k^{2}\mathrm{cn}^{2}(x,k) =\frac{e_{2}-P\left(\frac{x-iK^{\prime}(k)}{\sqrt{e_{1}-e_{3}}}\right)}{e_{1}-e_{3}} \tag{25}\] \[=\frac{e_{2}-c_{1}}{e_{1}-e_{3}}+\partial_{x}^{2}\log\theta_{1}\left(\frac{\pi(x-iK^{\prime}(k))}{2K(k)}\right)\] \[=\frac{e_{2}-c_{1}}{e_{1}-e_{3}}+\partial_{x}^{2}\log\theta_{4}\left(\frac{\pi x}{2K(k)}\right)\] \[=\frac{e_{2}-c_{1}}{e_{1}-e_{3}}+\partial_{x}^{2}\log\Theta(x),\] where we have used the half-period translation [22, 8.183]: \[\theta_{1}\left(u-\frac{i\pi K^{\prime}(k)}{2K(k)}\right)=-ie^{\frac{\pi K^{\prime}(k)}{4K(k)}}e^{iu}\theta_{4}(u)\] and \(\partial_{x}^{2}\log e^{c_{2}+c_{3}x}=0\) for every \(c_{2},c_{3}\in\mathbb{C}\). To find the specific constant \(\frac{e_{2}-c_{1}}{e_{1}-e_{3}}\), we evaluate the relation (25) at \(x=0\): \[\frac{e_{2}-c_{1}}{e_{1}-e_{3}} =k^{2}-\frac{\Theta^{\prime\prime}(0)}{\Theta(0)}\] \[=k^{2}-1+\frac{E(k)}{K(k)},\] where we have used [22, 8.196]. This yields (23). The following two theorems present the construction of bright and dark breathers in the form (5) with either (6) or (7). These two theorems contribute to the main result of this work.
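Before stating the two theorems, the identity (23) admits a quick numerical sanity check by evaluating \(\Theta\) from its Fourier series (recorded in Section 7.2 below) with the nome \(q=e^{-\pi K^{\prime}(k)/K(k)}\). A minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.special import ellipj, ellipk, ellipe

k = 0.8
m = k**2
K, Kp, E = ellipk(m), ellipk(1.0 - m), ellipe(m)
q = np.exp(-np.pi * Kp / K)               # Jacobi nome

def theta(x, nterms=30):
    # Theta(x) = 1 + 2 sum_{n>=1} (-1)^n q^{n^2} cos(n pi x / K)
    n = np.arange(1, nterms + 1)
    return 1.0 + 2.0 * np.sum((-1.0)**n * q**(n**2)
                              * np.cos(np.outer(x, n) * np.pi / K), axis=1)

x = np.linspace(-2.0, 2.0, 4001)
h = x[1] - x[0]
logT = np.log(theta(x))
d2 = (logT[2:] - 2.0 * logT[1:-1] + logT[:-2]) / h**2  # (log Theta)''

_, cn, _, _ = ellipj(x[1:-1], m)
print(np.max(np.abs(m * cn**2 - (m - 1.0 + E / K + d2))))  # near zero
```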
**Theorem 1**.: _There exists an exact solution to the KdV equation (1) in the form (5) with (6), where \(x_{0}\in\mathbb{R}\) is arbitrary and where \(\alpha_{b}\in(0,K(k))\), \(\kappa_{b}>0\), and \(c_{b}>c_{0}\) are uniquely defined from \(\lambda\in(-\infty,\lambda_{1}(k))\) by_ \[\alpha_{b} =F(\varphi_{\gamma},k), \tag{26}\] \[\kappa_{b} =\frac{\sqrt{1-\lambda-k^{2}}\sqrt{-\lambda-k^{2}}}{\sqrt{1-2k^{2}-\lambda}}-Z(\varphi_{\gamma},k),\] (27) \[c_{b} =c_{0}+\frac{4\sqrt{1-\lambda-2k^{2}}\sqrt{1-\lambda-k^{2}}\sqrt{-\lambda-k^{2}}}{\kappa_{b}}, \tag{28}\] _with \(\varphi_{\gamma}\in(0,\frac{\pi}{2})\) being found from_ \[\sin\varphi_{\gamma}=\frac{\sqrt{-\lambda-k^{2}}}{\sqrt{1-2k^{2}-\lambda}}. \tag{29}\] Proof.: Consider a linear combination of the two solutions to the linear system (2) and (3) in the form (18) with \(\alpha=K(k)+iK^{\prime}(k)+\gamma\) and \(\gamma=F(\varphi_{\gamma},k)\in(0,K(k))\): \[v_{0}(x,t)=c_{+}\frac{H(x-c_{0}t+\alpha)}{\Theta(x-c_{0}t)}e^{-(x-c_{0}t)Z(\alpha)-\omega(\alpha)t}+c_{-}\frac{H(x-c_{0}t-\alpha)}{\Theta(x-c_{0}t)}e^{+(x-c_{0}t)Z(\alpha)+\omega(\alpha)t}, \tag{30}\] where \((c_{+},c_{-})\) are arbitrary constants. By using the half-period translations of the Jacobi theta functions [22, 8.183], we obtain for \(\alpha=K(k)+iK^{\prime}(k)+\gamma\): \[H(x+\alpha) =e^{\frac{\pi K^{\prime}(k)}{4K(k)}-\frac{i\pi(x+\gamma)}{2K(k)}}\Theta(x+K(k)+\gamma),\] \[H(x-\alpha) =-e^{\frac{\pi K^{\prime}(k)}{4K(k)}+\frac{i\pi(x-\gamma)}{2K(k)}}\Theta(x+K(k)-\gamma),\] and \[Z(\alpha)=\frac{H_{1}^{\prime}(\gamma)}{H_{1}(\gamma)}-\frac{i\pi}{2K(k)}.\] Substituting these expressions into (30) cancels the \(x\)-dependent complex phases. Anticipating (22), we set \[c_{+}=ce^{-(K(k)+x_{0})\frac{H_{1}^{\prime}(\gamma)}{H_{1}(\gamma)}},\quad c_{-}=-ce^{(K(k)+x_{0})\frac{H_{1}^{\prime}(\gamma)}{H_{1}(\gamma)}}\] with arbitrary parameters \(c,x_{0}\in\mathbb{R}\), from which the constant \(c\) cancels out due to the second logarithmic derivative. Using \(c_{\pm}\) in (30), inserting \(v_{0}\) into (22), and simplifying with the help of (23), we obtain a new solution in the final form \(u(x,t):=\hat{u}(x-K(k),t)\) where \(u(x,t)\) is given by (5) with \(\tau(x,t)\) given by (6) with the following parameters: \(\alpha_{b}:=\gamma\in(0,K(k))\), \(\kappa_{b}:=-\frac{H_{1}^{\prime}(\gamma)}{H_{1}(\gamma)}>0\), and \[c_{b} :=c_{0}-\omega(K(k)+iK^{\prime}(k)+\gamma)\frac{H_{1}(\gamma)}{H_{1}^{\prime}(\gamma)}\] \[=4(k^{2}-\lambda)+4(\lambda+k^{2}-1)\frac{\Theta_{1}^{\prime}(\gamma)H_{1}(\gamma)}{\Theta_{1}(\gamma)H_{1}^{\prime}(\gamma)},\] where we have used (21). By using the following identities [21, 1053.02] \[\frac{H_{1}^{\prime}(\gamma)}{H_{1}(\gamma)}=-\frac{\mathrm{sn}(\gamma,k)\mathrm{dn}(\gamma,k)}{\mathrm{cn}(\gamma,k)}+Z(\gamma),\] \[\frac{\Theta_{1}^{\prime}(\gamma)}{\Theta_{1}(\gamma)}=-\frac{k^{2}\mathrm{sn}(\gamma,k)\mathrm{cn}(\gamma,k)}{\mathrm{dn}(\gamma,k)}+Z(\gamma),\] and the relations \(Z(\gamma)=Z(\varphi_{\gamma},k)\), \[\mathrm{sn}(\gamma,k)=\sin(\varphi_{\gamma})=\frac{\sqrt{-\lambda-k^{2}}}{\sqrt{1-2k^{2}-\lambda}},\quad\mathrm{cn}(\gamma,k)=\cos(\varphi_{\gamma})=\frac{\sqrt{1-k^{2}}}{\sqrt{1-2k^{2}-\lambda}},\] and \[\mathrm{dn}(\gamma,k)=\frac{\sqrt{1-k^{2}}\sqrt{1-\lambda-k^{2}}}{\sqrt{1-2k^{2}-\lambda}},\] we express the parameters \(\alpha_{b}\), \(\kappa_{b}\), and \(c_{b}\) in terms of incomplete elliptic integrals in (26), (27), and (28). Since \(\kappa_{b}>0\), it follows that \(c_{b}>c_{0}\).
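For reference, the parameters (26)-(28) can be evaluated directly from \(\lambda\) and \(k\) with standard incomplete elliptic integrals, using the representation \(Z(\varphi,k)=E(\varphi,k)-E(k)F(\varphi,k)/K(k)\) of the Jacobi zeta function. A minimal sketch (assuming SciPy, which again parametrizes these integrals by \(m=k^{2}\)):

```python
import numpy as np
from scipy.special import ellipk, ellipe, ellipkinc, ellipeinc

def bright_parameters(lam, k):
    """Bright-breather parameters for lam < lambda_1(k) = -k^2."""
    m = k**2
    K, E = ellipk(m), ellipe(m)
    phi = np.arcsin(np.sqrt(-lam - m) / np.sqrt(1.0 - 2.0*m - lam))   # (29)
    F_phi = ellipkinc(phi, m)
    Z_phi = ellipeinc(phi, m) - (E / K) * F_phi      # Jacobi zeta Z(phi, k)
    alpha_b = F_phi                                                    # (26)
    kappa_b = (np.sqrt(1.0 - lam - m) * np.sqrt(-lam - m)
               / np.sqrt(1.0 - 2.0*m - lam)) - Z_phi                   # (27)
    c0 = 4.0 * (2.0*m - 1.0)
    c_b = c0 + 4.0 * (np.sqrt(1.0 - lam - 2.0*m) * np.sqrt(1.0 - lam - m)
                      * np.sqrt(-lam - m)) / kappa_b                   # (28)
    return alpha_b, kappa_b, c_b

print(bright_parameters(-2.0, 0.8))  # kappa_b > 0 and c_b > c0 = 1.12
```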
**Remark 1**.: _The solution \(u(x,t)\) obtained in the proof of Theorem 1 is the half-period translation along the real axis of the solution \(\hat{u}(x,t)\) defined by (22)._ **Remark 2**.: _Since \(\kappa_{b}>0\), it follows from (5), (6), and (23) that_ \[u(x,t)\to 2k^{2}\mathrm{cn}^{2}(x-c_{0}t\pm\alpha_{b},k)\quad\mathrm{as}\;\;x-c_{b}t\to\pm\infty.\] _A suitably normalized phase shift of the background cnoidal wave can be written in the form:_ \[\Delta_{b}:=-\frac{2\pi\alpha_{b}}{K(k)}=-\frac{2\pi F(\varphi_{\gamma},k)}{K(k)}\in(-2\pi,0).\] _When \(\Delta_{b}\in(-\pi,0)\), the normalized phase shift is negative. When \(\Delta_{b}\in(-2\pi,-\pi]\), the normalized phase shift is considered to be positive by a period translation to \(\Delta_{b}+2\pi\in(0,\pi]\)._ **Theorem 2**.: _There exists an exact solution to the KdV equation (1) in the form (5) with (7), where \(x_{0}\in\mathbb{R}\) is arbitrary and where \(\alpha_{d}\in(0,K(k))\), \(\kappa_{d}>0\), and \(c_{d}<c_{0}\) are uniquely defined from \(\lambda\in(\lambda_{2}(k),\lambda_{3}(k))\) by_ \[\alpha_{d} =F(\varphi_{\alpha},k), \tag{31}\] \[\kappa_{d} =Z(\varphi_{\alpha},k),\] (32) \[c_{d} =c_{0}-\frac{4\sqrt{(k^{2}+\lambda)(\lambda-1+2k^{2})(1-k^{2}-\lambda)}}{\kappa_{d}}, \tag{33}\] _with \(\varphi_{\alpha}\in(0,\frac{\pi}{2})\) being found from_ \[\sin\varphi_{\alpha}=\frac{\sqrt{1-k^{2}-\lambda}}{k}. \tag{34}\] Proof.: When \(\lambda\in(\lambda_{2}(k),\lambda_{3}(k))\), \(\alpha=F(\varphi_{\alpha},k)\in(0,K(k))\), \(\omega(\alpha)\) and \(Z(\alpha)=Z(\varphi_{\alpha},k)\) are real by Propositions 3 and 4. However, the functions \(H(x\pm\alpha)\) change sign so that we should express them in terms of the functions \(\Theta(x\pm\alpha)\) after complex translation of phases. This is achieved by the half-period translations [22, 8.183]: \[H(x+\alpha) =ie^{-\frac{\pi K^{\prime}(k)}{4K(k)}-\frac{i\pi(x+\alpha)}{2K(k)}}\Theta(x+\alpha-iK^{\prime}(k)),\] \[H(x-\alpha) =ie^{-\frac{\pi K^{\prime}(k)}{4K(k)}-\frac{i\pi(x-\alpha)}{2K(k)}}\Theta(x-\alpha-iK^{\prime}(k)).\] The \(x\)-dependent complex phase is now a multiplier in the linear superposition (30) which vanishes in the result due to the second logarithmic derivative. By using (22) and (23), we set \[c_{+}=ce^{-(x_{0}-iK^{\prime}(k))Z(\alpha)+\frac{i\pi\alpha}{2K(k)}},\quad c_{-}=ce^{(x_{0}-iK^{\prime}(k))Z(\alpha)-\frac{i\pi\alpha}{2K(k)}},\] and obtain a new solution in the final form \(u(x,t):=\hat{u}(x+iK^{\prime}(k),t)\) with the same \(u(x,t)\) as in (5) and with \(\tau(x,t)\) given by (7) with the following parameters: \(\alpha_{d}:=\alpha\in(0,K(k))\), \(\kappa_{d}:=Z(\alpha)>0\), and \[c_{d} =c_{0}-\frac{\omega(\alpha)}{Z(\alpha)}\] \[=4(k^{2}-\lambda)+4(\lambda+k^{2}-1)\frac{\Theta(\alpha)H^{\prime}(\alpha)}{\Theta^{\prime}(\alpha)H(\alpha)},\] where we have used (20). Using the following identities [21, 1053.02] \[\frac{H^{\prime}(\alpha)}{H(\alpha)} =\frac{\text{cn}(\alpha,k)\text{dn}(\alpha,k)}{\text{sn}(\alpha,k)}+Z(\alpha),\] \[\frac{\Theta^{\prime}(\alpha)}{\Theta(\alpha)} =Z(\alpha),\] and the relations \(Z(\alpha)=Z(\varphi_{\alpha},k)\), \[\text{sn}(\alpha,k)=\sin(\varphi_{\alpha})=\frac{\sqrt{1-\lambda-k^{2}}}{k},\quad\text{cn}(\alpha,k)=\cos(\varphi_{\alpha})=\frac{\sqrt{\lambda-1+2k^{2}}}{k},\] and \(\mathrm{dn}(\alpha,k)=\sqrt{\lambda+k^{2}}\), we express the parameters \(\alpha_{d}\), \(\kappa_{d}\), and \(c_{d}\) in terms of incomplete elliptic integrals as (31), (32), and (33). Since \(\kappa_{d}>0\), we have \(c_{d}<c_{0}\).
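A companion sketch (same assumptions as above) for the dark-breather parameters (31)-(33):

```python
import numpy as np
from scipy.special import ellipk, ellipe, ellipkinc, ellipeinc

def dark_parameters(lam, k):
    """Dark-breather parameters for lambda_2(k) < lam < lambda_3(k)."""
    m = k**2
    K, E = ellipk(m), ellipe(m)
    phi = np.arcsin(np.sqrt(1.0 - m - lam) / k)                       # (34)
    alpha_d = ellipkinc(phi, m)                                       # (31)
    kappa_d = ellipeinc(phi, m) - (E / K) * alpha_d     # (32), Z(phi, k)
    c0 = 4.0 * (2.0*m - 1.0)
    c_d = c0 - 4.0 * np.sqrt((m + lam) * (lam - 1.0 + 2.0*m)
                             * (1.0 - m - lam)) / kappa_d             # (33)
    return alpha_d, kappa_d, c_d

print(dark_parameters(0.3, 0.7))  # kappa_d > 0 and c_d < c0 = -0.08
```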
**Remark 3**.: _The solution \(u(x,t)\) obtained in the proof of Theorem 2 is the half-period translation along the imaginary axis of the solution \(\hat{u}(x,t)\) defined by (22)._ **Remark 4**.: _Since \(Z(\varphi_{\alpha},k)>0\), it follows from (5), (7), and (23) that_ \[u(x,t)\to 2k^{2}\mathrm{cn}^{2}(x-c_{0}t\mp\alpha_{d},k)\quad\mathrm{as}\;\;x-c_{d}t\to\pm\infty.\] _A suitably normalized phase shift of the background cnoidal wave can be written in the form:_ \[\Delta_{d}=\frac{2\pi\alpha_{d}}{K(k)}=\frac{2\pi F(\varphi_{\alpha},k)}{K(k)}\in(0,2\pi). \tag{35}\] _When \(\Delta_{d}\in(0,\pi]\), the normalized phase shift is positive. When \(\Delta_{d}\in(\pi,2\pi)\), the normalized phase shift is considered to be negative by translation to \(\Delta_{d}-2\pi\in(-\pi,0)\)._ ## 7. Properties of the bright breather Figure 5 plots \(\Delta_{b}\), \(\kappa_{b}\), and \(c_{b}\) for a bright breather as a function of the parameter \(\lambda\), see Theorem 1 and Remark 2. The phase shift \(\Delta_{b}\) increases monotonically while the inverse width \(\kappa_{b}\) and the breather speed \(c_{b}\) decrease monotonically as \(\lambda\) increases from \(-\infty\) towards the band edge \(\lambda_{1}(k)\), shown by the vertical dashed line. Since \(c_{0}=1.12\) for \(k=0.8\), we confirm that \(c_{b}>c_{0}\), which can also be observed in Figure 1. Figure 6 characterizes the family of bright breathers by plotting \(c_{b}-c_{0}\) and \(\kappa_{b}\) versus \(\Delta_{b}\) for three values of \(k\). Profiles of representative breather solutions shown in Figure 6 confirm why we call them bright breathers. Bright breathers are more localized, have larger amplitudes, and move faster for smaller (more negative) values of \(\Delta_{b}\) (smaller values of \(\lambda\)). For sufficiently large amplitude, \(\Delta_{b}\) falls below \(-\pi\) and the breather exhibits a positive phase shift \(\Delta_{b}+2\pi\in(0,\pi]\) (cf. Remark 2). In contrast, for sufficiently small-amplitude breathers, \(\Delta_{b}\in(-\pi,0)\) and the phase shift is negative. Figure 5. Normalized phase shift \(\Delta_{b}\) (left), inverse width \(\kappa_{b}\) (middle), and breather speed \(c_{b}\) (right) versus \(\lambda\) in \((-\infty,\lambda_{1}(k))\) for \(k=0.8\). The band edge \(\lambda_{1}(k)=-k^{2}\) is shown by the vertical dashed line. ### Asymptotic limits \(\lambda\to-\infty\) and \(\lambda\to\lambda_{1}(k)\) It follows from (29) that \[\varphi_{\gamma}=\left\{\begin{array}{ll}\frac{\pi}{2}-\frac{\sqrt{1-k^{2}}}{\sqrt{|\lambda|}}+\mathcal{O}(\sqrt{|\lambda|^{-3}})&\text{ as }\;\lambda\to-\infty,\\ \frac{\sqrt{|\lambda|-k^{2}}}{\sqrt{1-k^{2}}}+\mathcal{O}(\sqrt{(|\lambda|-k^{2})^{3}})&\text{ as }\;\lambda\to\lambda_{1}(k).\end{array}\right.\] We also use the following asymptotic expansions of the elliptic integrals: \[F(\varphi,k)=\varphi+\mathcal{O}(\varphi^{3}),\quad E(\varphi,k)=\varphi+\mathcal{O}(\varphi^{3}),\quad\text{as }\;\varphi\to 0\] and \[F(\varphi,k)=K(k)+\mathcal{O}(\tfrac{1}{2}\pi-\varphi),\quad E(\varphi,k)=E(k)+\mathcal{O}(\tfrac{1}{2}\pi-\varphi),\quad\text{as }\;\varphi\to\frac{\pi}{2}.\] The itemized list below summarizes the asymptotic results, where we use the asymptotic equivalence for the leading-order terms and neglect writing the remainder terms. Figure 6. Left top (bottom): dependence of \(c_{b}-c_{0}\) (\(\kappa_{b}\)) versus \(\Delta_{b}\) for several values of \(k\). Right: representative bright breather solutions.
Representative solutions are marked on the left panel with a unique colored symbol. * The asymptotic values of the normalized phase shift \(\Delta_{b}\) are \[\Delta_{b}\sim\left\{\begin{array}{ll}-2\pi+\dfrac{2\pi}{\sqrt{|\lambda|}K(k)}&\quad\mbox{as}\;\;\lambda\to-\infty,\\ -\dfrac{2\pi\sqrt{|\lambda|-k^{2}}}{\sqrt{1-k^{2}}K(k)}&\quad\mbox{as}\;\;\lambda\to\lambda_{1}(k).\end{array}\right.\] Since \(\partial_{\varphi}F(\varphi,k)=(1-k^{2}\sin^{2}\varphi)^{-1/2}>0\) and \(\partial_{\lambda}\varphi_{\gamma}<0\), the normalized phase shift \(\Delta_{b}\) is a monotonically increasing function of \(\lambda\) from \(-2\pi\) to \(0\). This proves that the map \(\lambda\mapsto\Delta_{b}(\lambda)\) is one-to-one and onto from \((-\infty,\lambda_{1}(k))\) to \((-2\pi,0)\). * The asymptotic values for the inverse width \(\kappa_{b}\) are \[\kappa_{b}\sim\left\{\begin{array}{ll}\sqrt{|\lambda|}&\quad\mbox{as}\;\;\lambda\to-\infty,\\ \sqrt{\dfrac{|\lambda|-k^{2}}{1-k^{2}}}\dfrac{E(k)}{K(k)}&\quad\mbox{as}\;\;\lambda\to\lambda_{1}(k).\end{array}\right.\] The derivative is given by \[\partial_{\lambda}\kappa_{b} =-\dfrac{\sin\varphi_{\gamma}}{2\sqrt{1-\lambda-k^{2}}}\] \[\quad+\left(\sqrt{1-\lambda-k^{2}}\cos\varphi_{\gamma}-\sqrt{1-k^{2}\sin^{2}\varphi_{\gamma}}+\dfrac{E(k)}{K(k)\sqrt{1-k^{2}\sin^{2}\varphi_{\gamma}}}\right)\partial_{\lambda}\varphi_{\gamma}.\] Since the terms in parentheses are positive and \(\partial_{\lambda}\varphi_{\gamma}<0\), we have \(\partial_{\lambda}\kappa_{b}<0\) so that \(\kappa_{b}\) is a monotonically decreasing function of \(\lambda\). * The asymptotic values for the breather speed \(c_{b}\) are \[c_{b}\sim\left\{\begin{array}{ll}4|\lambda|&\quad\mbox{as}\;\;\lambda\to-\infty,\\ c_{0}+4(1-k^{2})\dfrac{K(k)}{E(k)}&\quad\mbox{as}\;\;\lambda\to\lambda_{1}(k).\end{array}\right.\] The breather speed \(c_{b}\) in (28) satisfies \(c_{b}>c_{0}\). Based on the graphs in Figure 5, we conjecture that the breather velocity \(c_{b}\) is a decreasing function of \(\lambda\). ### Asymptotic limits \(k\to 0\) and \(k\to 1\) In the limit \(k\to 0\), the background cnoidal wave \(\phi_{0}(x)=2k^{2}\mbox{cn}^{2}(x,k)\) vanishes since \(\Theta(x)\to 1\) as \(k\to 0\) whereas it follows from (27) and (28) that \[\kappa_{b}\to\sqrt{|\lambda|},\qquad c_{b}\to 4|\lambda|,\] since \(Z(\varphi_{\gamma},k)\to 0\) and \(c_{0}\to-4\) as \(k\to 0\). The breather solution (5) with (6) recovers the one-soliton solution \[u(x,t)\to 2|\lambda|\;{\rm sech}^{2}\left(\sqrt{|\lambda|}(x-4|\lambda|t+x_{0})\right),\quad k\to 0,\] for every \(\lambda\in(-\infty,0)\). In the limit \(k\to 1\), the background cnoidal wave \(\phi_{0}(x)=2k^{2}{\rm cn}^{2}(x,k)\) transforms into the normalized soliton \(\phi_{0}(x)\to 2\,{\rm sech}^{2}(x)\) and we will show that the breather solution (5) with (6) recovers the two-soliton solution. It follows from (27) and (28) that \[\kappa_{b}\to\sqrt{|\lambda|},\qquad c_{b}\to 4|\lambda|,\] since \(Z(\varphi_{\gamma},k)\to 0\) and \(c_{0}\to 4\) as \(k\to 1\). Furthermore, it follows from (29) that \(\varphi_{\gamma}\to\frac{\pi}{2}\) as \(k\to 1\) so that \(\alpha_{b}=F(\varphi_{\gamma},k)\to\infty\) as \(k\to 1\).
In order to regularize the solution, we use the translation invariance of the KdV equation, the \(2K(k)\)-periodicity of \(\Theta\), and define the half-period translation of (6) with the transformation \(x\to x-K(k)\), \(x_{0}\to x_{0}+K(k)\): \[\tau(x,t)=\Theta(x-c_{0}t+\alpha_{b}-K(k))e^{\kappa_{b}(x-c_{b}t+x_{0})}+\Theta(x-c_{0}t-\alpha_{b}+K(k))e^{-\kappa_{b}(x-c_{b}t+x_{0})}. \tag{36}\] Recalling that \(\alpha_{b}=F(\varphi_{\gamma},k)\), for each \(\lambda\in(-\infty,-1)\), let us define the phase parameter \(\delta_{b}\) by evaluating the limit [26, eq. (2.14)]: \[\delta_{b}:=\lim_{k\to 1}\left[K(k)-F(\varphi_{\gamma},k)\right]=\frac{1}{2}\log\left(\frac{\sqrt{-\lambda}+1}{\sqrt{-\lambda}-1}\right). \tag{37}\] It remains to deduce the asymptotic formula for \(\Theta\) as \(k\to 1\). We show that \[\Theta(x)\sim\sqrt{\frac{-2k^{\prime}\log k^{\prime}}{\pi}}\cosh(x),\qquad\mbox{as}\;\;k\to 1, \tag{38}\] by using the Poisson summation formula [27]: \[\Theta(x)=\sum_{n=-\infty}^{\infty}f(n)=\sum_{n=-\infty}^{\infty}\hat{f}(n), \tag{39}\] where \(\hat{f}(m)=\int_{-\infty}^{\infty}f(n)e^{-2\pi inm}dn\). Since \[\Theta(x)=1+2\sum_{n=1}^{\infty}(-1)^{n}q^{n^{2}}\cos\left(\frac{n\pi x}{K(k)}\right),\qquad q:=e^{-\frac{\pi K(k^{\prime})}{K(k)}},\] where \(k^{\prime}=\sqrt{1-k^{2}}\), we obtain from (39) that \[f(n)=q^{n^{2}}e^{in\pi(1+x/K(k))},\quad\hat{f}(n)=\sqrt{\frac{K(k)}{K(k^{\prime})}}(q^{\prime})^{(n-1/2-x/2K(k))^{2}}, \tag{40}\] where \(q^{\prime}:=e^{-\frac{\pi K(k)}{K(k^{\prime})}}\). As \(k\to 1\), we have \(k^{\prime}\to 0\) and \[\begin{array}{l}K(k)=-\log k^{\prime}+2\log 2+\mathcal{O}((k^{\prime})^{2}),\\ K(k^{\prime})=\frac{\pi}{2}+\frac{\pi}{8}k^{\prime 2}+\mathcal{O}((k^{\prime})^{4}),\\ q^{\prime}=\frac{1}{16}k^{\prime 2}+\frac{1}{32}k^{\prime 4}+\mathcal{O}((k^{\prime})^{6}).\end{array}\] These expansions simplify (40) to \[\hat{f}(n)=\sqrt{\frac{-2\log k^{\prime}}{\pi}}\left(\frac{k^{\prime}}{4}\right)^{\frac{(2n-1)^{2}}{2}}e^{(2n-1)x}\left(1+\frac{x^{2}-2\log 2}{\log k^{\prime}}+\cdots\right),\quad\text{as}\;\;k^{\prime}\to 0,\] for every fixed \(x\in\mathbb{R}\). Then, the rightmost summation in (39) yields the asymptotic expansion \(\Theta(x)=\hat{f}(0)+\hat{f}(1)+\cdots\) in the form (38). Using it in (36), we obtain the asymptotic expansion \[\tau(x,t)\sim\sqrt{\frac{-2k^{\prime}\log k^{\prime}}{\pi}}\Big{[}\cosh(\xi_{1}-\delta_{b})e^{\sqrt{|\lambda|}\xi_{2}}+\cosh(\xi_{1}+\delta_{b})e^{-\sqrt{|\lambda|}\xi_{2}}\Big{]}, \tag{41}\] where \(\xi_{1}=x-4t\) and \(\xi_{2}=x-4|\lambda|t+x_{0}\) for every \(\lambda\in(-\infty,-1)\). Using (41) with (37) in (5), we obtain the two-soliton solution in the form: \[u(x,t)=2\frac{e^{2\delta_{b}}(1-\sqrt{|\lambda|})^{2}+e^{-2\delta_{b}}(1+\sqrt{|\lambda|})^{2}+2\cosh(2\sqrt{|\lambda|}\xi_{2})+2|\lambda|\cosh(2\xi_{1})}{[e^{\sqrt{|\lambda|}\xi_{2}}\cosh(\xi_{1}-\delta_{b})+e^{-\sqrt{|\lambda|}\xi_{2}}\cosh(\xi_{1}+\delta_{b})]^{2}}.\] The two-soliton solution exhibits the asymptotic behavior \[u(x,t)\sim\;2\;\text{sech}^{2}\left(\xi_{1}\mp\delta_{b}\right)+2|\lambda|\;\text{sech}^{2}\left(\sqrt{|\lambda|}\xi_{2}\pm\delta_{b}\right),\quad\text{as}\;\;t\to\pm\infty.\] After the interaction, the slower soliton of amplitude \(2\) experiences the negative phase shift \(-2\delta_{b}\), whereas the faster soliton of amplitude \(2|\lambda|\) exhibits the positive phase shift \(2\delta_{b}/\sqrt{|\lambda|}\).
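The logarithmically slow rate of the limit (38) can be observed numerically. A minimal sketch (assuming SciPy) compares the Fourier series of \(\Theta\) with the asymptotic formula; the relative error decays only like \(1/\log k^{\prime}\):

```python
import numpy as np
from scipy.special import ellipk

k = 0.9999
kp = np.sqrt(1.0 - k**2)
m = k**2
K, Kp = ellipk(m), ellipk(1.0 - m)
q = np.exp(-np.pi * Kp / K)

def theta(x, nterms=20):
    n = np.arange(1, nterms + 1)
    return 1.0 + 2.0 * np.sum((-1.0)**n * q**(n**2) * np.cos(n * np.pi * x / K))

for x in (0.0, 0.5, 1.0):
    approx = np.sqrt(-2.0 * kp * np.log(kp) / np.pi) * np.cosh(x)
    print(x, theta(x) / approx)  # ratio -> 1 as k -> 1, but only logarithmically
```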
## 8. Properties of the dark breather Figure 7 plots \(\Delta_{d}\), \(\kappa_{d}\), and \(c_{d}\) for dark breathers as a function of the parameter \(\lambda\), see Theorem 2 and Remark 4. The phase shift \(\Delta_{d}\) is monotonically decreasing between the band edges \(\lambda_{2}(k)\) and \(\lambda_{3}(k)\), shown by the vertical dashed lines. The inverse width \(\kappa_{d}\) has a single maximum and vanishes at the band edges. The breather speed \(c_{d}\) is monotonically decreasing. Since \(c_{0}=-0.08\) for \(k=0.7\), we confirm that \(c_{d}<c_{0}\), which is also clear from Figure 2. Figure 8 characterizes the family of dark breathers by plotting \(c_{d}-c_{0}\) and \(\kappa_{d}\) versus \(\Delta_{d}\) for three values of \(k\). The profiles of breather solutions at \(t=0\) subject to the phase shift \(x_{0}=5\) confirm why we refer to them as dark breathers. In contrast to the bright breather case, dark breather solutions exhibit vanishing cnoidal wave modulations for both of the extreme phase shifts \(\Delta_{d}\to 0\) and \(\Delta_{d}\to 2\pi\), with the largest-amplitude breather occurring at an intermediate phase shift, which we will later identify by examining the inverse width \(\kappa_{d}\). Figure 8. Left top (bottom): dependence of \(c_{d}-c_{0}\) (\(\kappa_{d}\)) versus \(\Delta_{d}\) for several values of \(k\). Right: representative dark breather solutions. Representative solutions are marked on the left panel with a unique colored symbol. The dotted curve on the left panel corresponds to points of maximum \(\kappa_{d}\) with the greatest localization. ### Asymptotic limits \(\lambda\to\lambda_{2}(k)\) and \(\lambda\to\lambda_{3}(k)\) It follows from (34) that \[\varphi_{\alpha}=\left\{\begin{array}{ll}\frac{\pi}{2}-\frac{\sqrt{\lambda-\lambda_{2}(k)}}{k}+\mathcal{O}\left(\lambda-\lambda_{2}\right)&\quad\text{as}\ \,\,\lambda\to\lambda_{2}(k),\\ \frac{\sqrt{\lambda_{3}(k)-\lambda}}{k}+\mathcal{O}\left(\lambda_{3}-\lambda\right)&\quad\text{as}\ \,\,\lambda\to\lambda_{3}(k).\end{array}\right.\] The itemized list below summarizes the asymptotic results, where we use the asymptotic equivalence for the leading-order terms and neglect writing the remainder terms. * The asymptotic values of the normalized phase shift \(\Delta_{d}\) are \[\Delta_{d}=\left\{\begin{array}{ll}2\pi-\frac{2\pi}{K(k)}\sqrt{\frac{\lambda-\lambda_{2}(k)}{k^{2}(1-k^{2})}}&\quad\text{as}\ \,\,\lambda\to\lambda_{2}(k),\\ \frac{2\pi}{K(k)}\sqrt{\frac{\lambda_{3}(k)-\lambda}{k^{2}}}&\quad\text{as}\ \,\,\lambda\to\lambda_{3}(k).\end{array}\right.\] Since \[\partial_{\lambda}\Delta_{d}=\frac{2\pi}{K(k)}\partial_{\varphi_{\alpha}}F(\varphi_{\alpha},k)\partial_{\lambda}\varphi_{\alpha}\] with \(\partial_{\varphi}F(\varphi,k)>0\) and \(\partial_{\lambda}\varphi_{\alpha}<0\), the phase shift \(\Delta_{d}\) monotonically decreases from \(2\pi\) at \(\lambda=\lambda_{2}(k)\) to \(0\) at \(\lambda=\lambda_{3}(k)\). This proves that the map \(\lambda\mapsto\Delta_{d}(\lambda)\) is one-to-one and onto from \([\lambda_{2}(k),\lambda_{3}(k)]\) to \([0,2\pi]\).
* The asymptotic values of the inverse width \(\kappa_{d}\) are \[\kappa_{d}=\left\{\begin{array}{ll}\left(\frac{E(k)}{K(k)}-1+k^{2}\right)\sqrt{\frac{\lambda-\lambda_{2}(k)}{k^{2}(1-k^{2})}}&\quad\text{as}\;\;\lambda\to\lambda_{2}(k),\\ \left(1-\frac{E(k)}{K(k)}\right)\frac{\sqrt{\lambda_{3}(k)-\lambda}}{k}&\quad\text{as}\;\;\lambda\to\lambda_{3}(k).\end{array}\right.\] The inverse width \(\kappa_{d}=Z(\varphi_{\alpha},k)\) exhibits a maximum when [21, eq. 141.25] \[\sin\varphi_{\alpha}=\frac{1}{k}\sqrt{1-\frac{E(k)}{K(k)}}\quad\iff\quad\lambda=\lambda_{\max}(k):=\frac{E(k)}{K(k)}-k^{2}.\] The dark breather with this value of \(\lambda\) can be interpreted as the narrowest (strongest) modulation of the cnoidal wave. Plotting the behavior of \(\Delta_{\max}(k):=\Delta_{d}\) at \(\lambda=\lambda_{\max}(k)\) as a function of \(k\), we find that \[0<\Delta_{\max}(k)<\pi,\] with the upper limit reached as \(k\to 0\). The dotted curve in the left top panel of Figure 8 shows the graph of \[\left\{(\Delta_{\max}(k),c_{d}(\lambda_{\max}(k))-c_{0})\;\big{|}\;k\in(0,1)\right\}.\] Consequently, the most localized dark breather exhibits a positive phase shift. The phase shift is negative for \(\lambda\) near \(\lambda_{2}(k)\) since \(\Delta_{d}-2\pi\in(-\pi,0)\) (cf. Remark 4) and is positive for \(\lambda\) near \(\lambda_{3}(k)\) since \(\Delta_{d}\in(0,\pi)\). This partitions dark breathers into two branches: the slow (fast) branch for \(0<\Delta_{d}<\Delta_{\max}(k)<\pi\) (\(\Delta_{\max}(k)<\Delta_{d}<2\pi\)). The slow branch exhibits dark breathers with strictly positive phase shifts whose amplitudes increase with increasing phase shift. On the fast branch, dark breathers can have positive or negative phase shift depending on whether \(\Delta_{d}\) is less than or greater than \(\pi\), respectively. Also, an increase in phase shift corresponds to a decrease in amplitude. * The asymptotic values of the breather speed \(c_{d}\) are \[c_{d}=\left\{\begin{array}{ll}c_{0}-\dfrac{4k^{2}(1-k^{2})}{E(k)/K(k)-1+k^{2}}&\quad\text{as}\;\;\lambda\to\lambda_{2}(k),\\ c_{0}-\dfrac{4k^{2}}{1-E(k)/K(k)}&\quad\text{as}\;\;\lambda\to\lambda_{3}(k).\end{array}\right.\] Based on the graphs in Figure 7, we conjecture that the breather velocity \(c_{d}\) is a monotonically decreasing function of \(\lambda\). ### Asymptotic limit \(k\to 1\) We show similarly to Section 7.2 that the dark breather recovers the two-soliton solution in the limit \(k\to 1\). The only difference from the degeneration of the bright breather is that the spectral parameter \(\lambda\) is now defined in \((-1,0)\) rather than in \((-\infty,-1)\). By using (38) in (7), we obtain the asymptotic approximation \[\tau(x,t)\sim\sqrt{\dfrac{-2k^{\prime}\log k^{\prime}}{\pi}}\Big{[}\cosh(\xi_{1}+\delta_{d})e^{-\sqrt{|\lambda|}\xi_{2}}+\cosh(\xi_{1}-\delta_{d})e^{\sqrt{|\lambda|}\xi_{2}}\Big{]},\quad\text{as}\;\;k\to 1, \tag{42}\] where \(\xi_{1}=x-4t\) and \(\xi_{2}=x-4|\lambda|t+x_{0}\) for \(\lambda\in(-1,0)\) and we have used \(\kappa_{d}\to\sqrt{|\lambda|}\), \(c_{d}\to 4|\lambda|\), and the corresponding limiting phase \(\delta_{d}\) found from [26, eq. (2.7)]: \[\delta_{d}:=\lim_{k\to 1}F(\varphi_{\alpha},k)=\frac{1}{2}\log\left(\frac{1+\sqrt{|\lambda|}}{1-\sqrt{|\lambda|}}\right),\quad\lambda\in(-1,0). \tag{43}\]
Inserting (42) and (43) into (5) results in the two-soliton solution \[u(x,t)=2\frac{e^{2\delta_{d}}(1-\sqrt{|\lambda|})^{2}+e^{-2\delta_{d}}(1+\sqrt{|\lambda|})^{2}+2\cosh(2\sqrt{|\lambda|}\xi_{2})+2|\lambda|\cosh(2\xi_{1})}{[e^{-\sqrt{|\lambda|}\xi_{2}}\cosh(\xi_{1}+\delta_{d})+e^{\sqrt{|\lambda|}\xi_{2}}\cosh(\xi_{1}-\delta_{d})]^{2}},\] that exhibits the asymptotic behavior \[u(x,t)\sim 2\;\mathrm{sech}^{2}\left(\xi_{1}\pm\delta_{d}\right)+2|\lambda|\;\mathrm{sech}^{2}\left(\sqrt{|\lambda|}\xi_{2}\mp\delta_{d}\right),\quad t\to\pm\infty.\] After the interaction, the slower soliton of amplitude \(2|\lambda|\) experiences the negative phase shift \(-2\delta_{d}/\sqrt{|\lambda|}\) whereas the faster soliton of amplitude \(2\) exhibits the positive phase shift \(2\delta_{d}\). ### Asymptotic limit \(k\to 0\) We show that the dark breather as \(k\to 0\) can be approximated by a dark soliton solution of the nonlinear Schrödinger (NLS) equation. In the limit \(k\to 0\), the interval \([\lambda_{2}(k),\lambda_{3}(k)]\) shrinks to the point \(\lambda=1\) and the solution \(u(x,t)\) converges to the zero solution such that both the cnoidal wave and the dark breather vanish. For small \(k\), it is well-known (see, e.g., [28]) that the multiple scales expansion \[u(x,t)=2\mathrm{Re}\bigg{[}\epsilon\sqrt{\frac{\ell}{6}}A(\zeta,\tau)e^{i(\ell x-\omega t)}\] \[+\epsilon^{2}\frac{\ell}{6}\Big{(}\frac{1}{4}A(\zeta,\tau)^{2}e^{2i(\ell x-\omega t)}-\frac{1}{2}|A(\zeta,\tau)|^{2}\Big{)}+\mathcal{O}(\epsilon^{3})\bigg{]} \tag{44}\] leads to the following NLS equation for the slowly varying amplitude \(A(\zeta,\tau)\): \[iA_{\tau}-\frac{1}{2}A_{\zeta\zeta}+|A|^{2}A=0, \tag{45}\] where \(0<\epsilon\ll 1\) is the amplitude parameter, \(\ell>0\) is the carrier wavenumber, \(\omega=-\ell^{3}\) is the KdV linear dispersion relation, and \(\zeta=\frac{\epsilon}{\sqrt{6\ell}}(x+3\ell^{2}t)\) and \(\tau=\epsilon^{2}t\) are slow variables. The NLS equation (45) admits the plane wave solution \[A(\zeta,\tau)=e^{i(1+\frac{v^{2}}{2})\tau+iv\zeta+i\psi_{0}} \tag{46}\] for any \(v,\psi_{0}\in\mathbb{R}\). To determine \(\ell\) and \(\epsilon\), it is necessary to expand the cnoidal wave background of the dark breather solution for small elliptic modulus \(0<k\ll 1\): \[u(x,t) =2k^{2}\mathrm{cn}^{2}(x-c_{0}t,k)\] \[=2k^{2}\cos^{2}(x-c_{0}t)+\mathcal{O}(k^{4})\] \[=k^{2}+k^{2}\cos 2(x-c_{0}t)+\mathcal{O}(k^{4}),\] where \(c_{0}\to-4\) as \(k\to 0\). The background cnoidal wave's wavenumber \(Q\), frequency \(\Omega\), and mean value \(\overline{\phi}\) expand as \(k\to 0\) in the form: \[Q:=\frac{\pi}{K(k)}=2-\frac{k^{2}}{2}+\mathcal{O}(k^{4}),\] \[\Omega:=c_{0}Q=-8+18k^{2}+\mathcal{O}(k^{4}),\] \[\overline{\phi}:=\frac{1}{2K(k)}\int_{0}^{2K(k)}\phi_{0}(x)\,dx=k^{2}+\mathcal{O}(k^{4}).\] Comparing (44) with the asymptotic expansion for the background cnoidal wave, we find \(\epsilon=\frac{k^{2}\sqrt{3}}{2}\) and \(\ell=2\), confirming that the limit \(k\to 0\) coincides with the NLS approximation.
Since the expansion (44) does not incorporate an \(\mathcal{O}(\epsilon)\) mean term, the Galilean transformation of the KdV equation can be used in (44) and (46) to obtain \[u(x,t)\to k^{2}+u(x-6k^{2}t,t)=k^{2}+k^{2}\cos(\Lambda x-\Upsilon t+\psi_{0})+\mathcal{O}(k^{4}), \tag{47}\] where \[\begin{array}{l}\Lambda=2+\frac{v}{4}k^{2}+\mathcal{O}(k^{4}),\\ \Upsilon=-8+(12-3v)k^{2}+\mathcal{O}(k^{4}).\end{array}\] The choice \(v=-2\) asymptotically matches \(\Lambda\) and \(\Upsilon\) in (47) with \(Q\) and \(\Omega\). The NLS equation (45) admits two families of dark soliton solutions [29] \[A(\zeta,\tau)=\Big{(}\cos\beta\pm i\sin\beta\tanh\big{(}\sin\beta(\zeta-c_{\pm}\tau)\big{)}\Big{)}e^{i(-2\zeta+3\tau+\psi_{0})}, \tag{48}\] where \(\pm\) corresponds to the fast \((+)\) and slow \((-)\) solution branches with velocities \(c_{\pm}=2\pm\cos\beta\), phase shift parameter \(\beta\in[0,\pi/2]\), and arbitrary phase \(\psi_{0}\in\mathbb{R}\). Since \[A(\zeta,\tau)\to e^{i(-2\zeta+3\tau+\psi_{0}\mp\beta)}\quad\text{as}\;\;\zeta-c_{\pm}\tau\to-\infty\] and \[A(\zeta,\tau)\to e^{i(-2\zeta+3\tau+\psi_{0}\pm\beta)}\quad\text{as}\;\;\zeta-c_{\pm}\tau\to\infty,\] the normalized phase shift is \(\Delta_{\pm}:=\pm 2\beta\) for the fast \((-)\) and slow \((+)\) branch of solutions. Applying the Galilean transformation \(u(x,t)\to k^{2}+u(x-6k^{2}t,t)\) to Eqs. (44) and (48), the dark soliton velocity-phase shift relation \(c_{\pm}\) is \[c_{\pm}=-12+\bigg{(}12+3\operatorname{sgn}(\Delta_{\pm})\cos\Big{(}\frac{\Delta_{\pm}}{2}\Big{)}\bigg{)}k^{2},\quad\Delta_{+}\in(0,\pi],\quad\Delta_{-}\in(-\pi,0). \tag{49}\] From Eq. (48), the inverse width parameter \(\kappa_{\pm}:=\frac{\epsilon\sin\beta}{\sqrt{12}}\) is given by \[\kappa_{\pm}=\frac{1}{4}\sin\left(\frac{|\Delta_{\pm}|}{2}\right)k^{2}. \tag{50}\] In order to compare the dispersion relation given by (49) and (50) with the dark breather dispersion relation given by (32), (33), (35), we expand the spectral parameter \(\lambda\) as \(\lambda=1-k^{2}(1+\mu)\) with new scaled spectral parameter \(\mu\), ensuring a distinct breather for each \(\mu\in(0,1)\) as \(k\to 0\). The small \(k\) expansion of the dark breather dispersion relation (31), (32), (33), and (35) is given by \[\begin{array}{l}\alpha_{d}=\arcsin(\sqrt{\mu})+\mathcal{O}(k^{2}),\\ \kappa_{d}=\frac{1}{2}\sqrt{\mu(1-\mu)}k^{2}+\mathcal{O}(k^{4}),\\ c_{d}=-12+(9+6\mu)k^{2}+\mathcal{O}(k^{4}),\\ \Delta_{d}=4\arcsin(\sqrt{\mu})+\mathcal{O}(k^{2}),\end{array}\] for \(\mu\in[0,1]\). Substituting \(\mu=\sin^{2}\left(\frac{\Delta_{d}}{4}\right)\) yields \[\begin{split} c_{d}&=-12+\left[12-3\cos\left(\frac{\Delta_{d}}{2}\right)\right]k^{2}+\mathcal{O}(k^{4}),\\ \kappa_{d}&=\frac{1}{4}\sin\left(\frac{|\Delta_{d}|}{2}\right)k^{2}+\mathcal{O}(k^{4}).\end{split} \tag{51}\] By identifying certain values of the phase shift \(\Delta_{d}\) with the slow \((+)\) and fast \((-)\) branches of the NLS dark soliton solution (48), as given by \[\Delta_{-}=\Delta_{d}, \Delta_{d}\in(0,\pi],\] \[\Delta_{+}=\Delta_{d}-2\pi, \Delta_{d}\in(\pi,2\pi),\] we find that Eq. (51) coincides with Eqs. (49) and (50) up to and including the \(\mathcal{O}(k^{2})\) terms. The fast and slow branches of the NLS dark soliton (48) coincide with the limiting fast and slow branches of the dark breather. The black soliton solution (48) with \(\beta=\pi/2\) corresponds to the dark breather of maximum localization in which \(\Delta_{\max}(k)\sim\pi\) as \(k\to 0\).
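The matching between the exact dispersion relation and its NLS limit (51) is easy to confirm numerically; a minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.special import ellipk, ellipe, ellipkinc, ellipeinc

k = 0.15
m = k**2
K, E = ellipk(m), ellipe(m)
mu = 0.4                        # scaled spectral parameter
lam = 1.0 - m * (1.0 + mu)      # lambda = 1 - k^2 (1 + mu)

phi = np.arcsin(np.sqrt(1.0 - m - lam) / k)
alpha_d = ellipkinc(phi, m)
kappa_d = ellipeinc(phi, m) - (E / K) * alpha_d
c0 = 4.0 * (2.0*m - 1.0)
c_d = c0 - 4.0 * np.sqrt((m + lam) * (lam - 1.0 + 2.0*m) * (1.0 - m - lam)) / kappa_d
Delta_d = 2.0 * np.pi * alpha_d / K

# NLS predictions from (51)
print(c_d, -12.0 + (12.0 - 3.0 * np.cos(Delta_d / 2.0)) * m)  # agree to O(k^4)
print(kappa_d, 0.25 * np.sin(Delta_d / 2.0) * m)
```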
## 9. Conclusion A comprehensive characterization of explicit solutions of the KdV equation, representing the nonlinear superposition of a soliton and cnoidal wave, has been obtained using the Darboux transformation. These solutions are breathers, manifesting as nonlinear wavepackets propagating with constant velocity on a cnoidal, periodic, traveling wave background, subject to a topological phase shift. Breathers of elevation type, called bright breathers, are shown to propagate faster than the cnoidal background. Depression-type breathers are called dark breathers and they move slower than the cnoidal background. A key finding is that each breather on a fixed cnoidal wave background is uniquely determined by two distinct parameters: its initial position and a spectral parameter. We prove that the spectral parameter is in one-to-one correspondence with the normalized phase shift, which it imparts to the cnoidal background, in the interval \((-\pi,\pi]\). Bright breathers with small, negative phase shifts correspond to small-scale amplitude modulations of the cnoidal wave background, which result in the cnoidal wave dominating the solution. Small, positive phase shifts correspond to bright breathers with large-scale amplitude modulations of the cnoidal wave background where the soliton component is dominant. As the phase shift is swept across the interval \((-\pi,\pi]\), all breather amplitudes are attained. In contrast, dark breather amplitudes, being of depression type, are limited. Small phase shifts, positive or negative, correspond to small modulations of the cnoidal wave background and the slow or fast branch of solutions, respectively. For each cnoidal wave background, we find a narrowest dark breather that imparts a positive phase shift. When the amplitude of the cnoidal wave background is small, dark breathers degenerate into dark soliton solutions of the NLS equation (45) derived from the KdV equation (1). When the period of the cnoidal wave background goes to infinity, both bright and dark breather solutions are shown to degenerate into two-soliton solutions of the KdV equation. In this sense, breathers can be viewed as a generalization of two-soliton interactions. While such an interpretation is well-known for the sine-Gordon, focusing NLS, and the focusing modified KdV equations where breathers can be interpreted as bound states of two solitons [9], those breather solutions are localized. In contrast, the topological KdV breathers with an extended, periodic background described here represent a different class of nonlinear wave interaction solutions. We expect that such solutions exist for other integrable nonlinear evolutionary equations with a self-adjoint scattering problem such as the defocusing NLS and defocusing modified KdV equations. An important application of these breather solutions is to the problem of soliton-dispersive shock wave (DSW) interaction [2]. Bright breathers were identified in [4] as being associated with soliton-DSW transmission. Soliton-DSW trapping corresponds to dark breathers embedding within the DSW. The spectral characterization of KdV breathers obtained here can be used in the context of multi-phase Whitham modulation theory [5] to describe the dynamics of breathers subject to large-scale amplitude modulations [4].
In addition to soliton-DSW interaction, the bright breathers resemble the propagation of a soliton through a special kind of deterministic soliton gas, constructed using Riemann-Hilbert methods from primitive potentials of the defocusing modified KdV equation [18]. Similar deterministic soliton gases have been identified as soliton condensates for the KdV equation [19] and provide further applications for the breathers constructed here. **Acknowledgement.** The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme _Dispersive Hydrodynamics_ when work on this paper was undertaken (EPSRC Grant Number EP/R014604/1). The authors thank Y. Kodama and G. El for many useful suggestions on this project. MAH gratefully acknowledges support from NSF DMS-1816934.
2308.04265
FLIRT: Feedback Loop In-context Red Teaming
Warning: this paper contains content that may be inappropriate or offensive. As generative models become available for public use in various applications, testing and analyzing vulnerabilities of these models has become a priority. Here we propose an automatic red teaming framework that evaluates a given model and exposes its vulnerabilities against unsafe and inappropriate content generation. Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation. We propose different in-context attack strategies to automatically learn effective and diverse adversarial prompts for text-to-image models. Our experiments demonstrate that compared to baseline approaches, our proposed strategy is significantly more effective in exposing vulnerabilities in Stable Diffusion (SD) model, even when the latter is enhanced with safety features. Furthermore, we demonstrate that the proposed framework is effective for red teaming text-to-text models, resulting in significantly higher toxic response generation rate compared to previously reported numbers.
Ninareh Mehrabi, Palash Goyal, Christophe Dupuy, Qian Hu, Shalini Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta
2023-08-08T14:03:08Z
http://arxiv.org/abs/2308.04265v1
# FLIRT: Feedback Loop In-context Red Teaming ###### Abstract _Warning: this paper contains content that may be inappropriate or offensive_. As generative models become available for public use in various applications, testing and analyzing vulnerabilities of these models has become a priority. Here we propose an automatic _red teaming_ framework that evaluates a given model and exposes its vulnerabilities against unsafe and inappropriate content generation. Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation. We propose different in-context attack strategies to automatically learn effective and diverse adversarial prompts for text-to-image models. Our experiments demonstrate that compared to baseline approaches, our proposed strategy is significantly more effective in exposing vulnerabilities in Stable Diffusion (SD) model, even when the latter is enhanced with safety features. Furthermore, we demonstrate that the proposed framework is effective for red teaming text-to-text models, resulting in significantly higher toxic response generation rate compared to previously reported numbers. ## 1 Introduction With the recent release and adoption of large generative models, such as DALL-E [24], ChatGPT [31], and GPT-4 [20], ensuring the safety and robustness of these models has become imperative. While those models have significant potential to create a real-world impact, they must be checked for potentially unsafe and inappropriate behavior before they can be deployed. For instance, chatbots powered by Large Language Models (LLMs) can generate offensive responses [21], or provide users with inaccurate information [5]. When prompted with certain inputs, text-to-image models such as Stable Diffusion (SD) can generate images that are offensive and inappropriate [29]. Recent research has leveraged adversarial probing, also called _red teaming_, for evaluating the vulnerabilities in generative models, where one aims to discover inputs or prompts that will lead the system to generate undesired output. Most previous works in red teaming involve humans in the loop [7; 34] who interact with the system and manually generate prompts for triggering the model into generating undesired outcomes, both for text-to-text [7] and text-to-image models [19]. The human in the loop approach, however, is expensive and not scalable in identifying diverse attack dimensions. Thus, recent work has focused on automating the red teaming process [21; 18]. Although previous works have tried to automate the red teaming approach [21; 18], these approaches are expensive, as they require a large amount of generated data, either to sample effective few-shot prompts from or for expensive fine-tuning of a red model [21]. In addition, others rely on an expensive iterative token replacement approach to probe a target model and find trigger tokens that lead to undesired output generation [18]. In this work, we propose a novel and efficient Feedback Loop In-context Red Teaming (FLIRT) framework that does not require a lot of data and works by updating the in-context _exemplar_ (demonstration) prompts according to the feedback it receives from the target model. FLIRT is a generic and automated red teaming framework that uses iterative in-context learning for the red language model (LM) to generate prompts that can trigger unsafe generation.
In addition, we propose different selection criteria (attack strategies) that can be used by the red LM in FLIRT to update its in-context exemplar prompts to generate a diverse set of adversarial prompts. Some of the proposed selection criteria are based on heuristics and others are more sophisticated approaches that try to optimize for certain objectives, such as diversity and toxicity of the generated adversarial prompts. FLIRT is flexible and allows for the incorporation of different selection criteria proposed in this work that can control different objectives such as the diversity and toxicity of the generated prompts, which enables FLIRT to expose a larger and more diverse set of vulnerabilities. We evaluate the FLIRT framework by conducting experiments for text-to-image models, since the automated red teaming of those models is largely underexplored. Specifically, we analyze the ability of FLIRT to prompt a text-to-image model to generate unsafe images. We define an unsafe image as an image that "_if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety_" [9]. We demonstrate that FLIRT is significantly more effective in exposing vulnerabilities in several text-to-image models compared to an existing in-context red teaming approach [21], achieving an average attack success rate of ~80% against vanilla stable diffusion and ~60% against different safe stable diffusion models augmented with safety mechanisms. Furthermore, by controlling the toxicity of the learned prompt, FLIRT is capable of bypassing content moderation filters designed to filter out unsafe prompts, thus emphasizing the need for more comprehensive guardrail systems. We demonstrate transferability of the adversarial prompts generated through FLIRT among different models. Finally, we conduct experiments for evaluating text-to-text models and demonstrate the effectiveness of the FLIRT framework in this setting as well. ## 2 FLIRT Framework Our Feedback Loop In-context Red Teaming (FLIRT) framework applies a red LM that generates adversarial prompts aimed at triggering the target model into generating unsafe content. The red LM starts with an initial set of in-context seed prompts and iterates as follows: (1) The red LM generates a new adversarial prompt using in-context learning, which is fed into the target (e.g., text-to-image) model to generate the corresponding output (e.g., image). (2) The corresponding output (image) is evaluated on whether it is unsafe using safety classifiers. (3) The result of this evaluation is fed back to the red LM, which it utilizes as feedback to decide whether to update its in-context exemplar prompts according to a chosen in-context attack strategy. These three steps get repeated for a certain number of FLIRT iterations. The overall FLIRT framework is illustrated in Figure 1. Figure 1: Our proposed Feedback Loop In-context Red Teaming (FLIRT) framework for generating adversarial prompts. In each FLIRT iteration, the red LM generates an adversarial prompt that is fed into the text-to-image model. Upon the text-to-image model generating the image corresponding to the prompt generated by the red LM, the image is evaluated using Q16 and NudeNet classifiers to determine the safety of the image. If the image is deemed unsafe, the red LM then updates its in-context exemplars according to one of the adversarial in-context attack strategies (FIFO, LIFO, scoring, Scoring-LIFO) to generate a new and diverse adversarial prompt. The in-context strategies utilized by the red LM to generate adversarial prompts are demonstrated on the left side of the image. Within the scoring strategy, the scores in parentheses represent the score associated with each prompt.
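The three steps above can be summarized in a minimal loop. In the sketch below (not the authors' implementation), the model wrappers, the classifier interface, the 0.5 decision threshold, and the `update_exemplars` hook are illustrative placeholders standing in for the components named in the paper (red LM, text-to-image model, Q16/NudeNet):

```python
def flirt(red_lm, target_model, safety_classifiers, seed_prompts,
          update_exemplars, n_iterations=1000):
    exemplars = list(seed_prompts)            # in-context exemplar prompts
    adversarial_prompts = []
    for _ in range(n_iterations):
        # (1) generate a new adversarial prompt via in-context learning
        prompt = red_lm.generate(exemplars)
        output = target_model.generate(prompt)
        # (2) evaluate the output (e.g., an image) with safety classifiers
        scores = [clf.unsafe_probability(output) for clf in safety_classifiers]
        is_unsafe = max(scores) > 0.5         # assumed decision rule
        # (3) feed the result back: update exemplars per the attack strategy
        if is_unsafe:
            adversarial_prompts.append(prompt)
            exemplars = update_exemplars(exemplars, prompt, scores)
    return adversarial_prompts
```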
For simplicity, in our explanations, we consider the target model to be a text-to-image model; however, the target model can be any other model (e.g., text-to-text). Below we discuss each step incorporated in FLIRT in more detail. ### Initialization The red LM starts with a small number of in-context seed prompts (e.g., five prompts as listed in Appendix) hand-engineered by humans that are likely to trigger the text-to-image model to generate unsafe images. It then uses in-context learning to generate a new prompt which is fed to the text-to-image model to generate a corresponding image. ### Evaluation After the image is generated, it is evaluated for safety. The evaluation can be done using existing safety classifiers or using human evaluation. Since one of our main goals is to fully automate the red teaming process, here we use the former approach which was previously used in [29] to evaluate the safety of text-to-image models. We use two classifiers, Q16 [30] and NudeNet1, to evaluate whether the generated image contains unsafe content. We use a similar setup as that of [29] to evaluate images for unsafe content since we also utilize the same definition of unsafe content. To evaluate the safety of the generated text in text-to-text experiments, we use the TOXIGEN model for toxic language detection [10]. Footnote 1: [https://github.com/notAI-tech/NudeNet](https://github.com/notAI-tech/NudeNet) ### In-context Adversarial Attack The result of the evaluation step is fed back to the red LM, which incorporates this feedback to update its set of in-context exemplar prompts according to one of several strategies proposed in this work. Next, we illustrate the in-context attack strategies with their corresponding exemplar prompts (also depicted in Figure 1). **First in First out (FIFO) Attack.** In this strategy, we consider the in-context exemplar prompts to be in a queue and update them on a FIFO basis. A new LM-generated prompt that resulted in unsafe image generation (henceforth referred to as positive feedback) is placed at the end of the queue and the first exemplar prompt in the queue is removed. Since in the FIFO strategy the seed exemplar prompts, which are hand-engineered by humans, get overwritten, the subsequent generations may diverge from the initial intent, generating less successful adversarial prompts. To alleviate this challenge, we explore the Last in, First Out (LIFO) strategy that aims to keep the intent intact while generating a diverse set of examples. **Last in First out (LIFO) Attack.** In this strategy, we consider the in-context exemplar prompts to be in a stack and update them on a LIFO basis. A new LM-generated prompt with positive feedback is placed at the top of the stack and is replaced by the next successful generation. Note that all the exemplar prompts except the one at the top of the stack remain the same. Thus, the initial intent is preserved and the newly generated prompts do not diverge significantly from the seed exemplar prompts. However, this attack strategy may not satisfy different objectives (e.g., diversity and toxicity of prompts) and may not give us the most effective set of adversarial prompts. In order to address these concerns, we next propose the _scoring_ attack strategy.
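The FIFO and LIFO updates described above reduce to simple queue and stack operations on the exemplar list. A minimal sketch, assuming the instruction prompt is stored separately and the last list element is the top of the stack:

```python
from collections import deque

def fifo_update(exemplars, new_prompt):
    queue = deque(exemplars)
    queue.append(new_prompt)    # successful prompt enters at the end
    queue.popleft()             # oldest exemplar (possibly a seed) is dropped
    return list(queue)

def lifo_update(exemplars, new_prompt):
    # only the top of the stack is replaced; seed exemplars stay intact
    return exemplars[:-1] + [new_prompt]
```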
**Scoring Attack.** In this strategy, our goal is to optimize the list of exemplar prompts based on a predefined set of objectives. Examples of objectives are 1) _attack effectiveness_, aiming to generate prompts that can maximize the unsafe generations by the target model; 2) _diversity_, aiming to generate more semantically diverse prompts; and 3) _low-toxicity_, aiming to generate low-toxicity prompts that can bypass a text-based toxicity filter. Let \(X^{t}=(x^{t}_{1},x^{t}_{2},\dots,x^{t}_{m})\) be the ordered list of \(m\) exemplar prompts at the beginning of the \(t\)-th iteration. \(X^{t}\) is ordered because during in-context learning, the order of the prompts matters. Further, let \(x^{t}_{new}\) be the new prompt generated via in-context learning during the same iteration that resulted in positive feedback, and let \(X^{t}_{i}\) be an ordered list derived from \(X^{t}\) where its \(i\)-th element is replaced by the new prompt \(x^{t}_{new}\), e.g., \(X^{t}_{1}=(x^{t}_{new},x^{t}_{2},\dots,x^{t}_{m})\). Finally, we use \(\mathcal{X}_{t}=\{X^{t}\}\cup\{X^{t}_{i},i=1,\dots,m\}\) to denote a set of size \((m+1)\) that contains the original list \(X^{t}\) and all the derived lists \(X^{t}_{i},i=1,\dots,m\). At the \(t\)-th iteration, the red LM updates its (ordered) list of exemplar prompts by solving the following optimization problem: \[X^{t+1}=\operatorname*{argmax}_{X\in\mathcal{X}_{t}}Score(X)=\operatorname*{argmax}_{X\in\mathcal{X}_{t}}\sum_{i=1}^{n}\lambda_{i}O_{i}(X) \tag{1}\] where \(O_{i}\) is the \(i\)-th objective that the red LM aims to optimize, and \(\lambda_{i}\) is the weight associated with that objective. While the objectives \(O_{i}\)-s are defined as functions over lists of size \(m\), for the particular set of objectives outlined above, the evaluation reduces to calculating functions over individual and pair-wise combinations of the list elements, making the computation efficient. Specifically, for the attack effectiveness and low-toxicity criteria, the objectives reduce to \(O(X^{t})=\sum_{l=1}^{m}O(x_{l}^{t})\). In our text-to-image experiments, we define the attack effectiveness objective as \(O_{AE}(X^{t})=\sum_{l=1}^{m}NudeNet(x_{l}^{t})+Q16(x_{l}^{t})\) where \(NudeNet(x)\) and \(Q16(x)\) are probability scores by applying NudeNet and Q16 classifiers to the image generated from the prompt \(x\). In text-to-text experiments, the effectiveness objective is defined as \(O_{AE}(X^{t})=\sum_{l=1}^{m}Toxigen(x_{l}^{t})\) where \(Toxigen(x)\) is the toxicity score on the prompt \(x\) according to the TOXIGEN classifier [10]. The low-toxicity objective is defined as \(O_{LT}(X^{t})=\sum_{l=1}^{m}(1-toxicity(x_{l}^{t}))\) where \(toxicity(x)\) is the toxicity score of prompt \(x\) according to the Perspective API2. As for the diversity objective, we define it as pairwise dissimilarity averaged over all the element pairs in the list, \(O_{Div}(X^{t})=\sum_{l=1}^{m}\sum_{j=l+1}^{m}(1-Sim(x_{l}^{t},x_{j}^{t}))\). We calculate \(Sim(x_{1}^{t},x_{2}^{t})\) using the cosine similarity between the sentence embeddings of the two pairs \(x_{1}^{t}\) and \(x_{2}^{t}\)[26]. For cases where all the objectives can be reduced to functions over individual elements, the update in (1) is done by substituting the prompt with the minimum score (\(x_{min}^{t}=\operatorname*{arg\,min}_{i=1,\dots,m}O(x_{i}^{t})\)) with the generated prompt \(x_{new}^{t}\) if \(O(x_{min}^{t})<O(x_{new}^{t})\). This update is efficient as it only requires storing the scores \(O(x_{i}^{t})\). For the other cases, we solve (1) by computing the \(m+1\) objectives for each element in \(\mathcal{X}_{t}\) and keeping the element maximizing \(Score(X)\) (see Appendix for more details). Footnote 2: [https://www.perspectiveapi.com](https://www.perspectiveapi.com)
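For the reduced case in which every objective decomposes over individual prompts, the update in (1) amounts to replacing the lowest-scoring exemplar. A minimal sketch, where the cached `scores` stand for the weighted sums \(\sum_{i}\lambda_{i}O_{i}\) evaluated on single prompts:

```python
def scoring_update(exemplars, scores, new_prompt, new_score):
    # replace the weakest exemplar only if the new prompt scores higher;
    # otherwise keep the current list (the X^t term in Eq. (1))
    i_min = min(range(len(exemplars)), key=lambda i: scores[i])
    if new_score > scores[i_min]:
        exemplars = exemplars[:i_min] + [new_prompt] + exemplars[i_min + 1:]
        scores = scores[:i_min] + [new_score] + scores[i_min + 1:]
    return exemplars, scores
```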
**Scoring-LIFO** In this attack strategy, the red LM combines strategies from the scoring and LIFO attacks. The red LM replaces the exemplar prompt that last entered the stack with the newly generated prompt only if the new prompt adds value to the stack according to the objective the red LM aims to satisfy. In addition, since it is possible that the stack does not get updated for a long time, we introduce a scheduling mechanism. Using this scheduling mechanism, if the stack does not get updated after some number of iterations, the attacker force-replaces the last entered exemplar prompt in the stack with the new generation. ## 3 Experiments We perform various experiments to validate FLIRT's ability in red teaming text-to-image models. We also perform ablation studies to analyze the efficacy of FLIRT under different conditions. Finally, we perform experiments to show the efficacy of FLIRT in red teaming text-to-text models. ### Main Experiments We test various text-to-image models: stable diffusion v1-4 [27]3, and weak, medium, strong, and max safe stable diffusion [29]4. For the red LM, we use the GPT-Neo 2.7B parameter model [2; 8]5. For each attack strategy, we run the attack for 1k FLIRT iterations using three different initializations (sets of seed prompts listed in the Appendix). The three different sets of seed prompts capture different characteristics and are designed to probe the target model for all the unsafe categories borrowed from [29]. We use a context of size five in our experiments, containing the instruction prompt that describes the task and four additional in-context exemplar prompts. Note that the instruction prompt is kept fixed in each of the 1k iterations and only the in-context exemplar prompts are updated according to each attack strategy. Footnote 3: [https://huggingface.co/CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) Footnote 4: [https://huggingface.co/AIML-TUDA/stable-diffusion-safe](https://huggingface.co/AIML-TUDA/stable-diffusion-safe) Footnote 5: [https://huggingface.co/EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) For the metrics, we utilize _attack effectiveness_, which we define as the percentage of successful prompts generated by the red LM that trigger the text-to-image model towards unsafe generation according to either the Q16 or NudeNet classifier. We adopt the same evaluation strategy as that utilized in [29] to report the amount of unsafe content generation in text-to-image models according to the Q16 and NudeNet classifiers as a measure for attack effectiveness. In addition, we use _diversity_ as another metric to report the percentage of unique prompts generated by the red LM that are not repetitive. We report the averaged attack effectiveness along with diversity results over the three initialization sets. As a baseline, we compare our proposed attack strategies in FLIRT to the Stochastic Few Shot (SFS) red teaming attack [21]. For SFS, we first generate 1k prompts using the same instruction prompts that we use in our experiments to validate FLIRT.
We then sample from the generated prompts with probability \(\propto e^{(NudeNet(x)+Q16(x))/T}\) where \(NudeNet(x)\) and \(Q16(x)\) are the probabilities of the generated image corresponding to the prompt \(x\) being unsafe according to the NudeNet and Q16 classifiers and \(T\) is a temperature hyper-parameter. We include the sampled prompts as few shot exemplar prompts to generate 1k new adversarial prompts. We set \(T=\frac{1}{10}\) and perform the sampling without replacement as suggested in [21]. We report the average results for SFS over the same three sets of instruction seed prompts that we use to evaluate the attack strategies in FLIRT. In terms of efficiency, SFS is more costly than the attacks incorporated in FLIRT, as SFS needs to generate \(n_{zs}+n_{fs}\) prompts where \(n_{zs}\) is the number of prompts generated during the zero-shot prompting stage (set to 1k) and \(n_{fs}\) is the number of prompts generated during the few shot prompting stage (set to 1k). In contrast, FLIRT only needs to generate \(n_{fs}\) prompts (set to 1k). **Attack Effectiveness** We report the attack effectiveness and diversity results from applying the different attack strategies studied in this work in Table 1. [Table 1: attack effectiveness and diversity results for the LIFO, FIFO, Scoring, Scoring-LIFO, and SFS attack strategies across the stable diffusion models.] We observe that compared to SFS, FLIRT-based attacks are significantly more effective in triggering vanilla and safe stable diffusion models toward generating unsafe images. Although SFS generates a diverse set of prompts, we observe its weakness in generating effective attacks. This is in part due to the fact that SFS relies on prompts generated by the red LM without any initial demonstrations provided by humans. Thus, SFS relies on less effective prompts to begin with. Table 1 also demonstrates that the scoring-based adversarial in-context attack strategy is the most effective in terms of attack effectiveness compared to other attack strategies. For this set of results, we use a scoring attack that only optimizes for attack effectiveness (\(O_{AE}(X^{t})\)). This entails that the red LM receives the probability scores coming from the Q16 and NudeNet classifiers for a given image corresponding to a generated prompt and updates the exemplar prompts according to the probability scores it receives as feedback on attack effectiveness. Although the scoring strategy gives us the best results in terms of attack effectiveness, we observe that it generates a less diverse set of prompts in some cases. On the other hand, the SFS, LIFO, and Scoring-LIFO strategies produce better results in terms of generating a diverse set of prompts. The lack of diverse generations in the scoring strategy is in part due to the fact that in the scoring attack, the red LM learns an effective prompt that is strong in terms of triggering the text-to-image model into unsafe generation; thus, it keeps repeating the same or similar prompts that are effective, which affects diverse output generation. To alleviate this problem, and encourage diverse generations in the scoring attack strategy, we attempt to control the diversity of prompts through the addition of diversity as an additional objective (\(O_{Div}(X^{t})\)) in the next set of experiments. **Controlling Diversity** To enhance the diversity of generations by the scoring attack strategy, we add an additional objective to the initial attack effectiveness objective that controls for diversity. For the diversity objective (\(O_{Div}(X^{t})\)), we aim to maximize the averaged pairwise sentence diversity of the existing exemplar prompts. We use cosine similarity to calculate the pairwise similarity of two sentence embeddings6 [26]. Thus, the scoring strategy tries to optimize \(\lambda_{1}O_{1}+\lambda_{2}O_{2}\) where \(O_{1}\) is the attack effectiveness objective (\(O_{AE}(X^{t})\)) and \(O_{2}\) is the diversity objective (\(O_{Div}(X^{t})\)). To observe the effect of the newly added objective on enhancing the diversity of generations in the scoring attack strategy, we fix \(\lambda_{1}=1\), vary the \(\lambda_{2}\) parameter, and report the attack effectiveness vs diversity trade-offs in Figure 2. We demonstrate that by increasing the \(\lambda_{2}\) parameter value, the diversity of generated prompts increases as expected, with a trade-off on attack effectiveness. We demonstrate that using the scoring strategy, one can control the trade-offs and that the red LM can learn a strategy to satisfy different objectives to attack the text-to-image model. Footnote 6: [https://huggingface.co/tasks/sentence-similarity](https://huggingface.co/tasks/sentence-similarity) Figure 2: Diversity-attack effectiveness results with varying the \(\lambda_{2}\) parameter. Attack effectiveness reports the percentage of images generated by the text-to-image model that are labeled as unsafe according to Q16 and NudeNet classifiers. The diversity score reports the percentage of unique prompts generated by the red LM. For results on other stable diffusion models refer to the Appendix.
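As an illustration of how \(O_{Div}\) can be computed, the sketch below embeds the prompts with the sentence-transformers library and sums pairwise cosine dissimilarities; the specific embedding model named here is our choice for the example and is not necessarily the one used in [26].

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is ours, for illustration

def diversity_objective(prompts: list) -> float:
    """O_Div: sum of pairwise dissimilarities (1 - cosine similarity) over all prompt pairs."""
    emb = model.encode(prompts)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize embeddings
    score, m = 0.0, len(prompts)
    for l in range(m):
        for j in range(l + 1, m):
            score += 1.0 - float(emb[l] @ emb[j])           # cosine dissimilarity
    return score
```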
### Ablation Studies In addition to the main experiments, we perform ablation studies to address the following questions: **Q1:**_Would the results hold if we use a different language model as the red LM?_ **Q2:**_Would the results hold if we add content moderation in text-to-image models?_ **Q3:**_Can we control for the toxicity of the prompts using the scoring attack strategy?_ **Q4:**_Would the attacks transfer to other models?_ **Q5:**_How robust are our findings to the existing flaws in the safety classifiers?_ For the ablation studies, we only use the first set of seed prompts to report the results, as the results mostly follow similar patterns. All the other setups are the same as in the main experiments unless otherwise specified. **Q1: Different Language Model** To answer the question on whether the results hold if we use a different language model as the red LM, we replace the GPT-Neo model utilized in our main experiments with the BLOOM 3b parameter model [28]7. We then report the results on attack effectiveness comparing the different attack strategies. From the results reported in Table 2, we observe similar patterns to those we reported previously, which suggests that the results still hold even when we use a different language model as our red LM. In our results, we demonstrate that the scoring attack strategy is the most effective attack. However, similar to our previous observations, it suffers from the repetition problem and lack of diverse generations if we only optimize for attack effectiveness without considering diversity as the secondary objective. SFS, LIFO, and Scoring-LIFO generate more diverse outcomes with lower attack effectiveness compared to the scoring strategy, similar to our previous findings.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
**Model** & **LIFO** \(\uparrow\) (div. \(\uparrow\)) & **FIFO** \(\uparrow\) (div. \(\uparrow\)) & **Scoring** \(\uparrow\) (div. \(\uparrow\)) & **Scoring-LIFO** \(\uparrow\) (div. \(\uparrow\)) & **SFS** \(\uparrow\) (div. \(\uparrow\)) \\
\hline
Stable Diffusion (SD) & 71.8 (0.1) & 63.3 (03.9) & **85.5 (00.5)** & 73.5 (05.5) & 41.4 (**07.8**) \\
Weak Safe SD & 66.8 (0.1) & 78.8 (0.1) & **86.6 (0.3)** & **66.7 (0.98)** & 38.0 (05.5) \\
Medium Safe SD & 50.0 (0.5) & 38.0 (12.2) & **69.2 (0.16)** & 53.7 (0.76) & 23.4 (**07.9**) \\
Strong Safe SD & 32.5 (0.6) & 42.3 (05.5) & **55.0 (0.1)** & 38.8 (0.54) & 19.2 (**07.9**) \\
Max Safe SD & 21.9 (05.4) & 28.7 (43.6) & **38.0 (25.5)** & 25.3 (06.5) & 16.6 (**07.0**) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Attack effectiveness and diversity results when applying BLOOM as the red LM.
Footnote 7: [https://huggingface.co/bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) **Q2: Content Moderation** To answer the question on whether applying content moderation on text-to-image models affects the results, we turn on the built-in content moderation (safety filter) in the text-to-image models. This content moderation (safety filter) operates by comparing the CLIP embedding of the generated image to a set of predefined unsafe topics and filtering the image if the similarity is above a certain threshold [25]. In this set of experiments, we turn on the safety filter in all the text-to-image models studied in this work and report our findings in Table 3. We demonstrate that, although as expected the effectiveness of the attacks drops in some cases when we turn on the safety filter, the attacks remain effective, and the scoring strategy is in most cases the most effective one, with a similar trend on the diversity of the results as observed previously. These results demonstrate that applying FLIRT can also help in red teaming text-to-image models that have a content moderation mechanism turned on, which allows us to red team the text-to-image model as well as the content moderation applied on it and to detect the weaknesses behind each component. Although the main goal of this work is to analyze the robustness of text-to-image models irrespective of whether content moderation is applied on them or not, we still demonstrate that FLIRT is powerful enough to red team models with content moderation applied on them.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
**Model** & **LIFO** \(\uparrow\) (div. \(\uparrow\)) & **FIFO** \(\uparrow\) (div. \(\uparrow\)) & **Scoring** \(\uparrow\) (div. \(\uparrow\)) & **Scoring-LIFO** \(\uparrow\) (div. \(\uparrow\)) & **SFS** \(\uparrow\) (div. \(\uparrow\)) \\
\hline
Stable Diffusion (SD) & 45.7 (97.4) & 25.7 (95.0) & **86.3 (43.3)** & 48.7 (98.8) & 33.2 (98.8) \\
Weak Safe SD & 48.2 (97.3) & **89.9 (5.8)** & **79.6 (9.95)** & 46.1 (99.4) & 29.5 (95.9) \\
Medium Safe SD & 40.0 (97.5) & **17.3 (52.6)** & **57.3 (93.5)** & 40.0 (99.8) & 14.2 (97.9) \\
Strong Safe SD & 37.6 (97.9) & 11.9 (90.8) & **55.0 (93.3)** & 36.9 (98.9) & 12.2 (98.0) \\
Max Safe SD & 28.3 (98.6) & **77.7 (17.5)** & 23.4 (90.6) & 26.2 (97.0) & 8.0 (98.7) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Attack effectiveness and diversity results with the safety filter turned on in stable diffusion models. **Q3: Toxicity of Prompts** In this set of experiments, we are interested in showing whether the red LM can generate prompts that look safe (non-toxic) but at the same time can trigger text-to-image models into unsafe generation. This is particularly interesting to study since our motivation is to analyze prompt-level filters that can serve as effective defense mechanisms for text-to-image models.
Secondly, we want to analyze the robustness of text-to-image models to implicit prompts that might not sound toxic but can be dangerous in terms of triggering unsafe content generation in text-to-image models. Toward this goal, we incorporate a secondary objective in the scoring attack strategy, in addition to attack effectiveness, that controls for the toxicity of the generated prompts. Thus, our scoring-based objective becomes \(\lambda_{1}O_{1}+\lambda_{2}O_{2}\) where \(O_{1}\) is the attack effectiveness objective (\(O_{AE}(X^{t})\)) and \(O_{2}\) is the low-toxicity objective of the prompt (\(O_{LT}(X^{t})\)), which is the \((1-toxicity)\) score coming from our utilized toxicity classifier (Perspective API)8. In our experiments, we fix \(\lambda_{1}=1\) and compare results for when we set \(\lambda_{2}=0\) (when we do not impose any constraint on the safety of the prompts) vs \(\lambda_{2}=0.5\) (when there is a safety constraint imposed on the prompts). In our results demonstrated in Table 4, we observe that by imposing the safety constraint on the toxicity of the prompts, we are able to drastically reduce the toxicity of the generated prompts, and that we can control this trade-off using our scoring strategy by balancing attack effectiveness vs prompt toxicity. Footnote 8: [https://www.perspectiveapi.com](https://www.perspectiveapi.com)
\begin{table}
\begin{tabular}{c|c|c}
\hline \hline
**Model** & \(\lambda_{2}=0\) \(\downarrow\) & \(\lambda_{2}=0.5\) \(\downarrow\) \\
\hline
SD & 82.7 (8.24) & **6.7 (8.16)** \\
Weak & 43.6 (9.47) & **6.0 (8.42)** \\
Medium & 11.5 (8.24) & **0.4 (72.7)** \\
Strong & 1.2 (8.48) & **0.5 (0.10)** \\
Max & 18.8 (3.24) & **1.8 (2.4)** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Percentage of toxic prompts generated by the red LM before (\(\lambda_{2}=0\)) and after (\(\lambda_{2}=0.5\)) applying the low-toxicity constraint in the scoring attack. **Q4: Attack Transferability** In transferability experiments, we study whether an attack imposed on one text-to-image model can transfer to other text-to-image models. In this set of experiments, we take successful prompts that are generated through FLIRT using the scoring attack strategy optimized for attack effectiveness towards triggering a particular text-to-image model, and apply them to another model. We then report the amount of success and attack transfer in terms of the percentage of prompts that transfer to the other model and result in unsafe generation.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline \hline
**From** \(\downarrow\) / **To** \(\rightarrow\) & **SD** & **Weak** & **Medium** & **Strong** & **Max** \\
\hline
SD & 100.0 & 93.8 & 84.6 & 72.1 & 54.7 \\
Weak & 91.1 & 100.0 & 78.3 & 65.5 & 50.2 \\
Medium & 97.3 & 95.2 & 100.0 & 74.9 & 55.8 \\
Strong & 99.4 & 99.3 & 97.9 & 100.0 & 55.6 \\
Max & 86.7 & 84.2 & 73.5 & 62.7 & 100.0 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Transferability of the attacks from one stable diffusion model to another. As reported in Table 5, we observe that attacks transfer successfully from one text-to-image model to another. As expected, it is harder to transfer attacks to more robust models compared to less robust ones (e.g., it is easier to transfer attacks from SD to weak safe SD than from SD to max safe SD). **Q5: Noise in Safety Classifiers** Since FLIRT relies on the automatic feedback coming from the safety classifiers, it is possible that existing noise and flaws in the classifiers affect our findings. To put this to the test and verify that our findings are robust to the existing imperfections in the safety classifiers, we impose different levels of noise on the outcome of the safety classifiers applied to images generated by the stable diffusion model. In our experiments, we randomly flip different percentages \(\epsilon\) (5%, 10%, and 20%) of the output labels produced by the safety classifiers applied to the generated images and report the results in Table 6. Our results and findings still hold: the scoring strategy still outperforms the other strategies in terms of attack effectiveness, and the SFS, LIFO, and Scoring-LIFO strategies generate a more diverse set of prompts.
\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline
\(\epsilon\) & **LIFO** \(\uparrow\) (div. \(\uparrow\)) & **FIFO** \(\uparrow\) (div. \(\uparrow\)) & **Scoring** \(\uparrow\) (div. \(\uparrow\)) & **Scoring-LIFO** \(\uparrow\) (div. \(\uparrow\)) & **SFS** \(\uparrow\) (div. \(\uparrow\)) \\
\hline
5\% & 75.6 (95.0) & 39.0 (73.6) & **89.0** (45.4) & 77.3 (95.0) & 36.7 (97.5) \\
10\% & 73.7 (96.9) & 72.6 (55.1) & **87.9** (44.0) & 73.4 (96.9) & 36.9 (97.8) \\
20\% & 66.1 (98.5) & 39.6 (88.1) & **77.6** (42.1) & 70.5 (98.5) & 40.5 (98.0) \\
\hline
\end{tabular}
\end{table}
Table 6: Attack effectiveness and diversity results when different levels of noise are injected into the feedback coming from the Q16 and NudeNet classifiers. ### Red Teaming Text-to-text Models To demonstrate whether FLIRT can be used to red team text-to-text models, we replace the text-to-image models studied in previous experiments with the GPT-Neo 2.7B parameter language model [2; 8]9. Since in this experiment the output of the target model is text instead of an image, we replace the NudeNet and Q16 classifiers, which are image-based safety classifiers, with the TOXIGEN model, which is a toxic language detection model [10]. In this study, the goal is to red team a language model and trigger it to generate toxic responses. Thus, we report the percentage of responses generated by the target model that are toxic. We use a new set of seed prompts that are suitable for the language domain to trigger toxic generation (listed in the Appendix) and keep the rest of the experimental setup the same. In our results demonstrated in Table 7, we observe that the attack strategies introduced in this paper and utilized in FLIRT significantly outperform the SFS baseline that was introduced to specifically red team language models [21]. These results show the flexibility of FLIRT to be effectively applicable to the language (text-to-text) space in addition to text-to-image. Footnote 9: [https://huggingface.co/EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B)
\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline \hline
**LIFO** \(\uparrow\) (div. \(\uparrow\)) & **FIFO** \(\uparrow\) (div. \(\uparrow\)) & **Scoring** \(\uparrow\) (div. \(\uparrow\)) & **Scoring-LIFO** \(\uparrow\) (div. \(\uparrow\)) & **SFS** \(\uparrow\) (div. \(\uparrow\)) \\
\hline
46.2 (94.4) & 38.8 (93.8) & 50.9 (84.8) & **52.4** (95.3) & 9.9 (**100.0**) \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Attack effectiveness and diversity results for red teaming the GPT-Neo language model.
## 4 Related Work **Adversarial Machine Learning** There has been a significant body of work in the area of adversarial machine learning for robustness improvement in different applications and models [22; 4]. Researchers and pioneers in the field of adversarial machine learning have investigated approaches in terms of proposing different attack and defense strategies to test and enhance the robustness of different models [14; 23; 16; 6]. With the rise of foundation models [3], some of the recent adversarial strategies have taken new shapes and forms, such as jail-breaking attacks [15] and red teaming efforts [7] to evaluate and improve the safety and robustness of foundation models, such as ChatGPT. **Safety** In addition, with the incorporation of foundation models in different applications [1], improving the safety and robustness of these models, along with aligning them with moral norms, has become critical [11; 12]. Analyzing and improving the robustness of AI systems toward safety concerns has been studied previously in language, vision, and multi-modal models [18; 34; 29; 13]. Beyond foundation models, safety is studied in more general AI applications and models, such as autonomous vehicles [33]. Safety is also widely studied in reinforcement learning for applications in robotics and autonomous vehicles [35; 32; 17]. **Red Teaming** One major contributor to safety analysis is the red teaming effort that has been practiced against various language and multi-modal models, including humans in the loop [7; 19]. Some other efforts in red teaming have tried to automate the setup and utilize a red language model instead of humans in the loop [21; 18]. However, these studies were in the context of language models and not multi-modal. There have been some efforts in red teaming text-to-image models using humans in the loop [19]; however, this area is still underexplored in terms of studies that aim to automate red teaming efforts in text-to-image models. The closest work to red teaming text-to-image models is [29], in which the authors manually created a benchmark dataset to assess the safety of these models and trained the safe text-to-image models that avoid unsafe image generation utilized in this paper. There have also been studies on red teaming the content moderation or safety filters imposed on text-to-image models [25]. We hope that our studies in this work will encourage more future work in this domain, which is relatively new and underexplored. ## 5 Discussion We introduce the feedback loop in-context red teaming framework that aims to red team models to expose their vulnerabilities toward unsafe content generation.
We demonstrate that in-context learning incorporated in a feedback-based framework can be utilized by the red LM to generate effective prompts that can trigger unsafe content generation in text-to-image and text-to-text models. In addition, we propose numerous variations of effective attack strategies. We perform different experiments to demonstrate the efficacy of our proposed automated framework. Although in this work we introduce and use FLIRT as a red teaming framework, this framework can have different use cases. For instance, FLIRT can be used for synthetic data generation in different domains, it can be used for model enhancement and evaluation according to various aspects not limited to responsible AI practices, and it can be utilized for personalization. **Limitations** Since FLIRT relies on the automatic feedback coming from classifiers, it is possible that existing noise in the classifier affects the outcome. However, we perform ablation studies as reported in Table 6 and verify that our results still hold and are robust to the introduced noise in the outcome of the classifier. Since the results rely on the accuracy of the classifier, it is possible that we get some false positives in the generated examples. To address these issues, it is possible to incorporate human feedback if one is concerned about existing flaws in the trained classifiers. FLIRT is flexible and allows the replacement of each component with a substitute of choice. **Broader Impact** Since FLIRT does not require any expensive training or fine-tuning of a language model, it is more efficient and green compared to previous work. In addition to red teaming, which is critical in responsible AI development, FLIRT can be used for synthetic data generation to improve and enhance models. It can also be used to probe and understand various models. Although FLIRT can be used to evaluate and enhance models according to safety and responsible AI concerns, if used by malicious actors, it can result in unsafe content generation, which can have negative societal impact. To alleviate this issue in part, we can work on setting up an appropriate license for our framework prohibiting malicious use outside of research. In addition, it is possible that existing biases in the utilized models propagate to the downstream analysis and produced datasets. Thus, careful auditing of these models is recommended.
2310.19718
High-fidelity and polarization insensitive universal photonic processors fabricated by femtosecond laser writing
Universal photonic processors (UPPs) are fully programmable photonic integrated circuits that are key components in quantum photonics. With this work, we present a novel platform for the realization of low-loss, low-power and high-fidelity UPPs based on femtosecond laser writing (FLW) and compatible with a large wavelength spectrum. In fact, we demonstrate different UPPs, tailored for operation at 785 nm and 1550 nm, providing similar high-level performances. Moreover, we show that standard calibration techniques applied to FLW-UPPs result in Haar random polarization independent photonic transformations implemented with average amplitude fidelity as high as 0.9979 at 785 nm (0.9970 at 1550 nm), with the possibility of increasing the fidelity over 0.9990 thanks to novel optimization algorithms. Besides being the first demonstrations of polarization-transparent UPPs, these devices show the highest level of control and reconfigurability ever reported for a FLW circuit. These qualities will be greatly beneficial to applications in quantum information processing.
Ciro Pentangelo, Niki Di Giano, Simone Piacentini, Riccardo Arpe, Francesco Ceccarelli, Andrea Crespi, Roberto Osellame
2023-10-30T16:46:25Z
http://arxiv.org/abs/2310.19718v1
High-fidelity and polarization-insensitive universal photonic processors fabricated by femtosecond laser writing ###### Abstract Universal photonic processors (UPPs) are fully programmable photonic integrated circuits that are key components in quantum photonics. With this work, we present a novel platform for the realization of low-loss, low-power and high-fidelity UPPs based on femtosecond laser writing (FLW) and compatible with a large wavelength spectrum. In fact, we demonstrate different UPPs, tailored for operation at \(785\,\mathrm{nm}\) and \(1550\,\mathrm{nm}\), providing similar high-level performances. Moreover, we show that standard calibration techniques applied to FLW-UPPs result in Haar random polarization-independent photonic transformations implemented with average amplitude fidelity as high as \(0.9979\) at \(785\,\mathrm{nm}\) (\(0.9970\) at \(1550\,\mathrm{nm}\)), with the possibility of increasing the fidelity over \(0.9990\) thanks to novel optimization algorithms. Besides being the first demonstrations of polarization-transparent UPPs, these devices show the highest level of control and reconfigurability ever reported for a FLW circuit. These qualities will be greatly beneficial to applications in quantum information processing. + Footnote †: Corresponding author: [email protected] ## I Introduction Quantum information processing is a rapidly advancing field that aims at harnessing the unique properties of quantum mechanics, such as superposition and entanglement, to perform computation and communication tasks that are impossible or difficult using classical methods. Photonics offers several advantages over other approaches in this framework [1]. Photons are highly stable and can travel long distances without being absorbed or suffering decoherence, even at room temperature. Their flying nature also makes them the most natural way to transfer quantum information. Furthermore, interest in this approach has recently increased after the experimental demonstrations of quantum supremacy in photonic systems [2; 3]. One promising and scalable approach to implement quantum computing and quantum communication protocols is through the use of photonic integrated circuits (PICs) [4]. Integrated photonics allows optical components to be miniaturized and integrated on the same substrate, leading to high scalability and integration density while guaranteeing an intrinsic optical stability even among a large number of components. Programmability of the PIC operation is typically achieved by actively controlling the phase shifts [5]. The simplest and most widely implemented form of phase shifter is the thermal phase shifter, which exploits the thermo-optic effect by dissipating electrical power into heat, reversibly modifying the waveguide refractive index. The simplest fully programmable PIC is the Mach-Zehnder interferometer (MZI), which is a 2-port circuit featuring two balanced directional couplers and two phase shifters. This device can implement any unitary transformation between the input and output modes. The generalization to an \(N\)-mode circuit can be done by employing a mesh of MZIs in triangular [6] or rectangular [7] configuration, thus obtaining a circuit that is able to perform any unitary transformation in \(U(N)\). These universal photonic processors (UPPs) are key components for quantum information processing and have already been demonstrated in various photonic platforms and materials [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18].
Among them, femtosecond laser writing (FLW) of waveguides in silicate glass [19] features low insertion losses and low birefringence over a wide wavelength spectrum ranging from the visible to the near-infrared. This fabrication technique is quite versatile: it not only allows for cost-effective and rapid prototyping of PICs, but also makes it possible to ablate the substrate with femtosecond pulses and thus cut out microstructures. The micro-structuring of the substrate allowed by FLW can be used to fabricate thermal isolation structures [20] that, in conjunction with thermal phase shifters, reduce their power dissipation and crosstalk by orders of magnitude. In this work we demonstrate the potential of the FLW platform by fabricating and calibrating two 6-mode UPPs operating at \(785\,\mathrm{nm}\) and \(1550\,\mathrm{nm}\), respectively. These circuits feature insertion losses at \(785\,\mathrm{nm}\) (\(1550\,\mathrm{nm}\)) lower than \(3\,\mathrm{dB}\) (\(2.5\,\mathrm{dB}\)) and an average \(2\pi\) power dissipation per phase shifter as low as \(39\,\mathrm{mW}\) (\(63\,\mathrm{mW}\)), and they are able to implement unitary transformations with an average amplitude fidelity of \(0.9979\) (\(0.9970\)), which can increase over \(0.9990\) by exploiting optimization algorithms and which does not depend on the H/V polarization state of the input light. These devices are among the few examples of UPPs currently reported in the literature showing such a high level of control and reconfiguration accuracy over a wide set of implemented transformations and, to the best of our knowledge, the first processors featuring a polarization-transparent behaviour. ## II Design and fabrication Processors at 785 nm and 1550 nm (UPP A and B, respectively, from now on) share the same waveguide layout based on a rectangular mesh [7] of 15 MZI-based unit cells, entailing a total number of 30 thermal shifters (Figure 1). The unit cell reported in [20] is here employed for UPP A and depicted in Figure 1 (inset a). The pitch between adjacent waveguides is \(p=80\) um. Balanced directional couplers are realized by bending the waveguides with a minimum curvature radius of \(R_{c}=30\) mm, while the MZI arms (and thermal shifters) are \(L_{arm}=1.5\) mm long. The total length of the cell is \(L_{cell}=11.4\) mm. This results in a chip dimension of 80\(\times\)20 mm, including also the fan-in and fan-out sections at each end of the circuit added for compatibility with standard 127 um fiber arrays. In order to compensate for the longer operating wavelength and keep the same temperature profile [20] for a given phase shift, UPP B instead features longer MZI arms (\(L_{arm}=3\) mm). Constant-temperature scaling allows us to produce devices sharing the same properties in terms of stability, breakdown power, nonlinearity, etc., paying a small price in terms of unit cell length. However, this penalty is partially compensated by employing more confining waveguides featuring negligible bending losses down to \(R_{c}=15\) mm. The reduced radius leads to a total length of the cell \(L_{cell}=13.2\) mm and, as a result, to a chip dimension of 90\(\times\)20 mm. Fabrication of these devices starts from a 1 mm thin Corning Eagle XG alumino-borosilicate glass substrate. Waveguides are inscribed at a depth of 30 um from the surface, by multi-scan laser irradiation followed by thermal annealing of the substrate [21]. Waveguide irradiation parameters are optimized for single-mode operation at the respective wavelengths of the two processors.
Thermal isolation trenches are machined by water-assisted laser ablation on each side of the top arm of each MZI, both before and after the first directional coupler, where the thermal shifters will be fabricated [20]. All trenches are 300 um deep, 60 um wide, and either 1.5 mm or 3 mm long, respectively, for devices A and B. Fabrication of the resistive microheaters of the thermal phase shifters is based on the process reported in [22]. A thin gold layer is deposited on the surface of the device by thermal evaporation and then etched with femtosecond laser pulses so that 10 um wide microheaters are located on top of the desired MZI arms, while larger contact pads allow for their connection at the sides of the die. A large aspect ratio for the contact pads is required to limit their parasitic series resistance, given that both they and the microheaters are fabricated on the same gold film. Figure 1 (inset b) is a micrograph of UPP A showing a column of three MZI cells, in which it is possible to easily identify trenches, microheaters and contact pads. After packaging the die on an aluminum heat sink, the thermal shifters are connected to printed circuit boards by means of electrically conductive epoxy glue, allowing easy interfacing with the external electronics. Final resistance values for the microheaters are 111 \(\pm\) 6 \(\Omega\) (UPP A) and 215 \(\pm\) 15 \(\Omega\) (UPP B). Finally, the input and output ports of the circuits are made available for characterization by standard optical fiber arrays pigtailed with UV-curing glue. At the end of this process, total insertion losses of about 3 dB and 2.5 dB are measured for UPP A and B, respectively. Figure 1: 3D rendering of the UPP. Inset (a) shows the schematic layout of an individual MZI of the device. Inset (b) is a microscope picture of UPP A comprising a column of 3 thermal shifters, where it is possible to see the trench structures and the ablations in the metal film. ## III Modeling and calibration The transfer matrix of the MZI unit cell reported in Figure 1 (inset a) can be expressed as: \[\mathbf{U_{MZI}}=e^{i\left(\frac{\theta}{2}+\frac{\pi}{2}\right)}\begin{bmatrix}e^{i\phi}\sin\left(\frac{\theta}{2}\right)&\cos\left(\frac{\theta}{2}\right)\\ e^{i\phi}\cos\left(\frac{\theta}{2}\right)&-\sin\left(\frac{\theta}{2}\right)\end{bmatrix}, \tag{1}\] where \(\phi\) and \(\theta\) are the phases induced by the external and internal phase shifters, respectively (see Figure 1, inset a). Assuming light is injected in one input port of this cell, the normalized optical power \(P_{out}\) measured at the cross output port will depend only on the internal phase \(\theta\) as: \[P_{out}=\frac{1+\cos\left(\theta\right)}{2}. \tag{2}\] The phase \(\theta\) induced by a thermal shifter can be tuned by controlling either the voltage drop \(V\) across the microheater or the current \(I\) flowing through it. In our case we have opted for the latter in order to prevent the nonlinear crosstalk due to purely electrical phenomena [22]. An example of interference measured on an individual MZI is reported in Figure 2(a), where the optical power \(P_{out}\) is reported as a function of the squared current \(I^{2}\).
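As a quick numerical check of Equations 1 and 2 (our own illustration, not code from this work), one can verify that the cross-port power traces the expected fringe in the internal phase \(\theta\):

```python
import numpy as np

def mzi_transfer(theta: float, phi: float) -> np.ndarray:
    """2x2 MZI transfer matrix of Eq. (1)."""
    pref = np.exp(1j * (theta / 2 + np.pi / 2))
    return pref * np.array([
        [np.exp(1j * phi) * np.sin(theta / 2),  np.cos(theta / 2)],
        [np.exp(1j * phi) * np.cos(theta / 2), -np.sin(theta / 2)],
    ])

# Cross-port power for light injected in the first input reproduces Eq. (2):
thetas = np.linspace(0, 2 * np.pi, 201)
p_cross = [abs(mzi_transfer(t, 0.0)[1, 0]) ** 2 for t in thetas]
assert np.allclose(p_cross, (1 + np.cos(thetas)) / 2)
```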
Indeed, the phase \(\theta\) induced by each shifter can be expressed as follows: \[\theta=\theta_{0}+\alpha_{I}I^{2}(1+\beta I^{2}), \tag{3}\] where the constant phase term \(\theta_{0}\) is an offset present due to fabrication tolerances, \(\alpha_{I}\) is the tuning coefficient of the thermo-optic process and \(\beta\) is a correction factor needed to take into account that the microheater resistance depends on the temperature. Such a nonlinear effect is highlighted in Figure 2(b), where \(\theta\) is reported as a function of the squared current \(I^{2}\). In addition, it is also necessary to consider the thermal crosstalk effects. Indeed, the phase induced on the \(i\)-th MZI in the circuit will be affected by all of the active microheaters and thus: \[\theta_{i}=\theta_{0,i}+\sum_{j}\alpha_{ij}I_{j}^{2}(1+\beta_{j}I_{j}^{2}), \tag{4}\] where the superposition principle is employed in spite of the presence of the correction term, thanks to the fact that the latter depends, to first approximation, only on the \(j\)-th shifter. In addition, it is worth noting that the constants \(\alpha_{ij}\) strongly depend on the distance between the \(i\)-th MZI and the \(j\)-th shifter. Due to the large bending radii (relative to the inter-waveguide pitch) of these circuits, horizontally neighboring MZIs are millimeters apart while vertically neighboring MZIs are \(160\,\mathrm{\SIUnitSymbolMicro m}\) apart. This means that we can neglect the coefficients \(\alpha_{ij}\) for pairs of MZIs that are not vertically adjacent, leading to a significant simplification of the calibration process and improved control accuracy. The dataset composed of \(\theta_{0,i}\), \(\alpha_{i,j}\) and \(\beta_{j}\) represents the calibration dataset for the internal shifters. In order to retrieve it, coherent light is injected in each individual MZI following a node isolation algorithm [23]. Then, the dependence of the output optical power on the electrical power is fitted from Equations 2 and 4 in order to obtain all the parameters for individual shifters and pairs connected by crosstalk effects. During this process, internal shifters that are already calibrated are set to behave as straight waveguides (\(\theta=\pi\)), crossings (\(\theta=0\)), or balanced beam splitters (\(\theta=\pi/2\)). By surrounding a yet uncalibrated MZI with fully reflective or fully transmissive paths, it is possible to isolate it and proceed with a clean characterization of the phase shifter. Figure 2: Experimental characterization of individual MZIs on UPP A. (a) Optical power \(P_{out}\) measured at the cross output as a function of the squared current \(I^{2}\) when an internal thermal shifter is actuated. Best fit and experimental dataset are both reported, showing the effectiveness of our model, based on Eqs. 2 and 3. (b) Phase \(\theta\) as a function of the squared current \(I^{2}\) obtained from the dataset reported in (a). The solid orange line represents the best nonlinear (polynomial) fit. The dashed black line represents the expected trend without the second-order term (i.e. \(\beta=0\) in Eq. 3). For the external shifters, the procedure follows the same modeling and measurement strategy. The only difference is the necessity to enclose the phase shifter in larger interferometric rings formed by multiple MZIs [8; 13]. All of these measurements have been automated with custom Python scripts to control the instrumentation involved and fit the parameters.
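A minimal sketch of such a fitting step, combining Equations 2 and 3 with a standard least-squares routine, could look as follows; this is our illustration under the stated model, not the actual calibration scripts, and the initial guesses are arbitrary placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(i_sq, theta0, alpha, beta):
    """Cross-port power vs squared current, combining Eqs. (2) and (3)."""
    theta = theta0 + alpha * i_sq * (1 + beta * i_sq)
    return (1 + np.cos(theta)) / 2

# i_sq, p_out: measured squared currents [A^2] and normalized optical powers.
# popt, _ = curve_fit(fringe, i_sq, p_out, p0=[np.pi, 1e3, 0.0])  # p0 is a rough guess
# theta0, alpha_I, beta = popt
```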
More information about the calibration apparatus is reported in the Supplementary Materials (Section S1). To set a specific unitary transformation \(U\) on a UPP, one can use the decomposition reported in [7] to obtain the corresponding set of phases \(\theta_{i}\) and \(\phi_{i}\). Then, it is possible to invert Equations 4 to find the set of currents \(I_{i}\) that implement the desired phases. Since this problem in general does not have a unique solution, we always look for the set of currents \(I_{i}\) that minimizes the total power budget dissipated on chip. With this method, the measured dissipated power was always lower than \(1.2\,\mathrm{W}\) in UPP A (\(1.9\,\mathrm{W}\) in UPP B). From this calibration procedure it is already possible to estimate the average \(2\pi\) power dissipation of each thermal phase shifter, which is \(39\,\mathrm{mW}\) for UPP A and \(63\,\mathrm{mW}\) for UPP B. ## IV Experimental results ### Implementation of unitary transformations The successful calibration of UPPs A and B was verified with the same experimental setup by implementing two types of unitary transformations: switching transformations, where the device acts as an optical switch linking each input with a given output, and Haar random transformations, corresponding to randomly sampled complex unitary matrices. The former only requires the actuation of the internal shifters (specifically to either \(\theta=0\) or \(\theta=\pi\)), while the latter requires the actuation of both internal and external shifters to arbitrary phase values. Each measurement can be summarized as follows:

1. Sample a random switching or Haar random matrix \(U_{set}\in U(6)\).
2. Find the set of phases \(\theta_{i}\) and \(\phi_{i}\) corresponding to \(U_{set}\) using the decomposition algorithm reported in [7].
3. Employ the calibration data to extract and apply the electrical currents corresponding to the desired phases.
4. Measure the input-output intensity distribution and reconstruct the amplitudes of the experimental matrix \(U_{exp}\) [24].
5. Evaluate the implementation quality by the amplitude fidelity metric (with \(N=6\) being the number of modes): \[\mathcal{F}_{ampl}(U_{set},U_{exp})=\frac{1}{N}tr(|U_{set}^{\dagger}||U_{exp}|). \tag{5}\]
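For reference, Equation 5 is straightforward to evaluate numerically; the following is a minimal numpy sketch of ours, for illustration only:

```python
import numpy as np

def amplitude_fidelity(u_set: np.ndarray, u_exp: np.ndarray) -> float:
    """Eq. (5): F_ampl = (1/N) * tr(|U_set^dagger| |U_exp|)."""
    n = u_set.shape[0]
    return float(np.trace(np.abs(u_set.conj().T) @ np.abs(u_exp))) / n
```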
A total of 30 switching unitaries and 1000 Haar random unitaries were implemented on each UPP and the results are summarized in Figures 3 and 4, respectively. Figure 3: Amplitude fidelity \(\mathcal{F}_{ampl}(U_{set},U_{exp})\) distribution over the 30 randomly chosen switching matrices. (a) Scatter plot of the distribution for UPP A. The average \(0.9963\) is marked by the dashed line. (b) Example of a switching matrix implementation for UPP A with amplitude fidelity \(0.9959\). We compare the amplitudes of the target matrix \(U_{set}\) versus the amplitudes of the measured matrix \(U_{exp}\). (c) Scatter plot of the distribution for UPP B. The average \(0.9956\) is marked by the dashed line. (d) Example of a switching matrix implementation for UPP B with amplitude fidelity \(0.9960\). We compare the amplitudes of the target matrix \(U_{set}\) versus the amplitudes of the measured matrix \(U_{exp}\). The amplitude fidelity for the 30 measured switching unitaries is distributed with \(\mathcal{F}_{ampl}=\mu\pm\sigma=0.9963\pm 0.0009\) for UPP A (Figure 3(a)) and \(0.9956\pm 0.0016\) for UPP B (Figure 3(c)). An example of implementation is reported in Figures 3(b) and 3(d), where we compare the amplitudes of the target matrix \(U_{set}\) with the reconstructed amplitudes of \(U_{exp}\) and we achieve an amplitude fidelity of \(0.9959\) (UPP A) and \(0.9960\) (UPP B). These excellent results not only demonstrate the high accuracy that our calibration protocol can reach on the internal phases, but also that the FLW process is able to achieve remarkable accuracy and reproducibility in the implementation of directional couplers with the required splitting ratio. The amplitude fidelity for the 1000 measured Haar random unitaries is distributed with \(\mathcal{F}_{ampl}=\mu\pm\sigma=0.9979\pm 0.0009\) for UPP A (Figure 4(a)) and \(0.9970\pm 0.0017\) for UPP B (Figure 4(c)). An example of implementation is reported in Figures 4(b) and 4(d), where we compare the amplitudes of the target matrix \(U_{set}\) with the reconstructed amplitudes of \(U_{exp}\) and we achieve an amplitude fidelity of 0.9975 (UPP A) and 0.9964 (UPP B). Figure 4: Amplitude fidelity \(\mathcal{F}_{ampl}(U_{set},U_{exp})\) distribution over the 1000 Haar random unitary matrices. (a) Scatter plot of the distribution for UPP A. The average \(0.9979\) is marked by the dashed line. (b) Example of a unitary matrix implementation for UPP A with amplitude fidelity \(0.9975\). We compare the amplitudes of the target matrix \(U_{set}\) versus the amplitudes of the measured matrix \(U_{exp}\). (c) Scatter plot of the distribution for UPP B. The average \(0.9970\) is marked by the dashed line. (d) Example of a unitary matrix implementation for UPP B with amplitude fidelity \(0.9964\). We compare the amplitudes of the target matrix \(U_{set}\) versus the amplitudes of the measured matrix \(U_{exp}\).
A visual comparison between the errors obtained before and after the optimization of a single unitary transformation is shown in Figure 4(b), where the amplitude fidelity increased from 0.9936 up to 0.9997. These results indicate that it is possible to optimize specific unitary transformations in case higher fidelity is required. In addition, we tried implementing the same unitary repeatedly on the circuit. Over 100 iterations, the average amplitude fidelity between any two measurements of the same unitary transformation is about 0.9998 with this experimental setup. The values reported for the optimized matrices are very close to this limit and, therefore, the current optimization is already the best that we can currently verify. A further optimization will be possible in the future by improving the experimental reproducibility. ### Polarization measurement In all former measurements the polarization state of light was not controlled. The light polarization at the input of the UPP is determined by the polarization state of light at the output of the laser source and by the action of all the optical elements of the experimental setup. In particular, optical fibers rotate the polarization of light. In performing subsequent measurements, drifts may even have occurred. In the following, we will refer to this as "arbitrary polarization". We now show additional measurements gauging the variation in the performance of UPP A when using an input state of light that has been accurately set as horizontal (H) or vertical (V). The characterization setup for this experiment is the same used Figure 5: Amplitude fidelity \(\mathcal{F}_{ampl}(U_{set},U_{exp})\) improvement of Haar random matrix implementation through Nelder-Mead algorithm on UPP A. (a) Five unitary matrices (black squares) that were chosen for the optimization and their improved implementation after the optimization process (orange circles). The dashed line is the average fidelity of UPP A over the set of Haar random unitaries as in Figure 3(a). (b) Difference between the amplitudes of \(U_{set}\) and \(U_{exp}\) before and after the optimization. This particular matrix was optimized from an amplitude fidelity of 0.9936 to 0.9997. before, with the addition of polarizers and waveplates to arbitrary set the polarization state of the coherent light used for the measurements. A complete description of the experimental setup and methods used for this experiment is reported in the Supplementary Materials (Section S1). As a first step, we sampled a set of 50 Haar random unitary matrices and a randomly chosen set of 6 switching matrices. Then, we implemented each transformation again, measuring the corresponding input-output intensity distribution with controlled H or V polarized light, thus reconstructing the amplitudes of the experimental matrices \(U_{exp,H}\) and \(U_{exp,V}\). To better discuss how the implementation depends on the H/V polarization, we show these data here in two different ways. Figure 5(a) shows the amplitude fidelity of the measured matrix calculated against the target matrix \(U_{set}\) for all three cases: arbitrary, V and H polarization. The graph shows that the H polarization state performs slightly better on average than the other two, with the V state being the worst overall. Nevertheless, no matrix implementation shows an amplitude fidelity lower than 0.9910 and the average values are 0.9971 for the V polarization and 0.9980 for the H one. 
This is true not only for the Haar random matrices but also for the switching transformations, providing an additional demonstration of the high polarization transparency of the directional couplers. Then, Figure 6(b) shows how similar the two matrices \(U_{exp,V}\) and \(U_{exp,H}\) are by reporting the amplitude fidelity calculated between the two. The average value is 0.9992 and no pair below 0.9984 is reported. Again, it is worth noting that these amplitude fidelities were very close to the experimental limit of our characterization setup, which means that even though the polarization definitely plays a role in the correct implementation of the matrices, it does not have as much of an impact for the purposes of implementation as the calibration and operation of the chip. Figure 6: Amplitude fidelity \(\mathcal{F}_{ampl}\) distribution with different polarization states on UPP A. For all these plots, the vertical line separates the set of 50 random Haar matrices from the 6 switching matrices. (a) Scatter plot of the amplitude fidelity \(\mathcal{F}_{ampl}(U_{set},U_{exp})\) where the experimental matrix \(U_{exp}\) was measured for arbitrary as well as V and H polarized light. The averages 0.9978, 0.9971 and 0.9980 are marked by the black, blue and orange dashed lines for the three polarization states, respectively. (b) Scatter plot of the amplitude fidelity \(\mathcal{F}_{ampl}(U_{exp,V},U_{exp,H})\). The average 0.9992 is marked by the dashed line. ## V Discussion In this work we evaluated the transformations implemented by our UPPs with classical light and intensity measurements, thus reconstructing only the amplitudes of the complex matrix \(U_{exp}\) representing each transformation. Being largely employed in the literature for the benchmarking of UPPs [17; 18; 9; 10; 11], we selected the amplitude fidelity (see Equation 5) as the figure of merit to measure the accuracy reached by our devices, in order to guarantee an easy comparison with the literature. However, this topic deserves a deeper discussion. ### Amplitude fidelity For the sake of clarity, let us start by reporting again the definition of the amplitude fidelity \(\mathcal{F}_{ampl}\) for the case of two generic unitary matrices \(U=\{u_{ij}\}\) and \(V=\{v_{ij}\}\): \[\mathcal{F}_{ampl}(U,V)=\frac{1}{N}tr\left(|U^{\dagger}||V|\right)=\frac{1}{N}\sum_{i,j}|u_{ij}v_{ij}|. \tag{6}\] Being an average over \(N\) scalar products, the amplitude fidelity is a normalized measure of how similar the amplitudes of the two matrices \(U\) and \(V\) are. Indeed, the amplitude fidelity is equal to \(1\) if and only if \(|U|=|V|\), it is always included in the interval \([0,1]\) and it is directly linked to the amplitude variation matrix \(|U|-|V|=\{|u_{ij}|-|v_{ij}|\}\) by the following relation: \[\mathcal{F}_{ampl}(U,V)=1-\frac{1}{2N}\sum_{ij}(|u_{ij}|-|v_{ij}|)^{2}=1-\frac{1}{2N}\tau_{ampl}^{2}(U,V), \tag{7}\] where we have defined: \[\tau_{ampl}^{2}(U,V)=\sum_{ij}(|u_{ij}|-|v_{ij}|)^{2}, \tag{8}\] which is the amplitude total squared variation (TSV) calculated between \(U\) and \(V\). The analytical proof of Equation 7 is reported in the Supplementary Materials (Section S2). Although the amplitude fidelity represents an easy way to evaluate the accuracy of a UPP, it is also easy to show that this figure of merit is flawed by a strong bias that reaches its minimum value as \(N\) approaches infinity. More specifically, in the Supplementary Materials (Section S2) we prove that: \[E[\mathcal{F}_{ampl}(U,V)]\sim\frac{\pi}{4}\text{ as }N\rightarrow\infty, \tag{9}\] where the operator \(E[\cdot]\) is the expectation value of the amplitude fidelity calculated over Haar randomly distributed \(U,V\).
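This bias is easy to reproduce numerically: sampling independent pairs of Haar random unitaries already yields an average amplitude fidelity close to \(\pi/4\approx 0.785\). The sketch below is our own illustration, with the matrix size and number of trials chosen arbitrarily:

```python
import numpy as np
from scipy.stats import unitary_group

def amplitude_fidelity(u, v):
    """Eq. (5)/(6), redefined here to keep the snippet self-contained."""
    return float(np.trace(np.abs(u.conj().T) @ np.abs(v))) / u.shape[0]

n, trials = 32, 200
vals = [amplitude_fidelity(unitary_group.rvs(n), unitary_group.rvs(n))
        for _ in range(trials)]
print(np.mean(vals))  # tends to pi/4 as n grows, despite the matrices being unrelated
```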
Besides this, the amplitude fidelity is also not suitable to evaluate the performance of a UPP in a multiphoton experiment, since in this case also the angles of the matrix elements play an important role. ### Fidelity Provided that a reconstruction of both amplitudes and angles of each complex matrix element is possible [24], the actual device fidelity \(\mathcal{F}\) can be evaluated as follows: \[\mathcal{F}(U,V)=\frac{1}{N}|tr(U^{\dagger}V)|=\frac{1}{N}\sum_{i,j}u_{ij}^{*}v_{ij}, \tag{10}\] where we can remove the absolute value since \(U\) and \(V\) are always known up to a global phase term \(e^{i\psi}\) that can be arbitrarily chosen. As an example, a similar figure of merit was employed in [8] thanks to two-photon measurements allowing the reconstruction of the angles. The fidelity represents the normalized Frobenius inner product between \(U\) and \(V\). Similarly to the amplitude fidelity, it is equal to \(1\) if and only if \(U=V\), it is always included in the interval \([0,1]\) and it is directly linked to the variation matrix \(U-V=\{u_{ij}-v_{ij}\}\) by the following relation: \[\mathcal{F}(U,V)=1-\frac{1}{2N}\sum_{ij}|u_{ij}-v_{ij}|^{2}=1-\frac{1}{2N}||U-V||^{2}, \tag{11}\] where we have defined: \[||U-V||^{2}=\sum_{ij}|u_{ij}-v_{ij}|^{2}, \tag{12}\] in which \(||U-V||\) is the Frobenius norm calculated on the variation matrix \(U-V\). The analytical proof of Equation 11 is reported in the Supplementary Materials (Section S2), along with the proof that the quantity \(||U-V||^{2}\) is given by two separate contributions: \[||U-V||^{2}=\tau_{ampl}^{2}(U,V)+\tau_{angle}^{2}(U,V), \tag{13}\] where we have defined: \[\tau_{angle}^{2}(U,V)=4\sum_{ij}|u_{ij}v_{ij}|\sin^{2}\frac{\angle u_{ij}-\angle v_{ij}}{2}. \tag{14}\] The latter is the counterpart of the amplitude TSV and we define it as the angle TSV. Wrapping up the discussion, we can conclude from Equations 11 and 13 that: \[\mathcal{F}(U,V)=1-\frac{1}{2N}(\tau_{ampl}^{2}+\tau_{angle}^{2}). \tag{15}\] From Equation 15 it is clear that, since it takes into account also the angle TSV, the fidelity \(\mathcal{F}\) is always lower than the amplitude fidelity \(\mathcal{F}_{ampl}\) calculated on the same matrix pair \(U,V\). Related to this, it is also worth noting that the fidelity \(\mathcal{F}\) is a quasi-unbiased figure of merit, in the sense that the bias of the expectation value of the fidelity calculated on Haar randomly distributed unitary matrices \(U,V\) vanishes as \(N\) approaches infinity. More specifically, in the Supplementary Materials (Section S2) we prove that: \[E[\mathcal{F}(U,V)]\sim\frac{\sqrt{\pi}}{2N}\text{ as }N\rightarrow\infty. \tag{16}\] ### Numerical simulation Given the doubts raised on the amplitude fidelity \(\mathcal{F}_{ampl}\), we decided to implement a Monte Carlo simulator to assess the validity of our experimental results. The simulator goes through the following steps:

1. Sample a random Haar unitary matrix \(U_{set}\in U(6)\).
2. Find the set of phases \(\theta_{i}\) and \(\phi_{i}\) corresponding to \(U_{set}\) using the decomposition algorithm reported in [7].
3. Introduce a random phase noise \(\varepsilon\) uniformly distributed in the interval \(\varepsilon_{max}[-\pi,\pi]\) on both \(\theta_{i}\) and \(\phi_{i}\).
4. Get \(U_{sim}\) by matrix multiplication of each MZI layer.
5. Evaluate the effect of the noise by employing both the amplitude fidelity \(\mathcal{F}_{ampl}(U_{set},U_{sim})\) and the actual fidelity \(\mathcal{F}(U_{set},U_{sim})\).
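A compact sketch of such a simulator is given below. Note one simplifying assumption on our part: instead of decomposing a Haar random target as in step 2, the target phases are drawn at random and the same mesh model is used for both the ideal and the noisy transformation.

```python
import numpy as np

def mzi(n, m, theta, phi):
    """Embed the MZI transfer matrix of Eq. (1), acting on modes (m, m+1), into an n-mode identity."""
    t = np.eye(n, dtype=complex)
    pref = np.exp(1j * (theta / 2 + np.pi / 2))
    t[m, m], t[m, m + 1] = pref * np.exp(1j * phi) * np.sin(theta / 2), pref * np.cos(theta / 2)
    t[m + 1, m], t[m + 1, m + 1] = pref * np.exp(1j * phi) * np.cos(theta / 2), -pref * np.sin(theta / 2)
    return t

def mesh_unitary(n, phases):
    """Rectangular mesh [7]: n columns of MZIs alternating between even and odd mode pairs."""
    u = np.eye(n, dtype=complex)
    k = 0
    for col in range(n):
        for m in range(col % 2, n - 1, 2):
            theta, phi = phases[k]
            u = mzi(n, m, theta, phi) @ u
            k += 1
    return u

rng = np.random.default_rng(1)
n, eps_max = 6, 0.034
phases = rng.uniform(0, 2 * np.pi, size=(15, 2))            # stands in for the decomposed target
noisy = phases + eps_max * rng.uniform(-np.pi, np.pi, phases.shape)  # step 3
u_set, u_sim = mesh_unitary(n, phases), mesh_unitary(n, noisy)       # step 4
f_ampl = np.trace(np.abs(u_set.conj().T) @ np.abs(u_sim)).real / n   # Eq. (6)
f = abs(np.trace(u_set.conj().T @ u_sim)) / n                        # Eq. (10)
print(f_ampl, f)
```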
Introduce a random phase noise \(\varepsilon\) uniformly distributed in the interval \([-\varepsilon_{max}\pi,\,\varepsilon_{max}\pi]\) on both \(\theta_{i}\) and \(\phi_{i}\). 4. Get \(U_{sim}\) by matrix multiplication of each MZI layer. 5. Evaluate the effect of the noise by employing both the amplitude fidelity \(\mathcal{F}_{ampl}(U_{set},U_{sim})\) and the actual fidelity \(\mathcal{F}(U_{set},U_{sim})\). Figure 7a reports the results of \(10000\) iterations for three different values of the parameter \(\varepsilon_{max}\). For \(\varepsilon_{max}=1\), phases can be considered completely random. Nevertheless, an average amplitude fidelity \(\overline{\mathcal{F}_{ampl}}=0.7411\) is obtained, consistent with the bias that affects this figure of merit. On the contrary, the fidelity is a good witness of the large error affecting the phase set, since the simulation produces an average value \(\overline{\mathcal{F}}=0.1475\). For \(\varepsilon_{max}=0.2\), errors are reduced and the amplitude fidelity steeply increases up to \(\overline{\mathcal{F}_{ampl}}=0.9399\). However, the statistical dispersion remains quite large and many unitaries display values lower than \(0.9\), clearly indicating that something is not working in the processor. In fact, the fidelity remains on average \(\overline{\mathcal{F}}=0.7619\), and several unitaries with very high amplitude fidelity (\(\mathcal{F}_{ampl}>0.95\)) have poor fidelity (\(\mathcal{F}<0.6\)). Interestingly, the situation looks completely different for \(\varepsilon_{max}=0.034\). This value was chosen to match the amplitude fidelity distribution measured on UPP A both in terms of average and standard deviation, i.e. \(\mathcal{F}_{ampl}=0.9978\pm 0.0008\), compared to \(\mathcal{F}_{ampl}=0.9979\pm 0.0009\) as reported in Section IV.1. In this case, points are all concentrated in a tight spot at the top right corner of the graph in Figure 7a; with this phase noise, the fidelity is as high as \(\mathcal{F}=0.9921\pm 0.0029\), which is very close to the amplitude fidelity. This suggests that a low statistical dispersion of the amplitude fidelity is a clear witness of low errors also on the angles. These observations strengthen the validity of the experimental characterization performed on our UPPs. Finally, it is worth noting that this simulation also allows us to obtain a rough estimation of the calibration errors, which we evaluate as lower than \(0.1\,\)rad in absolute value. Secondly, one could also call into question the choice of the amplitude fidelity as a loss function for the optimization process discussed in Section IV.2. Therefore, we decided to modify the simulator in order to implement the following procedure: 1. Sample a random Haar unitary matrix \(U_{set}\in U(6)\). 2. Find a set of phases \(\theta_{i}\) and \(\phi_{i}\) corresponding to \(U_{set}\) using the decomposition algorithm reported in [7]. 3. Introduce a random phase noise \(\varepsilon\) suitably distributed to match the average and standard deviation of the amplitude fidelity distribution measured for UPP A (see Section IV.1). 4. Apply the minimization algorithm by calculating \(U_{sim}\) by matrix multiplication of each MZI layer and by using the amplitude infidelity \(1-\mathcal{F}_{ampl}(U_{set},U_{sim})\) as a loss function. Figure 7: Simulated scatter plots of fidelity. (a) Scatter plot of the simulated fidelities for different values of the random phase noise \(\varepsilon_{max}\). The dashed line represents the upper bound of the plot, given by \(\mathcal{F}<\mathcal{F}_{ampl}\).
(b) Scatter plot of the simulated fidelities after the optimization procedure was performed using the amplitude fidelity as a loss function. The dashed line represents the convergence threshold \(1-\mathcal{F}=10^{-10}\). 5. Evaluate the final effect of the optimization algorithm by employing the actual infidelity \(1-\mathcal{F}(U_{set},U_{sim})\). Figure 7b shows the results of 500 iterations of this algorithm, reporting the optimization in terms of infidelity \(1-\mathcal{F}\). Despite being based on the amplitude fidelity as the loss function, the algorithm led to a remarkable improvement of the fidelity, with 86% of the matrices reaching full convergence (arbitrarily defined as \(1-\mathcal{F}<10^{-10}\), dashed line in Figure 7b). Indeed, it is worth noting that no matrix showed a fidelity worse than its initial condition, demonstrating the effectiveness of our optimization protocol based only on intensity measurements. ## VI Conclusion In this work, we reported on the design, fabrication and characterization of two 6-mode UPPs fabricated in a FLW integrated photonic platform. Even though larger circuits have already been reported in the literature [18, 26], our processors provide the highest level of control and reconfigurability demonstrated to date in a FLW platform, with the additional feature of producing polarization-independent optical transformations. These devices find their natural application in quantum optics and quantum information experiments. The advantages of our technology for this set of applications are manifold. First of all, they are compatible with quantum sources emitting both in the visible range and at telecom wavelength (here demonstrated at \(785\,\mathrm{nm}\) and \(1550\,\mathrm{nm}\)) with no penalty in terms of photon losses. Secondly, the high precision reached with our calibration protocol allows for the implementation of arbitrary optical transformations with average fidelity higher than \(0.9970\), which can be pushed over \(0.9990\) thanks to an optimization algorithm based only on intensity measurements. Last, the low insertion losses (\(<3\,\mathrm{dB}\)) also make them compatible with state-of-the-art multiphoton experiments. In the future, we believe that the limited power dissipation of our circuits (a few watts), combined with the next generation of thermal phase shifters and programmable MZIs [27], will enable the scaling towards tens of modes with limited technological effort, thus unlocking a new level of complexity for high-fidelity polarization-insensitive UPPs. ## Acknowledgements The authors would like to thank Dr. Simone Atzeni (currently at Paderborn University) for the helpful discussions and the experimental support. Fabrication of the resistive microheaters for the femtosecond laser-written processors was partially performed at PoliFAB, the micro and nano-fabrication facility of Politecnico di Milano [28]. The authors would like to thank Emanuele Urbinati (currently at TU Delft) for the help with the fabrication and the PoliFAB staff for the valuable technical support. ## Research Funding This work is supported by the European Union's Horizon 2020 research and innovation programme under the PHOQUSING project GA no. 899544. R.O. acknowledges funding from the National Centre for HPC, Big Data and Quantum Computing - HPC (CUP B93C22000620006). A.C. acknowledges funding by the PRIN 2017 programme for the Italian Ministry for University and Research, QUSHIP project (Id. 2107SRN-BRK).
## Author contributions All authors have accepted responsibility for the entire content of this manuscript and approved its submission. ## Conflict of interest F.C. and R.O. are co-founders of the company Ephos. The other authors state no conflict of interest. ## Data availability The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
2302.01346
MultiCAM: A multivariable framework for connecting the mass accretion history of haloes with their properties
Models that connect galaxy and halo properties often summarize a halo's mass accretion history (MAH) with a single value, and use this value as the basis for predictions. However, a single-value summary fails to capture the complexity of MAHs and information can be lost in the process. We present MultiCAM, a generalization of traditional abundance matching frameworks, which can simultaneously connect the full MAH of a halo with multiple halo and/or galaxy properties. As a first case study, we apply MultiCAM to the problem of connecting dark matter halo properties to their MAHs in the context of a dark matter-only simulation. While some halo properties, such as concentration, are more strongly correlated to the early-time mass growth of a halo, others, like the virial ratio, have stronger correlations with late-time mass growth. This highlights the necessity of considering the impact of the entire MAH on halo properties. For most of the halo properties we consider, we find that MultiCAM models that use the full MAH achieve higher accuracy than conditional abundance matching models which use a single epoch. We also demonstrate an extension of MultiCAM that captures the covariance between predicted halo properties. This extension provides a baseline model for applications where the covariance between predicted properties is important.
Ismael Mendoza, Philip Mansfield, Kuan Wang, Camille Avestruz
2023-02-02T19:00:00Z
http://arxiv.org/abs/2302.01346v2
MultiCAM: A multivariable framework for connecting the mass accretion history of haloes with their properties ###### Abstract Models that connect galaxy and halo properties often summarize a halo's mass accretion history (MAH) with a single value, and use this value as the basis for predictions. However, a single-value summary fails to capture the complexity of MAHs and information can be lost in the process. We present _MultiCAM_, a generalization of traditional abundance matching frameworks, which can simultaneously connect the full MAH of a halo with multiple halo and/or galaxy properties. As a first case study, we apply MultiCAM to the problem of connecting dark matter halo properties to their MAHs in the context of a dark matter-only simulation. While some halo properties, such as concentration, are more strongly correlated to the early-time mass growth of a halo, others, like the virial ratio, have stronger correlations with late-time mass growth. This highlights the necessity of considering the impact of the entire MAH on halo properties. For most of the halo properties we consider, we find that MultiCAM models that use the full MAH achieve higher accuracy than conditional abundance matching models which use a single epoch. We also demonstrate an extension of MultiCAM that captures the covariance between predicted halo properties. This extension provides a baseline model for applications where the covariance between predicted properties is important. keywords: cosmology: galaxy clusters -- dark matter -- galaxies: haloes -- galaxies: evolution -- methods: numerical ## 1 Introduction Characterizing the properties and growth of dark matter haloes has been an important goal of cosmological N-body simulations (Diemand & Moore, 2011; Frenk & White, 2012). Dark matter haloes are groups of dark matter particles that have gravitationally collapsed into bound structures. In the \(\Lambda\)CDM cosmological model, every galaxy forms within the potential well provided by a dark matter halo (White & Rees, 1978; Blumenthal et al., 1984). Thus, galaxies and their dark matter haloes are closely connected, meaning that models which attempt to predict the properties of galaxies must account for the behaviour and properties of their dark matter haloes (e.g. Hearin & Watson, 2013; Hearin et al., 2016; Wechsler & Tinker, 2018). Previous work has established a deep connection between a halo's present-day (\(z=0\)) properties and its _mass accretion history_ (_MAH_), i.e. its mass growth as a function of time. Properties such as concentration, virial ratio, centre of mass offset, spin, and axis ratio have been studied in relation to the MAH. Early-forming haloes tend to have a higher concentration on average than late-forming haloes (e.g. Wechsler et al., 2002), and merger events induce lasting changes in halo structure which are encoded as universal signatures in the halo's concentration (e.g. Wang et al., 2020). Other properties like the centre of mass offset and virial ratio have strong positive correlations with the halo's recent mass growth history and merging activity (e.g. Power et al., 2012). This joint dependence leads to substantial covariance between halo parameters (e.g. Lau et al., 2021). Much of this dependence comes from long-term growth trends: it has been found that a significant percentage of the variance in the concentration, axis ratio, and spin of a dark matter halo can be explained by the first principal component of the mass assembly history (e.g. Chen et al., 2020).
The mass accretion history of a halo directly impacts the dynamical state of a halo, which in turn determines the reliability of structural measurements of its properties. Previous studies have established that haloes that have recently experienced one or more major mergers are more likely to be out of dynamical equilibrium (Tormen et al., 1997; Hetznecker & Burkert, 2006). These major merger events can cause temporary deviations from a halo's equilibrium state during which its structural properties change rapidly and might not be well-defined (Ludlow et al., 2016). Thus, it is critical that we characterize the dynamical state of haloes so that their structural measurements can be robustly propagated to downstream analysis. Previous work measuring the distribution of halo properties in simulations attempted to address this by selecting a sub-sample of _relaxed haloes_, i.e., those haloes considered to be close to dynamical equilibrium (e.g. Neto et al., 2007; Klypin et al., 2011, 2016). A closely related line of work seeks to identify relaxed galaxy clusters to avoid similar biases in the corresponding measurements (e.g. Cui et al., 2017; Zhang et al., 2022). However, in both cases there is significant ambiguity in how exactly to define this relaxed sample, and the definitions usually rely on hard cuts. This further highlights the need for increasing our understanding of the relationships between a galaxy's or halo's properties, MAH, and dynamical state. A common way to connect galaxy or halo properties to their MAH is to use a single-parameter summary of the MAH, such as the half-mass scale (e.g. Gao et al., 2005; Hearin & Watson, 2013) or the value returned by a single-parameter fit (e.g. Wechsler et al., 2002). This framework leads to a one-to-one parameter correlation analysis called abundance matching, which corresponds to a prediction model that _assumes perfect correlation_ between the two parameters (e.g. half-mass scale and halo concentration) (Kravtsov et al., 2004). Abundance matching and its hierarchical extension, conditional abundance matching (CAM, Hearin et al., 2014; see subsection 3.3.1 for a description of these methods), have been effective models for a range of applications. For example, CAM can predict low-redshift galaxy statistics like two-point correlation functions in SDSS to reasonable accuracy (Hearin et al., 2014). However, the MAH of a dark matter halo is a complex multi-dimensional quantity that contains richer predictive information than single-parameter summaries. MAHs are typically made up of a smooth accretion component consisting of an early-fast accretion phase and a late-slow accretion phase, which was successfully captured with a three-parameter model in Hearin et al. (2021). The MAH also includes a non-smooth accretion component in the form of an arbitrary number of discrete major merger events that can significantly change halo properties on a short time-scale (e.g. Hetznecker & Burkert, 2006; Power et al., 2012; Wang et al., 2020). Separately, it has been shown that different present-day halo properties correlate more or less strongly with different parts of the MAH (e.g. Wong & Taylor, 2012). Thus, summarizing the MAH with a single quantity leads to discarding a significant amount of useful information. Another significant drawback of one-to-one parameter models is that they are unable to capture the covariance between predictions.
If the same single-parameter MAH summary is chosen, CAM-like models necessarily output a perfect correlation between any pair of predicted halo properties. Thus, if one is interested in emulating multiple halo properties from a given MAH, one-to-one models are insufficient. To address the aforementioned limitations, we propose a new method for connecting galaxy or halo properties with their formation history: _MultiCAM_. MultiCAM is a generalization of the traditional abundance matching framework that consistently incorporates the full formation history into a prediction of single-epoch properties while preserving the key benefits of CAM. MultiCAM utilizes the full covariance between features and targets in its predictions. Moreover, MultiCAM can predict multiple properties simultaneously and correctly capture the correlations between them. As a first demonstration of our new method, we apply it to connecting dark matter halo properties with their MAH. In the future, our main focus will be on applying this method to predict baryonic properties. This paper is organized as follows. Section 2 describes the simulation suite and halo sample used in our studies. Section 3 presents the parameterizations of the MAH we consider in this work, gives an overview of CAM, and provides a detailed description of MultiCAM. In Section 4 we characterize the covariance of MAH and halo present-day properties, and evaluate MultiCAM on our halo sample. Section 5 discusses future applications of MultiCAM and how it compares to other methods. Finally, in Section 6 we present our conclusions. ## 2 Dataset ### Simulation Suite For our dataset we use the Bolshoi dark matter-only cosmological simulation (Klypin et al., 2011), which was performed with the Adaptive-Refinement-Tree (ART) code described in Kravtsov et al. (1997). The simulation has outputs at 180 snapshots starting at \(a_{179}=0.07835\) and ending at \(a_{0}=1.00035\approx 1\). The spacing between early snapshots is \(\Delta a=0.006\) between \(a_{179}=0.07835\) and \(a_{77}=0.80835\), and \(\Delta a=0.003\) between late snapshots \(a_{77}=0.80835\) and \(a_{0}=1.00035\). The cosmological parameters and other simulation details are shown in Table 1. The halo catalogues were generated by the Rockstar halo finder (Behroozi et al., 2013), as run by Rodriguez-Puebla et al. (2016). This catalogue uses both position and velocity information to identify each halo in the simulation. Halo finder comparison projects have found this algorithm to perform well at halo finding tasks, including detecting substructure and tracing mergers (e.g., Knebe et al., 2011). We use catalogues generated by consistent-trees (Behroozi et al., 2013) to construct the merger history that we use for our analysis (Rodriguez-Puebla et al., 2016). Given a merger event, we define the _main progenitor_ halo as the one that contains the most particles that end up in the resulting halo after the merger. Given a present-day (\(z=0\)) halo, a _merger tree_ can be constructed by following its evolution at each snapshot in the simulation going backwards in time. The _main progenitor branch_ of a given present-day halo is the branch in the merger tree resulting from following the main progenitor halo backwards in time at each snapshot. ### Defining the Halo Sample Throughout this work we use the same dataset of a random sample of \(10^{4}\) haloes from the Bolshoi simulation in the mass bin \(M_{\rm vir}\in[10^{12},\,10^{12.2}]\,h^{-1}\,M_{\odot}\), which we denote as M12.
Here, \(M_{\rm vir}\) is the bound mass within a radius enclosing an average density corresponding to the overdensity threshold defined in Bryan & Norman (1998). We take this radius to be the virial radius \(R_{\rm vir}\). \begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline \hline Box size & 250 Mpc/\(h\) \\ \hline Number of particles & \(2048^{3}\) \\ \hline Particle mass & \(1.35\times 10^{8}\) M\({}_{\odot}h^{-1}\) \\ \hline Force resolution & \(1.0\,\)kpc/\(h\) \\ \hline Initial redshift & 80 \\ \hline Number of snapshots & 180 \\ \hline Hubble parameter \(h\) & 0.7 \\ \hline \(\Omega_{\Lambda}\) & 0.73 \\ \hline \(\Omega_{m}\) & 0.27 \\ \hline \(\Omega_{b}\) & 0.0469 \\ \hline Tilt \(n\) & 0.95 \\ \hline \(\sigma_{8}\) & 0.82 \\ \hline \end{tabular} \end{table} Table 1: Simulation and cosmological parameters of the Bolshoi dark matter-only cosmological \(\Lambda\)CDM simulation presented in Klypin et al. (2011), which is based on the WMAP5 cosmology (Dunkley et al., 2009). For each of the haloes in this sample, we use the Rockstar catalogue at each snapshot and consistent-trees to extract the corresponding main progenitor branch and the virial masses of the progenitors at each snapshot in this branch. We do not use all of the 180 snapshots in the Bolshoi simulation; rather, we impose a cutoff based on the mass resolution of our simulation. We pick our first snapshot to be the earliest snapshot out of the 180 where at most 5% of haloes have a virial mass lower than 50 times the particle mass. This ensures that we never attempt to analyze snapshots where a substantial portion of our sample is unresolved. For our M12 sample, we consider a total of \(N_{\rm snap}=165\) scales ranging from \(a_{164}=0.18635\) up to \(a_{0}=1\). For the small percentage (\(\leq 1\%\)) of haloes in our sample that do not have a corresponding main-line progenitor at \(a_{164}\) (or in any subsequent snapshots), we assign a virial mass at those missing snapshots equal to the mass of a single particle of the simulation. This ensures that there are no missing values in the mass accretion history for any halo in our M12 sample. ### Halo properties and their convergence In this study we mainly consider halo concentration, \(c_{\rm vir}\), defined as the ratio of the virial radius to the NFW scale radius; the normalized maximum value of the halo's rotation curve, \(V_{\rm max}/V_{\rm vir}\); the offset between the halo's center of mass and its most bound particle, \(x_{\rm off}\); the virial ratio, \(T/|U|\); its dimensionless spin parameter, \(\lambda_{\rm bullock}\); and its second minor-to-major axis ratio, \(c/a\). See Mansfield and Avestruz (2021) for the exact definitions of these properties as computed by Rockstar. Mansfield and Avestruz (2021) measured the minimum converged masses for each of these properties in Bolshoi at different levels of acceptable numerical limits. No detectable bias is observed in \(V_{\rm max}/V_{\rm vir}\) at \(M_{\rm vir}>10^{11.8}\,h^{-1}M_{\odot}\), \(x_{\rm off}\) at \(M_{\rm vir}>10^{11.6}\,h^{-1}M_{\odot}\), \(T/|U|\) at \(M_{\rm vir}>10^{11.1}\,h^{-1}M_{\odot}\), \(\lambda_{\rm bullock}\) at \(M_{\rm vir}>10^{10.2}\,h^{-1}M_{\odot}\), and \(c/a\) at \(M_{\rm vir}>10^{10.9}\,h^{-1}M_{\odot}\).
Mansfield and Avestruz (2021) do not report a \(c_{\rm vir}\) convergence limit for Bolshoi, but do report a \(c_{\rm vir}\) convergence limit for Erebos\_CBol\_L125 (Diemer and Kravtsov, 2015) at \(M_{\rm vir}>10^{11.6}\,h^{-1}M_{\odot}\), a simulation with a cosmology and particle mass identical to Bolshoi's, but with coarser force softening and coarser timesteps. Therefore, all the considered properties are converged within our mass window of \([10^{12},10^{12.2}]\,h^{-1}M_{\odot}\). We also briefly consider several other, more minor halo properties. For example, the average of the first minor-to-major axis ratio \(b/a\) and the second minor-to-major ratio \(c/a\), which we denote with \(q\): \[q=\frac{1}{2}\left(\frac{b}{a}+\frac{c}{a}\right).\] Because this property is derived from \(b/a\) and \(c/a\), it is converged at about \(10^{10.9}\,h^{-1}M_{\odot}\). For all other halo properties, their definitions can be found in Mansfield and Avestruz (2021) and they are also converged within our mass window. ## 3 Methods ### Parameterizations of MAHs First, we introduce the notation that we use to parameterize the mass accretion history and its properties. We measure time through the cosmological scale factor: \[a(z)=\frac{1}{1+z}. \tag{1}\] We track mass growth through the normalized peak mass, \[m(a)=\frac{M_{\rm peak}(a)}{M_{\rm peak}(a=1)}, \tag{2}\] where we take the ratio of \(M_{\rm peak}\) values to force monotonicity, \[M_{\rm peak}(a)=\max_{0\leq a^{\prime}\leq a}\big{[}M_{\rm vir}(a^{\prime})\big{]}. \tag{3}\] The difference between \(M_{\rm peak}(a)\) and \(M_{\rm vir}(a)\) is significant for subhalos due to the large amount of mass loss they experience (e.g. Wechsler and Tinker, 2018), but the difference is less important for the central haloes in M12, since their masses will typically increase over time. The main impact on our sample of host haloes is that it allows \(m(a)\) to be inverted. To that end, we define \[a(m)=m^{-1}(a). \tag{4}\] Since \(m(a)\) is monotonic, but not strictly increasing, we take \(a(m)\) to be the first scale factor at which the halo reaches a given mass. When inverting \(m(a)\) we use piecewise power-law interpolation between adjacent snapshots of a halo's MAH. In addition, per convention, we sometimes use the notation \(a_{1/n}\), where \(n\) is an integer, to mean: \[a_{1/n}=a(m=1/n). \tag{5}\] This notation, usually with \(n=2\), is often used in the literature as a tracer of formation (e.g. Gao et al., 2005). We define a halo's dynamical time \(t_{\rm dyn}\) as the time it takes for a test particle to travel a distance of one virial radius \(R_{\rm vir}\) at a speed of \(V_{\rm vir}\), the orbital speed of a particle on a circular orbit at \(R_{\rm vir}\). Since all haloes have the same enclosed density within \(R_{\rm vir}\), \(t_{\rm dyn}\) is only a function of redshift and cosmology: \[t_{\rm dyn}=\frac{R_{\rm vir}}{\sqrt{GM_{\rm vir}/R_{\rm vir}}}=\frac{1}{H(z)}\left(\frac{2\rho_{\rm c}(z)}{\rho_{\rm vir}(z)}\right)^{1/2} \tag{6}\] \[=2.01\,{\rm Gyr}\left(\frac{\rho_{\rm vir}(z)/\rho_{\rm c}(z)}{97.0}\right)^{-1/2}\left(\frac{H(z)}{70\,{\rm km\,s^{-1}\,Mpc^{-1}}}\right)^{-1}. \tag{7}\] For convenience, Eq. 7 is normalized to the \(z=0\) virial density in the Bolshoi simulation. Following from this definition, \(m_{t_{\rm dyn}}\) is the mass fraction at a time \(t_{\rm dyn}\) before the present day.
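These definitions translate directly into a few lines of array manipulation. A minimal sketch (our own illustration under assumed inputs, not code released with this work) of computing \(m(a)\), its inverse \(a(m)\) with piecewise power-law interpolation, and \(a_{1/2}\):

```python
import numpy as np

def normalized_peak_mass(scales, mvir):
    """Eqs. 2-3: m(a) = M_peak(a) / M_peak(a=1); scales sorted ascending."""
    mpeak = np.maximum.accumulate(mvir)
    return mpeak / mpeak[-1]

def a_of_m(scales, m_of_a, m):
    """Eq. 4: first scale at which m(a) reaches m, interpolating with a
    piecewise power law (linear in log a versus log m)."""
    i = np.searchsorted(m_of_a, m)     # first snapshot with m(a) >= m
    if i == 0:
        return scales[0]
    lm0, lm1 = np.log(m_of_a[i - 1]), np.log(m_of_a[i])
    if lm1 == lm0:                     # flat stretch of the running maximum
        return scales[i]
    la0, la1 = np.log(scales[i - 1]), np.log(scales[i])
    return np.exp(la0 + (np.log(m) - lm0) * (la1 - la0) / (lm1 - lm0))

# Toy usage with a fabricated growth curve (placeholder, not Bolshoi data):
scales = np.linspace(0.18635, 1.0, 165)
mvir = 1e12 * scales**2.5
m_of_a = normalized_peak_mass(scales, mvir)
a_half = a_of_m(scales, m_of_a, 0.5)   # a_{1/2}, i.e. Eq. 5 with n = 2
```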
\(m_{t_{\rm dyn}}\) is a commonly used measure of late-time accretion rates, and its unnormalized equivalent is tracked by consistent-trees catalogues by default. We also analyze the best-fitting exponential scale factor \(\alpha\) of each mass accretion history (Wechsler et al., 2002): \[M(z)/M(z=0)=e^{-\alpha z}. \tag{8}\] ### DiffMAH Model of Smooth Mass Accretion History We also consider the best-fitting parameters of the DiffMAH model of smooth mass accretion histories presented in Hearin et al. (2021). This model consists of the following fitting function: \[M_{\rm peak}(t)/M_{\rm peak}(t=t_{0})=(t/t_{0})^{\alpha(t)}, \tag{9}\] where \(t\) is the age of the universe, and \(t_{0}\) is the present-day age of the universe. Finally, \(\alpha(t)\) is a sigmoid function defined as: \[\alpha(t;\tau_{\rm c},k,\alpha_{\rm early},\alpha_{\rm late})\equiv\alpha_{\rm early}+\frac{\alpha_{\rm late}-\alpha_{\rm early}}{1+\exp(-k(t-\tau_{\rm c}))}, \tag{10}\] which has parameters \(\alpha_{\rm early}\), \(\alpha_{\rm late}\), \(k\), and \(\tau_{\rm c}\), each with an explicit physical meaning. First, \(\alpha_{\rm early}\) and \(\alpha_{\rm late}\) determine the asymptotic values of the power-law index at early and late times, respectively; \(\tau_{\rm c}\) controls the transition time between the early- and late-time indices; and \(k\) determines the speed of the transition between the two phases. As in Hearin et al. (2021), we fix \(k=3.5\). ### Statistical Algorithms In this section we introduce the statistical algorithms we use for predictions connecting MAH and present-day halo properties. #### 3.3.1 Conditional Abundance Matching One of the methods we use is an adapted _Conditional Abundance Matching (CAM)_. The CAM algorithm was originally developed to study and model the connection between halo ages -- traced through properties like \(a_{1/2}\) -- and observable galaxy properties, like galaxy color or star formation rate (Hearin & Watson, 2013; Hearin et al., 2014; Watson et al., 2015). It is similar to the traditional abundance matching algorithm (Kravtsov et al., 2004), which assigns stellar masses or luminosities to simulated dark matter haloes. Traditional abundance matching evaluates the function \(N_{\bullet}^{-1}(N_{\rm dm}(M_{\rm vir}))\), where \(N_{\bullet}\) and \(N_{\rm dm}\) are some observed cumulative stellar mass function and theoretical cumulative mass function, respectively. Similarly, CAM assigns galaxy properties via \(F_{\rm gal}^{-1}(F_{\rm halo}(X_{\rm mah}|M_{\bullet})|M_{\bullet})\), where \(F_{\rm halo}\) and \(F_{\rm gal}\) are the conditional CDFs at a fixed stellar mass \(M_{\bullet}\) for some theoretical tracer of halo age, \(X_{\rm mah}\), and the CDF for the target observable galaxy property, respectively. The primary application of CAM is generating empirical models of observable properties. But more generally, CAM is a method that
optimally implements a specific assumption for the connection between halo growth and halo/galaxy properties: a given halo property \(Y_{\rm halo}\) is entirely and monotonically determined by a given feature of a halo's MAH \(X_{\rm mah}\). If this assumption is correct, CAM predictions will be the exact values of the given halo property, and failures in this assumption propagate into inaccuracies in CAM predictions. Therefore, throughout this paper, we use the CAM prediction strength as a measure of how well a given halo property \(Y_{\rm halo}\) can be understood to be determined by a given proxy of halo growth \(X_{\rm mah}\). Moreover, multi-parameter models which have improved predictive power over CAM are evidence that the halo property in question is influenced by multiple features in a halo's MAH. In this work, the CAM algorithm is used to abundance match a given MAH feature \(X_{\rm mah}\) to a given present-day halo property \(Y_{\rm halo}\), at fixed present-day halo mass \(M_{\rm vir}\), specifically with the equation: \[Y_{\rm halo}=F_{\rm halo}^{-1}\left(F_{\rm mah}(X_{\rm mah}|M_{\rm vir})\,|\,M_{\rm vir}\right), \tag{11}\] where \(F_{\rm halo}\) and \(F_{\rm mah}\) are the conditional CDFs of the present-day halo property and the MAH feature, respectively. Throughout, we condition at a fixed mass bin equal to the one used for constructing the M12 dataset. We pick the MAH property \(X_{\rm mah}\) for abundance matching to be the scale \(a(m_{\rm opt})\) (see Eq. 4) at a fixed mass bin \(m_{\rm opt}\) that optimally correlates with \(Y_{\rm halo}\) across all \(m\). For example, when \(Y_{\rm halo}=c_{\rm vir}\), we find \(m_{\rm opt}\approx 0.5\) in our M12 dataset, so that \(X_{\rm mah}=a(m_{\rm opt})=a(0.5)=a_{1/2}\). The optimal mass bin \(m_{\rm opt}\) satisfies the equation: \[\max_{m}\,\rho_{\rm sp}(a(m),Y_{\rm halo})=\rho_{\rm sp}(a(m_{\rm opt}),Y_{\rm halo}). \tag{12}\] Similarly, we could have chosen \(X_{\rm mah}=m(a_{\rm opt})\), where the optimal scale \(a_{\rm opt}\) satisfies: \[\max_{a}\,\rho_{\rm sp}(m(a),Y_{\rm halo})=\rho_{\rm sp}(m(a_{\rm opt}),Y_{\rm halo}), \tag{13}\] but we find that \(a(m_{\rm opt})\) has overall higher correlations across all halo properties than \(m(a_{\rm opt})\). We refer to the algorithm that uses \(a(m_{\rm opt})\) to abundance match between MAH and halo properties at a given halo mass as 'CAM \(a(m_{\rm opt})\)'. We use CAM \(a(m_{\rm opt})\) to predict a given halo property \(Y_{\rm halo}\) from the MAH of a halo in subsection 4.3. CAM is a simple, yet powerful, empirical non-parametric approach to matching any pair of strongly correlated variables. It however has some important limitations: (1) it is unable to match multiple variables to another set of multiple variables, and (2) it does not incorporate the scatter between prediction and target when matching. We address these limitations of CAM in the algorithms described next. Figure 1: **Schematic illustrating the MultiCAM method.** In this diagram we illustrate the novel method presented in this work to connect mass accretion history information to present-day halo properties: 'MultiCAM'. Each step of our algorithm is marked with a green circle. Each box represents the 1D distribution of one of the \(M\) features or \(T\) targets. The curve of the 1D distribution is delineated so that the blue and red curves intersect at the median. The rhombuses represent algorithms, either a quantile transformer to marginally map variables to Gaussian distributions, or a linear regression prediction model. The algorithm and each of the steps are described in detail in subsection 3.3.2. #### 3.3.2 MultiCAM We propose the new algorithm _MultiCAM_ to address these limitations of CAM. MultiCAM generalizes CAM to match multiple MAH properties to multiple present-day halo properties simultaneously. To accomplish this, MultiCAM first introduces a multi-variable linear regression between the multiple features and target variables. Then, MultiCAM marginally matches the distribution of outputs to the true distribution of targets.
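For reference, the CAM baseline that MultiCAM generalizes (Eq. 11, evaluated within a single mass bin) amounts to a rank-ordering operation. A compact sketch (our own illustration, assuming NumPy arrays already restricted to one \(M_{\rm vir}\) bin and a positively correlated pair):

```python
import numpy as np

def cam_match(x_mah, y_halo):
    """Eq. 11 within one mass bin: map the empirical quantile of each
    halo's MAH feature onto the same quantile of the halo property."""
    ranks = np.argsort(np.argsort(x_mah))   # rank of each x (0 .. n-1)
    return np.sort(y_halo)[ranks]           # y value with the matching rank
```

By construction the output is perfectly rank-correlated with `x_mah`, which is exactly the assumption CAM encodes; for a negatively correlated pair one would match against `-x_mah` instead.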
In our context, different halo properties correlate more or less strongly at different time scales of a halo's growth history (e.g. Wang et al. 2020). This means that matching multiple variables consistently is essential for exploring the connections in this work. MultiCAM also includes a pre-processing step where all features and target variables are marginally transformed to Gaussian distributions. At the end of the procedure, all variables are transformed back to their original space. This pre-processing step is beneficial in the context of linear regression, since it allows for a version of MultiCAM that introduces scatter between the features and targets, as discussed in detail in subsection 3.3.3 and subsection 3.3.4. The MultiCAM algorithm is illustrated in Fig. 1 and in detail consists of the following: 1. Collect all desired features for prediction, \(\mathbf{X}\), and targets, \(\mathbf{Y}\), from a given dataset. For example, \(\mathbf{X}\) can be set to the full MAH of all haloes in the dataset: \(\mathbf{X}_{\rm mah}=\{a(m_{i})\}_{i=1}^{N}\), where \(\{m_{i}\}_{i=1}^{N}\) are some pre-defined mass bins with \(m_{N}=1\). Similarly, \(\mathbf{Y}\) can be set to all the halo present-day properties we consider in this work, \(\mathbf{Y}_{\rm halo}=\{c_{\rm vir},T/|U|,x_{\rm off},\lambda_{\rm bullock},c/a\}\), which are described in subsection 2.3. 2. We marginally transform each individual feature and target from its empirical distribution (top left of Fig. 1) to a Gaussian distribution. We do this via the inverse transform method (e.g. Devroye 1986), which can map any 1D dataset of variables to have any other desired empirical distribution without changing the rank-ordering of its points. 3. We then take the marginally Gaussianized features \(\mathbf{\tilde{X}}_{\rm train}\) and targets \(\mathbf{\tilde{Y}}_{\rm train}\) in the training set, and train a linear regression model for prediction in this Gaussianized space. 4. We then use the marginally Gaussianized features in the testing set, \(\mathbf{\tilde{X}}_{\rm test}\), and apply the linear regression to obtain the corresponding set of predictions \(\mathbf{\tilde{Y}}_{\rm pred}\). 5. The predictions from the linear regression model, \(\mathbf{\tilde{Y}}_{\rm pred}\), are not guaranteed to follow the empirical distribution of the target variables (they tend to be narrower), so we apply one more quantile transformer to \(\mathbf{\tilde{Y}}_{\rm pred}\) to make its distribution (marginally) Gaussian, which then matches the distribution of the Gaussianized training targets \(\mathbf{\tilde{Y}}_{\rm train}\). This is illustrated in the bottom-left corner of Fig. 1. 6. Finally, we transform the Gaussianized predictions \(\mathbf{\tilde{Y}}_{\rm pred}\) back into the original target space by applying the inverse of the original quantile transformer used to map training target variables to the Gaussianized space. The result is the final MultiCAM prediction \(\mathbf{Y}_{\rm pred}\). This approach incorporates the multi-variable prediction accuracy from linear regression while preserving the properties of the marginal predictor distributions. Due to the quantile transformations illustrated in Fig. 1, our procedure automatically outputs predictions whose marginal distributions match the marginal distributions of the training data. This means that the outputs from MultiCAM have a correlation strength with the true targets that is at least as high as CAM (see subsection 4.2).
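The steps above map directly onto standard library components. A minimal sketch of MultiCAM (no scatter) using scikit-learn (again our own illustration under assumed 2D array inputs, not the authors' released implementation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import QuantileTransformer

def fit_multicam(X_train, Y_train):
    # Step 2: marginally Gaussianize every feature and target.
    qt_x = QuantileTransformer(output_distribution="normal").fit(X_train)
    qt_y = QuantileTransformer(output_distribution="normal").fit(Y_train)
    # Step 3: linear regression in the Gaussianized space.
    reg = LinearRegression().fit(qt_x.transform(X_train), qt_y.transform(Y_train))
    return qt_x, qt_y, reg

def predict_multicam(X_test, qt_x, qt_y, reg):
    # Step 4: raw predictions in the Gaussianized space.
    Y_gauss = reg.predict(qt_x.transform(X_test))
    # Step 5: re-Gaussianize the (narrower) predictions marginally.
    qt_pred = QuantileTransformer(output_distribution="normal").fit(Y_gauss)
    # Step 6: map back to the original target space.
    return qt_y.inverse_transform(qt_pred.transform(Y_gauss))
```

In the case of a single feature and a single target, the regression reduces (up to sign) to a rank-preserving map, recovering CAM.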
In fact, MultiCAM exactly reduces to CAM in the case of 1D features and targets. In summary, MultiCAM also has the added advantage of (1) predicting multiple properties from multiple input properties and (2) taking advantage of the increased accuracy from linear regression. This version of MultiCAM that uses linear regression still faces one key limitation, in that it does not account for the scatter between target features and predictions, and thus will not reproduce the correct correlations between output properties. To address this, we first discuss the relationship between linear regression and sampling from a conditional Gaussian. Second, we discuss a method, based on using conditional Gaussian sampling within MultiCAM instead of linear regression, that maintains the correlation between sampled properties. #### 3.3.3 Linear Regression and Conditional Gaussian Sampling We start by discussing the theoretical framework of conditional Gaussian prediction, and then connect it with linear regression and MultiCAM. Assume that we have some multi-dimensional features \(X\) and multi-dimensional targets \(Y\) that are jointly distributed as a multivariate Gaussian \(P_{X,Y}\). Given a new feature test point \(x^{\star}\), we consider the conditional distribution \(P_{Y|x^{\star}}\) in order to choose our new prediction based on \(x^{\star}\). The conditional distribution \(P_{Y|x^{\star}}\) is also Gaussian, with mean \(\bar{\mu}(x^{\star})\) and covariance matrix \(\bar{\Sigma}\). The equations to derive the conditional parameters \(\bar{\mu}(x^{\star})\) and \(\bar{\Sigma}\) from empirical estimates of the parameters of the joint distribution \(P_{X,Y}\) can be found in Appendix A. Given this framework, there are two different goals we could choose to pursue: (1) minimize (squared) residuals of the prediction \(Y_{\text{pred}}(X)\) relative to the target \(Y\), or (2) sample points \(Y\) such that their distribution matches the true target distribution \(P(Y)\), including in its correlations between different target variables. The first goal is achieved by using the mode of the conditional distribution directly as the prediction: \[Y_{\text{pred}}(x^{\star})\equiv\bar{\mu}(x^{\star}). \tag{14}\] Based on the expression for \(\bar{\mu}(x^{\star})\) in Appendix A, we can see how this prediction would not take into account the intrinsic scatter of the target distribution, as there is no term with \(\Sigma_{yy}\) -- the covariance matrix between target variables. The second goal can be achieved by sampling the conditional distribution \(P_{Y|X}\) after sampling \(P(X)\). Concretely, given a test point \(x^{\star}\sim P(X)\), we choose as our prediction a sample drawn directly from the conditional normal distribution \(P_{Y|x^{\star}}\): \[Y_{\text{pred}}(x^{\star})\sim\mathcal{N}(\bar{\mu}(x^{\star}),\bar{\Sigma}). \tag{15}\] This second approach does incorporate the intrinsic scatter in the target distribution, as \(\bar{\Sigma}\) depends on \(\Sigma_{yy}\) (see Appendix A). We denote this approach _conditional Gaussian sampling_. In Appendix B, we prove that the mode of the conditional distribution \(P_{Y|X}\) (Eq. 14) is equivalent to the linear regression output if \(X,Y\) are jointly normally distributed. Additionally, MultiCAM already includes a pre-processing step (step 2 of the algorithm in subsection 3.3.2) where we try to bring features \(X\) and targets \(Y\) close to a joint Gaussian. These two facts combined imply that the conditional Gaussian sampling approach (Eq. 15) is a natural replacement for the linear regression prediction algorithm within MultiCAM that could allow us to account for the scatter between targets.
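Both prediction modes require only a few lines of linear algebra. A hedged sketch of the two (our illustration; the paper's exact expressions live in its Appendix A, so the formulas below are the textbook Gaussian conditioning identities):

```python
import numpy as np

def fit_joint_gaussian(X, Y):
    """Estimate the joint-Gaussian parameters; X, Y are (n_samples, dim)."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    S = np.cov(np.hstack([X, Y]), rowvar=False)
    dx = X.shape[1]
    Sxx, Sxy = S[:dx, :dx], S[:dx, dx:]
    Syx, Syy = S[dx:, :dx], S[dx:, dx:]
    A = Syx @ np.linalg.inv(Sxx)       # regression coefficient matrix
    Sbar = Syy - A @ Sxy               # conditional covariance
    return mu_x, mu_y, A, Sbar

def predict(x_star, mu_x, mu_y, A, Sbar, sample=False, rng=None):
    mu_bar = mu_y + A @ (x_star - mu_x)    # Eq. 14: conditional mean
    if not sample:
        return mu_bar                      # "no scatter" prediction
    rng = rng or np.random.default_rng()
    return rng.multivariate_normal(mu_bar, Sbar)   # Eq. 15: with scatter
```

The `sample=False` branch is the mode of \(P_{Y|x^{\star}}\) and hence equivalent to linear regression, while `sample=True` draws from the full conditional and so inherits the \(\Sigma_{yy}\)-dependent scatter.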
Finally, note that we restrict the analysis in this paper to simulation data, where we can train the entirety of \(\Sigma\) and account for the explicit covariance between all features and predicted quantities. However, conditional Gaussian sampling provides an avenue to use MultiCAM as an interpretable empirical model. In the simplest case, if we consider traditional CAM as such an empirical model, the "fit" procedure would consist of \(\Sigma\) containing one row for \(X_{\text{mah}}\), one row for the target galaxy observable, and off-diagonal terms artificially fixed to assume perfect correlation. In the more general case using MultiCAM with conditional Gaussian sampling, we would perform an analogous "fit" procedure by taking any subset of the elements in \(\Sigma\) as free parameters. #### 3.3.4 MultiCAM with scatter As mentioned previously, the MultiCAM algorithm presented in subsection 3.3.2 cannot correctly capture the correlation between targets. As seen in subsection 3.3.3, this is because the prediction model connecting features and targets, linear regression, does not account for the scatter in the target distribution. However, given that the MultiCAM algorithm of subsection 3.3.2 already includes a normalizing pre-processing step (step 2), we can replace the linear regression prediction model (steps 3 and 4) with conditional Gaussian sampling (Eq. 15) to solve this problem. As explained in subsection 3.3.3, the pre-processing step allows us to interpret this replacement as using the same joint normal distribution to solve a different goal, that of directly sampling \(P(Y)\). This can be achieved by using the conditional Gaussian sampling approach within MultiCAM, since we will be explicitly incorporating the scatter between targets in our predictions. Therefore, for the rest of this subsection, we denote this new version of MultiCAM as _MultiCAM (with scatter)_ to distinguish it from the method in subsection 3.3.2, which we will denote as _MultiCAM (no scatter)_. Unless otherwise stated, in the rest of the paper 'MultiCAM' refers to MultiCAM (no scatter). Importantly, the MultiCAM (with scatter) approach explicitly models scatter between features and targets, i.e. a given test data point of features can be used to sample multiple predictions from the conditional normal distribution. This means that the point-estimate accuracy of MultiCAM (with scatter) will be lower compared to MultiCAM (no scatter), since we are introducing noise into the prediction. However, we will show how this simple extension allows for capturing the lion's share of the covariance between variables while still matching the marginal distributions exactly. To demonstrate this, we first train each of the models presented so far -- CAM \(a(m_{\text{opt}})\), MultiCAM (no scatter), and MultiCAM (with scatter) -- on 7000 random haloes from the M12 dataset, using the full MAH \(\{a(m_{i})\}_{i=1}^{N}\) of each halo as features and three present-day halo properties as targets: \(c/a\), \(\lambda_{\text{bullock}}\), and \(x_{\text{off}}\). The three models are then tested on the full MAH of the remaining 3000 haloes from the M12 dataset, and the 1-sigma, 2-sigma, and 3-sigma contours of the 2D distributions between each pair of predicted target variables are plotted, as shown in Fig. 2. The true contours are shown in orange and the contours predicted by each model in green.
In Fig. 2 we see that CAM \(a(m_{\text{opt}})\) and MultiCAM (no scatter) fail to match the 2D distributions of halo properties. For CAM \(a(m_{\text{opt}})\), the width of the green contours in each panel directly corresponds to the covariance between the \(a(m_{\text{opt}})\) of each property, since CAM does a one-to-one matching between these. For example, \(x_{\text{off}}\) and \(\lambda_{\text{bullock}}\) are the target variables with the largest difference in their corresponding \(m_{\text{opt}}\), as shown in Table 3. Fig. 3 demonstrates that a larger difference in mass bins \(m\) between a pair of scales \(a(m)\) implies a lower covariance between them. Thus, we expect a weaker correlation between the CAM \(a(m_{\text{opt}})\)-predicted \(x_{\text{off}}\) and \(\lambda_{\text{bullock}}\) than for the other pairs of variables. This is exactly what we see in the leftmost subplot in Fig. 2. \begin{table} \begin{tabular}{c c c c} \hline Model & \(x_{\text{off}}\), \(\lambda_{\text{bullock}}\) & \(x_{\text{off}}\), \(c/a\) & \(\lambda_{\text{bullock}}\), \(c/a\) \\ \hline \hline True & 0.51 & -0.43 & -0.29 \\ \hline CAM \(a(m_{\text{opt}})\) & 0.62 & -0.79 & -0.88 \\ \hline MultiCAM (no scatter) & 0.93 & -0.96 & -0.95 \\ \hline MultiCAM (with scatter) & 0.50 & -0.45 & -0.31 \\ \hline \end{tabular} \end{table} Table 2: **Correlations between halo properties predicted from each model.** We show the Spearman correlation between each pair of predicted target \(z=0\) halo properties given their MAH using three different methods. The training and 'true' sample is equivalent to the one used for Fig. 2 in subsection 3.3.4. MultiCAM (no scatter) has the narrowest contours out of the three methods. This is because the predicted variables use the same sets of MAHs and there is substantial overlap in the relative importance of different epochs (see subsection 4.2). However, MultiCAM (with scatter) has contours that match the true contours much more closely. Additionally, Table 2 shows the correlation between each pair of \(z=0\) halo properties for each of the three models. We can quantitatively reach the same conclusions suggested by Fig. 2: the correlations between target properties output by MultiCAM (with scatter) agree closely with the true correlations, but this is not the case for CAM \(a(m_{\rm opt})\) and MultiCAM (no scatter). The full triangle plot applying MultiCAM (with scatter) to all the present-day properties considered in this work is shown in Fig. 11 of Appendix C, which shows good agreement in both the 1D marginals and the 2D contours. As explained in subsection 3.3.3, MultiCAM (with scatter) can successfully capture the covariance between target variables since the sampling scatter depends directly on this covariance (Eq. 15). In summary, Table 2, Fig. 2, and Fig. 11 demonstrate that MultiCAM (with scatter) can be used to successfully emulate present-day halo properties given the full MAH of a dark matter halo. ## 4 Results In this section, we focus on understanding the statistical properties of our M12 dataset through correlations and evaluate the MultiCAM approach. We choose to focus on the following \(z=0\) halo properties for our analysis: concentration, \(c_{\rm vir}\); virial ratio, \(T/|U|\); center of mass displacement, \(x_{\rm off}\); Bullock spin, \(\lambda_{\rm bullock}\); and second minor-axis to major-axis ratio, \(c/a\). We analyze the M12 dataset as defined in Section 2.
We divide the M12 halo sample into a training set of 7000 haloes and a test set of 3000 haloes (unless otherwise stated). The performance metrics of trained models are evaluated only on the test set. The error bars reported in all our results are standard errors estimated from jackknife resampling over 8 equal-volume sub-cubes of the simulation. ### Autocorrelation of Halo Mass Accretion History Figure 3 shows two-dimensional histograms where we color code each pixel (bin) by Spearman correlation strength. The top plot shows the Spearman correlation, \(\rho_{\rm sp}(m(a_{i}),\,m(a_{j}))\), between the mass fractions at a given pair of formation times. The bottom plot shows the Spearman correlation, \(\rho_{\rm sp}(a(m_{i}),\,a(m_{j}))\), between the formation times at a given pair of mass fractions in our M12 dataset. In the top panel, we see that \(m(a)\) values are strongly correlated with one another for small (\(\Delta a\approx 0.1\)) changes in \(a\). Similarly, in the bottom plot we see that \(a(m)\) values are strongly correlated with one another for small (\(\Delta m\approx 0.1\)) changes in \(m\). This suggests that we can achieve a similar prediction accuracy with a sparser subset of the MAH information. For example, if we wanted to retain information at a level of \(\rho_{\rm sp}\sim 0.9\) between adjacent bins, we could choose data at approximately a spacing of \(\Delta a=0.05\), which would result in approximately ten times less data. Another takeaway from the top plot is that adjacent snapshots at both early and late times are strongly correlated (see subsection 2.1). The distinct output cadence of Bolshoi should therefore have minimal impact on the following analysis. The takeaways for the bottom plot are similar to those from the top plot. ### Correlations of MAH and present-day halo properties In Fig. 4 we show the Spearman correlation coefficient between several present-day halo properties and the halo accretion history, parameterized as \(m(a)\) (left) and \(a(m)\) (right). We compute the correlation using the full \(10^{4}\) halo sample M12. The colored bands correspond to the uncertainty on each curve as estimated by jackknife resampling. In this figure, solid lines are used to represent positive correlation values and dotted lines represent negative values. Both panels illustrate that present-day halo properties contain information about the growth of haloes back to very early times, \(z\approx 4\), and at times when haloes were \(\approx\)10% to 20% of their current mass. As expected, formation times correlate positively with \(c_{\rm vir}\) (Wechsler et al., 2002), \(V_{\rm max}/V_{\rm vir}\) (this follows directly from the \(c_{\rm vir}\) correlation with growth), and \(c/a\) (Allgood et al., 2006; Chen et al., 2019), and negatively with \(T/|U|\), \(x_{\rm off}\) (Maccio et al., 2007), and \(\lambda_{\rm bullock}\) (Vitvitska et al., 2002). Inner halo structure, tracked by \(c_{\rm vir}\) and \(V_{\rm max}/V_{\rm vir}\), most strongly correlates with early times, \(\approx 3.4\,t_{\rm dyn}\) in the past, when haloes were roughly half their current mass. This is consistent with models of halo structure in which the inner profile is primarily set by long-term growth trends (e.g. Dalal et al., 2010; Ludlow et al., 2013). More recently, Wang et al. (2020) systematically examined the correlation between the present-day concentration and different stages of halo mass assembly.
They found that there are extended periods in the assembly history that correlate strongly with the present-day halo structure, which justifies the use of various definitions of halo formation time with which to predict present-day concentrations. These findings are qualitatively consistent with our results. The other properties that we track, \(x_{\rm off}\), \(T/|U|\), \(\lambda_{\rm bullock}\), and \(c/a\), have relatively larger predictive power at late times compared with properties that more closely describe the halo inner structure, such as \(c_{\rm vir}\). All four are expected to be tracers of dynamically unrelaxed haloes that have recently experienced major mergers or rapid, anisotropic smooth accretion from nearby filaments. More relaxed haloes will be more spherical, more centered on their most bound point, and will have a virial ratio closer to 0.5 (Mo et al., 2010). Any deviations would be caused by recent external influences, which are typically mergers for non-subhalos (although mass loss due to tidal stripping can also influence halo properties, e.g., Tucci et al., 2021). The correlation with spin is generally understood to arise because a slowly accreting halo will generally accrete isotropically, reducing its normalized angular momentum over time, while a rapidly accreting halo will experience larger mergers which will inject large amounts of angular momentum into the system (e.g. Vitvitska et al., 2002). However, halo spin also plays a large role in the early collapse of dark matter perturbations prior to forming haloes (e.g. Sheth et al., 2001), meaning that it should not be thought of as a purely late-time phenomenon. Table 3 contains the values of the optimal correlations between halo properties and MAH, which correspond to the peaks of the curves in Fig. 4. As an example, we include a dashed vertical line in the left panel of Fig. 4 which intersects the peak of the correlation curve for the \(V_{\rm max}/V_{\rm vir}\) property. In other words, the \(x\)-value of the vertical orange line is \(a_{\rm opt}\) for \(V_{\rm max}/V_{\rm vir}\), which corresponds to the second row of Table 3. We also measured correlations with other measures of triaxiality, \(q\) and the semi-minor axis ratio \(b/a\). The \(a_{\rm opt}\) of \(q\) and the ellipticity ratio \(c/a\) are the same, but \(q\) has a slightly higher peak absolute Spearman correlation with MAH of \(|\rho_{\rm sp}|=0.533\), compared to \(|\rho_{\rm sp}|=0.510\) for \(c/a\). The correlation between \(b/a\) and MAH is comparable with that between \(c/a\) and MAH. Analogously, we compared the results for \(\lambda_{\rm Peebles}\), the Peebles spin parameter. This measurement was comparable with \(\lambda_{\rm Bullock}\), but \(\lambda_{\rm Bullock}\) has a higher peak correlation of \(\rho_{\rm sp}=0.473\), compared to \(\lambda_{\rm Peebles}\) with a peak correlation of \(\rho_{\rm sp}=0.384\), likely because measurements of the internal energy for \(\lambda_{\rm Peebles}\) are less stable, leading to weaker signals. We use \(\lambda_{\rm Bullock}\) in all subsequent analyses considering the spin of the haloes. Finally, in comparing \(V_{\rm max}/V_{\rm vir}\) to \(c_{\rm vir}\), the peak correlation occurs slightly earlier in the former quantity, with comparable correlation strength. ### Predictions of present-day properties based on MAH In Fig. 5, we show the Spearman correlation between several predicted halo properties and their true values for four different models described in subsection 3.3.
In blue circles, we show results for our canonical MultiCAM model, using the full mass accretion history of each halo. Under this metric, MultiCAM either outperforms or performs comparably well to the other tested models. With orange squares, we show results of applying MultiCAM to the best-fitting DiffMAH curve for each MAH (see subsection 3.2 for more information). This model is next in predictive power for the target halo properties shown. We highlight the similar performance between this model and MultiCAM trained on the full non-parametrized MAH (blue circles) for most halo properties. The consistency of performance implies that our method leans heavily on information contained within the smooth accretion history. Next, we show the performance of a model that applies MultiCAM to the three best-fitting parameters from DiffMAH in green diamonds. We note that the DiffMAH parameters alone have systematically lower prediction power than the full MAH curve that the DiffMAH parameters describe. This may be due to a non-linear mapping of DiffMAH parameters onto the mass accretion histories that cannot be captured by the linear modeling we employ in MultiCAM. Further investigation might include testing non-linear models to map DiffMAH parameters to halo properties. Relatedly, the decrease in prediction power for MultiCAM on DiffMAH parameters suggests a degeneracy between DiffMAH parameters and present-day halo properties. In that case, the exact parametrization of the DiffMAH curve matters. Indeed, one can show from Eq. 9 and 10 that we can pick a parametrization where we replace \(\alpha_{\rm early}\) with \(a_{1/2}\) and still get a complete set of DiffMAH parameters that uniquely characterizes a MAH curve. We find an increase of \(\geq 0.05\) in the correlation with \(c_{\rm vir}\), \(\lambda_{\rm Bullock}\), and \(c/a\) with this alternative parametrization. This indicates that the chosen DiffMAH parametrization impacts the predictive power of the DiffMAH parameters, which is also further evidence of the aforementioned degeneracy. Finally, in the purple pluses, we show model predictions for CAM \(a(m_{\rm opt})\), which only uses the scale at the single mass fraction of a halo's MAH that best correlates with that halo property (see Table 3). We see that MultiCAM on the full MAH significantly outperforms CAM \(a(m_{\rm opt})\) for the prediction of most halo properties, including \(c_{\rm vir}\), \(T/|U|\), and \(x_{\rm off}\). For the other two halo properties, \(\lambda_{\rm bullock}\) and \(c/a\), MultiCAM and CAM \(a(m_{\rm opt})\) have (statistically) the same performance. Moreover, CAM performs significantly better than MultiCAM on DiffMAH parameters, which might be related to the fact that CAM \(a(m_{\rm opt})\) uses (by construction) the single MAH feature that best correlates with the target property. Comparing the individual models within different types of halo properties, we notice a few trends. First, the full curve from the DiffMAH fit performs at least as well as the model trained with CAM \(a(m_{\rm opt})\) on all halo properties. MultiCAM on the DiffMAH fit information provides better predictions on properties that are most strongly correlated with the overall MAH, e.g. \(c_{\rm vir}\) and \(T/|U|\). For properties whose predicted values are more weakly correlated with truth (i.e. \(\lambda_{\rm Bullock}\) and \(c/a\)), all models, except for the one using the DiffMAH parameters only, perform similarly.
The halo property predictions where CAM applied to \(a_{\rm opt}\) performs comparably well tend to be the "worst" cases among the target predictions (e.g. \(x_{\rm off}\), \(\lambda_{\rm Bullock}\), and \(c/a\)). We surmise that the comparable performance is due to the fact that these halo properties are largely dependent on the most recent MAH of the haloes, and that these properties are even more sensitive to the non-smooth component of the MAH, comprised of moderate and major mergers, which our model does not yet account for.

Figure 2: **2D scatter with contours of samples of \(z=0\) halo properties comparing different models.** We show plots of 1-, 2-, and 3-sigma contours for the 2D histograms of 3000 samples of \(z=0\) halo properties given their MAH using three different methods. Each method is applied to \(\lambda_{\rm Bullock}\), \(c/a\), and \(x_{\rm off}\) within our M12 dataset. The orange contours in each subplot show the true distributions of these properties. The green contours of each subplot were produced by applying three different prediction methods to these halo properties: CAM \(a(m_{\rm opt})\) (left), MultiCAM with no scatter (middle), and MultiCAM with scatter (right). These models were trained on the remainder of the M12 dataset. For more details on the different methods used see subsection 3.3, and for more discussion on the figure see subsection 3.3.4.

We additionally investigated whether using the gradient of the MAH could successfully capture the missing major merger information. Specifically, we computed the first-order derivative of the MAHs using a Savitzky-Golay filter (Savitzky and Golay, 1964) and used these derivatives as additional features for MultiCAM. However, we found no significant difference between MultiCAM trained on the full MAH and its gradients compared to our canonical MultiCAM model trained only on the full MAH.
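A minimal sketch of this gradient-feature variant follows, assuming MAHs tabulated on a uniform grid of scale factors; the window length and polynomial order are illustrative choices, not values reported in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

def add_gradient_features(mah, window_length=11, polyorder=3):
    """Append smoothed first derivatives dm/da of each MAH as extra features.

    mah : (N, n_scales) normalized mass accretion histories m(a), assumed to
          be sampled on a uniform grid of scale factors.
    """
    dmah = savgol_filter(mah, window_length, polyorder, deriv=1, axis=1)
    return np.hstack([mah, dmah])
```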
### Predictions of MAH summaries based on present-day properties

In Fig. 6 we use MultiCAM to perform the inverse of the test shown in Fig. 5: predicting summary statistics of a halo's MAH from its \(z=0\) halo properties. We attempt to predict \(a_{1/2}\), the half-mass scale (Eq. 5), \(\alpha\), the characteristic time in an exponential MAH fit (Eq. 8), \(m(t_{\rm dyn})\), the accretion rate over a dynamical time (Eq. 2), and the three DiffMAH parameters, \(\tau_{c}\), \(\alpha_{\rm late}\), and \(\alpha_{\rm early}\) (Eq. 9 and Eq. 10). We use MultiCAM to predict these values with different combinations of \(c_{\rm vir}\), \(T/|U|\), \(x_{\rm off}\), \(\lambda_{\rm Bullock}\), and \(c/a\). Using MultiCAM on the full suite of halo properties (purple plus signs) results in strictly more accurate predictions than using a single halo property, as expected from Fig. 4. \(c_{\rm vir}\) (blue circles) does a better job predicting tracers of early accretion history like \(a_{1/2}\), \(\alpha\), and \(\alpha_{\rm early}\) than \(x_{\rm off}\) (orange squares) and \(T/|U|\) (green diamonds). The opposite is true for tracers of late accretion history, like \(m(t_{\rm dyn})\) and \(\alpha_{\rm late}\). Note that MultiCAM reduces to CAM in the case of connecting a single feature with a single target variable. This means that the MultiCAM predicted correlations for the models using a single halo property as a feature in Fig. 6 and Fig. 7 are equivalent to the CAM predictions for the corresponding MAH summary.

## 5 Discussion

In this work, we have studied the correlations between haloes' present-day properties and multiple intermediate epochs of their MAH. In particular, we investigated the time and mass scales at which different halo properties correlate most strongly with the MAH (see Fig. 4 and Table 3). We find a significant non-zero correlation between all the halo properties we studied and their formation histories for most time and mass scales, with most halo properties, including concentration, achieving their strongest correlation with the MAH at intermediate time and mass scales. This is in disagreement with the findings of Wong and Taylor (2012), where the authors find that the correlation between concentration and the MAH is strongest when the halo has accumulated only 20% of its mass, for a relaxed halo sample. However, we see a high level of agreement, both quantitatively and qualitatively, for the correlation between concentration and MAH with Wang et al. (2020), who use the same halo finder (Rockstar) as our work. We thus hypothesize that the disagreement with Wong and Taylor (2012) is due to differences in halo finder and halo sample, but leave confirmation of this for future work. We also studied the autocorrelations between different epochs of mass growth. Fig. 3 shows that a sparser representation of the MAH can provide a similar amount of predictive information to model galaxy or halo properties. This conclusion is similar to the one reached in Wong and Taylor (2012), where their principal component analysis of MAHs suggested that only a few principal components explained the majority of the scatter in the MAHs. Physically, this indicates that longer timescales of mass accretion likely set halo properties. Our model and subsequent analysis add to a growing body of literature that models the connections between galaxies, dark matter haloes, and their mass accretion histories (Wechsler and Tinker, 2018). Such models have ranged from one-to-one mappings of properties in the form of abundance matching (Kravtsov et al., 2004) to complex machine learning approaches (e.g. Hausen et al., 2022; Horowitz et al., 2022; Stiskalek et al., 2022; de Andres et al., 2023). We provide a generalization of CAM and quantify its ability to connect halo properties with their full MAH. Other recent models enable connections between more details of a halo's full MAH and corresponding halo or galaxy properties. For example, Jespersen et al. (2022) build a graph neural network that directly uses the full dark matter merger tree of a halo to accurately emulate galaxy properties and their scatter. They find that using the full formation history always outperforms predictions based only on the \(z=0\) halo properties and a traditional abundance matching approach, which is consistent with our conclusions in Fig. 5 and Fig. 6. As another example, Lucie-Smith et al. (2022) use gradient-boosted-tree algorithms to predict the final mass profiles of cluster-sized haloes based on the initial density field and the MAH. Their model is able to identify time-scales in the MAHs that are most predictive of the final mass profiles. As demonstrated in Wang et al. (2020), one major source of scatter in the concentration-mass relation comes from mergers, and the scatter depends on fine-grained details of these mergers. However, Fig. 4 shows that the last dynamical time of the halo does not provide much predictive information. This suggests that MultiCAM is not able to successfully extract the relevant merger and non-smooth information from the MAH features given.
In addition, we attempted to capture merger information by incorporating gradient features of the MAH in MultiCAM's prediction. We found that the prediction performance of MultiCAM remained the same when adding these additional features across all halo properties. We therefore plan to explicitly incorporate major merger information from merger trees in future development and studies with MultiCAM.

Figure 3: **Internal MAH Spearman correlation between formation times as a function of mass fraction.** The color in each 2D bin (pixel) of these plots corresponds to the Spearman correlation \(\rho_{\rm sp}(m(a_{i}),m(a_{j}))\) between the mass fractions at a given pair of formation times, \((a_{i},a_{j})\) (top), and the Spearman correlation \(\rho_{\rm sp}(a(m_{i}),a(m_{j}))\) between the formation times at a given pair of mass fractions, \((m_{i},m_{j})\) (bottom), for all the \(10^{4}\) haloes in our M12 dataset. See subsection 4.2 for additional discussion.

Additional future applications of our method include (1) applying MultiCAM to connect DM halo accretion histories to baryonic properties in the context of hydrodynamical simulations such as The ThreeHundred project (Haggar et al., 2021), (2) using MultiCAM to build fast emulators that paste small-scale properties into cheaply generated ensembles of accurate mock halo catalogues (e.g. Tassev et al., 2013; Feng et al., 2016) or parametric models of MAHs (e.g. Hearin et al., 2021), (3) exploring other extensions of MultiCAM that incorporate more advanced non-linear methods, such as neural networks, that could provide higher predictive accuracy, and (4) applying MultiCAM as an empirical method where we can constrain the internal covariance matrix of the model with observational data. Finally, previous work indicates that the mass accretion history closely connects to proxies for the dynamical state of galaxies, galaxy clusters, and their host haloes (e.g. Hetznecker and Burkert, 2006; Gouin et al., 2021). An improved understanding of this connection can better inform the interpretation of measurements of galaxy and galaxy cluster properties (e.g. Ludlow et al., 2012; Mantz et al., 2015; Ludlow et al., 2016). The flexibility of MultiCAM provides a simple and interpretable framework to explore various measures of the dynamical state of DM haloes, galaxies, or galaxy clusters and to see how their dynamical state connects with their structural properties and accretion history. Specifically, MultiCAM provides a framework to study the predictive power of any combination of galaxy or halo properties on the MAH. Such studies could enable optimal combinations of properties that strongly correlate with merger information or other indicators of dynamical state. Thus, MultiCAM complements approaches to classifying the dynamical state of haloes or galaxies similar to the ones proposed in works such as De Luca et al. (2021) and Valles-Perez et al. (2023), which attempt to construct tracers of halo relaxedness from multiple halo properties.

## 6 Conclusion

In this study, we present MultiCAM, a generalization of traditional abundance matching algorithms. MultiCAM connects halo and galaxy properties with their mass accretion histories (MAH). As a case study, we apply MultiCAM to connect the present-day properties of dark matter haloes with their full mass accretion histories using the Bolshoi dark matter-only cosmological simulation.
_Our key result is that we can use the entire MAH with MultiCAM to significantly outperform CAM in such connections._ Our MultiCAM models are particularly successful in connecting the entire MAH with halo properties often used to trace MAH, such as \(c_{\rm vir}\), \(T/|U|\), and \(x_{\rm off}\). For the other halo properties considered (e.g. \(\lambda_{\rm Bullock}\)), MultiCAM performs at least as well as CAM. See Fig. 5 and Fig. 6 for relevant figures.

Figure 4: **Correlation of accretion history with present-day properties.** We show the Spearman correlation coefficient between different present-day halo properties, \(X\), and accretion history, parameterized as \(m(a)\) or \(a(m)\). The correlation is calculated based on our complete \(10^{4}\) halo sample M12. The coloured bands around each curve show the error estimated by jackknife resampling. In both figures, solid lines indicate a positive correlation value and dotted lines a negative correlation value. See subsection 4.2 for additional discussion of these plots. See Table 3 for the specific values of the optimal correlations between halo properties and MAH (peaks in these plots). The annotated orange dashed vertical line in the left plot illustrates one such optimal correlation \(a_{\rm opt}\) for the \(V_{\rm max}/V_{\rm vir}\) property (whose exact value is the second row in Table 3).

Our other main results are the following:

1. There is a significant auto-correlation in dark matter haloes' mass accretion history. We find that values of normalized peak masses \(m(a)\) are strongly correlated with one another for small changes in \(a\). This indicates that a subset, or a sparser representation of the MAH, might be sufficient to model some galaxy or halo properties with comparable information content. For more details, see Fig. 3.
2. The entire formation history of a halo leaves imprints on present-day properties. We find that all the properties in our subset of present-day halo properties have significant non-zero correlations with their MAH between \(z\approx 4\) and \(z=0\). See Fig. 4.
3. We find that MultiCAM applied to the DiffMAH smooth parametrization of the MAH (Hearin et al., 2021) performs comparably with MultiCAM applied to the full MAH for halo properties known to be strongly correlated with late-time merger events, such as \(x_{\rm off}\) and \(\lambda_{\rm Bullock}\). This suggests that MultiCAM is not able to fully capture merger information in detail, which we leave for future work. See Fig. 5 for more details.
4. We show how a simple extension of MultiCAM based on conditional Gaussian sampling is able to simultaneously sample multiple halo properties based on the MAH and _capture the true correlation between properties_ (a minimal sketch of this extension follows below). See Fig. 2 and Fig. 11.
5. Finally, we apply MultiCAM to the inverse problem of predicting the MAH of a halo from its present-day properties. We show that \(c_{\rm vir}\) is better at predicting the early formation history of a halo, and \(T/|U|\) and \(x_{\rm off}\) are better at predicting the late-time formation history. MultiCAM enables simultaneous use of all halo properties for MAH prediction, which outperforms predictions from any individual property. See Fig. 6 and Fig. 7.
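As referenced in point (4), a minimal sketch of the conditional Gaussian sampling extension follows. It assumes features and targets have already been Gaussianized (e.g. with the quantile transforms sketched earlier), and it uses the standard conditional multivariate-normal formulas; the bookkeeping is illustrative, not the authors' exact implementation.

```python
import numpy as np

def fit_joint_gaussian(gy, gx):
    """Joint mean/covariance of Gaussianized targets gy (N,q) and features gx (N,p)."""
    z = np.hstack([gy, gx])
    return z.mean(axis=0), np.cov(z, rowvar=False)

def sample_targets(gx_new, mu, cov, n_targets, rng):
    """Draw y ~ N(mu_y + S_yx S_xx^-1 (x - mu_x), S_yy - S_yx S_xx^-1 S_xy)."""
    q = n_targets
    mu_y, mu_x = mu[:q], mu[q:]
    S_yy, S_yx = cov[:q, :q], cov[:q, q:]
    S_xy, S_xx = cov[q:, :q], cov[q:, q:]
    A = S_yx @ np.linalg.inv(S_xx)
    cond_cov = S_yy - A @ S_xy
    return np.array([rng.multivariate_normal(mu_y + A @ (x - mu_x), cond_cov)
                     for x in gx_new])

# Usage: rng = np.random.default_rng(0); samples are then mapped back through
# each target's inverse quantile transform to recover physical units.
```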
## Acknowledgements

IM and CA acknowledge support from DOE grant DE-SC009193. IM, KW, and CA acknowledge support from the Leinweber foundation at the University of Michigan. IM acknowledges the support of the Special Interest Group on High Performance Computing (SIGHPC) Computational and Data Science Fellowship. IM acknowledges support from the Michigan Institute for Computational Discovery and Engineering (MICDE) Graduate Fellowship. This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor. The CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP). The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de). The Bolshoi simulations have been performed within the Bolshoi project of the University of California High-Performance AstroComputing Center (UC-HPACC) and were run at the NASA Ames Research Center. We acknowledge the use of the scikit-learn software for linear regression models and quantile transformers (Pedregosa et al., 2011). We also acknowledge the use of numpy (Harris et al., 2020), scipy (Virtanen et al., 2020), colossus (Diemer, 2015), astropy (Astropy Collaboration et al., 2013, 2018, 2022), matplotlib (Hunter, 2007), corner (Foreman-Mackey, 2016), and lmfit (Newville et al., 2016). We thank Andrew Hearin and Daisuke Nagai for feedback on early results of our model and analysis.

## Data availability

The code to reproduce all results in this work is publicly available in the following github repository: [https://github.com/ismael-mendoza/nbody-relaxed](https://github.com/ismael-mendoza/nbody-relaxed). The dark matter halo catalogue data is publicly available at [https://www.cosmosim.org](https://www.cosmosim.org).

Figure 5: **Correlation between predictions of \(z=0\) properties based on MAH and true halo properties.** We show the Spearman correlation between several true \(z=0\) halo properties and predicted \(z=0\) halo properties using four models trained on the M12 dataset. The first three models are based on MultiCAM trained on the full MAH (blue circle), on DiffMAH curve fits to the MAH curves evaluated at the same scale factors as the full MAH (orange square), and on the parameters of the DiffMAH fit (green diamond). The last model (purple plus) is the prediction of the CAM algorithm using the corresponding \(a_{\rm opt}\) (defined in Eq. 13) for each halo property. See subsection 4.3 for additional discussion of this figure.

Figure 6: **Correlation between MultiCAM predictions of mass accretion history summaries from \(z=0\) halo properties.** Here we show the Spearman correlation between parameters characterizing the MAH of haloes in our testing set and their predictions using the MultiCAM algorithm trained on subsets of the \(z=0\) halo properties. The definitions of these MAH properties can be found in subsection 3.1 and subsection 3.2. The last model (purple cross) corresponds to MultiCAM trained on the following \(z=0\) halo properties: \(c_{\rm vir}\), \(V_{\rm max}/V_{\rm vir}\), \(x_{\rm off}\), \(T/|U|\), \(\lambda_{\rm Bullock}\), and \(c/a\). The correlation from the first three models (blue circle, orange square, green diamond) is equivalent to the CAM predicted correlation. See subsection 4.4 for additional discussion of this figure.
2301.00508
EmoGator: A New Open Source Vocal Burst Dataset with Baseline Machine Learning Classification Methodologies
Vocal Bursts -- short, non-speech vocalizations that convey emotions, such as laughter, cries, sighs, moans, and groans -- are an often-overlooked aspect of speech emotion recognition, but an important aspect of human vocal communication. One barrier to study of these interesting vocalizations is a lack of large datasets. I am pleased to introduce the EmoGator dataset, which consists of 32,130 samples from 357 speakers, 16.9654 hours of audio; each sample classified into one of 30 distinct emotion categories by the speaker. Several different approaches to construct classifiers to identify emotion categories will be discussed, and directions for future research will be suggested. Data set is available for download from https://github.com/fredbuhl/EmoGator.
Fred W. Buhl
2023-01-02T03:02:10Z
http://arxiv.org/abs/2301.00508v2
EmoGator: A new open source vocal burst dataset with baseline machine learning classification methodologies

###### Abstract

_Vocal Bursts_ - short, non-speech vocalizations that convey emotions, such as laughter, cries, sighs, moans, and groans - are an often-overlooked aspect of speech emotion recognition, but an important aspect of human vocal communication. One barrier to the study of these interesting vocalizations is a lack of large datasets. I am pleased to introduce the EmoGator dataset, which consists of 32,130 samples from 357 speakers, 16.9654 hours of audio; each sample classified into one of 30 distinct emotion categories by the speaker. Several different approaches to constructing classifiers to identify emotion categories will be discussed, and directions for future research will be suggested. The data set is available for download from [https://github.com/fredbuhl/EmoGator](https://github.com/fredbuhl/EmoGator).

speech emotion recognition; vocal bursts; affect bursts; nonverbal vocalizations; affective computing; machine learning; dataset

## 1 Introduction

Emotions are central to human experience--they motivate & inform much of what we do. Recognizing emotions in others has been a longstanding area of interest. Perhaps the first scientific study of emotion recognition was the work of Duchenne [1] in 1862, who collected photographs of facial expressions elicited via electrically stimulating facial muscles. The question of how many emotions there are remains open. Duchenne identified 13 primary emotions, and 60 combinations, from facial expression. A recent study by Cowen & Keltner found that humans were able to reliably identify 28 distinct emotions from facial expression [2]. Another recent study by the same team [3] indicated that humans self-report as many as 27 distinct emotions; these responses were collected from subjects reacting to short video clips. The emotion categories presented as gradients, which occasionally overlapped with other emotion categories; multiple emotions were elicited to varying degrees by a given stimulus. Humans often express emotion vocally by varying speech **prosody**--the audio characteristics of speech. One study [4] found that 12 distinct emotions could be recognized from speech prosody--and this across two cultures--while a previous study [5] had found cross-cultural emotion recognition with subjects across five nations, although an in-group advantage was noted. Humans also express emotion via brief, non-speech sounds called **vocal bursts**, also referred to as "affect bursts", "emotional vocalizations", or "nonverbal vocalizations"--sounds like laughter, cries, sighs, moans, and groans--vocalizations that are not speech, and likely predate it, evolutionarily speaking. In [6], humans were found to be able to distinguish 14 emotional states from these vocal bursts. And a recent paper [7] by Cowen, Keltner, and others showed the ability to distinguish 24 emotional states from these brief vocalizations. The ability to detect and express emotion via human vocalization appears early in human development [8, 9, 10, 11, 12]. It is important to language and social development; people who have difficulties in discerning emotions in others, due to brain injury or conditions like Autism Spectrum Disorder, experience difficulties communicating effectively.
People with auditory affective agnosia [13] cannot discern emotional cues in speech, though they can still understand words, while people afflicted with dysprosody [14] speak in a monotone, without intonation or emotional affect; this can also appear in people with Parkinson's disease [15]. Any impairment of these abilities has a severe effect on communication and socialization with others, underlining the importance of evoking and understanding emotional expression.

### The Problem at Hand

Interactions with computers via speech recognition are now commonplace via "smart speakers" and their associated virtual assistants such as Siri, Alexa, and Google Assistant. Currently, none of these systems is capable of detecting emotion from the speech audio signal; the signal is converted to text (sometimes with comic results) via speech-to-text deep learning models, but any emotional content present in the speech's prosody is ignored. For some applications, where _how_ a word is said may be as important (or more important) than _what_ word was said, this could be a severe limitation. And, given their non-speech nature, vocal bursts are completely ignored by these systems. Computers capable of emotion recognition from speech have numerous applications: more life-like responses from non-player characters in video games, for example. In early childhood education, awareness of the young user's emotional state would be helpful to gauge interest, frustration, or boredom; it could also be used to assess and improve the child's emotional intelligence (or "EQ") [16]. The ability to detect emotion could reveal signs of loneliness, agitation, or depression [17], a special concern for isolated people, such as aging-in-place seniors. Social Robots--robots designed to interact closely with humans--benefit from emotion recognition [18]; such systems can even be used to gauge the robot's appeal to its human users [19]. The argument has been made that we will never claim human-level performance in speech recognition until we can achieve human-level speech emotion recognition, since humans are capable of both [20]. (It should be noted that this area is just one aspect of the larger field of Affective Computing pioneered by Rosalind Picard [21], which involves not only emotion recognition, but also emotional expression and emotionally-aware decision making.) Despite the limitations of current commercial products, Speech Emotion Recognition (SER) is an area of longstanding interest in computer science [22]. In 1996, Cowie et al. [23] developed a technique of automatically detecting landmarks in a speech signal and collecting summary statistics, which were then used to quantify speech characteristics for four emotion categories. Various approaches have been used in speech emotion recognition over the years [24]--Mel-Frequency Cepstrum Coefficients (MFCC), Gaussian Mixture Models (GMM), Support Vector Machines (SVM), Hidden Markov Models (HMM), and neural network techniques such as LSTM [25] and, more recently, deep learning neural networks. The research described here examines the largely-neglected area of vocal bursts, enabled by a newly-collected dataset. A number of machine learning techniques will be explored, with varying levels of performance, along with suggested directions for future research. The primary inspiration for this work was [7]; the vocal burst dataset, which the authors graciously provide to other researchers, was the largest vocal burst dataset available when released.
That dataset consisted of 2,032 vocal burst samples with 30 emotion categories; as mentioned, humans were able to reliably distinguish 24 categories. The fundamental question at the basis of this current work: if humans can distinguish 24 emotion categories from vocal bursts, can machines do so as well? While the Cowen et al. dataset was the largest available at the time, it was still relatively small, and the categories were not evenly represented; most machine learning approaches benefit greatly from larger numbers of samples and balanced categories. This author determined that a larger dataset would need to be collected, and several different approaches evaluated, to find the best-performing emotion classifier.

## 2 The dataset, and a spectrum of deep learning and other methodologies for classification

### The Dataset

The EmoGator dataset consists of 32,130 vocal bursts, produced by 357 speakers, providing 16.9654 hours of audio; the average sample length is 1.901 seconds. Each speaker recorded three samples for each of 30 emotion categories, providing 90 samples per speaker--this provided an equal number of samples for each category, and for each speaker, assuring equal representation in the dataset. The emotion categories were the same 30 categories used in [7]: _Adoration, Amusement, Anger, Awe, Confusion, Contempt, Contentment, Desire, Disappointment, Disgust, Distress, Ecstasy, Elation, Embarrassment, Fear, Guilt, Interest, Neutral, Pain, Pride, Realization, Relief, Romantic Love, Sadness, Serenity, Shame, Surprise (Negative), Surprise (Positive), Sympathy, and Triumph_. The speakers were provided text prompts with scenarios to help elicit the emotional response; the prompts used were a modified and expanded version of those used by [7], and are listed in the online supplemental materials1.

Footnote 1: [https://supp.apa.org/psycarticles/supplemental/amp0000399/amp0000399_Supplemental-Materials.doc](https://supp.apa.org/psycarticles/supplemental/amp0000399/amp0000399_Supplemental-Materials.doc)

Data was collected from unpaid volunteers, and also from crowd-sourced workers via Mechanical Turk; a website was created where speakers could record and play back their samples using their own computer or mobile device. The audio files were originally recorded at 44100 or 48000 Hz, depending on the participant's hardware, and stored as mp3 files. Each individual recording file is named with a six-digit non-sequential user ID, a two-digit emotion ID (1-30), and a single-digit recording number (1, 2, 3). Since the files are labeled by user ID, researchers can split any train, test, or validation set by speaker, ensuring a given speaker's submission appears in only one of the sets. (Efforts were taken to avoid a speaker providing more than one contribution, though this cannot be 100% guaranteed.) All participants provided informed consent, and all aspects of the study procedures and design were approved by the University of Florida's Institutional Review Board (IRB). Quality assurance was a major part of the data collection process; there were entire submissions that were silent recordings, or that only contained random background noise. Some contributors apparently misunderstood the assignment, recording themselves reading the names of the categories, or phrases related to the categories.
Many speakers provided a large number of high quality samples, but also submitted problematic ones, usually due to audio issues such as background noises (for example, phone chimes or background traffic sounds); another issue was excessive breath noise picked up on the microphone. In these instances, speakers would be asked to re-record the problematic samples in order to maintain the same number of samples per speaker. In addition, some speakers did not seem to be able to produce evocative speech from the prompts; their responses didn't convey distinct emotions. This last group was omitted from the dataset. As a result of all these factors, this dataset will almost certainly have a bias toward the emotional expressions of North American English-speaking people, as the author, and sole evaluator, shares that personal history. The dataset will be publicly available at the following URL: [https://github.com/fredbuhl/EmoGator](https://github.com/fredbuhl/EmoGator). Several different steps were evaluated to preprocess the data. Normalizing the data so the range of each audio sample was within a [-1,1] range was universally used (for training, validation, and testing). Denoising audio files and trimming silence from the beginning and end of audio files were evaluated as well. Augmenting data by creating pitch- and time-shifted variants of each sample was also explored. While this dataset was being collected, a company named Hume AI collected their own vocal burst dataset, a subset of which was made available for the ICML 2022 Expressive Vocalizations Workshop and Competition [26] as the Hume-VB dataset. This dataset consists of 59,201 vocalizations from 1702 speakers, with 10 emotion categories (_Amusement_, _Awe_, _Awkwardness_, _Distress_, _Excitement_, _Fear_, _Horror_, _Sadness_, _Surprise_, and _Triumph_). Each sample has been rated by reviewers, with [0:100] intensity scores for every emotion category provided for each sample. This Hume-VB dataset was also used for the ACII 2022 Affective Vocal Bursts Workshop and Competition [27]. There are several differences between the EmoGator dataset and the Hume-VB dataset:

1. EmoGator has 30 distinct emotion categories, with each sample belonging to a single category determined by the speaker's intent. Hume-VB has 0-100 ratings for all 10 of its categories provided by reviewers for each sample--the listener's interpretation, which may in some cases be very different from the speaker's intent.
2. EmoGator contributors were provided text prompts describing situations that would elicit a given category of vocal burst. Hume-VB contributors were provided 'seed' vocal burst audio samples to imitate--which could reduce the range of expression for a given category.
3. EmoGator only permitted one 90-sample submission per speaker; Hume-VB allowed for multiple submissions per speaker.
4. EmoGator has balanced categories; each emotion category has exactly 1,071 samples. In Hume-VB, this varies; for example, "there are fewer samples that differentially convey _Triumph_" [26, p. 2].
5. While Hume-VB has nearly twice as many samples as EmoGator, the dataset is only provided for use in the two sponsored competitions, and requires signing an End User License Agreement (EULA)2; EmoGator is freely available under an open-source license.

Footnote 2: [https://www.competitions.hume.ai/exvo2022](https://www.competitions.hume.ai/exvo2022)

At time of publication, EmoGator appears to be the largest vocal burst dataset publicly available.
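A minimal sketch of loading the dataset and building a speaker-disjoint split follows, assuming the filename layout described above (six-digit user ID, two-digit emotion ID, one-digit recording number) and mp3 files in a single directory; the underscore separator and split ratios are illustrative assumptions — consult the repository for the exact layout.

```python
import random
from pathlib import Path

def parse_name(path):
    """'123456_07_2.mp3' -> ('123456', 7, 2). The '_' separator is assumed."""
    user, emotion, take = path.stem.split("_")
    return user, int(emotion), int(take)

def speaker_split(data_dir, train=0.70, val=0.15, seed=0):
    """Group files by speaker so no speaker appears in more than one set."""
    files = sorted(Path(data_dir).glob("*.mp3"))
    speakers = sorted({parse_name(f)[0] for f in files})
    random.Random(seed).shuffle(speakers)
    order = {s: i for i, s in enumerate(speakers)}
    cut1, cut2 = int(train * len(speakers)), int((train + val) * len(speakers))
    pick = lambda lo, hi: [f for f in files if lo <= order[parse_name(f)[0]] < hi]
    return pick(0, cut1), pick(cut1, cut2), pick(cut2, len(speakers))
```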
### Classification Methodologies

A number of different techniques used in speech emotion recognition, sound classification, and elsewhere have been used for these sorts of audio classification problems.

### Spectrogram approaches

Some approaches to audio classification involve creating a time-frequency spectrogram (or spectrogram-like) representation of the audio signals, which can be created a number of ways. Typically, the Short-Time Fourier Transform, or STFT [28], is used, which provides the amplitude of different frequencies over time; a variant, the Mel spectrogram, maps the frequencies to the Mel scale [29], which closely matches human perception of differences in pitch. MFCCs provide a spectrum-like "cepstrum" [30]: while using Mel frequencies, they represent the log of the amplitude in decibels in a quefrency domain rather than the time domain used for spectrograms. The resulting spectrograms or cepstrograms are used as features for other machine learning approaches.

### 1D CNN training on raw waveforms

In [31], Dai et al. use a direct approach to sound classification: one-dimensional CNNs that work with the raw input waveforms, without using spectrograms or some other representation as an intermediate-step feature detector. Networks consisting of stacked one-dimensional convolutional layers (1D CNNs) [32] were used for this. [31] worked on the UrbanSound8k dataset [33], which, with its 10 categories and 8,732 samples, is a bit smaller than the EmoGator dataset. Testing various architectures, they reported up to 71.68% accuracy with an 18-layer model, which is competitive with CNNs using spectrograms of the same dataset. For the EmoGator dataset, we developed an 18-layer network as in [31], and added dropout layers after each 1D convolution to help prevent overfitting.

### Random forests

Random forest classifiers [34] were also explored. A random forest is constructed by generating multiple random decision trees, each constructed from a random subset of the dataset, using a random subset of each sample's features. Once constructed, each tree in the forest casts a single vote for a class, and the class with the most votes is chosen the winner. This approach can be used on raw data or with spectrogram-like representations.
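A minimal sketch of a random-forest baseline of the kind reported in section 3.2 follows. It assumes librosa for MFCC extraction (an assumption — the paper names only scikit-learn); the number of coefficients, the number of trees, and the mean-pooling over time are illustrative choices.

```python
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path, sr=22050, n_mfcc=20):
    """Load one clip and summarize its MFCCs by their per-coefficient mean."""
    y, _ = librosa.load(path, sr=sr)
    y = y / (np.abs(y).max() + 1e-9)          # normalize to [-1, 1]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                   # (n_mfcc,)

def train_forest(train_paths, train_labels):
    X = np.stack([mfcc_features(p) for p in train_paths])
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X, train_labels)
    return clf
```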
### Large pre-trained speech models

Several teams in the 2022 ICML Expressive Vocalizations Workshop and Competition made use of large pre-trained speech models [35, 36, 37, 38, 39, 40]. Two models were used frequently: WavLM [41] and HuBERT [42]. Both of these are self-supervised speech representation models, which are built using transformer architectures [43]; transformers have been applied successfully to a large number of domains--they are typically very large models, which have been trained on large datasets for significant amounts of time. Having access to these pre-trained models can produce better results than can be achieved by training other (usually smaller) datasets in isolation. WavLM is a large-scale self-supervised pre-trained speech model--the "Large" version of WavLM was trained on 94k hours of speech, and has 316.62M parameters. HuBERT is a similar model; the "large" version has 317M parameters, and was trained on 60k hours of audio on 128 Graphics Processing Units (GPUs). Both WavLM and HuBERT build upon wav2vec 2.0 [44], a "contrastive learning" self-supervised speech model, which itself is trained on 64 GPUs; the output of wav2vec is used as the input to HuBERT or WavLM, providing them higher-level features to build and train upon. WavLM experiments were run by first passing the EmoGator training, validation, and test data through a pre-trained WavLM model, storing the last hidden layer as a new representation for each sample, using a 70% / 15% / 15% train-validation-test split. The hidden layers from the training data were then used as input to train a single fully-connected network, using validation data to find the appropriate stopping point; once the ideal models were determined, they were run on the test data. The HuBERT model was used in an identical fashion, using the last hidden layer of the HuBERT model instead of WavLM as the input to the fully-connected layer. Incorporating WavLM and HuBERT in this work was greatly aided by the HuggingFace transformer libraries [45], which, while initially covering natural language processing, have now expanded into many other areas. The benefit of being able to incorporate a large pre-trained language model with a few lines of code cannot be overstated.
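A minimal sketch of this feature-extraction step with the HuggingFace transformers API follows. The checkpoint name and the mean-pooling over time frames are assumptions — the paper states only that the last hidden layer feeds a single fully-connected classifier.

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = AutoModel.from_pretrained("microsoft/wavlm-large").eval()

@torch.no_grad()
def embed(waveform_16k):
    """Fixed-size embedding from WavLM's last hidden layer.

    waveform_16k : 1-D float array, mono audio resampled to 16 kHz.
    """
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state        # (1, frames, dim)
    return hidden.mean(dim=1).squeeze(0)              # pool frames -> (dim,)

# The stored embeddings then train a single fully-connected layer:
clf = torch.nn.Linear(model.config.hidden_size, 30)   # 30 emotion categories
```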
### Ensemble Methods

Ensemble methods attempt to improve performance by combining the outputs of multiple models, with suitable training and weighting; the aggregate often outperforms the individual models. Two approaches were used for the EmoGator data: **Ensemble A** took the n-length outputs (where n is the number of emotion categories) produced by the WavLM and HuBERT single-layer models and averaged them together, using the resulting average to pick the most likely emotion category. **Ensemble B** concatenated the last hidden layers from WavLM and HuBERT, and then trained a single fully-connected layer on those inputs.

### Platform & Hardware Requirements

Most work on this project was performed on the University of Florida's HiperGator-AI cluster, which uses 80GB A100 GPUs; one A100 should be sufficient to run all the models included, but the code may not run directly on systems with lower-memory GPUs unless modifications to parameters such as batch size are implemented.

## 3 Results

### 1D CNN training on raw waveforms

For one-dimensional convolutional neural networks, the best results against the full dataset were obtained with a 70% / 15% / 15% train/validation/test split, using an 18-layer 1D CNN based on [31], but with dropout layers after each convolution. A relatively low dropout rate of 0.07 was optimal. All experiments were run with a batch size of 128 and an Adam optimizer with a learning rate of 0.001. Several statistics were calculated; for the full 30-category dataset, the average F1 score was 0.270. F1 scores and other accuracy metrics, with breakdowns by category, are shown in Table 1; a confusion matrix is provided in Figure 1, based on the run with the highest F1 score. The experiments above were all run with normalized audio data, but without denoising the audio signal or trimming silence from the beginning and end; earlier experiments with a 70%/30% train/test split revealed that denoising or trimming the audio signal reduced performance. Data augmentation was also explored; two-to-three-times-larger "stretched" versions of the 70% / 15% / 15% training set were produced by creating new samples through independent pitch and tempo shifts of the audio samples; however, the stretched training sets produced lower performance than the original training set, despite adjustments to the amount of pitch and tempo scaling. In reviewing these results, it is clear that some categories are much harder (or easier) to identify; for example, the F1 score (0.056) for Embarrassment, the worst-performing category, is much lower than that of the highest-performing category, Amusement (0.627). The confusion matrix illustrates the problem well; it shows that certain types of vocal bursts are simply difficult to place in the correct category. Per the confusion matrix, Embarrassment (with only 7 samples correctly identified) was more likely to be interpreted as Shame (16) or Guilt (10)--all closely related concepts that can produce similar vocalizations. This is an inherently difficult problem, which helps explain why humans could only reliably distinguish 24 emotion categories in [7]. By selectively removing emotion categories that performed poorly, overall performance would be expected to improve. Using the F1 score as a metric, the lowest-scoring categories were removed, creating 24-count, 16-count, and 10-count subsets of the dataset. Interestingly, three of the six bottom-scoring categories removed to make the 24-count subset were also not identifiable by humans in [7]; two other categories unidentifiable by humans were removed in the 16-count subset--showing some commonality between the two datasets, and also illustrating the difficulties humans and algorithms have with certain emotion categories, even across studies. The same 1D CNN model architecture, hyperparameters, and validation approaches were used. Results are in Table 2; we do see improvement as the more ambiguous categories are eliminated. By creating binary 1D CNN classifiers, with one classifier for each possible pair of emotion categories, we can illustrate which pairs are the easiest to distinguish. Using the same model architecture and 70%/15%/15% split, and using the F1 score as a similarity metric (on a [0,1] scale, where 1 is least similar), a similarity matrix was created based on the 435 pairwise combinations of the 30 categories, and a dendrogram displaying the relationships between the categories was generated from that matrix (Figure 2). The dendrogram illustrates the most easily confused or distinguished categories. For example, it shows how easily the Amusement category is distinguished from all other categories, and shows Realization and Contempt as the most similar--and therefore most confused--categories, despite being very different emotions.
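A minimal sketch of turning the pairwise binary-classifier F1 scores into a dendrogram like Figure 2 follows, assuming scipy; `pair_f1[i, j]` is assumed to hold the F1 score of the binary classifier for categories i and j.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

def category_dendrogram(pair_f1, names):
    """pair_f1 : (30, 30) symmetric matrix of pairwise binary-classifier F1 scores.

    A high F1 means the pair is easy to tell apart, so the F1 score is used
    directly as a distance; easily confused pairs end up close in the tree.
    """
    dist = pair_f1.copy()
    np.fill_diagonal(dist, 0.0)                 # a category is distance 0 from itself
    condensed = squareform(dist, checks=False)  # the 435 pairwise entries
    return dendrogram(linkage(condensed, method="average"), labels=names)
```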
\begin{table} \begin{tabular}{r c c c c} \hline \hline & **Precision** & **Recall** & **F1 score** & **Support** \\ \hline **Adoration** & 0.407 & 0.488 & 0.444 & 162 \\ **Amusement** & 0.561 & 0.710 & 0.627 & 162 \\ **Anger** & 0.405 & 0.327 & 0.362 & 162 \\ **Awe** & 0.220 & 0.296 & 0.253 & 162 \\ **Confusion** & 0.354 & 0.574 & 0.438 & 162 \\ **Contempt** & 0.236 & 0.296 & 0.263 & 162 \\ **Contentment** & 0.193 & 0.272 & 0.226 & 162 \\ **Desire** & 0.253 & 0.309 & 0.278 & 162 \\ **Disappointment** & 0.144 & 0.093 & 0.113 & 162 \\ **Disgust** & 0.376 & 0.580 & 0.456 & 162 \\ **Distress** & 0.243 & 0.111 & 0.153 & 162 \\ **Ecstasy** & 0.187 & 0.123 & 0.149 & 162 \\ **Elation** & 0.190 & 0.074 & 0.107 & 162 \\ **Embarrassment** & 0.078 & 0.043 & 0.056 & 162 \\ **Fear** & 0.341 & 0.179 & 0.235 & 162 \\ **Guilt** & 0.175 & 0.105 & 0.131 & 162 \\ **Interest** & 0.288 & 0.420 & 0.342 & 162 \\ **Neutral** & 0.397 & 0.568 & 0.467 & 162 \\ **Pain** & 0.276 & 0.438 & 0.339 & 162 \\ **Pride** & 0.175 & 0.086 & 0.116 & 162 \\ **Realization** & 0.351 & 0.241 & 0.286 & 162 \\ **Relief** & 0.294 & 0.432 & 0.350 & 162 \\ **Romantic Love** & 0.121 & 0.074 & 0.092 & 162 \\ **Sadness** & 0.355 & 0.302 & 0.327 & 162 \\ **Serenity** & 0.209 & 0.191 & 0.200 & 162 \\ **Shame** & 0.197 & 0.154 & 0.173 & 162 \\ **Surprise (Negative)** & 0.296 & 0.364 & 0.327 & 162 \\ **Surprise (Positive)** & 0.248 & 0.198 & 0.220 & 162 \\ **Sympathy** & 0.233 & 0.370 & 0.286 & 162 \\ **Triumph** & 0.378 & 0.228 & 0.285 & 162 \\ \hline **Accuracy** & & & 0.288 & 4860 \\ **Macro Average** & 0.273 & 0.288 & 0.270 & 4860 \\ **Weighted Average** & 0.273 & 0.288 & 0.270 & 4860 \\ \hline \hline \end{tabular} \end{table} Table 1: Precision, Recall, and F1 scores from the best run of the 18-layer 1D CNN, with dropout layers.

\begin{table} \begin{tabular}{l c} \hline \hline **1D CNN Dataset size** & **F1 score (avg.)** \\ 30-Count Full Dataset & 0.267 \\ 24-Count Subset & 0.344 \\ 16-Count Subset & 0.459 \\ 10-Count Subset & 0.597 \\ \hline \hline \end{tabular} \end{table} Table 2: 1D CNN runs with 24, 16, and 10 category subsets of the EmoGator dataset, compared to the 30-category full dataset.

### Random Forests

As shown in [34], random forests have been used on a number of small datasets with few categories, which suggested the approach might be an apt choice for the EmoGator dataset. The classifier (which is included in the scikit-learn library [46]) was trained against Mel-Frequency Cepstral Coefficients (MFCC) of the audio data; runs were completed for the full 30-category dataset, along with 24, 16, and 10 category subsets. Results all under-performed the 1D CNN results, however (see Table 3).

### Large pre-trained speech models

Results were calculated using the last hidden layer of the WavLM and HuBERT models connected to a single fully-connected network layer. A variant of Ensemble B incorporated two fully-connected layers (labeled "2-layer FC"), which resulted in a moderate improvement. These results are presented, along with others, in Table 4.

Figure 1: The confusion matrix generated by the 18-layer 1D CNN with dropout layers.

### Ensemble Methods

Results were calculated using averaged outputs from the trained fully-connected layers appended to the WavLM and HuBERT model runs (Ensemble A), and concatenated last-hidden-layer outputs from both models (Ensemble B), which were then used to train a single fully-connected layer. The WavLM and HuBERT single fully-connected layers that had the highest average F1 scores on the _validation dataset_ were used, to keep the test data from tainting the ensemble model. Results for the Ensemble methods are presented in Table 4, along with summary data from all the EmoGator experiments.
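A minimal sketch of the two ensembles follows, assuming per-sample WavLM/HuBERT logits and embeddings are already computed as described above; names and the 10-category default are illustrative.

```python
import torch

def ensemble_a(logits_wavlm, logits_hubert):
    """Ensemble A: average the two classifiers' category scores, then argmax."""
    return ((logits_wavlm + logits_hubert) / 2).argmax(dim=-1)

class EnsembleB(torch.nn.Module):
    """Ensemble B: one linear layer on the concatenated last hidden states."""

    def __init__(self, dim_wavlm, dim_hubert, n_classes=10):
        super().__init__()
        self.fc = torch.nn.Linear(dim_wavlm + dim_hubert, n_classes)

    def forward(self, h_wavlm, h_hubert):
        return self.fc(torch.cat([h_wavlm, h_hubert], dim=-1))
```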
## 4 Discussion

Returning to our research question--whether, like humans, machines could reliably identify 24 emotion categories--it appears that the results achieved for the 24-emotion-category runs did not approach assumed human proficiency, with a top F1 score of only 0.344 via the 1D CNN method on a 24-category subset. Results for the 24, 16, and 10-category subsets were better than the full 30-category runs, with the 10-category runs performing the best, again using the 1D CNN approach, scoring 0.597. (To put these results into perspective, a random guess for a 24-category subset would be right only 4.2% of the time; a 10-category random guess would be right only 10% of the time--so these results are much better than pure chance.) One potential use of this dataset would be to measure how accurate human performance is for vocal bursts--whether the category the speaker intended to convey is correctly identified by listeners. Other studies have used gradient rating scales for each category provided by the listener, without necessarily linking back to the ground truth of the speaker's intent.

\begin{table} \begin{tabular}{l c} \hline \hline **Random Forest Dataset size** & **F1 score (avg.)** \\ 30-Count Full Dataset & 0.146 \\ 24-Count Subset & 0.180 \\ 16-Count Subset & 0.256 \\ 10-Count Subset & 0.345 \\ \hline \hline \end{tabular} \end{table} Table 3: Random Forest runs with 24, 16, and 10 category subsets of the EmoGator dataset, compared to the 30-category full dataset, using MFCCs.

Figure 2: The dendrogram generated from F1 scores (range [0,1]) between pairs of emotion categories.

Another question is whether collecting vocal bursts inspired by text-based prompts is better or worse than trying to capture them "in the wild" from recorded conversations, or elicited by other sorts of prompts. Collecting more data would no doubt improve these results; this vocal burst dataset, while (currently) the largest publicly available, is still small by machine learning standards. Evaluating subsets of the dataset makes the situation even worse; when looking at, say, 10-category subsets, only \(\frac{1}{3}\) of the dataset is used. Using more complex ensemble methods seems a promising way forward; while the ensemble results here did not exceed the 1D CNN results, it's possible that incorporating more individual models could increase accuracy beyond what we've been able to achieve. One topic that was not explored here is _generating_ vocal bursts; the author will next explore methods such as Generative Adversarial Networks (GANs) and Stable Diffusion models to generate vocal bursts; ideally these could be tailored for an individual speaker by providing a few audio samples (the ICML competition had this as one of their challenges). More data will help, but it may be that audio data alone will be insufficient to properly classify vocal bursts. Datasets and models incorporating video as well as audio data--not only to look at facial expressions, but also any visual cues that might evoke a vocal burst--could improve accuracy. The words spoken by the utterer, and others around them, before or after a vocal burst may also aid in identification.
(It may be, however, that there are inherent limits far short of certainty for vocal burst classification, regardless of any additional information that can be gathered--often cries of sadness and amusement sound the same, and people sometimes say they are not sure "whether they should laugh or cry".) Another area to explore is the demographics of the speakers; their age, gender, place of origin, and cultural background could all come into play in classifying bursts. These demographic concerns also extend to the person evaluating the quality of the sample; ideally, the demographic aspects of the reviewer should match those of the submitter for best quality. Beyond the demographic aspects, each individual's unique character and personality certainly come into play when they generate vocal bursts--so prior experience with the utterer could be key in improving accuracy, especially if the model's weights can be fine-tuned based on these experiences. It is hoped that the EmoGator dataset will introduce researchers to the fascinating area of vocal bursts; hopefully other researchers will incorporate this dataset into still-larger collections in the future, "paying it forward" by making those datasets publicly available.

## Acknowledgement

My thanks to Anand Rangarajan for our helpful discussions about the project.

\begin{table} \begin{tabular}{l c c} \hline \hline **Approach** & **\# Categories** & **F1 score** \\ 1D CNN & 30 & 0.267 \\ 1D CNN & 24 & 0.344 \\ 1D CNN & 16 & 0.459 \\ 1D CNN & 10 & **0.597** \\ Random Forest & 30 & 0.146 \\ Random Forest & 24 & 0.180 \\ Random Forest & 16 & 0.256 \\ Random Forest & 10 & 0.345 \\ WavLM & 30 & 0.255 \\ WavLM & 10 & 0.563 \\ HuBERT & 10 & 0.531 \\ Ensemble A & 10 & 0.571 \\ Ensemble B & 10 & 0.591 \\ Ensemble B (2-layer FC) & 10 & 0.593 \\ \hline \hline \end{tabular} \end{table} Table 4: All results from the various approaches and dataset subsets used.
2305.14934
GRACE: Discriminator-Guided Chain-of-Thought Reasoning
In the context of multi-step reasoning, e.g., with chain-of-thought, language models (LMs) can easily assign a high likelihood to incorrect steps. As a result, decoding strategies that optimize for solution likelihood often yield incorrect solutions. To address this issue, we propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE), a stepwise decoding approach that steers the decoding process towards producing correct reasoning steps. GRACE employs a discriminator trained with a contrastive loss over correct and incorrect steps, which is used during decoding to score next-step candidates based on their correctness. Importantly, GRACE only requires sampling from the LM, without the need for LM training or fine-tuning. Using models from FLAN-T5 and LLaMA families, we evaluate GRACE over four math and two symbolic reasoning tasks, where it exhibits substantial performance gains compared to greedy decoding, verifiers, and self-consistency in most settings. When further combined with self-consistency, GRACE outperforms all the baselines by sizeable margins. Human and LLM evaluations over GSM8K show that GRACE not only improves the final answer accuracy but also the correctness of the intermediate reasoning. Our implementation can be accessed at \url{https://github.com/mukhal/grace}.
Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
2023-05-24T09:16:51Z
http://arxiv.org/abs/2305.14934v2
# Discriminator-Guided Multi-step Reasoning with Language Models

###### Abstract

In the context of multi-step reasoning, language model (LM) probabilities are often miscalibrated--solutions with high probabilities are not always correct. Therefore, greedy decoding, which is the standard decoding method for reasoning tasks, often yields incorrect solutions. In addition, methods such as self-consistency and verifiers rely on sampling from the LM distribution and do not tackle the underlying issue. To address this, we introduce **G**uiding Multi-step **R**e**A**soning with a **C**orrectn**E**ss Discriminator (Grace), a stepwise decoding approach that nudges the model towards producing correct reasoning steps. Grace employs a discriminator model, which is trained to differentiate correct steps from invalid ones, to adjust decoding preferences based on the correctness of each reasoning step. Importantly, Grace does not require fine-tuning or re-training the LMs. When compared with conventional decoding strategies over four popular math reasoning benchmarks, Grace exhibits significant improvements in both final answer accuracy and step correctness, outperforming both greedy decoding and self-consistency.1

Footnote 1: Our code can be found at [https://github.com/mukhal/grace](https://github.com/mukhal/grace).

## 1 Introduction

Multi-step reasoning spans a set of tasks where a question is answered via a sequence of reasoning steps until a final answer is reached (Creswell and Shanahan, 2022; Wei et al., 2022b). While pre-trained language models (LMs) have shown impressive performance on a variety of QA tasks, they still struggle with problems that require complex multi-step reasoning (Cobbe et al., 2021; Creswell et al., 2022; Ni et al., 2023). One reason is that the next-word prediction objective used for pre-training does not explicitly encourage the LM toward correct step-by-step reasoning. To boost the reasoning abilities of LMs, supervised fine-tuning (SFT) has been performed on gold step-by-step solutions (Uesato et al., 2022; Ho et al., 2022; Fu et al., 2023). However, SFT can easily lead to model overfitting of the gold solutions seen during training, resulting in an LM that assigns low probabilities to alternative but correct solutions (Ni et al., 2023). These issues with training objectives result in a _miscalibration_ of the LMs' probabilities with respect to the correctness of their outputs: high likelihoods are assigned to incorrect solutions and vice versa (Holtzman et al., 2021). For instance, when prompting a fine-tuned FLAN-T5-Large (Chung et al., 2022) with an unseen problem from GSM8K (Cobbe et al., 2021), the model easily assigns a higher average likelihood to incorrect solutions than correct ones (shown in Figure 1).

Figure 1: A math question from GSM8K (Cobbe et al., 2021) and multi-step solutions sorted in descending order by their average log probability over tokens according to a fine-tuned FLAN-T5-Large (Chung et al., 2022). The model probabilities are miscalibrated with respect to the solution correctness, as the incorrect solutions have higher likelihood than the correct one.

To elicit correct multi-step reasoning directly from LMs, _prompting_ techniques have been proposed, with the scratchpad or chain-of-thought methods being particularly successful (Nye et al., 2021; Wei et al., 2022b; Wang et al., 2022). However, prompting methods do not directly address
the miscalibration issue and are only effective when the LM reaches a certain scale so that it can effectively utilize the prompt (Wei et al., 2022). In parallel, _oversampling_-based techniques have been proposed to leverage information from multiple plausible solutions. For instance, the sample-then-rank approach scores a set of randomly sampled solutions based on their correctness, using a verifier model, and selects the one with the highest score (Cobbe et al., 2021; Li et al., 2022). Self-consistency has also been investigated (Wang et al., 2022) to aggregate multiple random samples by majority vote. These approaches, however, do not address the underlying problem, as sampling is still performed based on the miscalibrated LM distribution. To mitigate the miscalibration issue, this paper brings the insight that we can sample more correct solutions by _steering the decoding process towards generating correct reasoning steps_, therefore leading to more accurate answers. Inspired by discriminator-guided controlled generation methods (Yang and Klein, 2021; Dathathri et al., 2020; Krause et al., 2021), we propose **Grace**, a guided-decoding method that relies on a correctness discriminator model to nudge the decoding process towards correct steps. Our discriminator is trained at the step level, therefore providing finer-grained control over the sampling process compared to self-consistency and sample-then-rank methods, which act on complete samples. While recent work (Uesato et al., 2022) relies on human annotations to build a step-level reward model, human annotations are expensive and do not scale well. We work around this limitation and propose a 3-step approach to train the correctness discriminator based on access to the correct solutions only, without any step-level human annotations. We compare Grace to greedy decoding, self-consistency, and sample-then-rank and show strong improvements over all of them on four different math reasoning benchmarks with two different language models, namely FLAN-T5 (Chung et al., 2022) and LLaMA (Touvron et al., 2023). For instance, Grace outperforms greedy decoding on GSM8K (Cobbe et al., 2021) by 7.4% accuracy points with FLAN-T5-Large and by 5.4% with LLaMA-7B. In addition, when further combining our approach with self-consistency, we outperform vanilla self-consistency by 10.2% points on GSM8K and by 15.7% on MultiArith (Roy and Roth, 2015). In summary, our contributions are:

* We propose a stepwise decoding strategy that guides the model towards correct multi-step solutions, relying on a step-level correctness discriminator. Grace _does not necessitate any training_ of the LM and only needs access to samples from the LM distribution.
* We propose a novel alignment algorithm to align incorrect solutions with correct ones, to automatically create step-level (in)correctness labels. This algorithm avoids the requirement of large amounts of human annotations for reasoning steps (Uesato et al., 2022).

Figure 2: **Top:** The three-step process to train the discriminator. **(1) Sampling** solutions with different mistakes from a given language model, keeping only the solutions with incorrect final answers. **(2) Alignment** involves aligning the sampled solutions with the reference solutions to identify incorrect steps. **(3) Learning** the discriminator with a max-margin loss to assign high scores to correct steps and low scores to incorrect steps. **Bottom:** The guided stepwise decoding process using the trained discriminator.
Given the question and the prefix, we sample a pool of candidate next steps and use the discriminator to score steps as in eq. (7). Then the top-scored step is selected and added to the prefix. This process repeats until a final answer is generated.

reasoning steps (Uesato et al., 2022).
* Grace shows significant improvements over greedy decoding on four different math reasoning benchmarks. We also perform human evaluation and LLM-based evaluation and show that our approach produces 7% more correct steps compared to greedy decoding and 3.8% compared to self-consistency. According to human evaluation, Grace reduces the solution error rate from 9.0% with greedy decoding to 5.0% (about a 44% reduction).

## 2 Related Work

**Discriminator-guided Controlled Generation.** Previous work in controlled generation has employed discriminators during decoding to guide generation towards specific attributes, such as sentiment, topic, or lexical constraints (Holtzman et al., 2018; Dathathri et al., 2020; Yang and Klein, 2021; Krause et al., 2021). These discriminators can either update the hidden states of the language model in real-time (Dathathri et al., 2020) or adjust token probabilities (Holtzman et al., 2018; Yang and Klein, 2021). Our research takes inspiration from these practices but extends them to multi-step reasoning in two key aspects: **control granularity** and **discriminator training**. We direct the decoding of multi-step solutions at the level of reasoning steps to promote their correctness, instead of individual tokens, as correctness is often not meaningfully defined at the token level. In terms of discriminator training, we note that training a correctness discriminator is more challenging than training a topic or sentiment discriminator, since judging correctness requires checking the given step for logical, mathematical, or factual inconsistencies with respect to the context, i.e., the question and the prefix. To address this challenge, we design a novel 3-step process for training discriminators without requiring step-level annotations.

**Multi-step Reasoning.** Two main types of approaches have been explored: inference-time methods, which do not require additional language model (LM) training, and training-based methods, which require either labeled samples or rewards. Popular inference-time techniques include model prompting such as chain-of-thought prompting (Nye et al., 2021; Wei et al., 2022b) and its variants (Zhou et al., 2022; Zhang et al., 2022; Fu et al., 2022). While these input-based techniques modify the input to the LM, other methods target the output side, e.g., self-consistency (Wang et al., 2022) employs majority voting on multiple sampled solutions to determine the final answer. An alternative output-based method involves training a verifier model to rank sampled solutions according to correctness. As demonstrated by Cobbe et al. (2021), it is feasible to enhance GPT-3's math reasoning performance by training a verifier model to predict the correctness of sampled solutions, using labels based on known final answer correctness. However, verifiers exhibit no control over solution sampling. We also show in this paper (see section 5) that verifiers trained on samples from smaller LMs perform very poorly.

Training-based methods, on the other hand, focus on crafting learning objectives to teach the LM to reason correctly. For instance, Uesato et al.
(2022) trained a reward model to assess the correctness of the entire reasoning chain, beyond the final answer, and then used it to train the LM via reinforcement learning. Human annotations were used to provide step-level labels for training the step-level reward model. Ni et al. (2022) proposed training LMs on sampled partially correct solutions to enhance mathematical reasoning. The work most relevant to ours is by Li et al. (2022), who introduced a step-aware verifier to score sampled solutions. Despite the demonstrated benefits of including step-level information, their technique only applies to fully sampled solutions, unlike our approach, which actively guides the decoding process. Yang et al. (2022) use a stepwise verifier to guide the search process for proof generation. They use heuristics to generate negative examples, whereas we sample incorrect solutions from the model and create examples by aligning them with the reference solutions. Also, their setting is limited to the task of proof generation, while we focus on chain-of-thought style math reasoning.

## 3 Method

**Overview.** Our setup follows chain-of-thought reasoning (Nye et al., 2021; Wei et al., 2022b), where given a question \(q\) (e.g., a math word problem), our goal is to predict a step-by-step solution, i.e., a chain of \(T\) intermediate reasoning steps \(s_{1},s_{2},\dots,s_{T}\), where \(s_{T}\) is the final answer. This is typically done using a pretrained language model (LM) that is either fine-tuned or prompted in a few-shot manner. Typical decoding approaches designed for text generation prioritize sequence likelihood and can easily generate invalid steps that ultimately lead to an incorrect answer. Our goal is to improve the reasoning ability of the LM by guiding the solution generation process via a discriminator \(D\) that models the correctness of a given reasoning step, with the goal of preventing the LM from generating incorrect steps and ultimately sampling high-quality reasoning chains. We start by formalizing our approach in section 3.1. We then present a three-step procedure to train the discriminator (section 3.2) and explain how it is used to guide the stepwise decoding process in section 3.3. A detailed overview of Grace is shown in fig. 2.

### Formalization

Given a problem \(q\) and a correct solution prefix \(s_{1},s_{2},\ldots,s_{t-1}\), we want to sample a correct next step \(s_{t}\) towards the final answer.2 We assume access to a judge or a discriminator model \(D\) that takes in the problem \(q\), the prefix \(s_{1},s_{2},\ldots,s_{t-1}\), and a candidate next step \(s_{t}\), and outputs a real-valued score \(D(q,s_{1:t-1},s_{t})\) that indicates whether \(s_{t}\) is a correct candidate step at time-step \(t\). We also assume access to a language model distribution \(p_{\text{LM}}(\cdot|q,s_{1:t-1})\) that is either fine-tuned or used in a few-shot prompting manner to generate \(s_{t}\).

Footnote 2: We assume the prefix so far is correct to focus on modeling the next step prediction. An empty prefix is trivially correct.

Formally, let \(c\) be a binary variable that indicates the correctness of the generated step with respect to the question and prefix, where we want to sample the next step \(s_{t}\sim p(\cdot|s_{1:t-1},c,q)\).
A Bayesian factorization of \(p(s_{t}|s_{1:t-1},c,q)\) is:

\[\begin{aligned} p(s_{t}|s_{1:t-1},c,q)&\propto p(s_{t}|s_{1:t-1},q)\cdot p(c|s_{t},s_{1:t-1},q)&&(2)\\ &=p(s_{t}|s_{1:t-1},q)\cdot p(c|s_{1:t},q)&&(3)\\ &=p_{\text{LM}}(s_{t}|q,s_{1:t-1})\cdot p(c|s_{1:t},q)&&(4)\\ &\propto p_{\text{LM}}(s_{t}|q,s_{1:t-1})\cdot\exp(D(q,s_{1:t-1},s_{t}))&&(5)\end{aligned}\]

As depicted in eq. (4), we substitute the probability of the next step without correctness, \(p(s_{t}|s_{1:t-1},q)\), with \(p_{\text{LM}}(s_{t}|q,s_{1:t-1})\). Similarly, in eq. (5), \(p(c|s_{1:t},q)\) is replaced with \(\exp(D(q,s_{1:t-1},s_{t}))\). This substitution is justified as, in accordance with our discriminator's definition, \(\exp(D(q,s_{1:t-1},s_{t}))\) is proportional to \(p(c|s_{1:t},q)\). Finally, as we assume that the prefix \(s_{1:t-1}\) is correct, \(p(c|s_{1:t},q)\) becomes dependent only on the correctness of \(s_{t}\), modeled by \(D(q,s_{1:t-1},s_{t})\).

This form of factorization echoes the controlled generation method used by FUDGE (Yang and Klein, 2021), but with two notable distinctions. First, we model the next-step probability, as opposed to the next-token probability, for which correctness is ill-defined. Second, unlike FUDGE's discriminator, which predicts _future_ attribute satisfaction, our discriminator focuses on the _present_, evaluating whether the current step \(s_{t}\) will result in a correct prefix \(s_{1:t}\). That is, the discriminator guides decoding at the step level, as opposed to verifiers and self-consistency, which operate on complete solutions.

To summarize, eq. (5) shows that we want to sample an \(s_{t}\) that **(i)** has high likelihood \(p_{\text{LM}}(s_{t}|q,s_{1:t-1})\) according to the language model and **(ii)** is correct with respect to the question and the prefix. Intuitively, this means leveraging the reasoning capabilities of the LM while maintaining correctness. Throughout the rest of the paper, we will refer to the correct prefix \(s_{1:t-1}\) as \(r\) and the next step \(s_{t}\) as \(s\) for simplicity.

### Discriminator Learning

We use three steps to learn the discriminator function \(D(q,r,s)\), which are shown in fig. 2 (top).

* **Step 1 - Negative sampling:** We collect a set of solutions with at least one incorrect step.
* **Step 2 - Alignment:** We align these solutions with the ground truth to create examples with positive and negative steps to train the discriminator.
* **Step 3 - Learning:** We train the discriminator with a contrastive objective to distinguish between correct and incorrect steps.

**Negative Sampling.** This step aims to collect a set of solutions with incorrect steps. For each question in the training set, we sample multiple solutions via top-\(k\) sampling and only keep solutions with an incorrect final answer (to make sure each solution has at least one incorrect step). We refer to this set of solutions as the _negative pool_. Although negative examples can be constructed by introducing perturbations in reference steps with a predefined set of edit operations (e.g., Golovneva et al. (2023)), we found that this does not benefit discriminator training, as the perturbations produce easy negatives with artifacts that do not resemble the types of mistakes an LM makes.

**Alignment.** Our objective is to train the discriminator \(D\) to effectively differentiate between correct and incorrect steps. To achieve this, we require a dataset that consists of both correct and incorrect step examples.
To acquire such a dataset without step-level annotations of any kind, we align sampled incorrect solutions with the reference solution via dynamic programming, using the Needleman-Wunsch (NW) algorithm (Likic, 2008) commonly employed in bioinformatics applications. As the NW algorithm works on sequences with _different_ lengths, it allows us to model both missing and extra steps, which prior work does not take into account (Li et al., 2022; Ni et al., 2022). The standard NW algorithm aims to find a minimum-cost alignment between two character sequences and is not directly applicable to our case without defining a notion of step equivalence. For that purpose, we use _cosine distance_ between step embeddings to compute the alignment cost and introduce a similarity threshold to determine step equivalence. We compute step embeddings using ROSCOE (Golovneva et al., 2023), a RoBERTa-base (Liu et al., 2019) model based on SimCSE (Gao et al., 2021) and fine-tuned on positive and negative pairs of questions and reference reasoning steps. Our initial experiments found ROSCOE to produce better alignments than the vanilla SimCSE. The detailed alignment algorithm is shown in algorithm 2 in Appendix B.

We then use the alignment to obtain pairwise examples of the form \((q,r,s^{+},s^{-})\), where \(s^{+}\) is the correct next step and \(s^{-}\) is the incorrect next step after prefix \(r\). We do that by iterating over pairs of aligned steps and handling the three following cases: missing, extra, and comparable steps, i.e., steps that can be directly compared to their counterparts in the reference solution. In the comparable-step case, we compare the intermediate variables computed at each step, following prior work (Ni et al., 2022; Li et al., 2022). An example of all three cases is shown in fig. 3. Algorithm 1 details the process we use to construct the discriminator pairwise examples.

```
Input: Question \(q\), aligned sampled solution \(\tilde{t}\), aligned gold solution \(\tilde{g}\).
Output: Pairwise examples for discriminator training \(E\).
\(P,E\leftarrow\emptyset,\emptyset\)
\(m\leftarrow|\tilde{t}|\)
for \(i\in\{1,\dots,m\}\) do
  if \(\tilde{t}_{i}=\) "-" then                      ⊳ missing step
    \(P\gets P\cup\{\tilde{g}_{i}\}\)
  else if \(\tilde{g}_{i}=\) "-" then                 ⊳ extra step
    \(\tilde{g}_{j}\leftarrow\) next_gold_step(\(\tilde{g},i\))
    if exists(\(\tilde{g}_{j}\)) then
      \(E\gets E\cup\{(q,P,\tilde{g}_{j},\tilde{t}_{i})\}\)
  else if steps_match(\(\tilde{t}_{i},\tilde{g}_{i}\)) then  ⊳ aligned step
    \(P\gets P\cup\{\tilde{t}_{i}\}\)
  else
    \(E\gets E\cup\{(q,P,\tilde{g}_{i},\tilde{t}_{i})\}\)
    exit
return \(E\)
```
**Algorithm 1** Discriminator training data construction.

**Learning.** For a set of \(M\) pairwise examples \(\{(q_{i},r_{i},s_{i}^{+},s_{i}^{-})\}_{i=1}^{M}\), the training objective for the \(i\)-th example is to maximize the difference \(\delta_{i}=D(q_{i},r_{i},s_{i}^{+})-D(q_{i},r_{i},s_{i}^{-})\). We utilize the max-margin loss objective \(\mathcal{L}_{D}\) in eq. (6) (Rosasco et al., 2004; Li et al., 2020), where \(\zeta>0\) is a hyperparameter. We found that the max-margin loss performs better than other alternatives (see section 6 for an ablation).

\[\mathcal{L}_{D}=\sum_{i=1}^{M}\Big{[}\max\{0,-\delta_{i}+\zeta\}\Big{]} \tag{6}\]

Figure 3: An example of the alignment produced by our alignment algorithm (described in Algorithm 2). The question and the reference solutions come from GSM8K (Cobbe et al., 2021). The “-” designates a step placeholder.
There are three possible cases when aligning a reference solution with a sampled solution: **extra step**, **missing step**, and **aligned step**. In the aligned case, the intermediate variables (**underlined**) are compared to determine the correctness of the sampled step. Algorithm 1 describes how each case is handled when constructing the discriminator training data.

### Guided Stepwise Decoding

After \(D\) is trained, we can then use it to guide solution sampling. At each time \(t\), we use nucleus sampling to sample a pool of \(J\) candidate next steps \(\mathcal{S}=\{s_{t}^{(1)},s_{t}^{(2)},\ldots,s_{t}^{(J)}\}\) from \(p_{\text{LM}}(\cdot|q,r)\).3 These candidates represent multiple possible choices for the next step. Each candidate step \(s_{t}^{(i)}\) is then scored using:

Footnote 3: We make sure each sample will contain only one step by halting when a special end-of-step token is reached.

\[(1-\beta)\log p_{\text{LM}}(s_{t}^{(i)}|q,r)+\beta\bar{D}(q,r,s_{t}^{(i)}) \tag{7}\]

where \(\beta\) is a hyperparameter and \(\bar{D}(q,r,s_{t}^{(i)})\) is the softmax-normalized discriminator score across the candidate steps:

\[\bar{D}(q,r,s_{t}^{(i)})=\frac{\exp(D(q,r,s_{t}^{(i)}))}{\sum_{j=1}^{J}\exp(D(q,r,s_{t}^{(j)}))}\]

The discriminator score is normalized so as to make sure we are adding two log probabilities when computing the score. The process continues until a final answer is generated or until a maximum number of steps is reached. The guided decoding process is shown in Figure 2 (bottom).

## 4 Experimental Setup

**Tasks.** We evaluate our approach on four popular multi-step math reasoning tasks. The **GSM8K** dataset (Cobbe et al., 2021) is one of the most commonly used benchmarks for complex multi-step reasoning. It consists of math word problems for 8th graders, each containing the corresponding step-by-step solution. We use the original split by Cobbe et al. (2021) and use 1K solutions from the training set as the development set. **MathQA-Gain** is a subset of MathQA (Amini et al., 2019) which includes math word problems about gain/loss. Each problem is accompanied by a step-by-step Python program. **SVAMP** (Patel et al., 2021) and **MultiArith** (Roy and Roth, 2015) consist of elementary-level math word problems. For MathQA-Gain, SVAMP, and MultiArith, we use the splits included in the LILA benchmark (Mishra et al., 2022). As SVAMP and MultiArith do not include reference step-by-step solutions (only the final answer is included for each question), we follow recent work on chain-of-thought distillation (Ho et al., 2022; Fu et al., 2023; Hsieh et al., 2023) and prompt GPT-3.5 to generate step-by-step solutions. We sample 20 solutions for each question and only keep the questions for which GPT-3.5 was able to reach the correct final answer. More details on this process and exact data statistics are in Appendix E.1.

**Metrics.** We evaluate Grace in terms of final answer accuracy, as in prior work, in addition to step correctness as measured by both an LLM and human evaluation.

**Baselines.** We compare Grace to **greedy decoding** at the token level, which is the standard decoding method for reasoning tasks (Wei et al., 2022; Li et al., 2022; Fu et al., 2022; Zhou et al., 2022). We also compare to **self-consistency** (Wang et al., 2022), where multiple solutions are sampled with a temperature of \(T=0.7\) and we pick the most frequent answer as the final answer. We sample \(40\) solutions for experiments with FLAN-T5 and \(20\) with LLaMA.
Lastly, we compare to **sample-then-rank** or verifiers (Cobbe et al., 2021; Uesato et al., 2022; Li et al., 2022), where a classifier is trained to read the question and the full solution and then predict a binary label of correct or incorrect. The labels are based on the final answer's correctness. At inference, we sample multiple solutions with temperature \(T=0.7\) and pick the solution with the highest correctness probability according to the verifier. For a fair comparison with Grace, the verifier is also based on a T5-Large encoder, like our discriminator, and trained on the same incorrect samples, along with the correct counterparts. We use the verifier checkpoint that achieves the best macro F1 on a held-out set. It is worth noting that while we compare to self-consistency and verifiers, they are not necessarily competitors to our technique and can indeed be combined with Grace, i.e., we can sample complete solutions using our guided decoding approach and then rerank them or apply majority voting over the sampled solutions. Indeed, we show in the next section that applying self-consistency on top of samples from Grace performs consistently better across the board than either the vanilla self-consistency or Grace separately.

**Models.** We verify the effectiveness of Grace on two language models from different families and with different sizes, namely FLAN-T5-Large (778M) (Chung et al., 2022) and LLaMA (7B) (Touvron et al., 2023). We fine-tune FLAN-T5 over the training set of each task, while LLaMA is used in a few-shot setting with chain-of-thought prompting (Wei et al., 2022b) with 6 demonstrations.

**Sampling and Discriminator Training.** For each task, we sample roughly 80K incorrect solutions for discriminator training via top-\(k\) sampling with \(k=50\) and temperature \(T=1.3\) for FLAN-T5 and \(T=0.7\) for LLaMA. The discriminator used in all our experiments is a FLAN-T5-Large encoder (~340M). The step score is computed by applying max-pooling over the hidden states, followed by a two-layer MLP with ReLU and tanh non-linearities. The tanh is applied to constrain the scores to the range \([-1,1]\). We train the discriminator for 10 epochs with a batch size of 32. We use the Adam optimizer with a learning rate of 1e-4 for GSM8K and 6e-5 for the other tasks. We use \(\zeta=1.0\) as the margin hyperparameter. We monitor the loss on a held-out development set and choose the best checkpoint.

**Decoding.** For stepwise decoding, we sample reasoning steps using nucleus sampling to form the pool of candidate next steps. We continue decoding steps until a final answer is generated or until a maximum number of steps is reached. Table 3 provides the concrete hyperparameters used for stepwise decoding for each task.

**Calculator.** All our results rely on a calculator during decoding. That is, whenever a formula is encountered, a calculator module is invoked to compute the result, which is then given back to the LM to continue sampling. This relieves the LMs from having to do simple math operations and lets them focus on the actual reasoning process.

## 5 Results and Discussion

**Can Grace improve final answer correctness?** Here, we focus on comparing the accuracy of the final answer (also known as the solve rate). We first discuss the results with FLAN-T5-Large on GSM8K, SVAMP, and MathQA-Gain (shown in Table 1).4 We see that Grace outperforms the baselines on all tasks. For instance, Grace outperforms greedy decoding by 7.4% and 11.7% points on GSM8K and SVAMP, respectively.
Interestingly, combining our approach with self-consistency, where sampling is done using Grace and then majority voting is applied to the samples, further boosts the accuracy, improving on vanilla self-consistency by as much as 6.8 points on SVAMP.

Footnote 4: We do not show FLAN-T5 results on MultiArith as it already achieves > 90% accuracy. We do not show results of LLaMA on MathQA-Gain since it performs extremely poorly (< 2%).

Moving to the results on LLaMA-7B, we see a similar trend, where Grace outperforms greedy decoding and self-consistency on MultiArith and SVAMP. Grace with self-consistency outperforms self-consistency with random sampling by 10.2% and 15.7% points on GSM8K and MultiArith, respectively. Ultimately, our results show that Grace is indeed able to boost both FLAN-T5's and LLaMA's final answer correctness on all tasks. Interestingly, in the case of LLaMA-7B, we observe such improvements (i) without having to train the LM _at all_ and (ii) with a discriminator that has 20X fewer parameters than the LM. This points to a promising direction of using our approach to steer the generations of huge LLMs with significantly smaller discriminator models.

One final observation is that the verifier approach performs extremely poorly over all tasks except for MathQA-Gain. This is likely due to the fact that the training examples of the verifier include positive examples (i.e., solutions with correct final answers) that nonetheless have incorrect or invalid reasoning steps. These correspond to cases where the model reached the correct answer spuriously. Training the verifier on these prevents it from identifying correct from incorrect reasoning. To test this hypothesis, we ran an experiment where we only included the gold trajectories as positive examples and found the verifier performance to improve significantly (although it still underperformed self-consistency and Grace). That explains why the verifier helps with MathQA-Gain, since for a solution to have a correct final answer, it must correspond to a correct, runnable Python program. These findings may initially seem to conflict with prior work, where the verifier approach was shown to be indeed beneficial (Cobbe et al., 2021; Li et al., 2022). However, one should note that in these works, the verifier is trained over examples sampled from much larger LMs (e.g., GPT-3).

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{3}{c}{FLAN-T5-Large} & \multicolumn{3}{c}{LLaMA-7B} \\ \cline{2-7} & GSM8K & SVAMP & MathQA-Gain & GSM8K & MultiArith & SVAMP \\ \hline Greedy decoding & 26.9 & 54.5 & 76.5 & 12.9 & 54.0 & 32.8 \\ Self-consistency & 33.3 & 61.8 & 78.9 & 20.7 & 78.9 & 52.4 \\ Sample-then-rank & 20.5 & 45.9 & 83.7 & 9.6 & 46.4 & 26.1 \\ \hline Grace & 34.3 (+7.4) & 66.2 (+11.7) & 84.1 (+6.0) & 16.2 (+3.3) & 84.9 (+30.9) & 49.7 (+17.3) \\ Grace w/ self-consistency & **36.3** (+3.0) & **68.6** (+6.8) & 84.4 (+0.7) & **30.9** (+10.2) & **94.6** (+15.7) & **55.6** (+3.2) \\ \hline \hline \end{tabular} \end{table}

Table 1: Final answer accuracy on four multi-step reasoning tasks. Self-consistency and verifier results use 40 samples for FLAN-T5-Large experiments and 20 samples for LLaMA. The discriminator used with Grace is a T5-Large encoder. FLAN-T5-Large results are aggregated over 5 runs and LLaMA over 3 runs. The absolute improvement by Grace w.r.t. the corresponding baseline is shown in parentheses. Grace with self-consistency outperforms the baselines on all tasks.
In this case, it is certainly expected that reaching the correct final answer correlates more strongly with correct reasoning (as has been shown in Uesato et al. (2022)) compared to our setting with FLAN-T5-Large, and therefore the verifier model encounters the above issue much less frequently.

**Does Grace produce more correct steps compared to the baselines?** Reaching a correct final answer does not always correspond to correct reasoning; the model can reach the correct answer spuriously (Golovneva et al., 2023; Uesato et al., 2022). Here, we want to measure whether Grace yields more correct steps compared to the baselines. To do that, we use _prefix correctness_ (PC) following Uesato et al. (2022), which measures whether the steps so far are correct. Inspired by recent work showing that using LLMs for evaluation correlates highly with human judgment (Wang et al., 2023; Liu et al., 2023; Luo et al., 2023), we measure prefix correctness using LLMs in addition to human evaluation. For LLM evaluation, we use GPT-3.5 with a few-shot prompt that lets the model predict a binary label of correct or incorrect after each prefix. Details on LLM evaluation are in Appendix C.

In addition to PC, which is computed over all solutions regardless of the final answer, we also evaluate the _trace error_, which is computed exclusively on solutions with correct final answers and measures the percentage of these solutions that have at least one major mistake, defined as _"A step where the information expressed is incorrect, or it would no longer be possible to reach the correct solution without undoing that step"_, following Uesato et al. (2022). We evaluate trace error using both human and LLM evaluation. For trace error evaluation with the LLM, we compute the percentage of correct solutions with at least one incorrect prefix. For human evaluation, we ask annotators to label each solution as to whether it has such a major mistake, to mark the step where the mistake happened, and to provide a justification of their decision. Concrete details on the human evaluation are in Appendix D. We conduct this evaluation on the GSM8K test set, since the reasoning required to answer its questions is considered more complex compared to the other tasks.

Table 2 shows the LLM and human evaluation results comparing Grace to greedy decoding and self-consistency. Grace scores higher than both greedy decoding and self-consistency by 7.0 and 3.8 points, respectively. We observe significant improvements in terms of the trace error as well with Grace. For instance, Grace reduces the trace error from 9.0% with greedy decoding to 5.0% (a 44% reduction), and a similar improvement is seen in the LLM-computed trace error. Our results clearly suggest that guiding the decoding process with Grace improves not only the final answer correctness but also the correctness of the intermediate steps generated by the LM.

## 6 Analysis

**Alignment.** Our hypothesis is that aligning sampled solutions with reference solutions using the Needleman-Wunsch (NW) algorithm allows us to leverage solutions with different lengths than the reference, thereby capturing extra and missing steps. To validate this hypothesis, we compare our alignment approach to a naive version where steps in the sampled solutions are aligned one-to-one with the corresponding steps in the reference solutions.
However, the naive approach only considers samples with the _same_ number of steps as the reference solution, as there is no clear way to align samples with different lengths.

\begin{table} \begin{tabular}{l l l l} \hline \hline & **Prefix Correctness-LLM** (\(\uparrow\)) & **Trace Error-LLM** (\(\downarrow\)) & **Trace Error-Human** (\(\downarrow\)) \\ \hline Greedy decoding & 46.5 & 7.0 & 9.0 \\ S.C & 51.0 & 9.8 & - \\ \hline Grace & 53.5 (\(\uparrow\)7.0) & 5.2 (\(\downarrow\)1.8) & 5.0 (\(\downarrow\)4.0) \\ Grace w/ S.C & 54.8 (\(\uparrow\)3.8) & 6.6 (\(\downarrow\)3.2) & - \\ \hline \hline \end{tabular} \end{table}

Table 2: Solution prefix correctness computed over GSM8K. Grace and self-consistency (S.C) metrics are averaged over 3 runs. Prefix correctness is computed over 1.3K questions. Trace error-LLM is computed over \(\sim\)300 questions and trace error-human over 200 questions. All solutions come from the GSM8K test set. Details on the evaluations are in Appendix C and Appendix D.

Figure 5 presents the final answer accuracy on GSM8K and SVAMP when the discriminator is trained using both alignment methods. We observe a significant gap between our alignment method and the naive approach, with our method achieving better performance by 2.2% and 5.9% points on GSM8K and SVAMP, respectively. These results highlight the advantages of our proposed alignment method in improving discriminator training.

**Sampling Efficiency.** A primary motivation for Grace is to provide more control over the solution sampling process compared to standard temperature sampling. To verify whether Grace samples more correct solutions, we compare it to temperature sampling when used for self-consistency with different numbers of samples. Figure 4 (left) shows a plot of the number of samples against final answer accuracy on GSM8K. We observe that Grace is indeed more sample-efficient and yields better accuracy with the same number of samples as temperature sampling. We also observe a drop in accuracy at \(N=40\) with temperature sampling, which we do not observe with Grace.

**Step Score.** We study the effect of the discriminator score weight \(\beta\) in eq. (7) on the reasoning performance. Figure 4 (mid and right) shows final answer accuracy on GSM8K and SVAMP as we vary \(\beta\) from 0.0 to 1.0. We observe the accuracy improving as \(\beta\) is increased up to 0.8, after which it drops slightly at 1.0. This emphasizes the benefit brought by integrating \(D(q,r,s)\) into the step scores, while also showing that we should not completely omit \(p_{\text{LM}}(s|q,r)\), which represents the LM's learned reasoning abilities. We observe a similar trend for \(\beta\) over all remaining tasks.

**Discriminator Loss Function.** We compare the max-margin objective in eq. (6) to two different discriminator training objectives. The first is a non-contrastive binary objective, where the

Figure 4: **Left:** Self-consistency performance on 400 examples from the GSM8K dev set with Grace compared to temperature sampling (Wang et al., 2022). Grace exhibits better sample efficiency and does not incur a performance drop when using more samples, unlike temperature sampling. **Mid and Right:** Final answer accuracy on GSM8K and SVAMP dev sets as we vary the discriminator score weight \(\beta\) in eq. (7).
The model used is FLAN-T5-Large and the discriminator is a FLAN-T5-Large encoder. All results except for greedy are averaged over 3 runs. Increasing \(\beta\) improves the final answer accuracy, suggesting the benefit of steering the decoding process via the discriminator.

Figure 5: Comparison of final answer accuracy on GSM8K and SVAMP between our alignment method and the naive approach. Our alignment method, leveraging the Needleman-Wunsch algorithm, outperforms the naive approach by 2.2 and 5.9 points on GSM8K and SVAMP, respectively, demonstrating the effectiveness of our proposed alignment method in improving discriminator training.

Figure 6: Guided decoding accuracy on GSM8K when the discriminator is trained with different loss functions. Our max-margin loss outperforms both the non-contrastive version (Uesato et al., 2022) and the pairwise ranking loss (Ouyang et al., 2022). Results are averaged over 3 runs.

model is trained to predict correct or incorrect after each step, following Uesato et al. (2022), and the probability of correctness is used as the discriminator score in eq. (7). The second is the pairwise ranking loss used to train the reward model for InstructGPT (Ouyang et al., 2022): \(\mathcal{L}_{D}^{\text{pairwise}}=-\sum\log\left[\sigma(D(q,r,s^{+})-D(q,r,s^{-}))\right]\). Figure 6 shows final answer accuracy on GSM8K when Grace's discriminator is trained with each of these loss functions. Notably, the non-contrastive loss exhibits the lowest accuracy, emphasizing the importance of training the discriminator to contrast correct and incorrect steps jointly, rather than evaluating them in isolation. Moreover, our proposed max-margin objective achieves marginally better performance than the pairwise ranking loss. We posit that this enhancement stems from the incorporation of the margin parameter, which prevents the discriminator from excessively optimizing the objective. These results highlight the efficacy of the max-margin objective for training the correctness discriminator.

**Discriminator Size.** We study how the size of the discriminator impacts the final answer accuracy. In addition to the FLAN-T5-Large encoder used so far, we run experiments with a FLAN-T5-Base encoder (110M) and a FLAN-T5-Small encoder (30M) as discriminators on GSM8K and MultiArith, with LLaMA as the backbone LM. Figure 7 shows the accuracy on both datasets with different model sizes. For MultiArith, larger discriminator models bring better performance, which is expected. Interestingly, using the T5-Base discriminator, Grace can already surpass self-consistency by 0.7 points, and such a boost is achieved using a discriminator that is 63X smaller than LLaMA. As for GSM8K, we observe a very different trend, where smaller models (Base and Small) do not perform well. This can be understood in light of GSM8K being a more difficult task with more complex reasoning requirements compared to MultiArith, and therefore a discriminator with sufficient capacity is needed.

## 7 Conclusion

When solving multi-step reasoning problems, language models are often miscalibrated with respect to correctness, leading to high-probability solutions that are not necessarily correct. Existing methods like self-consistency and verifiers that rely on sampling from the LM distribution do not effectively address this issue. To tackle this problem, we introduce Grace, which utilizes a discriminator model trained to distinguish between correct and incorrect reasoning steps.
The discriminator is used to steer the decoding process toward correct steps, thus preventing the language model from generating invalid ones. Experimental results on four popular math reasoning benchmarks demonstrate that Grace significantly improves the correctness of the generated solutions, at both the final-answer and intermediate-step levels. We further validate the effectiveness of the different components of our method through multiple ablations.

## Limitations and Future Work

There is an overhead incurred by incorporating the discriminator model during decoding, as we pause decoding at each step to compute the discriminator scores. Also, our approach requires access to reference step-by-step solutions for the alignment process. In this paper, we use an LLM to obtain these for two tasks; however, LLMs can make mistakes, yielding incorrect reference solutions, especially for more complex reasoning tasks. As for future directions, it is possible to iterate the 3-step discriminator training process by sampling solutions using Grace, performing the alignment, re-training the discriminator, and so on. We leave exploring this direction to future work. Extending this work to logical and symbolic reasoning tasks is also a promising future direction.
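To make the guided stepwise decoding loop of section 3.3 concrete, the following is a minimal Python sketch. It is illustrative only: the `lm.sample_steps` and `discriminator.score` interfaces are hypothetical stand-ins (the released code linked above differs in its details), while the scoring rule follows eq. (7).

```python
import math

def grace_decode(question, lm, discriminator, beta=0.8, pool_size=20, max_steps=10):
    """Sketch of guided stepwise decoding (section 3.3), under assumed interfaces:
    lm.sample_steps(question, prefix, n) returns n candidate next steps, each
    with .text, .logprob, and .is_final_answer; discriminator.score(q, r, s)
    returns the real-valued step score D(q, r, s)."""
    prefix = []
    for _ in range(max_steps):
        # Sample a pool of J candidate next steps via nucleus sampling,
        # halting each sample at the end-of-step token.
        candidates = lm.sample_steps(question, prefix, n=pool_size)
        # Softmax-normalize the discriminator scores across the pool.
        d = [discriminator.score(question, prefix, c.text) for c in candidates]
        z = sum(math.exp(x) for x in d)
        d_bar = [math.exp(x) / z for x in d]
        # Combine the LM log-likelihood with the normalized discriminator
        # score as in eq. (7), and keep the top-scored step.
        scores = [(1 - beta) * c.logprob + beta * db
                  for c, db in zip(candidates, d_bar)]
        best = max(range(len(candidates)), key=lambda i: scores[i])
        prefix.append(candidates[best].text)
        if candidates[best].is_final_answer:
            break
    return prefix
```

With `beta=0`, the loop reduces to ordinary step-by-step sampling from the LM; with `beta` near 1, the discriminator dominates the choice among candidates, mirroring the ablation over \(\beta\) in section 6.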
2303.01709
Streaming Algorithms for Learning with Experts: Deterministic Versus Robust
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or times), given a set of $n$ experts who make predictions on each day (or time). The algorithm is given feedback on the outcomes of each day, including the cost of its prediction and the cost of the expert predictions, and the goal is to make a prediction with the minimum cost, specifically compared to the best expert in the set. Recent work by Srinivas, Woodruff, Xu, and Zhou (STOC 2022) introduced the study of the online learning with experts problem under memory constraints. However, often the predictions made by experts or algorithms at some time influence future outcomes, so that the input is adaptively chosen. Whereas deterministic algorithms would be robust to adaptive inputs, existing algorithms all crucially use randomization to sample a small number of experts. In this paper, we study deterministic and robust algorithms for the experts problem. We first show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes. Our result shows that the natural deterministic algorithm, which iterates through pools of experts until each expert in the pool has erred, is optimal up to polylogarithmic factors. On the positive side, we give a randomized algorithm that is robust to adaptive inputs that uses $\widetilde{O}\left(\frac{n}{R\sqrt{T}}\right)$ space for $M=O\left(\frac{R^2 T}{\log^2 n}\right)$, thereby showing a smooth space-regret trade-off.
David P. Woodruff, Fred Zhang, Samson Zhou
2023-03-03T04:39:53Z
http://arxiv.org/abs/2303.01709v1
# Streaming Algorithms for Learning with Experts: Deterministic Versus Robust ###### Abstract In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of \(T\) days (or times), given a set of \(n\) experts who make predictions on each day (or time). The algorithm is given feedback on the outcomes of each day, including the cost of its prediction and the cost of the expert predictions, and the goal is to make a prediction with the minimum cost, specifically compared to the best expert in the set. Recent work by Srinivas, Woodruff, Xu, and Zhou (STOC 2022) introduced the study of the online learning with experts problem under memory constraints. However, often the predictions made by experts or algorithms at some time influence future outcomes, so that the input is adaptively chosen. Whereas deterministic algorithms would be robust to adaptive inputs, existing algorithms all crucially use randomization to sample a small number of experts. In this paper, we study deterministic and robust algorithms for the experts problem. We first show a space lower bound of \(\widetilde{\Omega}\left(\frac{nM}{RT}\right)\) for any deterministic algorithm that achieves regret \(R\) when the best expert makes \(M\) mistakes. Our result shows that the natural deterministic algorithm, which iterates through pools of experts until each expert in the pool has erred, is optimal up to polylogarithmic factors. On the positive side, we give a randomized algorithm that is robust to adaptive inputs that uses \(\widetilde{O}\left(\frac{n}{R\sqrt{T}}\right)\) space for \(M=O\left(\frac{R^{2}T}{\log^{2}n}\right)\), thereby showing a smooth space-regret trade-off. Introduction Online learning with experts is a problem of sequential prediction. On each of \(T\) days (or times), an algorithm must make a prediction about an outcome, given a set of \(n\) experts who make predictions on the outcome. The algorithm is then given feedback on the cost of its prediction and on the expert predictions for the current day. In the _discrete prediction with experts problem_, the set of possible predictions is restricted to a finite set, and the cost is \(0\) if the prediction is correct, and \(1\) otherwise. More generally, the set of possible predictions need not be restricted but we assume the costs are restricted to be in a range \([0,\rho]\) for some fixed parameter \(\rho>0\), with lower costs indicating better performances of the algorithm or experts. This process continues for the \(T\) days (or times), after which the performance of the algorithm is compared to the performance of the best performing expert. More formally, the goal for the online learning with experts problem is often quantified by achieving the best regret, which is the difference between the total cost of the algorithm and the total cost of the best performing expert, i.e., the expert that incurs the least overall cost, amortized over the total number of days. A well-known folklore algorithm for handling the discrete prediction with experts problem is the weighted majority algorithm [10]. The deterministic variant of the weighted majority algorithm simply initializes "weights" for all experts to \(1\), downweights any incorrect expert on a given day, and selects the prediction supported by the largest weight of experts. 
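For concreteness, the following is a minimal sketch of this deterministic update rule; the halving discount and the tie-breaking rule are illustrative choices, not mandated by the analysis.

```python
def weighted_majority(expert_preds, outcomes, discount=0.5):
    """Sketch of the deterministic weighted majority rule described above.

    expert_preds[t][i] is expert i's prediction on day t; outcomes[t] is the
    realized outcome on day t. Predictions come from a finite set.
    """
    n = len(expert_preds[0])
    w = [1.0] * n  # initialize all expert weights to 1
    mistakes = 0
    for preds, outcome in zip(expert_preds, outcomes):
        # Predict the option backed by the largest total expert weight.
        totals = {}
        for wi, p in zip(w, preds):
            totals[p] = totals.get(p, 0.0) + wi
        guess = max(totals, key=totals.get)
        mistakes += (guess != outcome)
        # Downweight every expert that erred on this day.
        w = [wi * discount if p != outcome else wi
             for wi, p in zip(w, preds)]
    return mistakes
```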
The algorithm solves the discrete prediction with experts problem with \(O\left(M+\log n\right)\) total mistakes, where \(M\) is the number of mistakes made by the best expert and the coefficient of \(M\) hidden by the big Oh notation is approximately \(2.41\), thus achieving regret \(O\left(M+\log n\right)\). More generally, a large body of literature has studied optimizations to the weighted majority algorithm, such as a randomized variant where the probability of the algorithm selecting each prediction is proportional to the sum of the weights of the experts supporting the prediction. The randomized weighted majority algorithm achieves regret \(O\left(\sqrt{\log n/T}\right)\)[10], which has been shown to be information-theoretically optimal, up to a constant. There have subsequently been many follow-ups to the weighted and randomized weighted majority algorithms that achieve similar regret bounds, but improve in other areas. For example, on a variety of structured problems, such as online shortest paths, follow the perturbed leader [13] achieves the same regret bound as randomized weighted majority but uses less runtime on each day (or time). In addition, the multiplicative weights algorithm achieves the optimal \(\sqrt{\ln n/(2T)}\) regret, with a tight leading constant [12]. However, these classic algorithms use a framework that maintains the cumulative cost of each expert, which requires the algorithm to store \(\Omega(n)\) bits of information across its runtime. Memory bounds.Recently, [14] considered the online learning with experts problem when memory is a premium for the algorithm. On the hardness side, they showed that any algorithm achieving a target regret \(R\) requires \(\Omega\left(\frac{n}{R^{2}T}\right)\) space, which implies that any algorithm achieving the information-theoretic \(O\left(\sqrt{\log n/T}\right)\) regret must use near-linear space. On the other hand, for random-order streams in which the algorithm may receive the worst-case input, but then the order of the days is uniformly random, [14] gave a nearly matching randomized algorithm that uses \(\widetilde{O}\left(\frac{n}{R^{2}T}\right)\) space and \(R=\Omega\left(\sqrt{\frac{\log^{2}n}{T}}\right)\), i.e., nearly all values of regret down to the information-theoretic limit. Moreover, when the number of mistakes \(M\) made by the best expert is small, i.e., \(M=O\left(R^{2}T\right)\), [14] gave a randomized algorithm that uses \(\widetilde{O}\left(\frac{n}{RT}\right)\) space for arbitrary-order streams, thus showing that the hardness of their lower bound originates from a setting where the best expert makes a large number of mistakes. Subsequently, [14] considered the online learning with experts problem when the algorithm is limited to use memory sublinear in \(n\). They introduced a general framework that achieves \(o(T)\) regret using \(o(n)\) memory, with a trade-off parameter between space and regret that obtains \(O_{n}\left(T^{4/5}\right)\) regret with \(O\left(\sqrt{n}\right)\) space and \(O_{n}\left(T^{0.67}\right)\) regret with \(O\left(n^{0.99}\right)\) space. Adaptive inputs and determinism.Up to now, the discussion has focused on an oblivious setting, where the input to the algorithm may be worst-case, but is chosen independently of the algorithm and its outputs. The online learning with experts problem is often considered in the adaptive setting, where the input to the algorithm is allowed to depend on previous outputs by the algorithm. 
Formally, we define the adaptive setting as a two-player game between an algorithm \(\mathcal{D}\) and an adversary \(\mathcal{A}\) that adaptively creates the input stream to \(\mathcal{D}\). The game then proceeds in days, and on the \(t\)-th day:

1. The adversary \(\mathcal{A}\) chooses the outputs of all experts on day \(t\) as well as the outcome of day \(t\), depending on all previous stream updates and all previous outputs from the algorithm \(\mathcal{D}\).
2. The outputs (i.e., predictions) of all experts are simultaneously given to the algorithm \(\mathcal{D}\), which updates its data structures, acquires a fresh batch \(R_{t}\) of random bits, and outputs a predicted outcome for day \(t\).
3. The outcome of day \(t\) is revealed to \(\mathcal{D}\), while the predicted outcome for day \(t\) by \(\mathcal{D}\) is revealed to the adversary \(\mathcal{A}\).

The goal of \(\mathcal{A}\) is to induce \(\mathcal{D}\) to make as many incorrect predictions as possible throughout the stream. It is clear that any deterministic algorithm for the online learning with experts problem will maintain the same guarantees in the adaptive model. Unfortunately, both the algorithms of [13] and [14] are randomized procedures that rely on iteratively sampling "pools" of experts, which can potentially be exploited by an adaptive adversary who learns the experts sampled in each pool. Interestingly, both the randomized weighted majority algorithm [13] and the multiplicative weights algorithm [12] are known to be robust to adaptive inputs.

### Our Contributions

In this paper, we study the capabilities and limits of sublinear space algorithms for the online learning with experts problem on adaptive inputs.

Tight bounds for deterministic algorithms. First, we provide a simple deterministic algorithm that uses space \(\widetilde{O}\left(\frac{nM}{RT}\right)\). Consider an algorithm that iteratively selects the next pool of \(k=\widetilde{O}\left(\frac{nM}{RT}\right)\) experts and runs the deterministic majority algorithm on the experts in the pool, removing any incorrect experts from the pool until the pool is completely depleted, at which point the next pool of \(k\) experts is selected. The main intuition is that each pool can incur at most \(O\left(\log n\right)\) mistakes before it is completely depleted, and the best expert can only make \(M\) mistakes. By the time the pool has cycled through \(nM\) experts, i.e., \(M\) times through each of the \(n\) experts, the best expert no longer makes any mistakes and will be retained by the pool. Thus, the total number of mistakes made by the deterministic algorithm is \(\frac{nM}{k}\cdot O\left(\log n\right)\). Hence, for a target average regret \(R\), the total number of mistakes by the algorithm must be at most \(M+RT\geq RT\), so it suffices to set \(k=\widetilde{O}\left(\frac{nM}{RT}\right)\) to achieve regret \(R\). Since the algorithm runs deterministic majority on a pool of \(k=\widetilde{O}\left(\frac{nM}{RT}\right)\) experts, this algorithm uses \(\widetilde{O}\left(\frac{nM}{RT}\right)\) space. However, for \(M=\Omega(RT)\), the algorithm must use space that is near-linear in the number of experts \(n\), which is undesirable when \(n\) is large. (For a detailed formal argument, see Section 4.1.) Therefore, it is natural to ask whether there exists a deterministic algorithm that is more space-efficient than this straightforward approach.
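Before turning to the lower bound, here is a minimal sketch of the pool-based deterministic scheme just described, stated for binary predictions; the cyclic refill order and the tie-breaking rule are illustrative choices, and we assume \(k\leq n\) for simplicity.

```python
def pooled_majority(expert_preds, outcomes, k):
    """Sketch of the pool-based deterministic algorithm: cycle through pools
    of k experts, predict by majority vote of the current pool, evict experts
    as soon as they err, and refill the pool once it is empty.

    expert_preds[t][i] is expert i's binary prediction on day t; outcomes[t]
    is the true binary outcome. Assumes k <= n.
    """
    n = len(expert_preds[0])
    next_expert = 0          # next expert index to load, cycling mod n
    pool = set()
    mistakes = 0
    for preds, outcome in zip(expert_preds, outcomes):
        if not pool:         # refill with the next k experts, cyclically
            pool = {(next_expert + j) % n for j in range(k)}
            next_expert = (next_expert + k) % n
        votes = sum(preds[i] for i in pool)
        guess = int(2 * votes >= len(pool))   # majority vote; ties predict 1
        mistakes += (guess != outcome)
        # Evict every pool expert that was wrong on this day.
        pool = {i for i in pool if preds[i] == outcome}
    return mistakes
```

On every day the algorithm errs, at least half of the current pool is wrong and evicted, so each pool of size \(k\) contributes at most \(O\left(\log k\right)\) mistakes, matching the accounting above.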
Unfortunately, we first show that no such improvement is possible:

**Theorem 1.1** (Memory lower bound for deterministic algorithms; also see Theorem 3.8).: _For \(n=o(2^{T})\), any deterministic algorithm that achieves \(R\) regret for the discrete prediction with experts problem must use \(\Omega\left(\frac{nM}{RT}\right)\) space when the best expert makes \(M\) mistakes._

Taken together with the deterministic procedure above, this resolves the deterministic streaming complexity of online learning with experts. At a conceptual level, our lower bound in Theorem 1.1 shows that, surprisingly, the number \(M\) of mistakes made by the best expert is an intrinsic parameter that governs the abilities and limitations of deterministic algorithms in this model. In fact, we show a stronger result in Theorem 3.8: any randomized algorithm that succeeds with probability at least \(1-\exp(-T)\) must use \(\Omega\left(\frac{nM}{RT}\right)\) space when the best expert makes \(M\) mistakes. Moreover, we give an alternative proof in the regime when \(M=\Omega(T)\). This proof differs from that of Theorem 1.1: it leverages the communication complexity of a new set disjointness problem, recently proposed by [13]. The resulting statement is technically weaker than Theorem 1.1 and appears in the appendix; see Appendix A.

Overview of the proof of Theorem 1.1. To prove the theorem, we consider the communication problem of \(\varepsilon\)-DiffDist. It combines \(n\) instances of the distributed detection problem from [1] and was first proposed by the prior work of [11] to prove space lower bounds for expert learning in random-order streams. Specifically, for fixed \(T\), the \(\varepsilon\)-DiffDist problem with \(\varepsilon=\frac{M}{T}\) consists of \(T\) players, who each hold \(n\) bits, indexed from \(1\) to \(n\). The players must distinguish between:

1. the NO case \(\mathcal{D}_{\mathsf{NO}}^{(n)}\), in which every bit for every player is drawn i.i.d. from a fair coin, and
2. the YES case \(\mathcal{D}_{\mathsf{YES}}^{(n)}\), in which an index \(L\in[n]\) is selected arbitrarily and the \(L\)-th bit of each player is chosen i.i.d. from a Bernoulli distribution with parameter \(\left(1-\frac{M}{T}\right)\), while all other bits for every player are chosen i.i.d. from a fair coin.

At a high level, the proof proceeds in two steps:

1. First, we prove a communication complexity lower bound for \(\varepsilon\)-DiffDist against any protocol that succeeds with probability \(1-2^{-\Theta(T)}\), which includes deterministic protocols.
2. Second, we show that the \(\varepsilon\)-DiffDist problem can be reduced to the expert prediction problem in the streaming setting.

The second step is straightforward, and the idea was proposed by [11]. In the reduction, each player in an instance of \(\varepsilon\)-DiffDist corresponds to a day of the expert problem. The \(n\)-bit input held by each player corresponds to the \(n\) expert predictions on that day. Therefore, in the NO case, each expert is correct on roughly half of the days. In the YES case, there is a single expert \(L\in[n]\) that is correct on roughly \(1/2+\delta\) of the days (for \(\delta=1/2-M/T\)), while all other experts randomly guess each day. Suppose that there is a streaming algorithm for the expert prediction problem with average regret \(\delta/2\).
Then, roughly speaking, in the YES case, the algorithm is correct on approximately \(1/2+\delta/2\) of the days, while in the NO case, where every expert is randomly guessing, the algorithm is correct on less than \(1/2+\delta/2\) of the days. This distinguishes the YES and NO cases and thus solves \(\varepsilon\)-DiffDist.

For the first step, we show that solving the \(\varepsilon\)-DiffDist problem with probability at least \(1-2^{-\Theta(T)}\) requires \(\Omega(nM)\) total communication. Observe that if the input is viewed as a \(T\times n\) matrix, then \(\mathcal{D}_{\mathsf{NO}}^{(n)}\) is a product distribution across columns that can be written as \(\zeta^{n}\), where \(\zeta\) is the distribution over a single column such that all entries of the column are i.i.d. Bernoulli with parameter \(\frac{1}{2}\). We view \(\mathcal{D}_{\mathsf{NO}}^{(n)}\) as a hard distribution and apply an information complexity analysis. By a direct sum argument, it suffices to show that the single-column problem, i.e., distinguishing between \(\mathcal{D}_{\mathsf{NO}}^{(1)}\) and \(\mathcal{D}_{\mathsf{YES}}^{(1)}\) (the case \(n=1\)), requires \(\Omega(M)\) total communication.

Let \((C_{1},C_{2},\ldots,C_{T})\) be a single column drawn from the hard distribution--namely, the NO case, where each player holds one i.i.d. Bernoulli bit with parameter \(1/2\). Let \(A\) be a fixed protocol with success probability at least \(1-\exp(-\Theta(T))\). For all \(i<T\), let \(M_{i}\) denote the message sent from player \(P_{i}\) to player \(P_{i+1}\) and \(M_{<i}=\{M_{j}:j<i\}\). Let \(\Pi=\Pi(C_{1},\cdots,C_{T})\) be the communication transcript of \(A\) given the input \((C_{i})_{i=1}^{T}\). A standard information complexity argument [1] implies that the total communication is at least the _information cost_, defined as \(I(C_{1},\ldots,C_{T};\Pi(C_{1},\ldots,C_{T}))\), where \(I(X;Y)\) denotes the mutual information between random variables \(X\) and \(Y\). The key step of our proof is therefore to lower bound the information cost by \(\Omega(M)\).

The main ideas are the following. For any \(i\in[T]\), we say that \((M_{i},M_{<i})\) is _informative_ for \(i\) with respect to the input \(C\) and the transcript \(\Pi=(M_{1},M_{2},\ldots,M_{T})\) if

\[|\Pr\left(C_{i}=0\mid M_{i},M_{<i}\right)-\Pr\left(C_{i}=1\mid M_{i},M_{<i}\right)|\geq c \tag{1.1}\]

for some constant \(c>0\). Otherwise, we say that \(M_{i}\) is uninformative. Intuitively, an informative message \(M_{i}\) reveals sufficiently large information about \(C_{i}\) that the mutual information \(I(M_{i};C_{i}\mid M_{<i})\) is large. Now for all \(i\in[T]\), let \(p_{i}\) be the probability that \((M_{i},M_{<i})\) is informative (for \(i\) with respect to \(C\) and \(\Pi\)), taken over all possible inputs and the randomness used in the protocol. It is straightforward to show that

\[I(\Pi;C_{1},C_{2},\ldots,C_{T})=\sum_{j=1}^{T}I\left(M_{j};C_{j}\mid M_{<j}\right)\geq\Omega\left(\sum_{j=1}^{T}p_{j}\right).\]

Namely, we first use the chain rule for mutual information to decompose the mutual information into the individual terms in the summation, which can be further decomposed using conditional entropy. Then the desired bound immediately follows from a standard bound on the binary entropy function. From here, it suffices to prove that

\[\sum_{j=1}^{T}p_{j}>\gamma\cdot M,\]

for some fixed constant \(\gamma>0\).
To that end, we observe that if \(\sum_{j=1}^{T}p_{j}=o(M)\), then by Markov's inequality, the probability that the set \(S\) of uninformative indices has size at least \(T-o(M)\) is at least \(\frac{9}{10}\). We show that by modifying \(C\) on the uninformative indices \(S\), we can find an input \(C^{\prime}\) on which \(A\) cannot guarantee correctness with probability at least \(1-\exp(-\Theta(T))\). Let \(C^{\prime}\) be an input that agrees with \(C\) on the informative indices \([T]\setminus S\), so that \(C^{\prime}_{i}=C_{i}\) for \(i\in[T]\setminus S\), and is chosen arbitrarily on the uninformative indices \(S\). By the definition of an uninformative index, the probability that the protocol \(A\) generates \(\Pi\) on input \(C^{\prime}\) is at least \((1-c)^{T}\geq e^{-c^{\prime}T}\) times the probability that the protocol \(A\) generates \(\Pi\) on input \(C\), for a constant \(c^{\prime}>0\) depending only on \(c\). Since \(C^{\prime}\) can differ from \(C\) only on \(S\), i.e., on \(|S|=T-o(M)\) indices, it follows that there exists a choice of \(C^{\prime}\) that contains fewer than \(\frac{M}{2}\) zeros such that \(A\) will also output \(\Pi\) with probability at least \(\frac{e^{-c^{\prime}T}}{2}\). On the other hand, since \(\Pi\) corresponds to a transcript for which \(A\) will output NO, \(A\) cannot succeed with probability \(1-\frac{e^{-c^{\prime}T}}{8}\) on the input \(C^{\prime}\). Meanwhile, a YES instance will generate \(C^{\prime}\) with probability \(2^{-T}\), which is a contradiction, and thus it follows that \(\sum_{j=1}^{T}p_{j}=\Omega(M)\), as desired.

Algorithms for adaptive inputs. On the positive side, we show that there exists a randomized algorithm for the discrete prediction with experts problem that is robust to adaptive inputs:

**Theorem 1.2** (Robust algorithms against adaptive inputs).: _Let \(R>\frac{64\log^{2}n}{T}\), and suppose the best expert makes at most \(M\leq\frac{R^{2}T}{128\log^{2}n}\) mistakes. Then there exists an algorithm for the discrete prediction with experts problem that uses \(\widetilde{O}\left(\frac{n}{R\sqrt{T}}\right)\) space and achieves regret at most \(R\), with probability at least \(1-\frac{1}{\operatorname{poly}(n,T)}\)._

We remark that Theorem 1.2 provides a smooth trade-off between space and regret, almost all the way down to the information-theoretic limit of \(R=O_{n}\left(\sqrt{\frac{1}{T}}\right)\) for general worst-case input. However, it incurs a multiplicative space overhead of \(\widetilde{O}(\sqrt{T})\) compared to the optimal algorithms for oblivious input. Thus we believe the complete characterization of the space complexity of the discrete prediction with experts problem with adaptive input is a natural open question resulting from our work.

Our algorithm for Theorem 1.2 uses differential privacy (DP) to hide the internal randomness of our algorithm from the adaptive adversary. The technique was first proposed by the recent work [10, 1, 2] to achieve adversarial robustness in data streaming algorithms. To exploit it for solving our problem of expert learning, we run \(\widetilde{O}\left(\sqrt{T}\right)\) copies of the oblivious algorithm of [26] and then use advanced composition to show that running a private median over the outputs of the \(\widetilde{O}\left(\sqrt{T}\right)\) copies across the \(T\) interactions guarantees differential privacy. Correctness then follows from the generalization property of DP.
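To illustrate the aggregation primitive, the following is a minimal sketch of a per-day private median over the predictions of the copies, instantiated with the exponential mechanism over a finite prediction domain. The interface and constants here are illustrative assumptions rather than those of our formal analysis; in particular, the per-day privacy parameter `eps` would be chosen via advanced composition over the \(T\) days.

```python
import math
import random

def private_median(values, domain, eps):
    """Exponential-mechanism median: the utility of a domain element x is
    minus the distance of its rank from the median rank, which changes by
    at most 1 when a single input value changes (sensitivity 1)."""
    k = len(values)

    def utility(x):
        return -abs(sum(v <= x for v in values) - k / 2.0)

    weights = [math.exp(eps * utility(x) / 2.0) for x in domain]
    r = random.uniform(0.0, sum(weights))
    for x, w in zip(domain, weights):
        r -= w
        if r <= 0.0:
            return x
    return domain[-1]

def aggregate_day(copy_predictions, eps_per_day):
    # Each copy of the oblivious algorithm makes a prediction for the current
    # day; only the privately aggregated prediction is released to the
    # (possibly adaptive) adversary, hiding each copy's internal randomness.
    return private_median(copy_predictions, domain=[0, 1], eps=eps_per_day)
```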
### Related Work

The experts problem. The experts problem has been extensively studied [13], both in the discrete decision setting [12] and in the setting where costs are determined by various loss functions [10, 11, 12, 13, 14]. Hence, the experts problem can be applied to many different applications, such as portfolio optimization [1, 13], ensemble boosting [15], and forecasting [14]. Given certain assumptions on the experts, such as assuming the experts are decision trees [16, 17], threshold functions [14], or have nice linear structures [15], additional optimizations have been made to improve the algorithmic runtimes for the experts problem. More generally, existing work has largely ignored optimizing for memory constraints in favor of focusing on time complexity or regret guarantees, thus frequently using \(\Omega(n)\) memory to track the performance of each expert. Recently, [11] introduced the study of memory-regret trade-offs for the experts problem. For \(n\gg T\), [11] showed that the space complexity of the problem is \(\tilde{\Theta}\left(\frac{n}{R^{2}T}\right)\) in random-order streams, but also gave a randomized algorithm that uses \(\widetilde{O}\left(\frac{n}{RT}\right)\) space for arbitrary-order streams when the number of mistakes \(M\) made by the best expert is "small". Subsequently, [10] considered the online learning with experts problem for \(T\gg n\), introducing a general space-regret trade-off framework that achieves \(o(T)\) regret using \(o(n)\) memory, including \(O_{n}(T^{4/5})\) regret with \(O\left(\sqrt{n}\right)\) space and \(O_{n}(T^{0.67})\) regret with \(O\left(n^{0.99}\right)\) space.

Adaptive inputs. Motivated by non-independent inputs and adversarial attacks, adaptive inputs have recently been considered in the centralized model [13, 14, 15, 16] and in the streaming model. In the adaptive setting, [10] shows that any algorithm achieving \(R\) amortized regret must use \(\widetilde{\Omega}\left(\sqrt{\frac{n}{R}}\right)\) space, though their lower bound also applies to randomized algorithms. Due to the difference in setting, our algorithmic techniques are quite different from those of [10]. We use a recent idea of [11, 1, 12] to hide the internal randomness of our algorithm from the adversary, whereas [10] rotates between groups of experts to prevent an adversary from inducing high regret by making a specific expert bad immediately after it is selected. 

## 2 Preliminaries

Notations. For any \(t\leq n\) and vector \((X_{1},X_{2},\cdots,X_{n})\), we let \(X_{<t}\) denote \((X_{1},\cdots,X_{t-1})\), \(X_{\leq t}=(X_{1},\cdots,X_{t})\), and \(X_{-t}=(X_{1},\cdots,X_{t-1},X_{t+1},\cdots,X_{n})\); \(X_{>t}\) and \(X_{\geq t}\) are defined similarly. Let \(e_{i}\) denote the \(i\)th standard basis vector, and for any set \(S\), let \(e_{S}\) denote the vector that has a \(1\) at each index \(i\in S\) and \(0\) everywhere else. For a random variable \(X\), let \(H(X)\) denote its entropy. 

### Information Theory

For any \(p\in[0,1]\), we slightly abuse notation and let \(H(p)=-p\log_{2}p-(1-p)\log_{2}(1-p)\) be the binary entropy function. The following are standard upper and lower bounds on \(H(p)\). **Lemma 2.1** (Bound on the binary entropy function; see e.g. [13]).: _For \(p\in[0,1]\), the binary entropy function satisfies_ \[4p(1-p)\leq H(p)\leq 2\sqrt{p(1-p)}.\] 

### Communication Complexity

**Definition 2.2** (Mutual information).: _Let \(X\) and \(Y\) be a pair of random variables with joint distribution \(p(x,y)\). Then the mutual information is defined as \(I(X;Y):=\sum_{x,y}p(x,y)\log\frac{p(x,y)}{p(x)p(y)}\), for marginal distributions \(p(x)\) and \(p(y)\)._ In a multi-party communication problem with \(t\) players, each player \(i\) is given an input \(x_{i}\in\mathcal{X}_{i}\). The players communicate according to a fixed protocol to compute a function \(f:\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{t}\to\mathcal{Y}\). 
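As a quick numerical sanity check (purely illustrative, not part of the formal development), the following Python snippet verifies the bounds of Lemma 2.1 on a grid and evaluates Definition 2.2 directly on a toy joint distribution:

```python
import math
from itertools import product

def H(p):  # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Lemma 2.1: 4p(1-p) <= H(p) <= 2*sqrt(p(1-p))
for k in range(1, 100):
    p = k / 100.0
    assert 4 * p * (1 - p) <= H(p) <= 2 * math.sqrt(p * (1 - p))

# Definition 2.2: X uniform on {0,1}, Y = X with probability 0.9 (else flipped)
joint = {(x, y): 0.5 * (0.9 if x == y else 0.1) for x, y in product((0, 1), repeat=2)}
px = {x: sum(joint[(x, y)] for y in (0, 1)) for x in (0, 1)}
py = {y: sum(joint[(x, y)] for x in (0, 1)) for y in (0, 1)}
mi = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())
print(mi, 1 - H(0.9))  # both are I(X;Y) = 1 - H(0.9), about 0.531 bits
```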
A protocol \(\Pi\) is called a \(\delta\)-error protocol for \(f\) if there exists a function \(\Pi_{\text{out}}\) such that \(\Pr\left[\Pi_{\text{out}}\left(\Pi(x_{1},\ldots,x_{t})\right)=f(x_{1},\ldots,x_{t})\right]\geq 1-\delta\). For a (multi-party) communication problem, we denote the transcript of all communication in a protocol as \(\Pi\in\{0,1\}^{*}\). The communication cost of a protocol, as a result, is the bit length of the transcript. Let \(R_{\delta}(f)\) denote the communication cost of the best \(\delta\)-error protocol for \(f\). **Definition 2.3** (Information cost).: _Let \(\Pi\) be a randomized protocol that produces a random variable \(\Pi(X_{1},\ldots,X_{T})\) as a transcript on inputs \(X_{1},\ldots,X_{T}\) drawn from a distribution \(\mu\). Then the information cost of \(\Pi\) with respect to \(\mu\) is defined as \(I(X_{1},\ldots,X_{T};\Pi(X_{1},\ldots,X_{T}))\)._ **Definition 2.4** (Information complexity).: _The information complexity of a function \(f\) with respect to a distribution \(\mu\) and failure probability \(\delta\) is the minimum information cost of a protocol for \(f\) with respect to \(\mu\) that fails with probability at most \(\delta\) on every input, and is denoted by \(\mathsf{IC}_{\mu,\delta}(f)\)._ **Lemma 2.5** (Information cost decomposition lemma, Lemma 5.1 in [1]).: _Let \(\mu\) be a mixture of product distributions and suppose \(\Pi\) is a protocol for inputs \((X_{1},\ldots,X_{T})\sim\mu^{n}\). Then \(I(X_{1},\ldots,X_{T};\Pi(X_{1},\ldots,X_{T}))\geq\sum_{i=1}^{n}I(X_{1,i}, \ldots,X_{T,i};\Pi(X_{1},\ldots,X_{T}))\), where \(X_{i,j}\) denotes the \(j\)-th component of \(X_{i}\)._ **Lemma 2.6** (Information complexity lower bounds communication complexity; Proposition 4.3 [1]).: _For any distribution \(\mu\) and error \(\delta\), \(R_{\delta}(f)\geq\mathsf{IC}_{\mu,\delta}(f)\)._ 

### Differential Privacy

Our algorithmic results rely on the following tools from differential privacy. **Definition 2.7** (Differential privacy, [16]).: _Given a privacy parameter \(\varepsilon>0\) and a failure parameter \(\delta\in(0,1)\), a randomized algorithm \(\mathcal{A}:\mathcal{X}^{*}\to\mathcal{Y}\) is \((\varepsilon,\delta)\)-differentially private if, for every pair of neighboring streams \(S\) and \(S^{\prime}\) and for all \(E\subseteq\mathcal{Y}\),_ \[\mathbf{Pr}\left[\mathcal{A}(S)\in E\right]\leq e^{\varepsilon}\cdot\mathbf{ Pr}\left[\mathcal{A}(S^{\prime})\in E\right]+\delta.\] **Theorem 2.8** (Private median, e.g., [14]).: _Given a database \(S\in X^{*}\), a privacy parameter \(\varepsilon>0\) and a failure parameter \(\delta\in(0,1)\), there exists an \((\varepsilon,0)\)-differentially private algorithm PrivMed that outputs an element \(x\in X\) such that with probability at least \(1-\delta\), there are at least \(\frac{|S|}{2}-m\) elements in \(S\) that are at least \(x\), and at least \(\frac{|S|}{2}-m\) elements in \(S\) that are at most \(x\), for \(m=O\left(\frac{1}{\varepsilon}\log\frac{|X|}{\delta}\right)\)._ **Theorem 2.9** (Advanced composition, e.g., [13]).: _Let \(\varepsilon,\delta^{\prime}\in(0,1]\) and let \(\delta\in[0,1]\). 
Any mechanism that permits \(k\) adaptive interactions with mechanisms that preserve \((\varepsilon,\delta)\)-differential privacy guarantees \((\varepsilon^{\prime},k\delta+\delta^{\prime})\)-differential privacy, where \(\varepsilon^{\prime}=\sqrt{2k\ln\frac{1}{\delta^{\prime}}}\cdot\varepsilon+2k \varepsilon^{2}\)._ **Theorem 2.10** (Generalization of DP, e.g., [13, 14]).: _Let \(\varepsilon\in(0,1/3)\), \(\delta\in(0,\varepsilon/4)\), and \(n\geq\frac{1}{\varepsilon^{2}}\log\frac{2\varepsilon}{\delta}\). Suppose \(\mathcal{A}:X^{n}\to 2^{X}\) is an \((\varepsilon,\delta)\)-differentially private algorithm that curates a database of size \(n\) and produces a function \(h:X\to\{0,1\}\). Suppose \(\mathcal{D}\) is a distribution over \(X\) and \(S\) is a set of \(n\) elements drawn independently and identically distributed from \(\mathcal{D}\). Then_ \[\mathop{\mathbf{Pr}}_{S\sim\mathcal{D},h\leftarrow\mathcal{A}(S)}\left[\left| \frac{1}{|S|}\sum_{x\in S}h(x)-\mathop{\mathbb{E}}_{x\sim\mathcal{D}}\left[h( x)\right]\right|\geq 10\varepsilon\right]<\frac{\delta}{\varepsilon}.\] 

## 3 Lower Bounds for Arbitrary-Order Streams

In this section, we give space lower bounds for the experts problem on arbitrary-order streams. As a warm-up, we first show in Section 3.1 a general space lower bound for randomized algorithms when the best expert makes a "small" number of mistakes. We then give our main lower bound result in Section 3.2, showing that any deterministic algorithm achieving regret \(R\) must use space \(\Omega\left(\frac{nM}{RT}\right)\) when the best expert makes \(M\) mistakes. 

### Warm-up: Lower Bound for Accurate Best Expert

In this section, we show that any randomized algorithm that achieves regret \(R\) must use \(\Omega\left(\frac{n}{RT}\right)\) space, even when the best expert makes \(\Theta(RT)\) mistakes. In contrast, [11] give an \(\Omega\left(\frac{n}{R^{2}T}\right)\) space lower bound: **Theorem 3.1** (Memory lower bound; Theorem 1 of [14]).: _Let \(R>0\), \(p<\frac{1}{2}\) be fixed constants, i.e., independent of other input parameters. Any algorithm that achieves \(R\) regret for the experts problem with probability at least \(1-p\) must use at least \(\Omega\left(\frac{n}{R^{2}T}\right)\) space._ _Furthermore, this lower bound holds even when the costs are binary, and expert predictions, as well as the correct answers, are constrained to be i.i.d. across the days, albeit with different distributions across the experts._ The proof of this lower bound exploits a construction where the best expert makes \(\Theta(T)\) mistakes. Thus, it is not clear how the space complexity of the problem behaves when the best expert makes a smaller number of mistakes. In fact, [14] also give an algorithm that uses \(\widetilde{O}\left(\frac{n}{RT}\right)\) space when the best expert makes \(O(RT)\) mistakes, bypassing the aforementioned lower bound. We now prove that in this small-mistake regime, this algorithm is tight. Towards this goal, we first define the \(\varepsilon\)-DiffDist problem, which reduces to the experts problem; it was proposed by [14] to prove memory lower bounds for the experts problem in random-order streams. **Definition 3.2** (The \(\varepsilon\)-DiffDist Problem).: _We have \(T\) players, each of whom holds \(n\) bits, indexed from \(1\) to \(n\). We must distinguish between two cases, which we refer to as "\(V=0\)" and "\(V=1\)". 
Let \(\mu_{0}\) be a Bernoulli distribution with parameter \(\frac{1}{2}\), i.e., a fair coin, and let \(\mu_{1}\) be a Bernoulli distribution with parameter \(\frac{1}{2}+\varepsilon\)._ * _(NO Case, "_\(V=0\)") Every index for every player is drawn i.i.d. from a fair coin, i.e.,_ \(\mu_{0}\)_._ * _(YES Case, "_\(V=1\)") An index_ \(L\in[n]\) _is selected arbitrarily--the_ \(L\)_-th bit of each player is chosen i.i.d. from_ \(\mu_{1}\)_. All other bits for every player are chosen i.i.d. from_ \(\mu_{0}\)_._ Any protocol that successfully solves the \(\varepsilon\)-DiffDist problem with a constant probability greater than \(\frac{1}{2}\) must use at least \(\Omega\left(\frac{n}{\varepsilon^{2}}\right)\) communication, a result due to [14]: **Lemma 3.3** (Communication complexity of \(\varepsilon\)-DiffDist; Lemma 3 of [14]).: _The communication complexity of solving the \(\varepsilon\)-DiffDist problem with a constant \(1-p\) probability, for any \(p\in[0,0.5)\), is \(\Omega\left(\frac{n}{\varepsilon^{2}}\right)\)._ The proof of Theorem 3.1 by [14] uses \(n\) coin flips across each of the \(T\) players to form the \(n\) expert predictions over each of the \(T\) days. In the NO case, each expert will be correct on roughly \(\frac{T}{2}\) days, while in the YES case, a single expert will be correct on roughly \(\frac{T}{2}+\varepsilon T\) days, so that an algorithm with regret \(R=O(\varepsilon)\) will be able to distinguish between the two cases. There is a slight subtlety in the proof that uses a masking argument to avoid "trivial" algorithms that happen to succeed on a "lucky" input, but for the purposes of our proof in this section, the masking argument is not needed. It then follows that the total communication is \(\Omega\left(\frac{n}{R^{2}}\right)\) across the \(T\) players, so that any streaming algorithm must use at least \(\Omega\left(\frac{n}{R^{2}T}\right)\) bits of space. Suppose we instead consider the \(\varepsilon\)-DiffDist problem over \(RT\) players, representing \(RT\) days in the experts problem. Moreover, suppose we set \(\varepsilon=\Theta(1)\) in the \(\varepsilon\)-DiffDist problem, so that in the NO case, each of the experts will be correct on roughly \(\frac{RT}{2}\) days, while in the YES case, a single expert will be correct on roughly \(\frac{RT}{2}+CRT\) days, for some constant \(C>0\). Suppose we further pad all of the experts with incorrect predictions across an additional \(T-RT\) days, so that the total number of days is \(T\), but the number of correct expert predictions remains the same. Then an algorithm achieving regret \(O(R)\) will be able to distinguish between the two cases. By Lemma 3.3, the total communication is \(\Omega(n)\) across the \(RT\) players, so any streaming algorithm must use at least \(\Omega\left(\frac{n}{RT}\right)\) bits of space. **Corollary 3.4**.: _Let \(R\), \(p<\frac{1}{2}\) be fixed constants, i.e., independent of other input parameters. Any algorithm that achieves \(R\) regret for the experts problem with probability at least \(1-p\) must use at least \(\Omega\left(\frac{n}{RT}\right)\) space even when the best expert makes as few as \(\Theta(RT)\) mistakes. This lower bound holds even when the costs are binary and expert predictions, as well as the correct answer, are constrained to be i.i.d. across the days, albeit with different distributions across the experts._ Proof.: The claim follows from applying the proof of Theorem 3.1 with \(RT\) players in place of \(T\) and \(\varepsilon=\Theta(1)\), together with the padding argument described above. 
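The padding argument can also be illustrated numerically. The following small Python simulation (an illustrative sketch with arbitrary parameter choices, not part of the proof) samples the padded YES/NO instances and reports the number of correct predictions of the best expert, exhibiting the \(\Theta(RT)\) gap that a regret-\(O(R)\) algorithm can detect:

```python
import random

def sample_instance(n, active_days, eps, yes, rng):
    # rows = days, columns = experts; entry 1 means "expert correct that day".
    # YES: one planted column is Bernoulli(1/2 + eps); NO: all columns are fair coins.
    star = rng.randrange(n) if yes else -1
    return [[int(rng.random() < (0.5 + eps if i == star else 0.5))
             for i in range(n)] for _ in range(active_days)]

rng = random.Random(0)
n, T, R, eps = 20, 4000, 0.1, 0.45         # eps = Theta(1)
active = int(R * T)                         # eps-DiffDist over R*T days, padded to T
for yes in (False, True):
    inst = sample_instance(n, active, eps, yes, rng)
    best = max(sum(col) for col in zip(*inst))  # padded days add no correct answers
    print("YES" if yes else "NO ", "best expert is correct on", best, "of", T, "days")
# NO: best ~ R*T/2 + O(sqrt(R*T)) ~ 200; YES: best ~ (1/2 + eps)*R*T ~ 380.
```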
### Lower Bound for Deterministic Algorithms

We now prove our main space lower bound for deterministic algorithms (Theorem 1.1). We first set up some basic notation and introduce a hard distribution. Let \(T\) be any fixed positive integer. Let \(\mathcal{D}_{\mathsf{NO}}^{(n)}\) be the distribution over matrices \(A\) of size \(T\times n\) such that all entries of the matrix are i.i.d. Bernoulli with parameter \(\frac{1}{2}\), i.e., each entry of \(A\) is \(0\) with probability \(\frac{1}{2}\) and \(1\) with probability \(\frac{1}{2}\). Let \(\mathcal{D}_{\mathsf{YES}}^{(n)}\) be the distribution over matrices \(A\) of size \(T\times n\) such that there is a randomly chosen column \(L\in[n]\) whose entries are i.i.d. Bernoulli with parameter \(\left(1-\frac{M}{T}\right)\), and all other columns are i.i.d. Bernoulli with parameter \(\frac{1}{2}\). Let BiasDetect\({}_{n}\) be the problem of detecting whether \(A\) is drawn from \(\mathcal{D}_{\mathsf{YES}}^{(n)}\) or \(\mathcal{D}_{\mathsf{NO}}^{(n)}\). Let \(\Pi\) be a communication protocol for BiasDetect\({}_{n}\) that is correct with probability at least \(1-\exp(-\Theta(T))\). Since \(\mathcal{D}_{\mathsf{NO}}^{(n)}\) is a product distribution across columns, it can be written as \(\zeta^{n}\), where \(\zeta\) is the distribution over a single column such that all entries of the column are i.i.d. Bernoulli with parameter \(\frac{1}{2}\). Let BiasDetect\({}_{1}\) denote the problem of distinguishing between \(\mathcal{D}_{\mathsf{NO}}^{(1)}\) and \(\mathcal{D}_{\mathsf{YES}}^{(1)}\) on a single column, i.e., \(n=1\). Using \(\mathcal{D}_{\mathsf{NO}}^{(n)}\) as the hard distribution, we have the following direct sum theorem. **Lemma 3.5** (Direct sum for BiasDetect).: _The information complexity of BiasDetect\({}_{n}\) satisfies_ \[\mathsf{IC}_{\mathcal{D}_{\mathsf{NO}}^{(n)},2^{-\Theta(T)}}(\textsc{BiasDetect }_{n})\geq n\cdot\mathsf{IC}_{\mathcal{D}_{\mathsf{NO}}^{(1)},2^{-\Theta(T)}}( \textsc{BiasDetect}_{1}).\] Proof.: By definition, \(\mathcal{D}_{\mathsf{NO}}^{(n)}=\zeta^{n}\) is a product distribution over \(n\) columns. The lemma follows from the standard direct sum lemma of information cost (Lemma 2.5). With the above direct sum theorem for BiasDetect\({}_{n}\), it now suffices to provide a single-coordinate information cost lower bound against BiasDetect\({}_{1}\). The proof is deferred to Section 3.3. **Lemma 3.6** (Single-coordinate information cost lower bound).: _Let \(c\in(0,1)\) and \(\Pi\) be any protocol with error \(\delta=2^{-\Theta(T)}\) for BiasDetect\({}_{1}\). We have that the information cost of \(\Pi\) with respect to \(\zeta\) is at least_ \[I(\Pi(C_{1},C_{2},\ldots,C_{T});C_{1},C_{2},\ldots,C_{T})\geq\Omega\left(M \right), \tag{3.1}\] _where the bits \(C_{i}\sim\zeta\) are i.i.d. single coordinates._ Combining Lemma 3.6 with the direct sum theorem (Lemma 3.5), we immediately get the following information complexity lower bound for BiasDetect\({}_{n}\): **Theorem 3.7** (\(n\)-Coordinate information complexity lower bound).: _Let \(c\in(0,1)\). Then_ \[\mathsf{IC}_{\mathcal{D}_{\mathsf{NO}}^{(n)},2^{-\Theta(T)}}(\textsc{BiasDetect }_{n})=\Omega(nM).\] Proof.: This follows by applying the direct sum theorem (Lemma 3.5) to the single-coordinate bound (Lemma 3.6). This implies that any algorithm with \(R\) regret and success rate at least \(1-2^{-\Theta(T)}\) requires \(\Omega\left(\frac{nM}{RT}\right)\) memory, where \(M\) is the mistake bound on the best expert: a streaming algorithm using \(S\) bits of space yields a one-way protocol in which each of the \(RT\) players forwards at most \(S\) bits, for total communication at most \(S\cdot RT\), so \(S=\Omega\left(\frac{nM}{RT}\right)\). 
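Before stating the resulting memory bound formally, the two single-column distributions can be visualized with a tiny simulation (illustrative only, not part of the argument). Under the convention that an entry of \(1\) denotes a correct prediction, NO columns contain roughly \(T/2\) zeros while YES columns contain roughly \(M\) zeros, and this zero count is precisely the statistic exploited in the single-coordinate lower bound of Section 3.3:

```python
import random

def sample_column(T, M, yes, rng):
    # D_YES^(1): entries are 1 with probability 1 - M/T (the planted expert
    # makes about M mistakes); D_NO^(1): entries are fair coin flips.
    p_one = 1.0 - M / T if yes else 0.5
    return [int(rng.random() < p_one) for _ in range(T)]

rng = random.Random(1)
T, M = 1000, 50
for yes in (False, True):
    zero_counts = [sample_column(T, M, yes, rng).count(0) for _ in range(5)]
    print("YES" if yes else "NO ", "zeros per column:", zero_counts)
# NO columns: ~500 zeros each; YES columns: ~50 zeros each.
```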
**Theorem 3.8** (Memory lower bound for expert learning).: _Let \(R,M\) be fixed and independent of other input parameters. Any streaming algorithm that achieves \(R\) regret for the experts problem with probability at least \(1-2^{-\Theta(T)}\) must use at least \(\Omega(\frac{nM}{RT})\) space, for \(n=o\left(2^{T}\right)\), where the best expert makes \(M\) mistakes._ Proof.: We now consider the problem BiasDetect\({}_{n}\) on a matrix of size \(RT\times n\), where an entry of \(1\) indicates a correct prediction. Note that in the NO case, at any fixed column \(i\in[n]\), the probability that there are more than \(\frac{3RT}{5}-\frac{M}{2}\) instances of \(1\), for \(M\leq\frac{RT}{8}\), is at most \(2\exp(-c_{1}RT)\), for a sufficiently small constant \(c_{1}\in(0,1)\). Thus, by a union bound, the probability that there exists an index \(i\in[n]\) with more than \(\frac{3RT}{5}-\frac{M}{2}\) instances of \(1\) is at most \(2n\exp(-c_{1}RT)\). Similarly, in the YES case, the probability that the biased column \(L\) has fewer than \(\frac{4RT}{5}-\frac{M}{2}\) instances of \(1\), for \(M\leq\frac{RT}{8}\), is at most \(2\exp(-c_{2}RT)\), for a sufficiently small constant \(c_{2}\in(0,1)\). Hence, for \(n=o(2^{T})\), there exists a constant \(c\in(0,1)\) such that any algorithm that achieves total regret at most \(\frac{RT}{5}\) with probability at least \(1-\exp(-cT)\) can distinguish between the YES and NO cases with probability \(1-\exp(-\Theta(T))\). By Theorem 3.7 and Lemma 2.6, the total communication across the \(RT\) players must be at least \(\Omega(nM)\). Therefore, any streaming algorithm that achieves average regret \(R\) for the experts problem with probability at least \(1-2^{-\Theta(T)}\) must use at least \(\Omega(\frac{nM}{RT})\) space. 

### Proof of the Single-Coordinate Information Cost Lower Bound

We now show the single-coordinate lower bound of Lemma 3.6. Proof of Lemma 3.6.: Consider a protocol that is correct with probability \(1-2^{-\Theta(T)}\) and let \((C_{1},C_{2},\ldots,C_{T})\sim\zeta^{T}\) be a single column drawn from the NO case, where each coordinate is i.i.d. Bernoulli with parameter \(1/2\). For notational convenience, let \(\Pi=\Pi(C_{1},\cdots,C_{T})\) denote the transcript given the input \((C_{1},C_{2},\cdots,C_{T})\). We consider the one-way message-passing model, where each player \(P_{i}\) holds the input \(C_{i}\). For all \(i<T\), let \(M_{i}\) denote the message sent from player \(P_{i}\) to player \(P_{i+1}\). By the chain rule of mutual information, the information cost of the transcript (the left-hand side of Equation (3.1), which we need to bound) can be written as \[I(\Pi;C_{1},C_{2},\ldots,C_{T})=\sum_{j=1}^{T}I\left(M_{j};C_{1},C_{2},\ldots, C_{T}\mid M_{<j}\right). \tag{3.2}\] Since the communication is one-way, the message \(M_{j}\) is a function of \(C_{j}\), \(M_{<j}\), and the protocol's randomness, so conditioned on \(M_{<j}\) and \(C_{j}\), the message \(M_{j}\) is independent of the remaining inputs. Hence \[I\left(M_{j};C_{1},C_{2},\ldots,C_{T}\mid M_{<j}\right)=I\left(M_{j};C_{j}\mid M _{<j}\right). \tag{3.3}\] Combining the two equalities above, the information cost equals \[I(\Pi;C_{1},C_{2},\ldots,C_{T})=\sum_{j=1}^{T}I\left(M_{j};C_{j}\mid M_{<j} \right). \tag{3.4}\] We now lower bound the right-hand side. First, we make the following definition. 
For any \(i\in[T]\), we say that \((M_{i},M_{<i})\) is _informative_ for \(i\) with respect to the input \(C\) and the transcript \(\Pi=(M_{1},\ldots,M_{T})\) if \[\left|\Pr(C_{i}=0\mid M_{i},M_{<i})-\Pr(C_{i}=1\mid M_{i},M_{<i})\right|\geq c \tag{3.5}\] for some constant \(c>0\), and uninformative otherwise. Intuitively, an informative index \(i\) with respect to \((M_{i},M_{<i})\) means that conditioned on the past messages \(M_{<i}\), the message \(M_{i}\) reveals much information about \(C_{i}\); hence, in this case, \(I(M_{i};C_{i}\mid M_{<i})\) would be large. Now for all \(i\in[T]\), let \(p_{i}\) be the probability that \((M_{i},M_{<i})\) is informative (for \(i\) with respect to \(C\) and \(\Pi\)). Conceptually, we need to show that \(\sum_{i}p_{i}\) is large, since then there would be sufficiently many informative messages, and so the information cost on the left-hand side of Equation (3.4) is high. We formalize this idea in the following lemma. **Lemma 3.9**.: _In the setting above, where \(c>0\) is a constant, the information cost can be lower bounded by_ \[I(\Pi;C_{1},C_{2},\ldots,C_{T})=\sum_{j=1}^{T}I\left(M_{j};C_{j}\mid M_{<j} \right)\geq\Omega\left(\sum_{j=1}^{T}p_{j}\right). \tag{3.6}\] Proof.: We start by expanding the definition of the mutual information terms. For each \(j\in[T]\), we have \[I\left(M_{j};C_{j}\mid M_{<j}\right)=H\left(C_{j}\mid M_{<j}\right)-H\left(C_ {j}\mid M_{j},M_{<j}\right). \tag{3.7}\] For the first term, notice that \(C_{j}\) and \(M_{<j}\) are independent by the one-way structure of the communication. Moreover, by definition \(C_{j}\) is Bernoulli with parameter \(1/2\). Therefore, \[H(C_{j}\mid M_{<j})=H(C_{j})=H(1/2)=1.\] For the second term, * either \((M_{j},M_{<j})\) is informative, which holds with probability \(p_{j}\), and in this case, the conditional entropy is upper bounded by \(H\left(C_{j}\mid M_{j},M_{<j}\right)\leq H(1/2+c/2)\); * or \((M_{j},M_{<j})\) is uninformative, and in this case, we trivially upper bound the conditional entropy by \(H\left(C_{j}\mid M_{j},M_{<j}\right)\leq 1\). Putting the observations together and using Equation (3.7), it follows that \[I\left(M_{j};C_{j}\mid M_{<j}\right) =H\left(C_{j}\mid M_{<j}\right)-H\left(C_{j}\mid M_{j},M_{<j}\right)\] \[\geq 1-(p_{j}\cdot H(1/2+c/2)+(1-p_{j})\cdot 1)\] \[=p_{j}-p_{j}\cdot H(1/2+c/2)\] \[\geq p_{j}\left(1-\sqrt{1-c^{2}}\right)\] \[\geq\frac{c^{2}}{5}\cdot p_{j}=\Omega(p_{j}),\] where the second-to-last step uses the upper bound of Lemma 2.1 (which gives \(H(1/2+c/2)\leq 2\sqrt{(1/2+c/2)(1/2-c/2)}=\sqrt{1-c^{2}}\)) and the last step follows since \(1-\sqrt{1-x^{2}}\geq x^{2}/5\) for \(x\in[0,1]\). Summing over \(j=1,2,\ldots,T\) establishes Equation (3.6) and finishes the proof. To prove the claimed information cost inequality, Equation (3.1), we show that \(\sum_{i}p_{i}=\Omega(M)\). **Lemma 3.10**.: _There exists a constant \(\gamma>0\) such that_ \[\sum_{j=1}^{T}p_{j}>\gamma\cdot M.\] Proof.: Suppose by way of contradiction that \(\sum_{j=1}^{T}p_{j}=o(M)\). Let \(A\) be a protocol that sends (possibly random) messages \(M_{1},\ldots,M_{T}\) on a random input \(C\in\{0,1\}^{T}\sim\zeta^{T}\) drawn from the NO distribution, i.e., each coordinate of \(C:=(C_{1},\ldots,C_{T})\) is picked to be \(0\) with probability \(\frac{1}{2}\) and \(1\) with probability \(\frac{1}{2}\). Moreover, suppose \(A\) is a protocol that distinguishes between a YES instance and a NO instance with probability at least \(1-\frac{e^{-2cT}2^{-T}}{8}\), for some constant \(c>0\). 
Since \(p_{i}\) is the probability that \(M_{i}\) is informative, by assumption, the expected number of informative indices \(i\) over the messages \(M_{1},\ldots,M_{T}\) is \(f(M)\) for some \(f(M)=o(M)\). Thus, by Markov's inequality, with probability at least \(\frac{9}{10}\), the number of informative indices is at most \(10f(M)=o(M)\). Let \(S\) be the set of the uninformative indices, so that \(|S|\geq T-10f(M)=T-o(M)\). Let \(C^{\prime}\) be an input that agrees with \(C\) on the informative indices \([T]\setminus S\) and is chosen arbitrarily on the uninformative indices \(S\), so that \(C^{\prime}_{i}=C_{i}\) for \(i\in[T]\setminus S\). By definition, each uninformative index only changes the conditional distribution of the transcript by a \((1\pm c)\) factor. In particular, for \(c\in(0,1/2)\), the probability that the protocol \(A\) generates \(\Pi\) on input \(C^{\prime}\) is at least \((1-c)^{T}\geq e^{-2cT}\) times the probability that the protocol \(A\) generates \(\Pi\) on input \(C\). Since \(C\) can differ from \(C^{\prime}\) on \(S\), then \(C\) can differ from \(C^{\prime}\) on \(|S|=T-10f(M)=T-o(M)\) indices. Now, since each coordinate of \(C\) is picked to be \(0\) with probability \(\frac{1}{2}\) and \(1\) with probability \(\frac{1}{2}\), the probability that \(C\) contains more than \(T-M\) zeros is at most \(T^{M}\cdot\frac{1}{2^{T}}\leq 2^{-T/2}\) for sufficiently large \(T\). But then there exists a choice of \(C^{\prime}\) that contains fewer than \(\frac{M}{2}\) zeros such that \(A\) will also output \(\Pi\) with probability at least \(\frac{e^{-2cT}}{2}\). Since \(C^{\prime}\) contains fewer than \(\frac{M}{2}\) zeros, \(C^{\prime}\) is more likely to be generated from a YES instance; indeed, a YES instance will generate \(C^{\prime}\) with probability at least \(2^{-T}\). On the other hand, since \(\Pi\) corresponds to a transcript for which \(A\) will output NO, the probability that \(A\) is incorrect on \(C^{\prime}\) is at least \(\frac{e^{-2cT}}{4}\), which contradicts the claim that \(A\) succeeds with probability \(1-\frac{e^{-2cT}2^{-T}}{8}\). Thus it follows that \(\sum_{j=1}^{T}p_{j}=\Omega(M)\), as desired. Now we combine Lemma 3.9 and Lemma 3.10. This implies that the information cost can be lower bounded by \[I(\Pi;C_{1},C_{2},\ldots,C_{T})\geq\Omega\left(\sum_{j=1}^{T}p_{j}\right)\geq \gamma M, \tag{3.8}\] for a constant \(\gamma>0\). This completes the proof. 

## 4 Algorithms Against Adaptive Adversaries

In this section, we show that there exist algorithms for the discrete prediction with experts problem that are robust to adaptive inputs. 

### A Near-Optimal Deterministic Algorithm

We first present a simple deterministic algorithm for arbitrary-order streams with oblivious inputs. 
```
0: A stream of length \(T\) with \(n\) experts, upper bound \(M\) on the number of mistakes made by the best expert, and target regret \(R\)
0: A sequence of predictions with regret \(R\)
1: \(k\gets O\left(\frac{nM}{RT}\log n\right)\)
2: \(S\leftarrow\emptyset\)
3: while the stream persists do
4:   if \(S\) is empty then \(\triangleright\) We have cycled through all \(n\) experts once
5:     \(S\leftarrow[n]\)
6:   Let \(P\) be the first \(k\) indices of \(S\)
7:   \(S\gets S\setminus P\)
8:   while \(P\neq\emptyset\) do
9:     For each following day, choose the outcome output by the majority of the experts in \(P\)
10:    Delete the incorrect experts on that day
```
**Algorithm 1** Deterministic algorithm for the experts problem

We now justify the correctness and space complexity of Algorithm 1. **Theorem 4.1**.: _Among \(n\) experts in a stream of length \(T\), suppose the best expert makes \(M\) mistakes. There exists a deterministic algorithm that uses space \(\widetilde{O}\left(\frac{nM}{RT}\right)\) and achieves regret \(R\)._ Proof.: We first remark that the algorithm can make at most \(\log k\leq\log n\) mistakes over the lifespan of each pool of size \(k:=\frac{2nM}{RT}\log n\), because each time the algorithm makes a mistake, at least half of the pool must be incorrect and deleted, so the size of the pool decreases by at least half with each mistake the algorithm makes. Since each pool \(P\) has size \(k\) and there are \(n\) experts, there are at most \(\frac{2n}{k}\) pools before the entire set \(S\), which is initialized to \([n]\), is depleted; that is, at most \(\frac{2n}{k}\) pools suffice to iterate through the entire set of experts. Moreover, each time the algorithm has iterated through the entire set of experts, each expert must have made at least one mistake. This is because an expert is deleted from the pool \(P\) only when it has made a mistake, and since all experts have been deleted from their pools, all experts have made at least one mistake. Since the best expert makes at most \(M\) mistakes, the best expert can be deleted from the pool \(P\) at most \(M\) times. In other words, the algorithm can cycle through the entire set of \(n\) experts at most \(M\) times. Hence, the total number of mistakes by the algorithm is at most \[\frac{2n}{k}\cdot\log n\cdot M\leq\frac{2nRT}{2nM\log n}\cdot\log n\cdot M=RT,\] so the algorithm achieves regret at most \(R\). Since the algorithm stores a subset of \(k=\frac{2nM}{RT}\log n\) experts at a time, the space complexity follows. In light of Theorem 3.8, it is evident that Theorem 4.1 is nearly optimal, up to polylogarithmic factors, for deterministic algorithms, which are automatically adversarially robust. On the other hand, it does not seem necessary that any adversarially robust algorithm must be deterministic. Indeed, we now give a randomized adversarially robust algorithm with better space guarantees. 

### A Randomized Robust Streaming Algorithm

We first recall the following randomized algorithm for arbitrary-order streams with oblivious input, i.e., non-adaptive input: **Lemma 4.2** (Algorithm for oblivious inputs; [12]).: _Let \(R>\frac{16\log^{2}n}{T}\), and suppose the best expert makes at most \(M\leq\frac{RT}{128\log^{2}n}\) mistakes. 
Then there exists an algorithm DiscPred for the discrete prediction with experts problem that uses \(\widetilde{O}\left(\frac{n}{RT}\right)\) space and achieves regret at most \(R\), with probability at least \(1-\frac{1}{\operatorname{poly}(n,T)}\)._ The algorithm of Lemma 4.2 proceeds by sampling pools of \(k=\widetilde{O}\left(\frac{n}{RT}\right)\) experts and running a majority vote on the pool, while iteratively deleting poorly performing experts until no experts remain in the pool, at which point a new pool of \(k\) experts is randomly sampled. The main intuition is that either the pool of experts will perform well and achieve low regret, or the pool will be continuously re-sampled until the best expert is sampled multiple times, after which point it will not be deleted from the pool. Unfortunately, it is not evident that this algorithm is robust to adaptive inputs, because an adversary can potentially learn the experts in each sampled pool and force the experts to make mistakes only on days in which they are sampled by the algorithm. Instead, we use differential privacy to hide the internal randomness of the algorithm and, in particular, the identity of the experts that are sampled in each pool. We run \(\widetilde{O}(\sqrt{T})\) copies of the algorithm and, on each day, output a private median of the copies' predictions with per-round privacy parameter roughly \(\frac{1}{\widetilde{O}(\sqrt{T})}\). Advanced composition, i.e., Theorem 2.9, then ensures \(\left(O(1),\frac{1}{\operatorname{poly}(n)}\right)\)-differential privacy across all \(T\) rounds, so that correctness follows from the generalization properties of DP, i.e., Theorem 2.10. We give our algorithm in full in Algorithm 2. 
```
0: A stream of length \(T\) with \(n\) experts and a target regret \(R\)
0: A sequence of predictions with regret \(R\)
1: Run \(m=O\left(\sqrt{T}\log(nT)\right)\) independent instances of DiscPred with regret \(\frac{R}{4}\)
2: Run PrivMed on the \(m\) instances with privacy parameter \(\varepsilon=O\left(\frac{1}{\sqrt{T}\log(nT)}\right)\) and failure probability \(\delta=\frac{1}{\operatorname{poly}(n,T)}\)
3: At each time \(t\in[T]\), select the output of PrivMed
```
**Algorithm 2** Randomized, robust streaming algorithm for the experts problem

We now show the correctness of our algorithm on adaptive inputs. **Theorem 4.3** (Algorithm for adaptive inputs).: _Let \(R>\frac{64\log^{2}n}{T}\), and suppose the best expert makes at most \(M\leq\frac{R^{2}T}{128\log^{2}n}\) mistakes. Then there exists an algorithm for the discrete prediction with experts problem that uses \(\widetilde{O}\left(\frac{n}{R\sqrt{T}}\right)\) space and achieves regret at most \(R\), with probability at least \(1-\frac{1}{\operatorname{poly}(n,T)}\)._ Proof.: Suppose we run \(m=O\left(\sqrt{T}\log(nT)\right)\) independent instances of DiscPred with regret \(\frac{R}{4}\). Note that for \(R>\frac{64\log^{2}n}{T}\), we have \(\frac{R}{4}>\frac{16\log^{2}n}{T}\), which is a valid input to DiscPred in Lemma 4.2. By Lemma 4.2, each instance succeeds on an arbitrary-order stream with probability at least \(1-\frac{1}{\operatorname{poly}(n,T)}\). By a union bound over the \(m\) instances, all instances succeed with probability at least \(1-\frac{1}{\operatorname{poly}(n,T)}\). In particular, each instance has regret at most \(\frac{R}{4}\), so that the total number of mistakes by each instance is at most \(M+\frac{RT}{4}\). 
Thus, the total number of mistakes by all instances is at most \(m\left(M+\frac{RT}{4}\right)\). To handle an adaptive stream, observe that PrivMed is called with privacy parameter \(O\left(\frac{1}{\sqrt{T}\log(nT)}\right)\) and failure probability \(\frac{1}{\operatorname{poly}(n,T)}\). By Theorem 2.9, the mechanism permits \(T\) adaptive interactions and guarantees \(\left(O(1),\frac{1}{\operatorname{poly}(n,T)}\right)\)-differential privacy. By Theorem 2.10, we have that with high probability, if the output of the algorithm is incorrect, then at least \(\frac{m}{3}\) of the DiscPred instances are also incorrect. Since the total number of mistakes by all instances is at most \(m\left(M+\frac{RT}{4}\right)\), the total number of mistakes by the algorithm is at most \(3\left(M+\frac{RT}{4}\right)\leq M+RT\), since \(M\leq\frac{R^{2}T}{128\log^{2}n}\leq\frac{RT}{8}\). Hence, the algorithm achieves \(R\) regret with high probability. By Lemma 4.2, each instance of DiscPred uses \(\widetilde{O}\left(\frac{n}{RT}\right)\) space. Since we use \(m=O\left(\sqrt{T}\log(nT)\right)\) independent instances of DiscPred, the total space is \(\widetilde{O}\left(\frac{n}{R\sqrt{T}}\right)\). 

## Acknowledgements

We thank Binghui Peng for helpful discussions. David P. Woodruff and Samson Zhou were supported by a Simons Investigator Award and by the National Science Foundation under Grant No. CCF-1815840. Fred Zhang was supported by ONR grant N00014-18-1-2562.
2302.01420
Multicolour Optical Variability Monitoring of Blazars with High Time Resolution
We carried out a high time-resolution, multicolour optical observing campaign for eight $\gamma$-ray detected blazars during 2010-2020. We analyze flux variations, correlations between magnitudes and colours on different timescales. Intraday variability (IDV) is detected in all eight sources of our sample. A bluer-when-brighter (BWB) chromatic trend is dominant on intraday timescales. On the short timescales, the BWB trend only shows up in ON 231, 3C 279, BL Lacertae and 1E 1458.8+2249. There is a BWB trend in 3C 279 on the long timescale. We estimate the upper limits of black hole mass for three blazars (i.e. ON 231, 1ES 1426+42.8, PKS 1510-089) using variability timescales. On April 13, 2010 a potential quasi-periodic oscillation (QPO) with a period of $P=48.67\pm13.90$ minutes is found in 1ES 1426+42.8. The light curve on March 16, 2021 further shows the existence of the QPO phenomenon. The QPO in this target deserves further observation and confirmation.
X. Chang, T. F. Yi, D. R. Xiong, C. X. Liu, X. Yang, H. Z. Li, Y. L. Gong, W. W. Na, Y. Li, Z. H. Chen, J. P. Chen, L. S. Mao
2023-02-02T21:18:06Z
http://arxiv.org/abs/2302.01420v1
# Multicolour Optical Variability Monitoring of Blazars with High Time Resolution ###### Abstract We carried out a high time-resolution, multicolour optical observing campaign for eight \(\gamma\)-ray detected blazars during 2010-2020. We analyze flux variations, correlations between magnitudes and colours on different timescales. Intraday variability (IDV) is detected in all eight sources of our sample. A bluer-when-brighter (BWB) chromatic trend is dominant on intraday timescales. On the short timescales, the BWB trend only shows up in ON 231, 3C 279, BL Lacertae and 1E 1458.8+2249. There is a BWB trend in 3C 279 on the long timescale. We estimate the upper limits of black hole mass for three blazars (i.e. ON 231, 1ES 1426+42.8, PKS 1510-089) using variability timescales. On April 13, 2010 a potential quasi-periodic oscillation (QPO) with a period of \(P=48.67\pm 13.90\) minutes is found in 1ES 1426+42.8. The light curve on March 16, 2021 further shows the existence of the QPO phenomenon. The QPO in this target deserves further observation and confirmation. keywords: BL Lacertae objects: individual (OJ 287, ON 231, 1ES 1426+42.8, 1E 1458.8+2249, OT 546, BL Lacertae) -- galaxies: photometry -- quasars: individual (3C 279, PKS 1510-089) ## 1 Introduction Active galactic nuclei (AGNs) are the brightest long-lived objects in the Universe, which are powered by the accretion process of Super-Massive Black Holes (SMBHs) in the center of galaxies (Esposito et al. 2015; Xiong et al. 2017). Blazars are the most extreme subclass of AGNs, with relativistic jets pointing in the direction of the observer. They are characterized by rapid optical variation, high luminosity, high and variable polarization, apparent superluminal motion, non-thermal continuous radiation, and high-energy gamma-ray radiation (Urry & Padovani 1995). The two subclasses of blazars are BL Lacertae objects (BL Lacs) and flat-spectrum radio quasars (FSRQs). BL Lacs have featureless optical spectra, while FSRQs have broad emission lines in their optical spectra (Stocke et al. 1991; Marcha et al. 1996). The typical spectral energy distributions (SEDs) of blazars have two peaks (Fossati et al. 1998). The low-frequency peak ranges from radio to UV or X-ray, and the high-frequency peak extends from X-ray to \(\gamma\)-ray (Abdo et al. 2010a). The former is dominated by the synchrotron emission of relativistic electrons, while the latter can be explained by the inverse Compton scattering (SSC or EC), synchrotron radiation of protons, lepto-hadronic (P\(\gamma\)) models, or hadronic (PP) models (e.g., Ghisellini et al. 2010; Dermer et al. 2012; Bottcher et al. 2013). According to different positions of synchrotron peak frequencies, blazars can be further divided into high-synchrotron-peaked blazars (HSP), intermediate-synchrotron-peaked blazars (ISP), and low-synchrotron-peaked blazars (LSP) (Abdo et al. 2010a). Variability is a useful tool to explore the nature of blazars. The variability of blazars can be broadly divided into intraday variability (IDV), short-term variability (STV), and long-term variability (LTV). Variations in the flux of a few tenths or hundredths of a magnitude over a timescale of tens of minutes to a few hours are often called IDV (Wagner & Witzel, 1995). The timescale of STV ranges from days to months, and LTV ranges from months to years (Gupta et al., 2008; Dai et al., 2015). 
Some blazars show high amplitudes of variability on timescales as short as several minutes in different wavebands (Dai et al., 2001; Xie et al., 2002; Sagar et al., 2004; Fan et al., 2009). The minimum variability timescale is related to the mass of the central black hole. Therefore, the mass of black holes can be constrained by the observed minimum timescales of rapid variations in the optical regimes (Liu & Bai, 2015). Observations of the IDV can be used to derive the sizes of the central regions and the black hole masses in blazars (Gupta et al., 2009). Models related to jets and accretion disks have been proposed to explain the variability behaviors on different timescales, but many details of the models are still under discussion (Bhatta, 2021). The optical brightness/flux variations are associated with the colour/spectral variations for blazars (Gu & Ai, 2011; Xiong et al., 2016). The correlations between magnitudes and colours are often used to constrain the origin of flux variations (Agarwal et al., 2016). The bluer-when-brighter (BWB) trend was found in most BL Lacs, while FSRQs show the redder-when-brighter (RWB) trend (Gu et al., 2006). There are even more complicated correlations than the BWB trend and RWB trend between magnitudes and colours (e.g., Bonning et al., 2012). However, a consolidated framework of correlations between magnitudes and colours in different blazars has not yet been established. More multi-colour observations are still needed to study the correlations between magnitudes and colours. The periodic or quasi-periodic oscillations (QPOs) of flux variations were reported in some blazars (Sillanpaa et al., 1988). The QPOs of IDV were only detected in a few blazars (Gupta et al., 2009; Hong et al., 2018). Urry et al. (1993) found that PKS 2155-304 showed a potential QPO of 0.7 day in the UV and optical bands. Lachowicz et al. (2009) reported a QPO of \(\sim\)4.6 hours in the XMM-Newton X-ray light curve of PKS 2155-304. Espaillat et al. (2008) reported a QPO of \(\sim\)55 minutes in the XMM-Newton light curve of the quasar 3C 273. Gupta et al. (2009) detected high-probability QPOs of IDV with timescales between \(\sim\)25 and \(\sim\)73 minutes for S5 0716+714, which was the first good evidence for quasi-periodic components in the optical IDV of blazars. Hong et al. (2018) also found that S5 0716+714 had a possible QPO of about 50 minutes at the 99% confidence level. If the observed timescale of periodic variability indicates an innermost stable orbital period from the accretion disk, then the QPOs of IDV can be used to estimate/limit the black hole mass (e.g., Gupta et al., 2009; Dai et al., 2015). In order to find more QPOs of IDV and further understand them, new optical intraday observations need to be carried out. In this work, we present multi-colour photometric data for a sample of eight sources from 2010 to 2020 with high time-resolution. We analyze flux variations, correlations between magnitudes and colours on different timescales. QPOs of IDV are detected in one of the eight blazars. Considering the available telescope time and the observation limitations of the telescope, we chose the eight sources as our observed sample. All of the eight sources were detected with gamma-ray radiation (Ajello et al., 2020). The sample includes all sub-classes of blazars. Seven sources have been detected with TeV radiation (see Table 1) and the remaining target is included in the third Fermi-LAT catalog of high-energy sources (3FHL, \(>\) 100 GeV Fermi-LAT event) (Ajello et al., 2017). 
The \(R\)-band magnitudes of our sample range from 12 to 17. This paper is organized as follows. Section 2 describes the observations and data reduction. Section 3 presents the results. Discussion and conclusions are reported in Sections 4 and 5. ## 2 Observations and data reduction We observed the eight blazars with the 1.02 m optical telescope administered by Yunnan Astronomical Observatories (YNAO) of China. During our observation periods, the 1.02 m telescope was equipped with an Andor DW436 CCD (2048 \(\times\) 2048 pixels) camera at the Cassegrain focus (\(f\) = 13.3 m). The field of view (FOV) of the CCD image is 7.3 \(\times\) 7.3 arcmin\({}^{2}\). The pixel scale is 0.21 arcsec/pixel (Dai et al., 2015; Xiong et al., 2016). The readout noise and gain are 6.33 electrons/pixel and 2.0 electrons/ADU, respectively (Dai et al., 2015; Liao et al., 2014). The standard Johnson broadband filters are used for all frames. Our multi-band photometry observations are performed through a cyclic mode in the \(V\), \(R\) and \(I\) bands. The exposure time ranges from 1 to 6 minutes. Therefore, the data can be considered as quasi-simultaneous measurements (\(<\)10 minutes). Different exposure times are set depending on seeing, weather conditions, and the brightness of sources. The twilight sky flat-field images are taken in good weather conditions. Several bias frames are taken at the beginning of the night's observation. The data processing is carried out by using standard Image Reduction and Analysis Facility (IRAF1) software. Aperture photometry is carried out by using the APPHOT task of IRAF after the flat-field and bias corrections are applied. We tried different photometry apertures every night and then selected the best aperture radius from 0.5-2.0 Full Width at Half Maximum (FWHM) to obtain the best signal-to-noise ratio. For all of the eight blazars, at least three local comparison stars are required in the same frame. The magnitudes and finding charts of comparison stars are obtained from the Web page (Finding Charts for AGN2). A comparison star with colours similar to the blazar is chosen for flux calibration. We transformed the instrumental magnitude of the blazar to the apparent magnitude using differential photometry (Bai et al., 1998; Zhang et al., 2004; Fan et al., 2014). The star with the smallest variation in differential magnitudes compared to the comparison star is chosen as the check star. The rms errors of the photometry of a specific night are derived based on the comparison star and the check star as follows: \[\sigma=\sqrt{\frac{\sum_{i=1}^{N}(m_{i}-\overline{m})^{2}}{N-1}}, \tag{1}\] where \(m_{i}\) is the differential magnitude between the comparison star and the check star, \(\overline{m}\) is the average differential magnitude for one night, and \(N\) is the number of observations in a given night. Footnote 1: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. Footnote 2: [https://www.lsw.uni-heidelberg.de/projects/extragalactic/charts/](https://www.lsw.uni-heidelberg.de/projects/extragalactic/charts/) 
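For concreteness, a minimal Python sketch of Eq. (1) is given below; the numbers are purely illustrative (hypothetical differential magnitudes, not values from our observations):

```python
import numpy as np

def photometric_rms(diff_mags):
    """Eq. (1): sample standard deviation of the nightly differential
    magnitudes between the comparison star and the check star."""
    m = np.asarray(diff_mags, dtype=float)
    return np.sqrt(np.sum((m - m.mean()) ** 2) / (m.size - 1))  # == m.std(ddof=1)

rng = np.random.default_rng(7)
diff_mags = 0.532 + rng.normal(0.0, 0.005, size=40)  # hypothetical night of data
print(f"sigma = {photometric_rms(diff_mags):.4f} mag")  # ~0.005 mag
```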
Intraday variability amplitude (Amp) is introduced by Heidt & Wagner (1996), and defined as: \[Amp=100\times\sqrt{(A_{max}-A_{min})^{2}-2\sigma^{2}}\ \%, \tag{2}\] where \(A_{max}\) and \(A_{min}\) are the maximum and minimum magnitudes of the light curve for the night being considered, respectively, and \(\sigma\) is the rms error. We obtained the optical multi-band observation data of eight targets in a total of 36 nights from 2010 to 2020. Table 1 lists the names, redshifts, types and TeV radiations. The light curves of all eight sources in IDV, STV, and LTV are given in Figures 1-3, respectively. Our observational data are listed in Table 2. ## 3 Results ### Variability Analysis In order to quantify the IDV/microvariation, we employed two different statistical methods: the F-test and the ANOVA-test (one-way analysis of variance). The F-test is considered as a proper statistic to quantify the optical variability (de Diego 2010; Joshi et al. 2011; Hu et al. 2014; Agarwal & Gupta 2015). The \(F\) value is calculated as: \[F_{1}=\frac{Var(BL-StarA)}{Var(StarA-StarB)}, \tag{3}\] \[F_{2}=\frac{Var(BL-StarB)}{Var(StarA-StarB)}, \tag{4}\] where \(StarA\) is the comparison star, \(StarB\) is the check star, \(BL\) is the blazar, and \(Var(\cdot)\) denotes the variance of the differential instrumental magnitudes. The \(F\) value from the average of \(F_{1}\) and \(F_{2}\) is compared with the critical value \(F_{\nu_{bl},\nu_{\ast}}^{\alpha}\), where \(\nu_{bl}\) and \(\nu_{\ast}\) are the numbers of degrees of freedom for the blazar and the comparison star, respectively (\(\nu=N\) - 1), and \(\alpha\) is the significance level set as 0.01 (2.6\(\sigma\)) (Xiong et al. 2016). If the average \(F\) value is larger than the critical value, then the blazar is variable at a 99% confidence level. ANOVA is a robust and powerful estimator for microvariations (de Diego 2010). We use ANOVA in the analysis because it relies on the expected variance from subsamples of the data rather than on the measurement errors. According to the exposure time, we bin the data in groups of three or five observations (de Diego 2010; Xiong et al. 2016). This method is only applicable for light curves with more than nine observations in a given night. If the measurements in the last group are fewer than three or five, then they are combined with the previous group. The \(F_{\nu_{1},\nu_{2}}^{\alpha}\) is the critical value of ANOVA, where \(\nu_{1}=k-1\) (\(k\) is the number of groups), \(\nu_{2}=N-k\) (\(N\) is the number of measurements), and \(\alpha\) is the significance level (Hu et al. 2014). The blazar is considered to be in a variable status if the light curves of a night satisfy the above two criteria and follow the normal distribution. Whether the light curves follow a normal distribution is indicated with N/Y (meaning no or yes) in column 5 of Table 3. The blazar is considered to be in a probably variable status if one of the above two criteria is satisfied and the light curves conform to the normal distribution. The blazar is considered to be in a non-variable status if none of the criteria are satisfied, or the light curves do not conform to the normal distribution. The results of the IDV analysis are shown in Table 3. IDVs were found in 15 nights. The corresponding light curves are given in Figure 1. ### Intraday Variability, Short-Term Variability and Long-Term Variability In this subsection, based on previous literature, we report some new observational results for each of our sources. 
1. OJ 287. The BL Lac object OJ 287 is one of the most widely observed extragalactic objects. Its redshift is \(z=0.306\) (Gupta et al. 2008b). Sillanpaa et al. (1988) discovered that OJ 287 has a \(\sim\)12-year periodicity and proposed a binary black hole model to explain this period. OJ 287 reached its brightest state with \(V=12\) mag in 1972 (Qian & Tao 2003; Fan et al. 2009). In 2015, there was an outburst reaching 12.9 mag in the optical \(R\) band (Valtonen et al. 2016). Fan et al. (2009) detected IDV of OJ 287. The timescale was from 10 minutes to 2 hours and the magnitude variation was from 0.11 mag to 0.75 mag. Furthermore, OJ 287 usually shows a BWB trend in colour behavior (Takalo & Sillanpaa 1989; Carini et al. 1992; Vagnetti et al. 2003; Wu et al. 2006; Villforth et al. 2010). We observed OJ 287 in the \(V\), \(R\), and \(I\) bands on April 3, 2013, and March 13 through 16, 2020. IDV was detected on one day. On March 13, 2020, it brightened monotonically by \(\Delta I=0.069\) mag in 265.28 minutes from JD = 2458922.036 to JD = 2458922.220, \(\Delta R=0.052\) mag in 218.17 minutes from JD = 2458922.039 to JD = 2458922.191, and \(\Delta V=0.053\) mag in 185.67 minutes from JD = 2458922.042 to JD = 2458922.171. The variability amplitudes on March 13, 2020 were 9.47%, 5.15%, and 5.23% in the \(I\), \(R\), and \(V\) bands, respectively. The light curve on the short-term timescale showed an obvious brightening: the source brightened by 0.237 mag, 0.191 mag, and 0.207 mag in the \(V\), \(R\), and \(I\) bands, respectively, from March 13 to March 16, 2020. 2. ON 231. ON 231 (1219+285) was classified as a BL Lac object and its redshift is \(z=0.102\) (Weistrop et al. 1985). ON 231 has a variation with a characteristic timescale of the order of years (Pollock et al. 1979). From April to May 1988, an abnormal outburst was observed by Massaro et al. (1999), and the \(R\)-band magnitude reached a maximum brightness of 12.2 mag. Tosti et al. (1998) carried out multi-band optical monitoring of the source for three years (1994-1997). The light curve of the source from 1994 to 1997 showed the brightest (\(\sim 13.5\) mag) and faintest (\(\sim 15.0\) mag) states in the \(R\) band. Gupta et al. (2008b) observed ON 231 on January 11, 2007 in the \(R\) band. In their observations, the source did not show IDV during that night. They observed the source at \(R\sim 15.0\) mag. They argued that the source was in the low state, which was comparable to the faintest state observed by Tosti et al. (1998). We observed ON 231 in the \(I\) band on May 8, 2014, and in the \(V\), \(R\), and \(I\) bands on April 14 and 15, 2020. IDVs were detected on all three days. On May 8, 2014, we detected that the source brightened by 0.304 mag within 16 minutes and then darkened by 0.3 mag within 27 minutes in the \(I\) band (see Figure 1). The variability amplitude was 30.31% on May 8, 2014. On April 14, 2020, the magnitude variations were \(\Delta V=0.085\) mag, \(\Delta R=0.059\) mag and \(\Delta I=0.123\) mag. On April 15, 2020, the source brightened by 0.031 mag in the \(I\) band. From April 14 to April 15, 2020, the largest magnitude variations in the \(V\), \(R\), and \(I\) bands were \(\Delta V=0.199\) mag, \(\Delta R=0.152\) mag, and \(\Delta I=0.145\) mag, respectively. As can be seen from Figure 2, the light curve showed an obvious brightening. 3. 3C 279. 3C 279 (\(z=0.536\)) is one of the most prominent blazars and extremely variable at all wavelengths (Katajainen et al. 2000). 
In the optical band, it showed fast and significant outbursts on a single night (e.g., Miller & Noble 1996). Rapid variations have also been observed in the IR, UV, X-ray, and \(\gamma\)-ray bands (e.g., Fan 1999). Xie et al. (1999) reported the most rapid optical variation \(\Delta V=1.17\) mag in 40 minutes on May 22, 1996. Webb et al. (1990) reported that the source brightened by \(\sim\)2.0 mag within 24 hours. Gupta et al. (2008b) reported a large variation \(\Delta R=1.5\) mag of the source in the time span of 42 days (January-February 2007), which was a much larger variation than the previous \(\Delta R=0.91\) mag in 49 days (April-June 2001) reported by Xie et al. (2002). We observed 3C 279 for 11 days from 2016 to 2018. IDVs were detected on two days. On May 31, 2016, the brightness variation of the source was \(\Delta V=0.195\) mag (Amp = 19.28%). On May 8, 2018, the source brightened by \(\Delta V=0.118\) mag and \(\Delta R=0.099\) mag within 74.9 minutes, and then darkened by \(\Delta V=0.19\) mag and \(\Delta R=0.126\) mag within 171.75 and 123.42 minutes, respectively. The variability amplitudes in the \(V\) and \(R\) bands were 12.15% and 9.59%, respectively. The light curves on the long-term timescale are shown in Figure 3. 4. 1ES 1426+42.8. Abdo et al. (2010b) classified 1ES 1426+42.8 (\(z=0.129\)) as a BL Lac object based on its spectral energy distribution. It is a TeV source. Its spectrum has been measured by the High Energy Gamma-Ray Astronomy (HEGRA) telescopes up to 10 TeV (Costamante et al. 2003). In addition, it is a low-power, high-synchrotron-peak (\(\nu_{peak}>100\,\mathrm{keV}\)) BL Lac object (Costamante et al. 2003). \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline Object & Date & Band & N & Normal & F & Fc(99) & Fa & Fa(99) & V/N & A\% & Ave(mag) \\ \hline OJ 287 & 2013 Apr 03 & I & 11 & Y & 0.71 & 4.85 & 0.34 & 8.65 & N & 2.83 & 14.17 \\ & 2013 Apr 03 & R & 11 & N & 1.34 & 4.85 & 4.08 & 8.65 & N & 3.33 & 14.76 \\ &... &... &... &... &... &... &... &... &... &... &... \\ ON 231 & 2014 May 08 & I & 46 & Y & 20.01 & 2.01 & 9.55 & 3.04 & V & 30.31 & 14.89 \\ & 2020 Apr 14 & I & 57 & Y & 27.86 & 1.87 & 11.19 & 2.73 & V & 9.98 & 14.68 \\ &... &... &... &... &... &... &... &... &... &... &... \\ 3C 279 & 2016 May 07 & I & 12 & Y & 1.03 & 4.46 & 2.02 & 7.56 & N & 5.57 & 15.08 \\ & 2016 May 07 & R & 13 & Y & 2.17 & 4.16 & 0.30 & 7.21 & N & 11.60 & 15.76 \\ &... &... &... &... &... &... &... &... &... &... &... \\ 1ES 1426+42.8 & 2010 Apr 13 & I & 31 & Y & 130.61 & 2.39 & 5.33 & 3.86 & V & 14.89 & 15.60 \\ &... &... &... &... &... &... &... &... &... &... &... \\ 1E 1458.8+2249 & 2011 May 26 & I & 11 & Y & 141.88 & 4.85 & 29.82 & 8.65 & V & 96.53 & 16.43 \\ & 2020 Apr 14 & I & 51 & Y & 2.07 & 1.95 & 5.48 & 2.88 & V & 8.70 & 14.93 \\ &... &... &... &... &... &... &... &... &... &... &... \\ PKS 1510-089 & 2010 Apr 13 & R & 23 & Y & 116.36 & 2.72 & 2.47 & 4.58 & PV & 49.28 & 16.55 \\ & 2010 May 06 & R & 45 & Y & 257.05 & 2.04 & 8.21 & 3.05 & V & 42.39 & 16.60 \\ &... &... &... &... &... &... &... &... &... &... &... \\ OT 546 & 2010 May 06 & I & 17 & N & 8.73 & 3.41 & 0.34 & 5.41 & N & 12.06 & 14.97 \\ & 2010 May 06 & R & 16 & Y & 3.82 & 3.52 & 1.27 & 5.67 & PV & 26.39 & 15.47 \\ &... &... &... &... &... &... &... &... &... &... &... \\ BL Lacertae & 2019 Sep 16 & B & 33 & Y & 2.60 & 2.32 & 4.68 & 3.79 & V & 9.68 & 15.04 \\ & 2019 Sep 16 & I & 30 & Y & 0.53 & 2.42 & 3.39 & 3.90 & N & 4.43 & 12.65 \\ &... &... &... &... &... &... &... &... &... &... &...
\\ \hline \end{tabular} Notes: Column 1 is the name of the object; Column 2 is the date of the observation; Column 3 is the observed band; Column 4 is the number of data points; Column 5 is the result of the normal distribution of the light curve; Column 6 is the average \(F\) value; Column 7 is the critical \(F\) value with 99% confidence level; Column 8 is the \(F\) value of ANOVA; Column 9 is the critical \(F\) value of ANOVA with 99% confidence level; Column 10 is the variability status (V: variable, PV: probably variable, N: non-variable); Column 11 is the variability amplitude; Column 12 is daily average magnitudes. (The full Table 3 can be accessed electronically in machine readable format.) \end{table} Table 3: Results of the IDV analysis. Most of the researches of 1ES 1426+42.8 focus on the high-energy band, while the optical bands remain less explored. In the blazar monitoring program of Kurtanidze et al. (2009) from May 1997 to September 2006 (66 days), 1ES 1426+42.8 had a small variation of 0.10 \(\pm\) 0.05 mag in the \(R\) band. Leonardo et al. (2009) observed 1ES 1426+42.8 in May-June 2008 in \(R\) band. It is brightened by 20 % during the observation. The IDV is detected in our observations of 1ES 1426+42.8 in the \(I\) band on April 13, 2010. 1ES 1426+42.8 had a rapid outburst of \(\Delta R=0.149\) mag within 20.68 minutes from JD = 2455300.244 to JD = 2455300.258. The variability amplitude was 14.89% on April 13, 2010. We also ob \begin{table} \begin{tabular}{c c c c} \hline \hline Object & Date & r & p \\ \hline OJ 287 & 2013 Apr 03 & 0.930 & \(<0.0001\) \\ OJ 287 & 2020 Mar 14 & 0.821 & \(<0.0001\) \\ OJ 287 & 2020 Mar 15 & 0.501 & 0.0001 \\ OJ 287 & 2020 Mar 16 & 0.786 & 0.0010 \\ ON 231 & 2020 Apr 14 & 0.614 & \(<0.0001\) \\ ON 231 & 2020 Apr 15 & 0.815 & \(<0.0001\) \\ 3C 279 & 2016 May 07 & 0.948 & \(<0.0001\) \\ 3C 279 & 2016 May 27 & 0.866 & \(<0.0001\) \\ 3C 279 & 2016 May 28 & 0.954 & \(<0.0001\) \\ 3C 279 & 2016 May 31 & 0.724 & \(<0.0001\) \\ 3C 279 & 2018 May 07 & 0.639 & \(<0.0001\) \\ 3C 279 & 2018 May 08 & 0.628 & \(<0.0001\) \\ 3C 279 & 2018 May 09 & 0.290 & 0.1600 \\ 3C 279 & 2018 May 10 & 0.390 & 0.0060 \\ 3C 279 & 2018 Jun 06 & 0.839 & 0.0006 \\ 3C 279 & 2018 Jun 07 & 0.532 & 0.0900 \\ 1E 1458.8+2249 & 2020 Apr 14 & 0.842 & \(<0.0001\) \\ 1E 1458.8+2249 & 2020 Apr 15 & 0.798 & \(<0.0001\) \\ PKS 1510-089 & 2013 Apr 02 & 0.816 & 0.0002 \\ PKS 1510-089 & 2016 May 07 & 0.792 & 0.0100 \\ PKS 1510-089 & 2016 May 08 & 0.767 & 0.0160 \\ PKS 1510-089 & 2016 May 09 & 0.796 & 0.0030 \\ OT 546 & 2010 May 06 & 0.740 & 0.0020 \\ OT 546 & 2016 May 07 & 0.510 & 0.0009 \\ OT 546 & 2016 May 08 & 0.677 & 0.0002 \\ OT 546 & 2016 May 09 & 0.562 & 0.0050 \\ OT 546 & 2016 May 12 & 0.418 & 0.1200 \\ BL Lacertae & 2019 Sep 16 & 0.549 & 0.0030 \\ BL Lacertae & 2019 Sep 17 & 0.577 & 0.0490 \\ BL Lacertae & 2019 Sep 19 & 0.670 & \(<0.0001\) \\ OJ 287 & 2020 Mar 13 - 2020 Mar 16 & 0.002 & 0.9830 \\ ON 231 & 2020 Apr 14 - 2020 Apr 15 & 0.852 & \(<0.0001\) \\ 3C 279 & 2016 May 07 - 2016 May 31 & 0.740 & \(<0.0001\) \\ 3C 279 & 2018 May 07 - 2018 Jun 07 & 0.208 & 0.0040 \\ 1E 1458.8+2249 & 2020 Apr 14 - 2020 Apr 15 & 0.628 & \(<0.0001\) \\ PKS 1510-089 & 2016 May 07 - 2016 May 09 & -0.431 & 0.0170 \\ OT 546 & 2016 May 07 - 2016 May 12 & 0.029 & 0.7740 \\ BL Lacertae & 2019 Sep 16 - 2019 Sep 19 & 0.897 & \(<0.0001\) \\ 3C 279 & 2016 - 2018 & 0.835 & \(<0.0001\) \\ \hline \end{tabular} Notes: Column 1 is the name of the object; Column 2 is the date of the observation; Column 3 is the 
coefficient of correlation; Column 4 is the chance probability. \end{table} Table 4: Results of error-weighted linear regression analysis. served three consecutive rising and falling brightness variations of 1ES 1426+42.8 on April 13, 2010. 5. 1E 1458.8+2249. Heidt & Wagner (1998) classified 1E 1458.8 + 2249 as a BL Lac object. Fiorucci et al. (1998) reported that the magnitudes of this source are \(V\) = \(15.58\pm 0.01\), \(R\) = \(15.06\pm 0.01\), and \(I\) = \(14.60\pm 0.01\), respectively. Massaro et al. (2003) carried out optical multi-band observations of 1E 1458.8+2249 from 1994 to 2001. The \(R\) band magnitude varied between 15.5 mag and 16.5 mag from 1994 to 1998. They observed a flare with \(R\) = \(14.75\) mag in January 2000. On February 8, 2001, it reached 14.62 mag in the \(R\) band. They observed the magnitudes on February 20, 2001 as follows: \(V\) = \(15.19\pm 0.06\) mag, \(B\) = \(15.53\pm 0.05\) mag, and \(U\) = \(14.99\pm 0.05\) mag, and gave the magnitudes on February 21 as follows: \(I\) = \(14.29\pm 0.03\) mag, \(R\) = \(14.79\pm 0.03\) mag, and \(V\) = \(15.17\pm 0.04\) mag, respectively. We observed 1E 1458.8 + 2249 in the \(I\) band on 2011 May 26, and in the \(V\), \(R\), and \(I\) bands from April 14 to April 15, 2020. The IDVs were detected on May 26, 2011 and April 14, 2020 in the I band. On May 26, 2011, the I band magnitude ranges from 15.98 to 16.95 mag. Figure 1 shows that 1E 1458.8 + 2249 had a drastic outburst of 0.97 mag. On April 14, 2020, the brightness of the I band was between 14.99 mag and 15.09 mag, and the variability amplitude was 8.70%. 6. PKS 1510-089. PKS 1510-089 (\(z\) = 0.361) is one of the most extreme AGNs, exhibiting strong and rapid variability in all wave bands (Rani et al. 2011). It is a highly polarized quasar in the optical band (Kataoka et al. 2008). It is observed with variations in the optical bands of 0.65 mag in 41 minutes on June 14, 1999 (Dai et al. 2001). The optical variations were 2.00 mag in 41 minutes on May 29, 2000, and 0.85 mag in 44 minutes on April 16, 2001 (Xie et al. 2001, 2002). These variations had similar timescales (\(\sim\) 42 min). Wu et al. (2005) reported that on March 08, 2002, the source brightness in the \(R\) band dropped from 16.62 \(\pm\) 0.12 to 17.92 \(\pm\) 0.15 mag within 19 minutes, and rose back to 16.57 \(\pm\) 0.15 mag in 16 minutes. They found that the actual timescale of the minimum was 35 minutes. We observed PKS 1510-089 for 11 days from 2010 to 2016. The IDVs were detected on five days. On May 6, 7, 8, 2010, May 6, 2011, and April 2, 2013, the variations of magnitude were \(\Delta R\) = 0.424 mag, \(\Delta R\) = 0.341 mag, \(\Delta R\) = 0.187 mag, \(\Delta R\) = 0.227 mag, and \(\Delta I\) = 0.412 mag, respectively. The variability amplitudes were 42.39%, 34.09%, 18.69%, 22.69%, and 57.65%, respectively. The light curves of the short-term timescale and the long-timescale are shown in Figure 2 and Figure 3. 7. OT 546. OT 546 (1727+502) was discovered by Zwicky (1966) and classified as a BL Lac object by Angel & Stockman (1980). Oke (1978) measured the redshift of OT 546 as \(z=0.0554\pm 0.0003\). Fan (1995) reported the mass of its SMBH as \(M_{\bullet}\) = \(10^{8.73}M_{\odot}\). From 1975 to 1987, Kinman (1976) reported that the average brightness in the \(V\) band was around 16 mag. Pica et al. (1988) reported an average magnitude of 16.70 mag in the \(B\) band from 1975 to 1987. The monitoring of Katajainen et al. 
(2000) showed that the brightness varied between 15.76 mag and 16.12 mag in the \(V\) band from 1996 to 1997. Guo et al. (2014) detected that the \(V\) band varied between 15.72 mag and 16.05 mag from February 16 to July 1, 2009. We observed OT 546 in the \(V\), \(R\), and \(I\) bands on May 6, 2010, and from May 7 to May 12, 2016. The IDV was detected on May 07, 2016 in the \(I\) band with a magnitude variation of \(\Delta I=0.142\) mag. The variability amplitude was 13.80% on May 07, 2016. During the observation period between May 7 and May 12, 2016, the largest magnitude variations in the \(V\), \(R\), and \(I\) bands were \(\Delta V=0.284\) mag, \(\Delta R=0.355\) mag, and \(\Delta I=0.423\) mag, respectively.

8. BL Lacertae. BL Lacertae (2200+420) is the prototype of the BL Lac class, and its redshift is \(z=0.069\) (Miller & Hawley 1977). Woo & Urry (2002) found that its black hole mass was \(M_{\bullet}=10^{8.23}M_{\odot}\). It is highly variable from radio to \(\gamma\)-ray. A rapid optical variation of 1.3 mag in 20 hours was detected in the \(V\) band by Weistrop (1973). On August 25, 1996, it faded by \(\Delta B=0.73\) mag within 53 minutes, and on October 19, 1996, it faded by \(\Delta I=0.55\) mag within 70 minutes (Xie et al. 1999). Li et al. (2021a) reported a violent variation on September 15, 2017 in the \(B\) band with a variability amplitude of 16.5% (0.17 mag). The brightest \(R\)-band magnitude was 12.95 mag on September 17, 2017, while the faintest \(R\)-band magnitude was 13.55 mag on October 28, 2018. We observed BL Lacertae in the \(B\), \(V\), \(R\), and \(I\) bands from September 16 to September 19, 2019. The IDV was detected on September 16, 2019 in the \(B\) band with a magnitude variation of \(\Delta B=0.099\) mag. The light curve of the short-term timescale showed an obvious brightening from September 16 to September 19, 2019. The brightness variations of the source were \(\Delta B=0.549\) mag, \(\Delta V=0.481\) mag, \(\Delta R=0.426\) mag, and \(\Delta I=0.433\) mag within 4 days, respectively.

### Auto-correlation Analysis and Variability Timescale

We performed the auto-correlation function (ACF) analysis (Alexander 1997) to search for the characteristic timescale of variability. The ACF is written as \[ACF(\tau)=\left\langle\left(m\left(t\right)-\left\langle m\right\rangle\right)\cdot\left(m\left(t+\tau\right)-\left\langle m\right\rangle\right)\right\rangle, \tag{5}\] where the brackets denote a time average. The ACF measures the correlation of the optical light curve with itself, shifted in time, as a function of the time lag \(\tau\) (Giveon et al. 1999; Xiong et al. 2017). The width of the ACF peak near zero time lag is proportional to the timescale if there is an underlying signal in the light curve with a typical variability timescale (Giveon et al. 1999; Liu et al. 2008). The zero-crossing time is the shortest time required for the ACF to fall to zero (Alexander 1997). Following Liu et al. (2008), Xiong et al. (2017), and Giveon et al. (1999), we choose the zero-crossing time of the ACF as the variability timescale. The variability timescale of the ACF is related to the characteristic size scale of the corresponding emission region (Chatterjee et al. 2012). The ACF was estimated with the code from Alexander (1997). Only nights with detected IDV are analyzed with the ACF. The results of the ACF analysis that can determine the time delay are shown in Figure 4. Following Giveon et al. (1999), a fifth-order polynomial least-squares fit was chosen to find the zero-crossing time, with the constraint that ACF(\(\tau=0\)) = 1. In order to obtain a better fit to the light curves of 1ES 1426+42.8 (\(I\) band on April 13, 2010) and PKS 1510-089 (\(R\) band on May 06, 2010), we performed a ninth-order polynomial least-squares fit for 1ES 1426+42.8 and an eighth-order polynomial least-squares fit for PKS 1510-089. We adopted an error-weighted polynomial fitting method when performing the polynomial fits. The fitting results show that the detected variability timescales of the IDVs are 0.0322 day for the \(I\) band of ON 231 on May 08, 2014, 0.01 day for the \(I\) band of 1ES 1426+42.8 on April 13, 2010, and 0.0295, 0.0298, and 0.036 day for the \(R\) band of PKS 1510-089 on May 06, 2010, May 08, 2010, and May 06, 2011, respectively. The minimum characteristic variability timescale of PKS 1510-089 is 0.0295 day.

Figure 1: Light curves of the IDV for the eight sources. The black open circles are the light curves for the sources. The red open circles are the magnitude difference between comparison stars and check stars in the same period.

### Black Hole Mass Estimation

The observed minimum timescales of variability are widely used to estimate the masses of the central black holes in blazars (e.g., Abramowicz & Nobili 1982; Miller et al. 1989; Liu & Bai 2015). Liu & Bai (2015) proposed a new sophisticated model to limit the black hole mass \(M_{\bullet}\) using the rapid variations of blazars as follows: \[M_{\bullet}\lesssim\ 5.09\times 10^{4}\frac{\delta\Delta t_{min}^{\,ob}}{1+z}M_{\odot}\qquad(j\sim 1), \tag{6}\] \[M_{\bullet}\lesssim\ 1.70\times 10^{4}\frac{\delta\Delta t_{min}^{\,ob}}{1+z}M_{\odot}\qquad(j=0), \tag{7}\] where \(\Delta t_{min}^{\,ob}\) is the minimum timescale in units of seconds, \(z\) is the redshift, and \(j=J/J_{max}\) is the dimensionless spin parameter of a black hole with the maximum possible angular momentum \(J_{max}=GM_{\bullet}^{2}/c\), in which \(G\) is the gravitational constant. Equation (6) is suitable for Kerr black holes, while Equation (7) applies to Schwarzschild black holes. We estimated the black hole masses of ON 231, 1ES 1426+42.8, and PKS 1510-089. According to the results of the ACF, the minimum timescales of optical IDV are \(\tau=0.0322\) day, \(\tau=0.01\) day, and \(\tau=0.0295\) day, respectively. The Doppler factors are reported as \(\delta=1.56\), \(\delta=27.3\), and \(\delta=13.18\), respectively, in the literature (Lahteenmaki & Valtaoja 1999; Gaur et al. 2010). Following Equations (6) and (7), the black hole mass of ON 231 is \(M_{\bullet}\lesssim 10^{8.30}M_{\odot}\) for the Kerr black hole and \(M_{\bullet}\lesssim 10^{7.82}M_{\odot}\) for the Schwarzschild black hole. This is consistent with the \(10^{8.38}M_{\odot}\) for the rapidly spinning case and \(10^{7.58}M_{\odot}\) for the non-rotating case reported by Gaur et al. (2010), derived from the relation between the period and the mass of the SMBH. 1ES 1426+42.8 has \(M_{\bullet}\lesssim 10^{9.03}M_{\odot}\) (Equation (6)) and \(M_{\bullet}\lesssim 10^{8.55}M_{\odot}\) (Equation (7)). Woo & Urry (2002) obtained \(M_{\bullet}=10^{9.13}M_{\odot}\) for 1ES 1426+42.8 using the stellar velocity dispersion, which was estimated from an indirect method. Wu et al. (2009) estimated the black hole mass of 1ES 1426+42.8 as \(M_{\bullet}=10^{8.51}M_{\odot}\) using the \(R\)-band magnitudes of the host galaxy.
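To make the arithmetic behind these upper limits explicit, the following is a minimal Python sketch (not the authors' code; the helper name `mass_limit` and the script structure are ours) that evaluates Equations (6) and (7) with the ACF timescales and Doppler factors quoted above.

```python
# Minimal sketch of the Liu & Bai (2015) mass upper limits, Equations (6)
# and (7): M <~ C * delta * dt_min[s] / (1 + z) in solar masses, with
# C = 5.09e4 for a maximally spinning (Kerr) black hole and C = 1.70e4
# for a non-rotating (Schwarzschild) black hole.
import math

DAY_IN_S = 86400.0

def mass_limit(delta, tau_day, z, kerr=True):
    """Upper limit on M_bh in solar masses; tau_day is the minimum
    variability timescale in days."""
    coeff = 5.09e4 if kerr else 1.70e4
    return coeff * delta * tau_day * DAY_IN_S / (1.0 + z)

# (name, Doppler factor, tau_min [day], redshift), as quoted in the text
for name, delta, tau, z in [("ON 231", 1.56, 0.0322, 0.102),
                            ("1ES 1426+42.8", 27.3, 0.01, 0.129)]:
    log_kerr = math.log10(mass_limit(delta, tau, z, kerr=True))
    log_schw = math.log10(mass_limit(delta, tau, z, kerr=False))
    print(f"{name}: log10(M/Msun) < {log_kerr:.2f} (Kerr), "
          f"{log_schw:.2f} (Schwarzschild)")
```

Running this reproduces the limits quoted above (\(10^{8.30}\)/\(10^{7.82}M_{\odot}\) for ON 231 and \(10^{9.03}\)/\(10^{8.55}M_{\odot}\) for 1ES 1426+42.8) to within rounding of the inputs.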
Thus, our estimates of the black hole mass for 1ES 1426+42.8 are consistent with the black hole mass in Wu et al. (2009). In the case of PKS 1510-089, \(M_{\bullet}\lesssim 10^{9.01}M_{\odot}\) is derived with Equation (6) and \(M_{\bullet}\lesssim 10^{8.62}M_{\odot}\) is derived with Equation (7). Woo & Urry (2002) obtained \(M_{\bullet}=10^{8.65}M_{\odot}\) for PKS 1510-089 using the BLR size-luminosity relation, which is consistent with our result for the Schwarzschild black hole.

### The Correlation between Magnitudes and Colours

In order to explore the optical spectral behaviors, we analyzed the correlations between the magnitudes and colours on intraday, short-term, and long-term timescales. In this section, we focus on the correlation between the \(V-I\) index and the \(V\) magnitude, which is widely studied. Only data obtained by quasi-simultaneous measurements are analyzed. We use the correlation coefficient from error-weighted linear regression analysis to indicate the intensity of the BWB trend. The results of the Spearman correlation coefficient analysis and the corresponding \(p\)-values are given in Table 4. In Table 4, \(r\) is the Spearman correlation coefficient, and \(p\) is the chance probability.

Figure 2: Short-term light curves of the six sources.

Generally, if \(r\) is positive, it means positive correlation, and if \(r\) is negative, it means negative correlation. An absolute value of \(r\) of 0-0.1 means no correlation, 0.1-0.3 means weak correlation, 0.3-0.5 means moderate correlation, and 0.5-1.0 means strong correlation (Cohen 1988). The \(p\)-value indicates whether the correlation is significant or not. If the \(p\)-value is less than 0.001, there is a significant correlation; if the \(p\)-value is greater than 0.05, there is no correlation (Cohen 1988). Figure 5 shows the correlations between the \(V-I\) index and the \(V\) magnitude on intraday timescales. Figure 6 shows the correlations between the \(V-I\) index and the \(V\) magnitude on short-term and long-term timescales, respectively. Table 4 and Figure 5 show that the BWB trend appears on twenty-one nights on intraday timescales, of which twenty nights have strong correlations (\(p<0.001\)) and one night has a moderate correlation (3C 279 on May 10, 2018). On short-term timescales, four objects (ON 231, 3C 279, 1E 1458.8+2249, BL Lacertae) have strong correlations, two objects (OJ 287, OT 546) have no correlations, and one object (3C 279 in May-June 2018) has a weak correlation (\(r=0.208\), \(p=0.004\)). On long-term timescales, 3C 279 shows a strong correlation from 2016 to 2018 (\(r=0.835\), \(p<0.0001\)). An insignificant RWB trend is detected on the short-term timescale for PKS 1510-089 (\(r=-0.431\), \(p=0.017\); see Figure 6). Therefore, the BWB chromatic trend is dominant for our objects on intraday timescales. Except for three sources, the other four targets all display the BWB trend on short timescales. There is one target with the BWB trend on long timescales. A FSRQ with an RWB trend is found.

Figure 3: Long-term light curves of 3C 279 and PKS 1510-089 for different periods and bands.

### Search for Periodicity in 1ES 1426+42.8

The study of blazars' QPOs is of great value for exploring the physical mechanisms and radiation processes in blazars (Gupta et al., 2009). In our observations, most of the light curves with IDV are monotonous. The light curve of 1ES 1426+42.8 on April 13, 2010 is the only one that had three obvious peaks.
It was likely periodic, so we performed a periodicity analysis for 1ES 1426+42.8. At present, the Lomb-Scargle periodogram (LSP) (Lomb, 1976; Scargle, 1982) and the weighted wavelet Z-transform (WWZ) (Foster, 1996; Bhatta, 2017) are commonly used for investigating the periodicity of variability in light curves. We used these two methods to explore whether there is a QPO in the \(I\)-band light curve of 1ES 1426+42.8.

The LSP is a method widely used in the search for QPOs, which can eliminate the aliasing problem caused by unevenly sampled data (Lomb, 1976; Scargle, 1982; Press et al., 1992). The LSP method extracts the average value of the signal and employs a phase shift of the basis functions. The resulting normalized periodogram does not need to interpolate the missing data; therefore, false artificial peaks are avoided (Wang et al., 2014). The periodogram is defined as (Li et al., 2016): \[\begin{split} P_{\rm X}(f)&=\frac{1}{2}\left\{\frac{\left[\sum_{i=1}^{N}X(t_{i})\cos[2\pi f(t_{i}-\tau)]\right]^{2}}{\sum_{i=1}^{N}\cos^{2}[2\pi f(t_{i}-\tau)]}\right.\\ &\left.+\frac{\left[\sum_{i=1}^{N}X(t_{i})\sin[2\pi f(t_{i}-\tau)]\right]^{2}}{\sum_{i=1}^{N}\sin^{2}[2\pi f(t_{i}-\tau)]}\right\},\end{split} \tag{8}\] where \(X(t_{i})\ (i=1,2,3,...,N)\) is a time series, \(f\) is the test frequency, and \(\tau\) is the time offset, which can be calculated by the formula: \[\tau=\frac{1}{2\omega}\tan^{-1}\left[\frac{\sum_{i=1}^{N}\sin 2\omega t_{i}}{\sum_{i=1}^{N}\cos 2\omega t_{i}}\right], \tag{9}\] where \(\omega=2\pi f\). For a real signal \(X(t_{i})\), the power in \(P_{X}(f)\) presents a peak, and the most likely period corresponds to the peak of the maximum power \(P_{peak}(f)\) (Wang et al., 2014).

The WWZ is a periodicity analysis method in both the time and frequency domains (Li et al., 2021), which can be used to cross-validate the possible QPO of 1ES 1426+42.8. The WWZ method can effectively handle irregularly sampled variability data by introducing the weighted projection of the light curves on trial functions (Wang et al., 2014). Following Foster (1996) and Bhatta (2017), we constructed the WWZ spectra using the Morlet mother function for each artificial light curve. The WWZ projects the Morlet wavelet transforms onto three trial functions, \(\varphi_{1}(t)=1(t)\), \(\varphi_{2}(t)=\cos\left[\omega\left(t-\tau\right)\right]\), and \(\varphi_{3}(t)=\sin\left[\omega\left(t-\tau\right)\right]\), and also includes statistical weights \(\omega_{\alpha}=\exp\left[-c\omega^{2}(t_{\alpha}-\tau)^{2}\right]\) (\(\alpha=1,2,3\)) on the projection, where \(c\) is an adjustable parameter. The WWZ power estimates the confidence level of a detected periodicity with frequency \(\omega\) and time shift \(\tau\) as follows: \[WWZ=\frac{(N_{eff}-3)V_{y}}{2(V_{x}-V_{y})}, \tag{10}\] where \(N_{eff}\) is the effective number of data points contributing to the signal, and \(V_{x}\) and \(V_{y}\) are the weighted variations of the uneven data \(x\) and of the model function \(y\), respectively. These factors are defined as follows: \[N_{eff}=\frac{\left(\sum\omega_{\alpha}\right)^{2}}{\sum\omega_{\alpha}^{2}}, \tag{11}\] \[V_{x}=\frac{\sum_{\alpha}\omega_{\alpha}x^{2}(t_{\alpha})}{\sum_{\lambda}\omega_{\lambda}}-\left[\frac{\sum_{\alpha}\omega_{\alpha}x(t_{\alpha})}{\sum_{\lambda}\omega_{\lambda}}\right]^{2},\] \[V_{y}=\frac{\sum_{\alpha}\omega_{\alpha}y^{2}(t_{\alpha})}{\sum_{\lambda}\omega_{\lambda}}-\left[\frac{\sum_{\alpha}\omega_{\alpha}y(t_{\alpha})}{\sum_{\lambda}\omega_{\lambda}}\right]^{2},\] where the index \(\lambda\) runs over the data points. Then, the WWZ periodogram can be obtained by decomposing the data into the observing epoch and the time/frequency domain, and the peaks of the WWZ power can be used to determine the probable period (Li et al., 2021).

Figure 4: Results of the ACF analysis for the three blazars. The red dashed line is a polynomial least-squares fit.

In addition, frequency-dependent red noise needs to be considered in the periodicity analysis of blazars, because the variability of blazars generally exhibits red-noise behavior at lower frequencies (Vaughan, 2005; Bhatta, 2017; Fan et al., 2014; Sandrinelli et al., 2016). This can be addressed by using the power response method, which characterizes the power spectral density (PSD) (Uttley et al., 2002; Ren et al., 2021). The random fluctuations of blazars are generally approximated by a power-law PSD: \(P(f)\propto f^{-\alpha}\), where \(P(f)\) is the power at temporal frequency \(f\) and \(\alpha\) is the spectral slope. Following Vaughan (2005), we estimated the power spectral slope \(\alpha\) by fitting a linear function to the log-periodogram. The best-fit PSD is shown in the upper right panel of Figure 7. The PSD result shows that the slope is \(\alpha=1.85\pm 0.06\). Then, we assessed the confidence level of the QPO of 1ES 1426+42.8 by modeling the optical variability as red noise with this spectral slope (\(\alpha=1.85\)). We simulated a large number of light curves, using the Monte Carlo method described by Timmer and Koenig (1995), to establish the red-noise background. Once 10000 light curves were simulated using even sampling intervals, the LSP was computed (see Yang et al., 2020). Consequently, using the power spectral distribution of the simulated light curves, local 95%, 99%, and 99.7% (3\(\sigma\)) confidence contour lines were evaluated. We also simulated 10000 light curves and re-sampled them according to the source light curves to evaluate the confidence level of the WWZ (Timmer and Koenig, 1995). Then, the 95%, 99%, and 99.7% (3\(\sigma\)) confidence contour lines were evaluated. As shown in the middle left panel of Figure 7, there is an obvious peak in the LSP periodogram, which hints at a possible QPO of \(48.67\pm 13.90\) minutes (\(>99\%\) confidence level) for 1ES 1426+42.8 in the \(I\) band on April 13, 2010. We use the FWHM of the peak as the uncertainty of the period value. The middle right panel of Figure 7 shows that there is a strong peak in the WWZ periodogram, which hints at a possible QPO of \(47.23\pm 11.21\) minutes (\(>3\sigma\)). The period from the WWZ method is very close to the \(48.67\pm 13.90\) minutes obtained by the LSP method, and this result indicates that 1ES 1426+42.8 has periodicity in its IDV. The REDFIT method can also be used to evaluate the QPO and red noise. Schulz and Mudelsee (2002) presented a program (REDFIT1), which can be used to test whether peaks in the spectrum of a time series are significant against the red-noise background by fitting a first-order autoregressive (AR1) process. The results of REDFIT are shown in the bottom panel of Figure 7.
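The significance assessment just described can be sketched numerically. Below is a minimal Python illustration (not the authors' code; the helper names `simulate_red_noise` and `lsp_confidence` are ours, and the even-grid re-sampling onto the observed epochs is a simplifying assumption) that computes a Lomb-Scargle periodogram with astropy and calibrates the 95%/99%/99.7% levels against Timmer & Koenig (1995) realizations of a \(P(f)\propto f^{-\alpha}\) process.

```python
# Minimal sketch of the LSP + red-noise significance test: compute the
# Lomb-Scargle periodogram of a light curve and calibrate confidence
# levels against 10000 simulated power-law (red-noise) light curves.
import numpy as np
from astropy.timeseries import LombScargle

def simulate_red_noise(n, dt, alpha, rng):
    """One evenly sampled Timmer & Koenig (1995) realization, unit variance."""
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-alpha / 2.0)   # |FT| ~ sqrt(P(f))
    re = rng.normal(size=freqs.size) * amp
    im = rng.normal(size=freqs.size) * amp
    im[0] = 0.0
    if n % 2 == 0:
        im[-1] = 0.0                         # Nyquist term must be real
    lc = np.fft.irfft(re + 1j * im, n=n)
    return (lc - lc.mean()) / lc.std()

def lsp_confidence(t, mag, err, alpha=1.85, n_sim=10000, seed=0):
    """Observed LSP plus simulated 95/99/99.7% red-noise contour lines.
    t is assumed sorted; times in days, frequencies in day^-1."""
    rng = np.random.default_rng(seed)
    dt = np.median(np.diff(t))
    freq = np.linspace(1.0 / (t.max() - t.min()), 0.5 / dt, 500)
    power_obs = LombScargle(t, mag, err).power(freq)
    sims = np.empty((n_sim, freq.size))
    for i in range(n_sim):
        # fake fluxes are assigned to the observed epochs t (re-sampling)
        fake = simulate_red_noise(t.size, dt, alpha, rng)
        sims[i] = LombScargle(t, fake).power(freq)
    levels = np.percentile(sims, [95.0, 99.0, 99.7], axis=0)
    return freq, power_obs, levels
```

A peak in `power_obs` rising above the 99.7% curve then corresponds to a \(>3\sigma\) QPO candidate, with its period given by the inverse of the peak frequency and its uncertainty by the FWHM of the peak, as adopted above.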
From Figure 7, we can see that the peak (0.0289 day \(\sim 41.62\) minutes) in the spectrum of the time series is significant (\(>3\sigma\)) against the red-noise background. The three methods support the possible QPO (\(48.67\pm 13.90\) minutes) within the error range. Footnote 1: [http://www.geo.uni-bremen.de/geomod/staff/mschulz/](http://www.geo.uni-bremen.de/geomod/staff/mschulz/)

We performed new observations of 1ES 1426+42.8 in March 2021 to further confirm the reliability of this QPO. For the PSD of the March 16, 2021 light curve of 1ES 1426+42.8, we fit only the part with frequency \(<200\) day\({}^{-1}\), because the part with frequency \(>200\) day\({}^{-1}\) is dominated by white noise. The PSD result (upper right panel of Figure 8) shows that the slope is \(1.64\pm 0.08\). Then, the confidence level is assessed with the spectral slope \(\alpha=1.64\). The results show that closely matching QPOs (\(>3\sigma\)) are found by the LSP, WWZ, and REDFIT methods. The corresponding QPOs are \(30.70\pm 6.55\) minutes, \(30.74\pm 5.22\) minutes, and \(28.88\pm 8.66\) minutes, respectively. This new result further confirms the existence of the QPO in 1ES 1426+42.8 after considering the uncertainties of the QPO. The light curve of 1ES 1426+42.8 on March 16, 2021 and the corresponding results of the different methods are shown in Figure 8.

## 4 Discussion

### Optical Intraday Variability

Many physical models have been proposed to explain variability; they can be divided into external and internal mechanisms. The external mechanisms include interstellar scintillation and gravitational microlensing models (Heeschen et al., 1987; Agarwal and Gupta, 2015), while the internal mechanisms are related to relativistic jet activity and accretion disk instabilities (Mangalam and Wiita, 1993; Chakrabarti and Wiita, 1993; Marscher et al., 2008). The external mechanisms are not considered to explain optical IDVs (Xiong et al., 2017). Generally, the observed radiation in blazars is dominated by emission from the jet, which overwhelms the thermal emission from the accretion disk. The shock-in-jet model is often used to explain optical IDVs in blazars (Marscher and Gear, 1985; Marscher et al., 2008; Xiong et al., 2016, 2017). The shocks propagate along the relativistic jets of plasma, sweeping through the emitting regions. If the emitting regions have large intrinsic changes, then a large variability on intraday timescales can be observed (Marscher and Gear, 1985). Therefore, the optical IDVs we detected are likely to be interpreted by the shock-in-jet model. In addition, small optical variability could be attributed to turbulence behind a shock along the jet (Agarwal and Gupta, 2015), or to hot spots or disturbances in or above accretion disks (Chakrabarti and Wiita, 1993; Mangalam and Wiita, 1993; Gaur et al., 2012). When a blazar is in the low state, the model based on instabilities of the accretion disk could give rise to IDVs, because in the low state any contribution from the jets is weak (Rani et al., 2011; Xiong et al., 2017).

### Relations between Colours and Magnitudes

Generally, the BWB chromatic trend is dominant for most BL Lacs, while the RWB chromatic trend is usually observed in FSRQs (Gu et al., 2006; Dai et al., 2009; Bindu et al., 2010).

Figure 5: Correlations between the \(V-I\) index and \(V\) magnitude on intraday timescales. The red solid lines are the results of linear regression analysis. \(r\) is the correlation coefficient; \(p\) is the chance probability.
Figure 6: Correlations between the \(V-I\) index and \(V\) magnitude on short-term and long-term timescales.

The BWB chromatic trend is dominant for all of our objects on intraday timescales. On short-term timescales, the BWB trends of ON 231, 3C 279, 1E 1458.8+2249, and BL Lacertae are strong, while OJ 287 and OT 546 show no chromatic trend. A moderate RWB trend of PKS 1510-089 on the short-term timescale was found. The FSRQ 3C 279 showed the BWB chromatic trend on the long-term timescale. The shock-in-jet model is most widely used to explain the BWB behavior (Gupta et al. 2008a; Xiong et al. 2016). According to the shock-in-jet model, as the shock propagates down the jet, it strikes a region with a high electron population, and radiation at different wavelengths is produced at different distances behind the shock. High-energy photons from the synchrotron mechanism generally emerge faster and closer to the shock front than lower frequency radiation, thus causing the BWB behavior (Agarwal & Gupta 2015; Xiong et al. 2017). Therefore, the detected BWB trends can be explained by the shock-in-jet model. In addition, the different relative contributions of thermal versus non-thermal radiation to the optical emission may be responsible for the different trends of the colour index with brightness in FSRQs and BL Lac objects (Gu et al. 2006). For the FSRQ PKS 1510-089, a moderate RWB trend could be explained by a blend of jet and accretion components: as the target brightens, the redder jet component comes to dominate the total flux, so a moderate RWB trend could be observed. The superposition of different BWB trends on intraday timescales may explain the absence of a chromatic trend in the two BL Lac objects OJ 287 and OT 546 (Xiong et al. 2020). In addition, the BWB trends may occur when electrons are accelerated to preferentially higher energies before radiative cooling, while the RWB trends may occur when the highest-energy electrons suffer stronger radiative cooling or escape (Isler et al. 2017; Xiong et al. 2020).

Figure 7: The results of the periodicity analysis of 1ES 1426+42.8 on April 13, 2010. Upper left panel: light curve of 1ES 1426+42.8 on April 13, 2010. Upper right panel: PSD of 1ES 1426+42.8; the solid lines denote the best-fitting power-law model of the underlying coloured noise. Middle left panel: corresponding power spectrum of the LSP (green solid line); the red, blue, and purple dashed lines represent the confidence levels of 95%, 99%, and 99.7%, respectively. Middle right panel: 2D plane contour plot of the WWZ power of the light curve. The green solid line represents the time-averaged WWZ power; the red, blue, and purple dashed lines represent the confidence levels of 95%, 99%, and 99.7%, respectively. Bottom panel: corresponding result of REDFIT; the black line is the bias-corrected power spectrum, the red line is the theoretical red-noise spectrum, and the blue, green, and purple dashed lines represent the confidence levels of 90%, 95%, and 99.7%, respectively.

### Intraday Periodicity

We have searched for a QPO of 1ES 1426+42.8 in the \(I\)-band light curve on April 13, 2010, using the LSP, WWZ, and REDFIT methods. The possible QPOs are \(48.67\pm 13.90\) minutes (\(>99\%\) confidence level) with the LSP analysis, \(47.23\pm 11.21\) minutes (\(>3\sigma\)) with the WWZ analysis, and \(41.62\pm 14.25\) minutes (\(>3\sigma\)) with the REDFIT analysis, respectively.
The QPOs (\(>3\sigma\)) on March 16, 2021 are calculated with consistent periods by the three methods as \(30.70\pm 6.55\) minutes, \(30.74\pm 5.22\) minutes, and \(28.88\pm 8.66\) minutes, respectively. Therefore, based on our new observations, a possible intraday QPO (about 30 to 50 minutes) is found, which is rarely reported in the previous literature. For the periodic variability of blazars, some possible mechanisms are as follows: the close binary black hole system model (e.g., Villata et al. 1998), the orbital motion of hot spots around the SMBH (e.g., Broderick & Loeb 2006), global oscillations of the accretion disc (e.g., Rubio-Herrera & Lee 2005), and the precession of the jet or a rotating helical jet structure (e.g., Villata & Raiteri 1999; Fan et al. 2014). In the binary black hole system model, the periodicity would be induced by the Keplerian orbital motion of a binary SMBH, thus producing long-term or short-term QPOs in blazars (Romero et al. 2000). According to this, the binary black hole system model is unlikely to explain the intraday QPO. The detection of the periodic variability on the intraday timescale could be explained by the presence of a single dominating hot spot on the accretion disk (e.g., Mangalam & Wiita 1993; Chakrabarti & Wiita 1993) or perhaps by pulsational modes in the disk (e.g., Espaillat et al. 2008).

Figure 8: The results of the periodicity analysis of 1ES 1426+42.8 on March 16, 2021. The upper left panel is the light curve of 1ES 1426+42.8 on March 16, 2021. The upper right panel is the result of the PSD. The middle left panel is the result of the LSP method; the middle right panel is the result of the WWZ method; the bottom panel is the result of the REDFIT method.

The instability of a disk will lead to periodicity in the blazar (Ren et al. 2021). Based on the assumption that the QPO is related to the orbital timescale of a hot spot, spiral shocks, or other non-axisymmetric phenomena in the innermost portion of the rotating accretion disk, the SMBH mass can be estimated with (Gupta et al. 2009, 2019) \[\frac{M_{\bullet}}{M_{\odot}}=\frac{3.23\times 10^{4}\delta P}{(r^{3/2}+a)(1+z)}, \tag{12}\] where \(\delta\) is the Doppler factor of the blazar, \(P\) is the period of the QPO in units of seconds, \(r\) is the radius at the inner edge of the accretion disk in units of \(GM_{\bullet}/c^{2}\), \(a\) is the SMBH spin parameter, and \(z\) is the redshift. From Equation (12), we estimated the mass of the SMBH as \(10^{8.99^{+0.11}_{-0.15}}M_{\odot}\) for the Kerr black hole (with \(r=1.2\), \(a=0.9982\), and \(P=48.67\pm 13.90\) minutes) and \(10^{8.19^{+0.11}_{-0.15}}M_{\odot}\) for the Schwarzschild black hole (with \(r=6.0\), \(a=0\), and \(P=48.67\pm 13.90\) minutes) in the case of 1ES 1426+42.8. Our black hole masses estimated from the period of the QPO are within the ranges of black hole masses calculated from the minimum timescale, which indicates that an intraday QPO caused by perturbations on the accretion disk is possible. Maraschi & Tavecchio (2003) proposed that, for low-luminosity blazars (BL Lacs), the jet luminosity is higher than the disk luminosity because of a very low radiative efficiency of the accretion disk. In the scenario of the BL Lac 1ES 1426+42.8, a disk flux variation is unlikely to directly produce any detectable QPO, since the fluxes from the jets would usually swamp any disk fluxes. Thus, the QPOs of 1ES 1426+42.8 detected by us are probably produced in the relativistic jets.
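As a quick numerical check, the following minimal sketch (the function name `smbh_mass_from_qpo` is ours; the inputs are taken from the text above) evaluates Equation (12) for 1ES 1426+42.8.

```python
# Minimal sketch evaluating Equation (12) for the QPO-based SMBH mass:
# M/Msun = 3.23e4 * delta * P / ((r**1.5 + a) * (1 + z)), with P in seconds
# and r in units of G*M_bh/c^2.
import math

def smbh_mass_from_qpo(delta, period_s, z, r, a):
    """SMBH mass in solar masses from a QPO period via Equation (12)."""
    return 3.23e4 * delta * period_s / ((r ** 1.5 + a) * (1.0 + z))

# 1ES 1426+42.8: delta = 27.3, z = 0.129, P = 48.67 minutes
P_S = 48.67 * 60.0
for label, r, a in [("Kerr", 1.2, 0.9982), ("Schwarzschild", 6.0, 0.0)]:
    mass = smbh_mass_from_qpo(27.3, P_S, 0.129, r, a)
    print(f"{label}: log10(M/Msun) = {math.log10(mass):.2f}")
# -> Kerr: 8.99, Schwarzschild: 8.19, matching the central values above.
```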
It is worth noting that there is a connection between the jet and the accretion disk (Dannen et al. 2020). The perturbations caused by disk instabilities can drive changes in the mass flux entering the jets or in the velocity (or density, magnetic field, etc.) of the jets, thus producing relativistic shocks (Wiita 2006). When a relativistic shock advances along the helical structure of the jet, or the jet precesses or twists, variation in the Doppler boosting factor would amplify weak fluctuations, reduce timescales, and produce quasi-periodic flux variations (Camenzind & Krockenberger 1992; Villata & Raiteri 1999; Gupta et al. 2019; Rani et al. 2010). Therefore, the perturbations from disks would probably produce actual QPOs, which are in turn amplified by the relativistic motions of the jets. In addition, it is very likely that the intraday QPO originates from the jets instead of the accretion disk. As mentioned above, when a relativistic shock advances along the helical structure of the jet (or the jet precesses/twists), in a configuration with a high Lorentz factor and a very small viewing angle, an intraday QPO could be observed.

## 5 Conclusions

We present quasi-simultaneous multi-colour photometric data of eight blazars (3244 data points) observed over a time span from 2010 to 2020. After analyzing the flux variations and the correlations between magnitudes and colours on different timescales, our main results are summarized as follows.

(1) Intraday variability (IDV) is detected in all eight sources of our sample. The IDV of OJ 287 is detected on one night. IDV is found on 3 nights for ON 231, 2 nights for 3C 279, 1 night for 1ES 1426+42.8, 2 nights for 1E 1458.8+2249, 5 nights for PKS 1510-089, 1 night for OT 546, and 1 night for BL Lacertae.

(2) A BWB chromatic trend is dominant for all eight objects on intraday timescales. On short timescales, the BWB trend is displayed by four targets (ON 231, 3C 279, 1E 1458.8+2249, and BL Lacertae). One target (3C 279) is detected with the BWB trend on long timescales. One FSRQ (PKS 1510-089) is detected with the RWB trend.

(3) Based on the ACF analysis, the upper limits of the black hole masses for three blazars are estimated using variability timescales. In the case of Kerr black holes, the black hole masses are \(M_{\bullet}\lesssim 10^{8.30}M_{\odot}\) for ON 231, \(M_{\bullet}\lesssim 10^{9.03}M_{\odot}\) for 1ES 1426+42.8, and \(M_{\bullet}\lesssim 10^{9.01}M_{\odot}\) for PKS 1510-089, respectively. In the case of Schwarzschild black holes, the black hole masses are \(M_{\bullet}\lesssim 10^{7.82}M_{\odot}\) for ON 231, \(M_{\bullet}\lesssim 10^{8.55}M_{\odot}\) for 1ES 1426+42.8, and \(M_{\bullet}\lesssim 10^{8.62}M_{\odot}\) for PKS 1510-089, respectively.

(4) On April 13, 2010, a potential QPO of \(P=48.67\pm 13.90\) minutes is found in 1ES 1426+42.8. The light curve on March 16, 2021 further confirms the existence of the QPO (\(>3\sigma\)). The black hole mass is estimated as \(10^{8.99^{+0.11}_{-0.15}}M_{\odot}\) for the Kerr black hole and \(10^{8.19^{+0.11}_{-0.15}}M_{\odot}\) for the Schwarzschild black hole using the period of the QPO.

## 6 Acknowledgements

We thank the anonymous referee for the valuable comments and suggestions.
This work is supported by the National Natural Science Foundation of China (grants 11863007, 12063005, 12063007, 11703078), the Yunnan Province Foundation (2019FB004), the Program for Innovative Research Team (in Science and Technology) in University of Yunnan Province (IRTSTYN), the Yunnan Local Colleges Applied Basic Research Projects (2019FH001-12), and the National Astronomical Observatories-Yunnan Normal University Astronomical Education Base. We acknowledge the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A06.

## 7 Data availability

The data underlying this article will be shared on reasonable request to the corresponding author. A sample of all the observed data will be available electronically at VizieR.
2303.16275
Writing Assistants Should Model Social Factors of Language
Intelligent writing assistants powered by large language models (LLMs) are more popular today than ever before, but their further widespread adoption is precluded by sub-optimal performance. In this position paper, we argue that a major reason for this sub-optimal performance and adoption is a singular focus on the information content of language while ignoring its social aspects. We analyze the different dimensions of these social factors in the context of writing assistants and propose their incorporation into building smarter, more effective, and truly personalized writing assistants that would enrich the user experience and contribute to increased user adoption.
Vivek Kulkarni, Vipul Raheja
2023-03-28T19:38:57Z
http://arxiv.org/abs/2303.16275v1
# Writing Assistants Should Model Social Factors of Language ###### Abstract. Intelligent writing assistants powered by large language models (LLMs) are more popular today than ever before, but their further widespread adoption is precluded by sub-optimal performance. In this position paper, we argue that a major reason for this sub-optimal performance and adoption is a singular focus on the information content of language while ignoring its social aspects. We analyze the different dimensions of these social factors in the context of writing assistants and propose their incorporation into building smarter, more effective, and truly personalized writing assistants that would enrich the user experience and contribute to increased user adoption. writing assistants, large language models, social factors

be used to refer to the same real-world concept (_zucchini_ in the US vs _courgette_ in the UK) (Kalewski et al., 2017). Writing assistants not accounting for such linguistic variation may lead to poor user experience (e.g., incorrect sentiment or tone detection, or word recommendations). 5. **Intent (Communicative Goal)**: Writing assistants need to have an intimate knowledge of the communicative intent of the user to be effective. Recommendations on word choice, sentence and paragraph restructuring, and feedback on sentiment and tone depend on the user's specific communicative goal (which might be to inform, entertain, persuade, or narrate) and targeted setting (academic, creative writing, or conversational). For example, in content targeted for an academic publication, writing assistants might assist users by recommending templates and phrases that seek to achieve specific communicative goals like (a) introducing standard views, quotations, and an ongoing debate, (b) contrasting with prior work, and (c) motivating claims. ## 3. Closing Remarks In this paper, we discuss clear use cases of intelligent writing assistants that would benefit by adopting a richer view of language, which accounts for its social aspects. Building writing assistants that adopt this richer view of language opens up exciting research directions. First, a majority of the current evaluation benchmarks used for evaluating writing assistants today ignore these social factors. Therefore, there is a critical need to construct comprehensive evaluation benchmarks grounded in social factors. Second, note that many of these social factors are extra-linguistic and may involve modeling multiple modalities. Research needs to be undertaken around exploring approaches to modeling these social factors in a manner that is best suited toward their incorporation in writing assistants. Finally, one needs to work within appropriate considerations around data/user privacy and ethics to ensure models benefit end users and not perpetuate negative biases. We thus conclude by urging the community to advance further research on the social aspects of language and how these aspects can relate to building smarter, more effective, highly personalized, and inclusive writing assistants.
2310.11422
A Full Accounting of the Visible Mass in SDSS MaNGA Disk Galaxies
We present a study of the ratio of visible mass to total mass in spiral galaxies to better understand the relative amount of dark matter present in galaxies of different masses and evolutionary stages. Using the velocities of the H-alpha emission line measured in spectroscopic observations from the Sloan Digital Sky Survey (SDSS) MaNGA Data Release 17 (DR17), we evaluate the rotational velocity of over 5500 disk galaxies at their 90% elliptical Petrosian radii, R90. We compare this to the velocity expected from the total visible mass, which we compute from the stellar, HI, molecular hydrogen, and heavy metals and dust masses. Molecular hydrogen mass measurements are available for only a small subset of galaxies observed in SDSS MaNGA DR17, so we derive a parameterization of the molecular hydrogen mass as a function of absolute magnitude in the r band using galaxies observed as part of SDSS DR7. With these parameterizations, we calculate the fraction of visible mass within R90 that corresponds to the observed velocity. Based on statistically analyzing the likelihood of this fraction, we conclude that the null hypothesis (no dark matter) cannot be excluded at a confidence level better than 95% within the visible extent of the disk galaxies. We also find that when all mass components are included, the ratio of visible-to-total mass within the visible extent of star-forming disk galaxies increases with galaxy luminosity.
Nitya Ravi, Kelly A. Douglass, Regina Demina
2023-10-17T17:29:53Z
http://arxiv.org/abs/2310.11422v2
# A Full Accounting of the Visible Mass in SDSS MaNGA Disk Galaxies ###### Abstract We present a study of the ratio of visible mass to total mass in spiral galaxies to better understand the relative amount of dark matter present in galaxies of different masses and evolutionary stages. Using the velocities of the H\(\alpha\) emission line measured in spectroscopic observations from the SDSS MaNGA DR17, we evaluate the rotational velocity of 5522 disk galaxies at their 90% elliptical Petrosian radii, \(R_{90}\). We compare this to the velocity expected from the total visible mass, which we compute from the stellar, H i, H\({}_{2}\), heavy metal, and dust masses. H\({}_{2}\) mass measurements are available for only a small subset of galaxies observed in SDSS MaNGA DR17, so we derive a parameterization of the H\({}_{2}\) mass as a function of absolute magnitude in the \(r\)-band using galaxies observed as part of SDSS DR7. With these parameterizations, we calculate the fraction of visible mass within \(R_{90}\) that corresponds to the observed velocity. Based on statistically analyzing the likelihood of this fraction, we conclude that the null hypothesis (no dark matter) cannot be excluded at a confidence level better than 95% within the visible extent of the disk galaxies. We also find that, by including all of these mass components, star-forming disk galaxies contain statistically the same ratio of visible-to-total mass, independent of magnitude. Nitya Ravi, Kelly A. Douglass, Regina Demina ## 1 Introduction Current cosmological models indicate that the dominant component of matter in the universe is dark matter (Planck Collaboration et al., 2020), which manifests itself primarily through gravity. Dark matter is expected to have minimal to no interaction with the electromagnetic force, therefore emitting little to no light. It is also unlikely to participate in the strong interaction, since otherwise it would be embedded in nuclei. It is currently unclear whether or not dark matter engages in the weak interactions (Porter et al., 2011, and references therein). Phenomena such as gravitational lensing around galaxy clusters (Bartelmann, 2010, and references therein) and galaxy kinematics (e.g., Freeman, 1970; Bosma, 1978; Carignan & Freeman, 1985; Salucci, 2019) contribute to the observational evidence for dark matter across most scales in the universe. Constraints from Big Bang nucleosynthesis (Fields & Sarkar, 2006) and detailed measurements of the imprint of the Baryon Acoustic Oscillations on the anisotropy of the Cosmic Microwave Background (Komatsu et al., 2011) strongly suggest that dark matter is of a non-baryonic nature. Simulations based on cold dark matter models are able to reproduce the current distribution of galaxies (e.g., Springel et al., 2005), indicating that dark matter is likely to be composed of heavy, weakly interacting particles. However, ground-based experiments have failed to observe any effects associated with the passage of such particles through normal matter (Boveia & Doglioni, 2018). Moreover, results from the Large Hadron Collider exclude most models that offer plausible candidates for dark matter (for the latest results see ATLAS collaboration, 2023; Aad et al., 2023; CMS Collaboration, 2023; Tumasyan et al., 2022). Hence, solving the puzzle of dark matter is one of the leading problems currently faced by the scientific community.
Modern large-scale galaxy surveys offer high quality data that allow us to reevaluate the astronomical evidence for the existence of dark matter. One of the original sources of such evidence was galactic rotation curves (Rubin & Ford, 1970; Rubin et al., 1980, 1985, 1982). These studies were based on samples with low statistics, containing only about 20 galaxies. The expected rotational velocities of galaxies were estimated based only on stellar mass and did not include gas or dust. Since the 1980s, rotation curve analysis has been performed on larger galaxy samples to study various galaxy properties. Mathewson et al. (1992) analyzed long slit spectroscopy, where velocities were measured along the semi major axis of galaxies, to construct the rotation curves of over 900 galaxies. Persic et al. (1996) analyzed the rotation curves of the same sample and found that the stellar disk did not contain sufficient matter to produce the observed rotation curve. Other studies that construct rotation curves from long slit spectroscopy (e.g., Catinella et al., 2006; Di Teodoro et al., 2021) support the observation that rotation curves ubiquitously flatten at the outer radii of galaxies and find that the stellar mass scales with the inferred mass of the dark halos. More recently, studies have fit rotation curves to stellar and gas velocity fields using integral field spectroscopy (e.g., de Blok et al., 2008; Torres-Flores et al., 2011; Kalinova et al., 2017; Schmidt et al., 2023) for tens to hundreds of galaxies to estimate the galaxies' dynamical masses and model dark matter halo profiles. Douglass et al. (2019) and Yoon et al. (2021) each study the rotation curves of almost 2000 SDSS MaNGA galaxies using either gas or stellar kinematics. Because of the large variations in galaxy properties throughout these samples, one of the biggest shortcomings of these prior studies has been their limited statistical power. We perform a rotation curve analysis for over 5500 galaxies in SDSS MaNGA Data Release 17 (DR17; Abdurro'uf et al., 2022) to analyze the dark matter content of spiral galaxies. In this paper, we re-evaluate the amount of dark matter needed to explain the observed rotational velocities and revisit the statistical significance of the null, i.e. "no dark matter," hypothesis using the high statistics afforded by SDSS MaNGA (Bundy et al., 2015). We construct models of rotation curves for each galaxy using the H\(\alpha\) emission line velocities measured across a galaxy's surface. Based on the rotational velocity we infer the value of the total (gravitational) mass and compare it to the visible mass. A similar analysis was conducted on 1988 galaxies in SDSS MaNGA DR15 (Aguado et al., 2019) in Douglass and Demina (2022), where visible mass was defined as the sum of the stellar and atomic hydrogen masses. The ratios of visible to total mass for these galaxies were studied by splitting the sample into three subsamples based on color-magnitude classification and analyzing the mass ratios' dependence on the luminosity, gas-phase metallicity, and color-magnitude classification. We improve on the earlier studies by defining the visible mass as the sum of the stellar, neutral atomic hydrogen, molecular hydrogen, helium, heavy metals, and dust masses. We present the ratio of visible to total mass as a function of galaxy luminosity. 
For each galaxy in our sample, we construct a statistical model that accounts for the statistical and systematic uncertainties on the measured rotational velocity, as well as the uncertainties on each of the visible mass components. Using this statistical model, we evaluate the level of consistency of the observed rotational velocities with the null, i.e. "no dark matter", hypothesis. The paper is structured as follows. In Section 2, we discuss the data selection process. In Section 3, we describe the modeling of the rotation curves and stellar mass distributions. In Section 4, we detail the estimation of the mass components. In Section 5, we describe the statistical model. We present the results in Section 6; the data and the results of the statistical analysis are summarized in Table 3. We conclude in Section 7. ## 2 SDSS MaNGA DR17 and Galaxy Selection We use the H\(\alpha\) emission line velocity maps from SDSS MaNGA DR17 (Abdurro'uf et al., 2022) to model the rotation curves of spiral galaxies. The SDSS MaNGA survey used integral field spectroscopy to measure spectra at different points throughout a galaxy by placing an integral field unit (IFU) on each galaxy. The IFU is a bundle of spectroscopic fibers arranged in a hexagonal shape containing between 19 and 127 fibers and covers 12.5" to 32.5" in diameter (Law et al., 2015). The light received by the fibers was fed to two spectrographs with wavelength ranges 3600-6000 Å and 6000-10300 Å, respectively, with a resolution of \(\lambda/\Delta\lambda\sim 2000\) (Drory et al., 2015). SDSS MaNGA DR17 is the final data release for the MaNGA survey and contains more than 10,000 nearby galaxies in the northern sky. The target selection process prioritized maintaining a flat distribution in luminosity (Wake et al., 2017), so the survey consists of three subsamples: the primary sample, where the IFU covers out to \(1.5R_{e}\); the secondary sample, where the IFU covers out to \(2.5R_{e}\); and the color-enhanced sample, which supplements the primary sample with high-mass blue galaxies and low-mass red galaxies. In order to check for possible systematic bias, we present the results of our analysis for the entire data set and each of these individual subsamples, referred to as MaNGA samples 1, 2, and 3, respectively. We extract each galaxy's rotation curve using the H\(\alpha\) velocity map and \(g\)-band-weighted mean flux map as processed by the MaNGA Data Analysis Pipeline (DAP; Westfall et al., 2019). The stellar mass rotation curve is extracted from the stellar mass density maps processed by Pipe3D (Sanchez et al., 2016, 2018). Absolute magnitudes are obtained from version 1.0.1 of the NASA-Sloan Atlas (NSA; Blanton et al., 2011). Distances are in units of Mpc/\(h\), where \(h\) is the reduced Hubble constant defined by \(H_{0}=100h\) km/s/Mpc. #### 2.0.1 SDSS DR7 SDSS Data Release 7 (DR7; Abazajian et al., 2009) observed approximately one quarter of the northern sky in both photometry and spectroscopy. A dedicated 2.5-m telescope at the Apache Point Observatory in New Mexico, with a wide-field imager and a pair of double fiber-fed spectrometers, was used to conduct the multiband imaging and spectroscopic survey. Photometric data were taken in the five SDSS filters: \(u\), \(g\), \(r\), \(i\), and \(z\) (Fukugita et al., 1996; Gunn et al., 1998).
Using 320 fibers placed into fiber plug plates with a minimum fiber separation of 55", follow-up spectroscopic analysis was performed on galaxy targets with Petrosian \(r\)-band magnitudes \(m_{r}\leq 17.77\) and \(r\)-band Petrosian half-light surface brightnesses \(\mu_{50}\leq 24.5\) mag arcsec\({}^{-2}\) (Lupton et al., 2001; Strauss et al., 2002). For SDSS DR7, the spectrometers covered a wavelength range of 3800-9200 Å with a resolution of \(\lambda/\Delta\lambda\sim 1800\) (Smee et al., 2013). We make use of the photometric data (colors, color gradients, and inverse concentration indices) for MaNGA galaxies available from the KIAS-VAGC (Blanton et al., 2005; Choi et al., 2010). We use the global emission line fluxes from the Portsmouth group galaxy properties catalog (Thomas et al., 2013) to calculate the gas-phase metallicity. #### 2.0.2 H i observations H i mass estimates are obtained from the H i-MaNGA Data Release 3 (Stark et al., 2021). H i-MaNGA is a follow-up survey of MaNGA galaxies conducted on the Robert C. Byrd Green Bank Telescope (GBT) in Green Bank, West Virginia. The third data release of H i-MaNGA also includes a cross-match between the Arecibo Legacy Fast ALFA (ALFALFA) survey, performed at the Arecibo Observatory in Arecibo, Puerto Rico, and the MaNGA DR17 targets. #### 2.0.3 CO observations H\({}_{2}\) masses are inferred from measurements of the CO(1-0) line emission from two surveys: the MaNGA ARO Survey of Targets (MASCOT) first data release (Wylezalek et al., 2022) and the xCOLD GASS survey (Saintonge et al., 2017). The MASCOT survey performs observations of MaNGA galaxies at the Arizona Radio Observatory. The xCOLD GASS survey conducted CO(1-0) observations of SDSS galaxies on the IRAM 30-meter telescope in Spain. ### Color-magnitude classification As shown in Douglass & Demina (2022), a galaxy's ratio of total to stellar mass depends on the galaxy's evolutionary stage. We therefore separate the galaxies into three populations -- blue cloud, green valley, and red sequence -- in the color-magnitude diagram (CMD) to better understand these relationships. Galaxies in the blue cloud are typically fainter and bluer, while galaxies in the red sequence are brighter and redder. It is believed that galaxies transitioning between the blue cloud and red sequence occupy the green valley (Martin et al., 2007). We use the same method to classify the galaxies into one of these three populations as used in Douglass & Demina (2022), where the classification is based on the inverse concentration index, \(c_{\rm inv}\), the color, \(u-r\), and the color gradient, \(\Delta(g-i)\). As shown in Fig. 1, galaxies that are part of the red sequence are those that generally fall above and to the right of the depicted boundary originally defined by Park & Choi (2005) (normal early-type galaxies), while galaxies that are part of the blue cloud are those that generally fall below and to the left of the boundary (late-type galaxies). Galaxies that are part of the green valley are either those above the boundary but with \(u-r<2\) (blue early-type galaxies) or a high \(c_{\rm inv}\), or those below the boundary with \(\theta<20^{\circ}\), where \[\theta=\tan^{-1}\left(\frac{-\Delta(g-i)+0.3}{(u-r)-1}\right). \tag{1}\] See Douglass & Demina (2022) for a more detailed description of the CMD classification.
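The classification rules above can be condensed into a short sketch. The Park & Choi (2005) early/late-type boundary is not reproduced in the text, so the `above_boundary` flag and the `c_inv_cut` threshold below are hypothetical placeholders; only the \(\theta\) rule of Eqn. (1) is taken directly from the paper.

```python
# Sketch of the CMD classification described above. `above_boundary` and
# `c_inv_cut` stand in for the Park & Choi (2005) boundary and the
# "high c_inv" threshold, neither of which is reproduced here; the theta
# rule follows Eqn (1).
import numpy as np

def theta_deg(u_r, delta_gi):
    """Angle of Eqn (1), in degrees."""
    return np.degrees(np.arctan((-delta_gi + 0.3) / (u_r - 1.0)))

def classify_cmd(u_r, delta_gi, c_inv, above_boundary, c_inv_cut=0.43):
    if above_boundary:
        # Blue early types or those with a high inverse concentration
        # index are assigned to the green valley.
        if u_r < 2.0 or c_inv > c_inv_cut:
            return "green valley"
        return "red sequence"
    # Below the boundary (late-type side).
    if theta_deg(u_r, delta_gi) < 20.0:
        return "green valley"
    return "blue cloud"
```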
Figure 1: \(\Delta(g-i)\) color gradient versus \(u-r\) color for our sample of SDSS MaNGA galaxies with stellar mass estimates, marked by their color-magnitude diagram classification: open red circles for the red sequence, green stars for the green valley, and blue crosses for the blue cloud. The black boundary is the separation between early- and late-type galaxies as defined by Choi et al. (2010). In this study, we require our objects to be dominated by rotational motion (described in Section 3 below). As a result, we expect few galaxies in our sample to be in the red sequence. After visual inspection, we find that the red sequence galaxies that are in our final sample appear to be either red disk galaxies with little to no star formation (likely lenticulars) or elliptical galaxies that are still supported by rotation. ## 3 Modeling of the Rotation Curves and Stellar Mass Distribution ### Modeling the velocity map We estimate a galaxy's total dynamical mass using its H\(\alpha\) velocity map obtained from the SDSS MaNGA DAP. Only spaxels with a data quality bit of 0 and S/N \(\geq\) 5 are included in the analysis. We also require that all galaxies have a smooth gradient with a maximum "smoothness score" of 2.0 as described in Douglass et al. (2019). We restrict the analysis to galaxies with a T-Type \(>\) 0 (late-type galaxies) as classified by the MaNGA Morphology Deep Learning DR17 Value Added Catalog (Dominguez Sanchez et al., 2022). Similar to both Douglass et al. (2019) and Douglass & Demina (2022), the velocity map of each galaxy is fit to the rotation curve parameterization defined in Barrera-Ballesteros et al. (2018), \[V(r)=\frac{V_{\rm max}r}{(R_{\rm turn}^{\alpha}+r^{\alpha})^{1/\alpha}} \tag{2}\] where \(V(r)\) is the rotational velocity at a distance \(r\) from the center of the galaxy. The free parameters are \(V_{\rm max}\), the magnitude of the velocity at which the rotation curve plateaus, \(R_{\rm turn}\), the radius at which the rotation curve changes from increasing to flat, and \(\alpha\), which describes the sharpness of the curve. The extent of the MaNGA H\(\alpha\) velocity maps and the radius to which we can measure rotational velocities is limited by the visible extent of the galaxy. Rotation curves are only fit out to the maximum radius, \(R_{\rm max}\), covered by the integral field unit (IFU), the extent of which is shown for an example galaxy in Fig. 2. Each galaxy's systemic velocity, kinematic center, inclination angle, and position angle are also free parameters in this fit, resulting in a total of eight free parameters. When determining the best-fit model for each galaxy, we make use of \(\chi^{2}=\Sigma((\rm data-model)/uncertainty)^{2}\) and \(\chi^{2}_{\nu}\), where we normalize \(\chi^{2}\) by the difference between the number of unmasked spaxels in the velocity map and the number of free parameters in the fit. We define four best-fit models as follows: **Model 1:**: The model with the smallest \(\chi^{2}\). **Model 2:**: The model with the smallest residual, \(\Sigma(\rm data-model)^{2}\). **Model 3:**: To help remove foreground artifacts from the analysis, we define upper and lower velocity bounds by binning all unmasked spaxels with a bin width of 10 km/s. The velocity bounds are defined as the nearest empty bin on either side of the bin with the most spaxels, as shown in the histogram on the bottom left of Fig. 3. Spaxels with values outside of this velocity range are masked; see the top center of Fig. 3 for an example of the resulting mask. 
We then select the model with the smallest \(\chi^{2}\). **Model 4:**: To help remove spaxels that are potentially contaminated by emission from AGN, identified as bins with an unusually high velocity dispersion, we define an upper limit on the velocity dispersion by binning the velocity dispersion of the unmasked spaxels with a bin width of 10 km/s. The velocity dispersion upper bound is defined as the nearest empty bin to the lowest velocity dispersion bin containing spaxels, as shown in the histogram on the bottom right of Fig. 3. Spaxels with velocity dispersions above this upper limit are masked; see the top right plot in Fig. 3 for an example of the resulting mask. We then select the model with the smallest \(\chi^{2}\). Figure 2: IFU (magenta hexagon) overlaid on RGB composite image of MaNGA galaxy 8997–9102 (made with the SDSS Marvin python package by Cherinka et al., 2019). The IFU does not cover the entire visible extent of the galaxy, as is common for MaNGA observations. Figure 3: Masks for the different velocity map models for example galaxy 10001–12701. The mask for models 1 and 2 is shown on the top left, the mask for model 3 is shown in the center, and the mask for model 4 is shown on the top right. The histogram on the bottom left shows the distribution of unmasked spaxel velocities used in models 1 and 2. Model 3 masks spaxels outside of the vertical dashed lines. The histogram on the bottom right shows the distribution of unmasked spaxel velocity dispersions used in models 1 and 2. Model 4 masks spaxels to the right of the vertical dashed line. Note that masking the outlying spaxels in the velocity distribution shifts the velocity gradient (indicated by the colormap) to that expected for a rotating disk galaxy. Out of these four models, we select the one with the lowest \(\chi^{2}_{\nu}\) that satisfies the requirement \(\alpha<100\) as the best-fit model for each galaxy. An example H\(\alpha\) velocity map and best-fit model map are shown in Fig. 4. Figure 4: Example H\(\alpha\) velocity map from the MaNGA DAP (first column), our best-fit model to the velocity map (second column), the residual between the velocity map and our best-fit model (third column), and the deprojected rotation curve for the galaxy (fourth column). ### Modeling the stellar mass We estimate each galaxy's stellar mass by fitting a rotation curve due to the stellar component of the galaxy using the stellar mass density maps available through the Pipe3D MaNGA analysis pipeline (Sanchez et al., 2016, 2018). An example stellar mass density map is shown on the top in Fig. 5. Using the best-fit model H\(\alpha\) velocity map values for the galaxy's kinematic center, inclination angle, and position angle described above, we define concentric ellipses that correspond to different orbital radii in the galaxy, with the radius of each ellipse increasing by 2 spaxels. We compute the stellar mass as a discretized function of radius, \(M_{*}(r)\), by summing the stellar mass density per spaxel over all the spaxels within each ellipse. We assume that the stellar mass is the primary component of the galaxy's disk and model the stellar mass as the sum of a central bulge and exponential disk.
The rotational velocity due to the bulge and disk is summed in quadrature to get the rotational velocity due to the stellar mass: \[V_{*}(r)^{2}=V_{b}(r)^{2}+V_{d}(r)^{2} \tag{3}\] where \(V_{*}(r)\) is the rotational velocity due to the stellar mass, \(V_{b}(r)\) is the rotational velocity due to the bulge component, and \(V_{d}(r)\) is the rotational velocity due to the disk component. The bulge is modeled as an exponential sphere (Sofue, 2017) with rotational velocity \[V_{b}(r)^{2}=\frac{GM_{0}}{R_{b}}\,F\left(\frac{r}{R_{b}}\right) \tag{4}\] where \(F(x)=1-e^{-x}(1+x+0.5x^{2})\) and \(M_{0}=8\,\pi\,R_{b}^{3}\,\rho_{b}\). The free parameters in this fit are the scale radius of the bulge, \(R_{b}\), and the central density of the bulge, \(\rho_{b}\). The rotational velocity due to the exponential disk (a thin disk without perturbation; Freeman, 1970) is \[V_{d}(r)^{2}=4\pi G\Sigma_{d}R_{d}y^{2}[I_{0}(y)K_{0}(y)-I_{1}(y)K_{1}(y)] \tag{5}\] where \(\Sigma_{d}\) is the central surface mass density of the disk, \(R_{d}\) is the scale radius of the disk, \(y=r/2R_{d}\), and \(I_{i}\) and \(K_{i}\) are the modified Bessel functions (Sofue, 2013). The free parameters in this fit are \(\Sigma_{d}\) and \(R_{d}\). ## 4 Estimating the Mass Components ### Total mass, \(M_{\rm tot}\) We calculate the galaxy's total dynamical mass within the 90% elliptical Petrosian radius, \(R_{90}\), using the rotational velocity at this radius as determined from the best-fit rotation curve described in Sec. 3.1. We can calculate the mass of a galaxy within some radius \(r\) from the center of the galaxy under the assumption that the galaxy's rotational motion is dominated by Newtonian orbital mechanics. Assuming axial symmetry, the velocity of a particle at distance \(r\) from the center of the galaxy is a function of the mass within that radius, \(M(r)\). Assuming that the orbital motion is circular in spiral galaxies, equating the centripetal acceleration of an orbiting particle to the gravitational acceleration gives \[M(r)=\frac{V(r)^{2}r}{G} \tag{6}\] Here, \(V(r)\) is the rotational velocity at a distance \(r\) from the center of the galaxy, and \(G=6.67408\times 10^{-11}\) m\({}^{3}\) kg\({}^{-1}\) s\({}^{-2}\) is the Newtonian gravitational constant. In order to study the same region of each galaxy, we estimate the mass within \(R_{90}\), \(M(R_{90})=M_{\rm tot}\), by calculating \(V(R_{90})\) from Eqn. 2. When \(R_{\rm max}<R_{90}\), we extrapolate our parameterization of the fitted rotation curve, Eqn. 2, out to \(R_{90}\). On average, \(R_{90}\) exceeds \(R_{\rm max}\) by about 10%. Fig. 6 shows a subset of our rotation curves normalized by \(R_{\rm max}\), where the curves extrapolated out to \(R_{90}\) for the galaxies with \(R_{90}>R_{\rm max}\) are shown as dashed extensions beyond \(r/R_{\rm max}=1\). The stellar mass is also evaluated within \(R_{90}\), but only global measurements are available for the remaining mass components. While the majority of the stellar mass is encompassed by \(R_{90}\), gas and dark matter profiles are known to extend much further than that (e.g., Ostriker et al., 1974; Begeman, 1989; Kamphuis & Briggs, 1992; Pohlen et al., 2010). Figure 5: Example stellar mass density map extracted from MaNGA Pipe3D (top) and our best fit to the rotation curve extracted from this map (bottom).
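The two steps above, evaluating the fitted rotation curve (Eqn. 2) at \(R_{90}\) and converting the velocity into an enclosed mass (Eqn. 6), reduce to a few lines of code. This is a minimal sketch: the gravitational constant is expressed in astronomer-friendly units, and the best-fit parameter values at the end are placeholders, not results from the paper.

```python
# Sketch: evaluate the fitted rotation curve (Eqn 2) at R_90 and convert
# the velocity into an enclosed dynamical mass via M(r) = V(r)^2 r / G
# (Eqn 6). Parameter values below are placeholders.
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def v_rot(r, v_max, r_turn, alpha):
    """Rotation-curve parameterization of Eqn (2)
    (Barrera-Ballesteros et al., 2018); velocities in km/s."""
    return v_max * r / (r_turn**alpha + r**alpha) ** (1.0 / alpha)

def enclosed_mass(r_kpc, v_max, r_turn, alpha):
    """Newtonian mass within r for circular orbits (Eqn 6), in M_sun.
    If r_kpc > R_max, this is the extrapolation described in the text."""
    return v_rot(r_kpc, v_max, r_turn, alpha) ** 2 * r_kpc / G

# Placeholder best-fit values for a single galaxy:
m_tot = enclosed_mass(r_kpc=10.0, v_max=180.0, r_turn=2.5, alpha=2.0)
```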
Extrapolating the rotation curves to higher radii would significantly increase the uncertainty on the rotational velocity, so we focus our study on the mass content within the visible extent of the galaxy. To calculate the total dynamical mass within \(R_{90}\), we require:
* \(\alpha\leq 99\)
* Velocity maps with less than 95% of their spaxels masked
* 10 km/s \(<V(R_{90})<1000\) km/s
* \(\sigma_{V_{\rm max}}/V_{\rm max}\leq 2\), where \(\sigma_{V_{\rm max}}\) is the uncertainty in the best-fit value of \(V_{\rm max}\).
After applying these quality cuts, our final sample consists of 5522 galaxies with best-fit rotation curves. ### Stellar mass, \(M_{*}\) To estimate the total stellar mass of each galaxy, \(M_{*}\), within \(R_{90}\), we use the parameters from the best-fit disk and bulge rotation curve as described in Sec. 3.2. The total mass of the bulge and disk at some radius \(r\) is \[M_{*}(r)=M_{b}(r)+M_{d}(r) \tag{7}\] where the mass of the bulge component within some radius \(r\) is \[M_{b}(r)=M_{0}\,F\left(\frac{r}{R_{b}}\right) \tag{8}\] and the mass of the disk component is \[M_{d}(r)=2\pi\Sigma_{d}\int_{0}^{r}re^{-r/R_{d}}\,dr \tag{9}\] \[=2\pi\Sigma_{d}R_{d}\left[R_{d}-e^{-r/R_{d}}(r+R_{d})\right] \tag{10}\] So that we study the stellar mass within the same region of each galaxy as the total mass, we evaluate Eqn. 7 at \(R_{90}\). ### Atomic hydrogen, H i We use the H i mass from the H i-MaNGA DR3 survey to quantify the neutral atomic gas content within each galaxy. As listed in Table 3, H i mass estimates are available for 2588 galaxies in our sample. ### Molecular hydrogen, H\({}_{2}\) Molecular hydrogen, H\({}_{2}\), is a low-mass, symmetric molecule without a dipole moment and therefore does not produce a significant amount of radiation, making it notoriously difficult to detect. Hence, to evaluate the molecular hydrogen content in a galaxy, it is customary to parameterize it with respect to some other observable. The most commonly used proxy is another molecular gas, particularly CO. We obtain mass estimates of H\({}_{2}\) parameterized by the CO(1-0) line emission from the MASCOT and xCOLD GASS surveys for 108 galaxies that also have total mass, stellar mass, and H i mass estimates (as described above). We have CO observations for only a small fraction of our galaxies, so we use CO observations of SDSS DR7 galaxies to derive a parameterization of the H\({}_{2}\) mass as a function of galaxy luminosity in the \(r\)-band, \(M_{r}\). As shown in Fig. 7, we find the coefficients that describe the linear relationship between \(\log(M_{\rm H_{2}}/M_{\odot})\) and \(M_{r}\): \[\log(M_{\rm H_{2}}/M_{\odot})=a\,M_{r}+b \tag{11}\] where \(M_{\rm H_{2}}\) is the mass of molecular hydrogen. The values of \(a\) and \(b\) are listed in Table 1 and depend on the color-magnitude classification. We use this parameterization to estimate \(M_{\rm H_{2}}\) when CO observations are not available for galaxies in our sample. Figure 6: Rotation curves of the 108 MaNGA galaxies with H i and H\({}_{2}\) masses from the H\(\alpha\) velocity field (top) and the stellar mass component (bottom). The solid lines extend to the maximum observed distance for each galaxy, \(R_{\rm max}\), and the dashed lines show the extrapolation of the model to \(R_{90}\). The colors correspond to the different MaNGA samples.
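The parameterization of Eqn. (11) is straightforward to apply in practice. The following sketch uses the coefficients listed in Table 1 below; the example value at the end is only an illustration.

```python
# Sketch of Eqn (11): log10(M_H2 / M_sun) = a * M_r + b, using the
# coefficients of Table 1. The example at the end is illustrative only.
TABLE1_COEFFS = {
    "blue cloud": (-0.40, 1.12),
    "green valley/red sequence": (-0.27, 3.62),
}

def m_h2_from_mr(m_r, cmd_class):
    """Estimate the molecular hydrogen mass (in M_sun) from M_r."""
    a, b = TABLE1_COEFFS[cmd_class]
    return 10.0 ** (a * m_r + b)

# A blue-cloud galaxy with M_r = -20 gives log10(M_H2) = 9.12:
print(m_h2_from_mr(-20.0, "blue cloud"))  # ~1.3e9 M_sun
```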
### Total gas mass, \(M_{\rm gas}\) We define the total gas mass, \(M_{\rm gas}\), as the sum of the H i mass, H\({}_{2}\) mass, and helium mass: \[M_{\rm gas}=M_{\rm H}+M_{\rm H_{2}}+M_{\rm He} \tag{12}\] We approximate the helium mass, \(M_{\rm He}\), by assuming a mass fraction of 25%: \[M_{\rm He}=\left(\frac{0.25}{1-0.25}\right)(M_{\rm H}+M_{\rm H_{2}}) \tag{13}\] This is the amount of helium measured in the intergalactic medium and agrees well with the prediction from Big Bang nucleosynthesis (Cooke and Fumagalli, 2018). ### Heavy metals and dust mass, \(M_{\rm dust}\) The heavy metal mass and dust mass, \(M_{\rm dust}\), is approximated from a galaxy's gas-phase metallicity. We compute the gas-phase metallicity, \(12+\log\left(\frac{\rm O}{\rm H}\right)\), following the R-calibration method described in Pilyugin and Grebel (2016) using the flux of the [O ii] \(\lambda\lambda\)3727,3729 doublet, [N ii] \(\lambda\)6548, [N ii] \(\lambda\)6584, [O iii] \(\lambda\)4959, and [O iii] \(\lambda\)5007 emission lines. The fluxes are extinction-corrected using the Balmer decrement, assuming a flux ratio H\(\alpha\)/H\(\beta\) = 2.86. We compute the metallicity as \[12+\log\left(\frac{\rm O}{\rm H}\right)=a_{1}+a_{2}\log\left( \frac{R_{3}}{R_{2}}\right)+a_{3}\log N_{2}\\ +\left(a_{4}+a_{5}\log\left(\frac{R_{3}}{R_{2}}\right)+a_{6}\log N _{2}\right)\\ \times\log R_{2} \tag{14}\] where \[R_{2}=\frac{\rm[O~{}\rm II]\lambda\lambda 3727,3729}{\rm H\beta} \tag{15}\] \[N_{2}=\frac{\rm[N~{}\rm II]\lambda 6548+[N~{}\rm II]\lambda 6584}{ \rm H\beta}\] (16) \[R_{3}=\frac{\rm[O~{}\rm III]\lambda 4959+[O~{}\rm III]\lambda 5007}{ \rm H\beta} \tag{17}\] are ratios of the specified emission line fluxes. The values of the coefficients in Eqn. 14 depend on the value of \(\log N_{2}\) and are listed in Table 2. We assume a constant dust-to-metals ratio corresponding to the metallicity calibration, \(M_{\rm dZ}\)/ \(M_{\rm dust}\) = 0.206 for galaxies with a gas-phase metallicity greater than 8.2 (De Vis et al., 2019). \(M_{\rm dZ}\) is the dust mass of each galaxy, including metals locked up in dust. The total mass of heavy metals and dust is then \[M_{\rm dust}=1.259\,f_{Z}\,M_{g} \tag{18}\] where \(f_{Z}\) is the mass fraction of metals, \[f_{Z}=27.36\left(\frac{\rm O}{\rm H}\right) \tag{19}\] and \(M_{g}\) is the gas mass of the galaxy as defined in De Vis et al. (2019): \[M_{g}=\xi M_{\rm H}\left(1+\frac{M_{\rm H_{2}}}{M_{\rm H}}\right) \tag{20}\] \begin{table} \begin{tabular}{l|c c} \multicolumn{3}{c}{CMD classification} & \(a\) & \(b\) \\ \hline Blue cloud & \(-0.40\pm 0.02\) & \(1.12\pm 0.36\) \\ Green valley \& red sequence & \(-0.27\pm 0.02\) & \(3.62\pm 0.37\) \\ \multicolumn{3}{c}{} \\ \end{tabular} Note. – Coefficients for \(\log(M_{\rm H_{2}}/M_{\odot})\) parameterized as a function of \(M_{r}\) as shown in Eqn. 11. \end{table} Table 1: \(M_{\rm H_{2}}\) mass parameterization coefficients Figure 7: Top: The dependence of \(\log(M_{\rm H_{2}})\) on \(M_{r}\) for 531 galaxies in SDSS DR7 with H\({}_{2}\) masses available through CO surveys. The blue crosses represent blue cloud galaxies, red crosses are green valley and red sequence galaxies. The points are the mean of the \(\log(M_{\rm H_{2}})\) distribution in each bin in \(M_{r}\). The lines are linear fits to the points: \(\log(M_{\rm H_{2}})\) = \(aM_{r}+b\), with the coefficients shown in Table 1. 
Bottom: Resolution on \(\log(M_{\rm H_{2}})\): the difference between the H\({}_{2}\) mass evaluated based on CO mass and the parameterization from the top plot. The red line is a fit to a Gaussian with \(\sigma=0.27\). where \[\xi=\left(1-\left(0.2485+1.41f_{Z}\right)-f_{Z}\right)^{-1} \tag{21}\] ### The total visible mass, \(M_{\rm vis}\) We define the total visible mass of a galaxy, \(M_{\rm vis}\), as the sum of the stellar mass, \(M_{*}\), the gas mass, \(M_{\rm gas}\) (Eqn. 12), and the heavy metals and dust mass, \(M_{\rm dust}\) (Eqn. 18): \[M_{\rm vis}=M_{*}+M_{\rm H{\textsc{i}}}+M_{\rm H_{2}}+M_{\rm He}+M_{\rm dust} \tag{22}\] A summary of the relative contributions of each individual mass component to the total visible mass for SDSS DR7 galaxies, as a function of the \(r\)-band luminosity, \(M_{r}\), is shown in Fig. 8. For galaxies with \(M_{r}>-18\), gas is the dominant component of the visible mass, whereas for galaxies with \(M_{r}<-19\), the stellar mass dominates the visible mass. Heavy metals and dust contribute on the order of 1% regardless of magnitude. ## 5 Statistically modeling the rotational velocity To test our null hypothesis, i.e. galaxies do not have a dark matter halo, hence the observed rotational velocity at \(R_{90}\) is due entirely to visible mass, we construct a statistical model to predict the expected rotational velocity of a galaxy given its total visible mass. We choose the ratio of the expected to observed velocity evaluated at \(R_{90}\), \(V_{\rm exp}/V_{\rm obs}\), as our observable. The expected velocity is evaluated based on the visible mass calculated using the methods described below. A value of this observable close to unity signals consistency of the data with the null hypothesis. The resolution of this observable is determined by the measured uncertainty of each visible mass component and the uncertainty of the fitted rotation curve to the velocity map, from which we determine the velocity at \(R_{90}\). We expect the velocity to be normally distributed around its true value with the uncertainty returned by the fit. To evaluate the effect of these uncertainties, we implement the following procedure. First, for each galaxy, we determine the mass of each component of the visible mass as described in Sec. 4.2-4.6. Since \(M_{\rm H_{2}}\) is available from CO observations for only a small number of galaxies, we also use the parameterization as a function of \(M_{r}\) to estimate \(M_{\rm H_{2}}\) described in Sec. 4.4. We estimate the total mass, \(M_{\rm tot}\), from the best-fit rotation curve as described in Sec. 4.1. We then compute the ratio of visible to total mass, \[F_{\rm vis}=\frac{M_{\rm vis}}{M_{\rm tot}} \tag{23}\] for each galaxy. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline \(\log N_{2}\) & \(a_{1}\) & \(a_{2}\) & \(a_{3}\) & \(a_{4}\) & \(a_{5}\) & \(a_{6}\) \\ \hline \(\geq-0.6\) & \(8.589\) & \(0.022\) & \(0.399\) & \(0.137\) & \(0.164\) & \(0.589\) \\ \(<-0.6\) & \(7.932\) & \(0.944\) & \(0.695\) & \(0.970\) & \(-0.291\) & \(-0.019\) \\ \hline \end{tabular} Note. – Coefficients for the gas-phase metallicity calculation shown in Eqn. 14, from Pilyugin & Grebel (2016). \end{table} Table 2: Gas-phase metallicity coefficients Figure 8: The relative contributions of each mass component to the total visible mass of SDSS DR7 galaxies as a function of \(M_{r}\). For simplicity, we only show \(M_{\rm H_{2}}\) parameterized as a function of \(M_{r}\) here. Figure 9: Illustration of the statistical model. 
Red horizontal arrows denote Gaussian smearing with the corresponding \(\sigma\). To statistically determine the rotational velocity, we smear each mass component according to its expected resolution\({}^{1}\). The expected velocity, \(V_{\rm exp}\), is then evaluated based on the sum of these smeared mass components and is itself smeared according to the velocity uncertainty from the fit to the rotation curve. This smearing procedure is repeated 1000 times for each galaxy. A schematic of this statistical model is illustrated in Fig. 9. From this procedure, we find the expected fraction of instances in which the observed rotational velocity is less than the rotational velocity expected from the visible mass components alone, \(F(V_{\rm obs}<V_{\rm exp})\), where \(V_{\rm obs}\) is the rotational velocity measured at \(R_{90}\) from the best-fit rotation curve. Footnote 1: Since we observe a Gaussian distribution in \(\log M\) of the corresponding component, we randomly smear \(\log M\) according to a Gaussian distribution and then invert to find the corresponding mass.
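A minimal sketch of this smearing procedure is given below, assuming Gaussian uncertainties in \(\log M\) for each component (footnote 1) and a Gaussian velocity uncertainty; all of the \(\sigma\) values and masses in the example call are placeholders rather than values from the paper.

```python
# Sketch of the smearing procedure of Fig. 9 for a single galaxy: each
# visible-mass component is smeared in log10(M), the components are summed,
# and the expected velocity is smeared by the rotation-curve fit
# uncertainty before being compared with the observed velocity.
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / M_sun
rng = np.random.default_rng(0)

def f_obs_less_exp(log_masses, sigma_logm, r90_kpc, v_obs, sigma_v, n=1000):
    """Fraction of smeared instances with V_obs < V_exp (Sec. 5)."""
    n_less = 0
    for _ in range(n):
        # Smear each component in log M, then sum the linear masses.
        m_vis = sum(10.0 ** rng.normal(lm, s)
                    for lm, s in zip(log_masses, sigma_logm))
        # Expected velocity from the visible mass (Eqn 6 inverted),
        # smeared by the velocity uncertainty from the fit.
        v_exp = rng.normal(np.sqrt(G * m_vis / r90_kpc), sigma_v)
        n_less += v_obs < v_exp
    return n_less / n

# Placeholder example: log10 masses of (stars, HI, H2, He, dust).
frac = f_obs_less_exp(log_masses=[10.3, 9.6, 9.1, 9.0, 7.8],
                      sigma_logm=[0.10, 0.10, 0.27, 0.10, 0.20],
                      r90_kpc=10.0, v_obs=180.0, sigma_v=12.0)
```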
## 6 Studying the ratio of visible to total mass In Fig. 10, we present the PDFs of the ratio of expected to observed velocities at \(R_{90}\) derived using the statistical model, together with the distribution observed in the data. The integrals of these distributions above 1 correspond to the fractions of galaxies for which the expected velocity exceeds the observed one, \(F(V_{\rm obs}<V_{\rm exp})\), listed in Table 3. In Table 3 we also present the mean and RMS of \(F_{\rm vis}\) (the ratio of visible to total mass, as described in Sec. 5). We break down the sample into a number of different subsets: by CMD class into blue cloud, green valley, and red sequence; and by MaNGA targeting sample (to check for possible systematic bias). Due to the limited statistics, we combine galaxies in the green valley and red sequence. For each sample of galaxies, we consider three different mass ratios: \(M_{*}/M_{\rm tot}\) (labeled "Only stars" in Table 3), \(M_{\rm vis}/M_{\rm tot}\) with \(M_{\rm H_{2}}\) inferred from \(M_{r}\), and \(M_{\rm vis}/M_{\rm tot}\) with \(M_{\rm H_{2}}\) measured with CO observations. Across all galaxy samples and subsamples, the null hypothesis for the ratio \(M_{*}/M_{\rm tot}\) is clearly inconsistent with the data. The preferred value of \(F_{\rm vis}\) is 40-50% for all of the galaxy samples. When we include all visible mass, with \(M_{\rm H_{2}}\) parameterized by \(M_{r}\), \(F_{\rm vis}\) increases to 50-55%, reducing the variation between the different categories. The effect is more dramatic for galaxies in the blue cloud, where gas contributes a significant amount to the mass budget. Finally, when we use \(M_{\rm H_{2}}\) estimated from CO observations, which is a more reliable method than our parameterization with \(M_{r}\), we again see \(F_{\rm vis}\) increase, to \(\sim\)65%. Figure 10: The PDF of the ratio of expected to observed velocities at \(R_{90}\). The black points are the data with the expected velocity evaluated from the visible mass without smearing. The colored histograms are the PDF evaluated based on the statistical model for a sample of randomly selected galaxies. The red histogram is the normalized sum of the individual PDFs. The vertical black line at 1 corresponds to the observed and expected velocities being equal. The integral of the PDFs to the right of this line corresponds to the observed (black points) and modeled (red histogram) \(F(V_{\rm obs}<V_{\rm exp})\) listed in Table 3. Top row: Only stellar mass contributes to the visible mass. Second row: Gas mass is added to stellar mass, with \(M_{\rm H_{2}}\) determined from \(M_{r}\). Third row: Same as the second row, but \(M_{\rm H_{2}}\) is determined from CO observations. As we go from just \(M_{*}\) to \(M_{\rm vis}\) with \(M_{\rm H_{2}}\) estimated from CO, we see a continual increase in the fraction of galaxies with \(V_{\rm obs}<V_{\rm exp}\). As shown by the values in Table 3, we find no statistically significant difference between the three MaNGA targeting samples. The remaining component of the baryonic mass that is missing from our analysis is ionized hydrogen, H ii. We expect the H ii mass to be on the same order of magnitude as \(M_{\rm He}\), with star-forming galaxies containing more H ii, but we do not anticipate that the inclusion of H ii would significantly change our results. Finally, we show the dependence of \(F_{\rm vis}\) on luminosity for galaxies in the blue cloud, green valley, and red sequence in Fig. 11. When only stellar mass is included in the visible mass estimation, the dependence of \(F_{\rm vis}\) on \(M_{r}\) is rather flat for green valley and red sequence galaxies, while for the blue cloud galaxies there is a notable upward trend, with brighter galaxies having a larger ratio of \(M_{*}/M_{\rm tot}\). This matches results from previous studies of the stellar-halo mass relation, including Persic et al. (1996); Strigari et al. (2008); Torres-Flores et al. (2011); Karukes and Salucci (2017); Behroozi et al. (2019); Di Paolo et al. (2019); Douglass et al. (2019); Douglass and Demina (2022), and from the simulations by Moster et al. (2010). We find, though, that once gas is added to the visible mass, the dependence of \(F_{\rm vis}\) on \(M_{r}\) becomes rather flat for blue cloud galaxies as well. Though lower in statistics, the distribution for galaxies with \(M_{\rm H_{2}}\) evaluated based on CO confirms this behavior. Based on this observation, we conclude that the dark matter content is independent of galaxy luminosity or evolutionary stage. This conclusion contradicts a widespread belief that dwarf galaxies are rich in dark matter (Pryor and Kormendy, 1990; Binney and Tremaine, 2008). Yet, as shown in Fig. 8, the visible mass in such galaxies is dominated by gas, which should be properly accounted for when the dark matter content is evaluated. Figure 11: The dependence of various mass fractions on luminosity for blue cloud galaxies (top) and green valley and red sequence galaxies (bottom). The purple circles compare just the stellar mass to total mass, the cyan triangles compare the visible mass (with \(M_{\rm H_{2}}\) estimated from \(M_{r}\)) to total mass, and the black triangles compare the visible mass (with \(M_{\rm H_{2}}\) inferred from CO observations) to total mass. The black line at 1 is where the visible mass is equal to the total mass. ### Comparison to previous results As shown in Fig. 11, we find that, once we account for all of the visible mass components of a galaxy, the ratio of \(M_{\rm vis}/M_{\rm tot}\) does not depend on a galaxy's luminosity or evolutionary stage. This is contrary to previous work by Torres-Flores et al. (2011), who consider the relationship between \(M_{\rm vis}\), defined as stellar mass and H i mass, and total mass. Torres-Flores et al. (2011) find a correlation between the mass ratio and evolutionary stage, in that late-type low-mass spirals are dominated by dark matter in comparison to early-type high-mass spirals. This discrepancy could be attributed to our inclusion of both H\({}_{2}\) and helium mass, which contribute more to the visible mass of fainter galaxies. \(M_{\rm vis}/M_{\rm tot}\) is a version of the stellar-halo mass relation (SHMR), typically described as the ratio of stellar mass to halo mass. Models predict a SHMR that deviates from a flat distribution (e.g., Behroozi et al., 2019), with lower values for the faintest and brightest galaxies. These galaxies are thought to be dominated by dark matter. Similar to Douglass and Demina (2022), we find that on the faint end of blue cloud galaxies, the inclusion of gas significantly increases \(M_{\rm vis}/M_{\rm tot}\). We find that these galaxies are not dominated by dark matter, but instead have the same amount of dark matter relative to their visible mass as brighter galaxies, suggesting that the lower stellar mass in these galaxies is a result of visible matter that is still in the gas phase. A flat SHMR indicates that halo size is strongly related to galaxy size and plays an important role in galaxy evolution. ## 7 Conclusions We study the ratio of visible to total mass in spiral galaxies using rotation curves evaluated with the H\(\alpha\) velocity maps from SDSS MaNGA DR17. From the dependence of the rotational velocity on the distance from the center of a galaxy, we evaluate the velocity at the 90% elliptical Petrosian radius, \(R_{90}\), from the fitted rotation curves. We compute the visible mass of each galaxy, which includes the stellar mass evaluated at the same radius \(R_{90}\), the mass of atomic hydrogen (H i), molecular hydrogen (H\({}_{2}\)) evaluated based on the CO content, helium, and the heavy metals and dust mass. To increase the size of the sample under study, we also use a parameterization of \(M_{\rm H_{2}}\) as a function of the galaxy luminosity in the \(r\)-band, \(M_{r}\), derived using the SDSS DR7 galaxy sample. The helium mass is added assuming that its mass fraction in the total gas amount is 25%. We construct a statistical model that predicts the velocity based on the visible mass and compares it to the observed velocity. If the expected velocity is evaluated based solely on the stellar mass, the expected velocity exceeds the observed velocity in only 3-9% of the cases. After including all of the gas and dust mass, this fraction increases to 8-20%, depending on the sample selection and method for estimating \(M_{\rm H_{2}}\).
Hence, the null hypothesis (no dark matter) cannot be excluded at a confidence level better than 95% for the mass within the visible extent of disk galaxies. However, we do observe that, once all of the visible mass is accounted for, the ratio of visible to total mass is largely independent of a galaxy's luminosity or evolutionary stage. Future work will incorporate the mass of ionized hydrogen and extend the mass component analysis to elliptical galaxies. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{\(F(V_{\rm obs}<V_{\rm exp})\)} & \multicolumn{2}{c}{\(F_{\rm vis}\)} \\ \multicolumn{1}{c}{Sample} & Count & Observed & Modeled & Mean & RMS \\ \hline \multicolumn{6}{l}{**Only stars**} \\ All & 5522 & 5.4\% & 5.7\% & \(45\pm 0.5\%\) & \(35\pm 0.4\%\) \\ Blue cloud & 3025 & 3.3\% & 3.7\% & \(40\pm 0.6\%\) & \(30\pm 0.4\%\) \\ Green valley, red sequence & 1945 & 8.2\% & 8.6\% & \(52\pm 0.9\%\) & \(39\pm 0.7\%\) \\ MaNGA sample 1 & 2476 & 4.8\% & 5.3\% & \(45\pm 0.8\%\) & \(35\pm 0.6\%\) \\ MaNGA sample 2 & 2073 & 5.5\% & 5.5\% & \(44\pm 0.8\%\) & \(33\pm 0.6\%\) \\ MaNGA sample 3 & 943 & 6.5\% & 6.9\% & \(46\pm 1.3\%\) & \(37\pm 0.9\%\) \\ \hline \multicolumn{6}{l}{**Stars, dust, H i, H\({}_{2}\)(\(M_{r}\)), He**} \\ All & 2588 & 9.1\% & 9.8\% & \(52\pm 0.8\%\) & \(38\pm 0.6\%\) \\ Blue cloud & 1743 & 8.6\% & 9.4\% & \(51\pm 0.9\%\) & \(37\pm 0.7\%\) \\ Green valley, red sequence & 560 & 9.8\% & 10.3\% & \(55\pm 1.9\%\) & \(41\pm 1.3\%\) \\ MaNGA sample 1 & 1588 & 9.0\% & 9.7\% & \(52\pm 1.0\%\) & \(39\pm 0.7\%\) \\ MaNGA sample 2 & 560 & 7.7\% & 8.4\% & \(51\pm 1.5\%\) & \(34\pm 1.0\%\) \\ MaNGA sample 3 & 430 & 10.7\% & 11.2\% & \(53\pm 2.0\%\) & \(40\pm 1.4\%\) \\ \hline \multicolumn{6}{l}{**Stars, dust, H i, H\({}_{2}\)(CO), He**} \\ All & 108 & 18.5\% & 17.0\% & \(65\pm 5\%\) & \(49\pm 4\%\) \\ Blue cloud & 75 & 21\% & 19\% & \(66\pm 7\%\) & \(54\pm 5\%\) \\ Green valley, red sequence & 28 & 14\% & 16\% & \(64\pm 7\%\) & \(37\pm 5\%\) \\ \hline \end{tabular} Note. – The observed velocity, \(V_{\rm obs}\), is evaluated at \(R_{90}\) based on the fit to the rotation curve. The expected velocity, \(V_{\rm exp}\), is evaluated based on the visible mass. \(F(V_{\rm obs}<V_{\rm exp})\) is the fraction of galaxies for which \(V_{\rm obs}<V_{\rm exp}\). In the “Modeled” column, the visible mass and \(V_{\rm exp}\) are distributed according to the statistical model; in the “Observed” column, they are not smeared. \(F_{\rm vis}\) is the fraction of the visible mass, i.e. the ratio of the visible to total mass. Color classification and MaNGA sample information may not be available for all galaxies. \end{table} Table 3: Mass ratio statistics for MaNGA DR17 galaxies. ## Acknowledgements The authors would like to thank Bob Cousins for insightful remarks on the statistical model, and Eric Blackman and Alice Quillen for careful reading and thoughtful comments. N.R. acknowledges support from the Feinberg Research Award through the Department of Physics & Astronomy at the University of Rochester. R.D. acknowledges support from the Department of Energy under the grant DE-SC0008475.0. This project makes use of the MaNGA-Pipe3D data products. We thank the IA-UNAM MaNGA team for creating this catalogue, and the Conacyt Project CB-285080 for supporting them. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
2303.09355
Multi-step planning with learned effects of partial action executions
In this paper, we propose a novel affordance model, which combines object, action, and effect information in the latent space of a predictive neural network architecture that is built on Conditional Neural Processes. Our model allows us to make predictions of intermediate effects expected to be obtained during action executions and make multi-step plans that include partial actions. We first compared the prediction capability of our model using an existing interaction data set and showed that it outperforms a recurrent neural network-based model in predicting the effects of lever-up actions. Next, we showed that our model can generate accurate effect predictions for other actions, such as push and grasp actions. Our system was shown to generate successful multi-step plans to bring objects to desired positions using the traditional A* search algorithm. Furthermore, we realized a continuous planning method and showed that the proposed system generated more accurate and effective plans with sequences of partial action executions compared to plans that only consider full action executions using both planning algorithms.
Hakan Aktas, Utku Bozdogan, Emre Ugur
2023-03-16T14:38:26Z
http://arxiv.org/abs/2303.09355v2
# Multi-step planning with learned effects of (possibly partial) action executions ###### Abstract In this paper, we propose an affordance model, built on Conditional Neural Processes, that can predict effect trajectories given object, action, or effect information at any time. Affordances are represented in a latent representation that combines object, action, and effect channels. This model allows us to make predictions of the intermediate effects expected to be obtained from partial action executions, and this capability is used to make multi-step plans that include partial actions in order to achieve goals. We first show that our model can make accurate continuous effect predictions. We compare our model with a recent LSTM-based effect predictor using an existing dataset that includes lever-up actions. Next, we show that our model can generate accurate effect predictions for push and grasp actions. Finally, we show that our system can generate successful multi-step plans in order to bring objects to desired positions. Importantly, the proposed system generates more accurate and effective plans with partial action executions compared to plans that only consider full action executions. Although continuous effect prediction and multi-step planning based on learning affordances have been studied in the literature, continuous affordance and effect predictions have not been utilized in making accurate and fine-grained plans. Keywords: Affordances, effect prediction, object motion trajectory prediction, multi-step planning ## 1 Introduction From a robotics standpoint, learning to predict the effects of a robot's actions beforehand is a beneficial skill, since it can prevent potential failures and situations dangerous to the robot and those around it, and it enables planning to achieve certain goals. Planning for multi-step tasks in the real world is difficult, and a generalized approach to solving this problem is even more so. Due to this difficulty, previous works compromise on certain aspects, such as predefining effect categories or object categories, simplifying the task. Discretizing information in the continuous sensorimotor space for the purpose of high-level symbolic planning may result in inaccurate plans. In robotics, learned affordances have been used to choose objects for manipulation, to make discrete and continuous effect predictions given objects and actions, and to make plans to achieve goals [1, 2, 3, 4]. However, to the best of our knowledge, there is no single framework that can predict effects given objects and actions, predict the actions required to achieve desired goals, predict the movement trajectory of objects in response to parametric actions, and make plans composed of sequences of actions, including partial action executions, in order to achieve given goals. The contributions of our work are as follows: * A novel latent representation for affordances: Considering affordances as relations between objects, actions, and effects, our neural network based architecture forms a representation that encodes object, continuous action, and continuous effect information in a single latent layer. * Accurate continuous effect prediction: Our system can predict the motion trajectories of objects expected to be generated by parametric robot actions. We show high prediction performance compared to a strong baseline that also makes affordance-based continuous effect prediction.
* Multi-step planning with partial actions: We exploit the outcome prediction capability of our system for partial actions in order to form multi-step plans that may include full or partial action executions. ## 2 Related work #### Early affordance work on effect prediction In early works such as [5], objects were required to be recognized first, and object-specific affordances were learned from the robot's interaction experience with those objects; therefore, the learned affordances could not be generalized to different/novel objects. [6] studied learning of the traversability affordance, where the LIDAR input was directly processed without any object detection step. The 'direct perception' aspect of affordance perception was therefore realized in that work; however, only a single pre-defined affordance category was addressed. Effect categories and affordances were discovered by the robot in [7, 8] via unsupervised clustering techniques. However, unsupervised clustering results depend on the particular clustering method and the feature space, which might be composed of values with different metrics such as distance, frequency, angle, etc. In [9], this is taken one step further and hierarchical clustering is performed over channels for better effect category discovery. Both [8] and [9] also enable forward chaining for multi-step planning. [10] propagated affordance predictions by exploiting similarities among object properties, action parameters, and resulting effects using Maximum Margin Multi-Valued Regression (MMMVR), obtaining efficient affordance learning; however, affordance categories were also pre-defined in that study. In [11], complex affordance learning was bootstrapped by using pre-learned basic affordances as additional inputs of the complex affordance predictors or as cues in selecting the next objects to explore during learning. In [12], objects were represented via point clouds in a non-parametric manner to provide direct perception; grasping and a single pouring action were shown to generalize well to novel objects but resulted in single-step plans only. Bayesian Networks were used in [13, 14], enabling bidirectional predictive capabilities using the robot's own interaction experience, but clustering was performed on object features and effects in a predefined manner. In this paper, we also study acquiring bi-directional prediction capabilities. Different from previous work, we neither discover effect clusters nor aim only at predicting the final effect. Instead, our system aims to learn to predict the complete effect/action trajectory during action execution. #### Learning affordances for planning [15] learned affordances for symbolic planning, again by clustering effect categories and using object categories as collections of effect categories obtainable by the actions available to the robot. This enables the representation of nonlinear relations in planning; however, their discrete representation, while making planning easy, makes the estimations approximate, increasing long-horizon planning errors. This work was validated on a real-world setup with simple actions such as poke, grasp, and release, and also with a more complex stack action. Experience with the simple actions, followed by experience with the stack action, enabled the robot to generate a valid plan for stacking.
This framework was extended in [16] by enabling the robot to progressively update the previously learned concepts and rules in order to better deal with novel situations that appear during multi-step action executions. Similar planning capabilities were obtained using deep encoder-decoder neural networks with binary bottleneck layers [17] and with a multi-head attention mechanism [18], by directly processing pixel images instead of using hand-coded features. In [19], probabilistic planning with symbols using parameterized actions was applied to a real robot task, showing that continuous tasks can be performed with discrete planners using parameterized behaviours. Different from the previous work, where only final outcomes were used for planning, our system can exploit intermediate effects expected to be observed, for example, in the middle of the action execution.

#### Learning visual grasp affordances

Using RGB/RGBD images for predicting affordance classes or pixel-wise affordance labels for object manipulation has become popular in recent years [20, 21, 22, 23, 24, 25] and was shown to be a feasible approach for learning how to grasp different objects. Similarly, [26] is able to learn the affordances of objects such that objects/humans can be placed in correct poses in a scene and the correct object type can be chosen for a given scene. [27] uses point clouds to learn general geometric features from object interactions, enabling objects to be placed in a scene correctly. In [28], a generative model learned from interaction images was used to propose potential affordances. The aim was to learn a generalizable prior from interaction data and then utilize it to propose reasonable goals for unseen objects. These goals were then attempted by an offline RL policy, learned from interaction data and tuned online efficiently to adapt to unseen objects. However, continuous effect prediction and multi-step planning were not addressed in these studies.

#### Affordances for efficient learning

Affordances can also be used to reduce the search space in order to efficiently generate plans that solve long-horizon tasks. Recent approaches utilizing this idea extended the definition of affordances to represent not only single-step action-effect knowledge but also action feasibilities [29], or intents, which are similar to goals [30], in order to make affordances useful for multi-step plans. However, the feasibility concept of [29] accepts a grasp action that achieves nothing as afforded, and would also accept a grasp action as afforded regardless of whether it was an appropriate grasp for the object. While [30] overcame this with an intent representation, intents were specified a priori in their work, and although a sub-goal discovery direction was proposed for learning, it was not explored.

#### Learning continuous effect trajectories

Similar to our work, [31] and [32] learned to predict the full motion trajectory of an object, using the robot's own interaction experience and top-down images of the objects. These studies are important for their inclusion of the temporal aspect of effects. The utility of different features was also investigated, such as hand-crafted shape features, CNN-extracted features, or support point features extracted from a neural network.
The authors used these features and the interaction experience to train recurrent neural networks and were able to accurately predict trajectories resulting from a lever-up action in a real-world setting with multiple objects. A common shortcoming of the aforementioned methods that make predictions for actions or effects is related to the use of recurrent methods for long-horizon tasks. The use of recurrent networks such as LSTMs [33] or GRUs [34] is shown to be effective for short-term predictions. However, their recurrent structure causes any error in their prediction to accumulate over time, causing lower success rates in executing long-horizon tasks [35]. In this paper, we compare the effect prediction capabilities of recurrent neural network based systems and our conditional neural process based system.

## 3 Proposed Method

### General Architecture

We propose a framework that can learn (i) to predict the effect trajectory given an initial object image and an action execution trajectory, and (ii) to find the action execution trajectory required to achieve a desired effect on a given object. Our system is built on top of Conditional Neural Processes (CNPs) [36], which bring together the inference potential of Gaussian Processes and the training of neural networks with gradient descent by learning a prior from data. As we would like to predict both actions and effects from objects and (possibly missing) actions and effects, we propose a neural network structure that takes the object image as input together with action and/or effect values at different time points. In other words, given the initial object image, our network can be conditioned on action and/or effect values at different time points in order to generate action and effect values for all time points during action execution. The general structure of the proposed system is shown in Fig. 1. As shown, the system is composed of two encoders that process action and effect information at different time points, an image encoder that processes the depth image of the object, and averaging operations to acquire a common object-action-effect representation (r), which in turn can be used to predict action and effect values at other time points using the corresponding two decoders. In detail, inspired by Deep Modality Blending Networks [37], our system encodes information coming from different channels (object, action and effect) into a common latent space, using a weighted average of encoded modality representations, facilitating information sharing and also providing a regularization effect on the learned representations, similar to dropout [38]. Each channel is encoded separately by its own encoder, and the latent representations are subjected to a weighted averaging operation:

\[r_{k}=h_{a}(a_{t_{obs_{k}}})\cdot w_{1}+h_{e}(e_{t_{obs_{k}}})\cdot w_{2} \tag{1}\]

where \(w_{1}+w_{2}=1\), and

\[r=r_{1}\oplus r_{2}\oplus r_{3}\oplus...\oplus r_{k} \tag{2}\]

where the commutative operation \(\oplus\) used is averaging. Then, the depth image features and the target time step are concatenated to form the merged representation \(r_{mrg}=\{r,f(\gamma_{o}),t_{target}\}\). This merged representation is then decoded by separate decoders, each corresponding to a different channel.

Figure 1: An overview of the proposed model. Given the object image and action or effect information at any time point, our system can generate the effect trajectory and the action execution trajectory.
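To make the dataflow of Fig. 1 concrete, a minimal NumPy sketch of the encode-blend-average-decode pass is given below. All dimensions, the random stand-in weights, and the helper names (`mlp`, `forward`, `predict_effect`) are hypothetical; the trained system additionally has an action decoder and a CNN image encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight layers standing in for a trained encoder/decoder."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(layers, x):
    *hidden, (W_out, b_out) = layers
    for W, b in hidden:
        x = np.tanh(x @ W + b)
    return x @ W_out + b_out

# Hypothetical sizes: 2-D action, 2-D effect, 64-D latent, 16-D image feature.
h_a, h_e = mlp([2, 64, 64]), mlp([2, 64, 64])   # action / effect encoders
g_e = mlp([64 + 16 + 1, 64, 4])                 # effect decoder -> (mu, sigma)
w1, w2 = 0.5, 0.5                               # blending weights, w1 + w2 = 1

def predict_effect(observations, img_feat, t_target):
    """observations: list of (action, effect) pairs taken at observed times."""
    # Eqs. (1)-(2): blend the channels per observation, then average.
    r = np.mean([w1 * forward(h_a, a) + w2 * forward(h_e, e)
                 for a, e in observations], axis=0)
    r_mrg = np.concatenate([r, img_feat, [t_target]])   # merged representation
    out = forward(g_e, r_mrg)
    mu, sigma = out[:2], np.log1p(np.exp(out[2:]))      # softplus -> positive sigma
    return mu, sigma

obs = [(rng.normal(size=2), rng.normal(size=2))]        # condition on one point
print(predict_effect(obs, rng.normal(size=16), t_target=0.5))
```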
The merged representation is decoded at the action decoder by

\[g_{a}(r_{mrg})=(\mu_{a_{t_{target}}},\sigma_{a_{t_{target}}}), \tag{3}\]

and at the effect decoder by

\[g_{e}(r_{mrg})=(\mu_{e_{t_{target}}},\sigma_{e_{t_{target}}}), \tag{4}\]

to yield predictions for the action and/or effect at the target time step shown in Fig. 1. The latent representation (r) can be viewed as the shared affordance representation for actions \(a\) and effects \(e\) of different objects \(o\). Learned affordances can then be used to predict the effects of actions or the action required to generate a target effect. By chaining these predictions, the model can be used to create multi-step action plans that achieve goals beyond single action executions.

### Training

In our implementation, the system is object-centric. An action is defined in terms of the distance of the robot's end-effector to the object. An effect is defined as the displacement of the object from its starting position throughout an action. A depth image of the object is included as an external parameter \(\gamma\) of the action. \(t_{i}\) is the \(i\)th time step of an interaction trajectory from the dataset \(D\).

\[D_{d}=(\{a_{t},e_{t},o,t\}_{t=0}^{t=1})_{d} \tag{5}\]

is an interaction trajectory, where \(1\leq d\leq m\) and \(m\) is the number of trajectories in the dataset \(D\). \(0\leq t\leq 1\) is a phase variable controlling the passage of time, where \(t\in\mathbf{R}\). At each training iteration, \(k\) observations are sampled uniformly at random from a randomly selected interaction trajectory \(D_{d}\), where \(1\leq k\leq obs_{max}\); \(k\in\mathbf{N}\) is also sampled uniformly at random, and \(obs_{max}\) is a hyper-parameter denoting the maximum number of observations that the model is allowed to use during one iteration. These observations are then encoded and aggregated. A cropped object depth image is encoded separately by a CNN encoder network, and the resulting vector is concatenated to the end of this aggregated representation. Before a prediction can be made, a target time step is also concatenated after the image features. Finally, this merged representation is decoded to yield predictions for the action and/or effect at the target time step shown in Fig. 1. Gradient descent is used with the loss function (6) and the Adam optimizer [39]:

\[\mathcal{L}(\theta,\delta)=-\log P(y_{j}\,|\,\mu_{j},\mathrm{softmax}(\sigma_{j})) \tag{6}\]

After training, the network is able to predict the entire interaction trajectory given a single observation at \(t=0\). An A* planner [40] is then used on top of the network to solve tasks requiring multiple actions and steps.

### Actions

Our arm-hand robot is equipped with two actions, namely push and grasp. The parametric push action is specified by an angle \(\theta\in[0,2\pi]\), a push distance \(l=0.05\) m and a \(radius=0.2\) m. The gripper starts the push execution from the circumference of a circle of radius \(radius\) centered around the object (shown in red in Figure 2), at an angle \(\theta\), and pushes the object \(l\) meters from its center of mass. Larger objects may be displaced more as a result of this setup. The model is expected to learn the rollability and pushability affordances from these interactions and, based on the object shape, be able to predict the trajectories of rollable objects such as spheres or lying cylinders and of non-rollable objects such as cuboids or upright cylinders.
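To pin down the push geometry just described, a small helper is sketched below. The function name and the convention that the gripper ends \(l\) meters past the object center are our assumptions, since the text does not fully specify the controller.

```python
import numpy as np

def push_start_and_end(obj_xy, theta, radius=0.2, l=0.05):
    """Gripper waypoints for the parametric push: approach from the circle of
    the given radius at angle theta, then push l metres through the object.
    (End-point convention assumed; the exact controller may differ.)"""
    obj_xy = np.asarray(obj_xy, dtype=float)
    start = obj_xy + radius * np.array([np.cos(theta), np.sin(theta)])
    push_dir = (obj_xy - start) / radius          # unit vector toward the object
    end = start + (radius + l) * push_dir         # end l metres past the centre
    return start, end

start, end = push_start_and_end([0.0, 0.0], theta=np.pi / 2)
print(start, end)   # starts 0.2 m "above" the object and pushes through it
```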
Grasp actions, on the other hand, are realized by lowering the open gripper to the grasp position, attempting to grasp the object by closing the gripper, and lifting the gripper up. Our model is expected to learn the graspability affordance from these interactions. Based on object size and shape, it should also be able to predict the interaction trajectories.

### Planning with partial action executions

The A* planner (with the Euclidean distance to the goal as heuristic) is used to generate a sequence of actions that moves the object from its initial position to a goal position. Each branch in the search tree corresponds to either a push action or a grasp action. As the grasp action is not parameterized, there is a single branch for it. On the other hand, as a push action might be applied from different approach angles and for different push distances, the ranges of possible approach angles and push distances are discretized and used to create multiple branches from the same node in the search tree. The initial 20%, 40%, 60%, 80% and 100% segments of the push action were considered during search. Additionally, the push direction was discretized into 36 directions. Therefore, the branching factor was set to 180.

Figure 2: Scene showing the parameters of a push action around an object.

The planner uses predictions of actions with different parameters to update the predicted location of the object. The search is completed if the difference between the predicted and goal object positions is less than 2 cm. Our model is able to work with continuous inputs; however, the planner can only propose a finite number of actions due to the mentioned discretization design choice. Interactions with single and multiple actions are generated, which also include partial actions. A partial action is an action whose execution is started but not completed. For example, a push may be cut short before the gripper even contacts the object, or when the gripper reaches the center of mass of the object, meaning that the object has already started being pushed but the push is not completed yet. Importantly, our model is trained only with full action interactions. Yet, our model can generate the effect at any desired time point, and therefore it can predict the consequences of partial action executions. The planner can use the effects of such partial action executions to generate plans with finer resolution (compared to plans that can only include full action executions). Note that an action can be applied only if the object is reachable by the robot. Our model is expected to learn the reachability affordance from its interaction experience, and actions are considered during planning only if the object is reachable.

## 4 Experiment results

### Effect prediction performance

In this paper, we propose a novel system to predict effects given objects and actions. Furthermore, our system can generate complete motion trajectories of objects as effects, rather than only their final positions. In order to assess the performance of our system, we compare it with a recent study that can also predict motion trajectories of objects using a CNN and Long Short-Term Memory (LSTM) model [31]. We used the same dataset as [31], where lever-up actions were applied to objects with different geometric shapes from different points. An example lever-up action is shown in Figure 3. The objects had different numbers of edges (between 3 and 8), sizes and orientations.
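The branching scheme above can be sketched as follows. Here `predict_push` is a stand-in for the learned effect model queried at intermediate time points, and the step cost (distance pushed) and tie-breaking choices are our assumptions.

```python
import heapq
import numpy as np

ANGLES = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)  # 36 push directions
FRACTIONS = [0.2, 0.4, 0.6, 0.8, 1.0]                     # partial-push segments
FULL_PUSH, GOAL_TOL = 0.05, 0.02                          # metres; 2 cm goal check

def predict_push(pos, theta, frac):
    """Stand-in effect model: a non-rollable object translates in the push
    direction; the real system queries the learned CNP at time t = frac."""
    return pos + frac * FULL_PUSH * np.array([np.cos(theta), np.sin(theta)])

def plan(start, goal, max_depth=3):
    """A* over predicted object positions, h = Euclidean distance to goal."""
    frontier = [(np.linalg.norm(start - goal), 0.0, 0, tuple(start), ())]
    while frontier:
        _, g, depth, pos, actions = heapq.heappop(frontier)
        pos = np.array(pos)
        if np.linalg.norm(pos - goal) < GOAL_TOL:
            return list(actions)
        if depth == max_depth:
            continue
        for theta in ANGLES:                 # 36 x 5 = 180 branches per node
            for frac in FRACTIONS:
                nxt = predict_push(pos, theta, frac)
                cost = g + frac * FULL_PUSH  # assumed cost: distance pushed
                heapq.heappush(frontier, (cost + np.linalg.norm(nxt - goal),
                                          cost, depth + 1, tuple(nxt),
                                          actions + ((theta, frac),)))
    return None

# Goal 7 cm away: one full push (5 cm) plus a 40% partial push (2 cm).
print(plan(np.array([0.0, 0.0]), np.array([0.07, 0.0])))
```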
The objects were levered up from different contact points. The authors translated and rotated the top-down 128x128 grayscale image according to the contact point and lever-up direction in order to simplify the prediction problem. The dataset was separated randomly into 80% training, 10% validation, and 10% test as in [31]. The experiments were performed with 5-fold cross-validation and early stopping, with one million iterations. For comparison, we gathered n-step predictions, taking 15 previous steps as observations, and compared the results with the ones reported in the same manner in [31]. As shown in Figure 4, the error between the predicted and actual positions of the objects in [31] varies between \(0.90-1.00\) cm, whereas the error was in the range of \(0.3-0.4\) cm in our system (Table 1). This comparison shows that our model yields significantly lower error rates compared to a recurrent method, since the output is predicted directly, avoiding error accumulation over multi-step predictions.

Figure 3: Example lever-up action in the simulator, reused with permission from [31].

### Training Environment

A simulated scene was constructed in CoppeliaSim [41] as shown in Figure 2. A UR10 robot interacts with objects of different shapes and sizes by applying push and grasp actions in a tabletop setting. A Kinect sensor is placed vertically above the table such that the entire table is visible. The parts of the interaction where the object is potentially going to be displaced are recorded. The recorded information consists of action and effect data sampled once every 3 simulation steps (a single step is 50 ms), which was chosen empirically, and a single depth image of the table with the object on top, taken at the beginning of each interaction. The simulation dataset is split into training (80%), validation (10%) and test (10%) sets.

### Single-action push and grasp effect prediction

Different models were trained for the push and grasp actions separately. The simulation datasets were always split into 80% training, 10% validation, and 10% test data. For all the results reported in this work, 10-fold cross-validation was applied unless otherwise specified. The models were trained for one million iterations, without batches due to the variable length of the inputs, and early stopping was employed. The learning rate was set to 1e\(-\)4. All errors reported in meters denote distances to a specified goal position. The prediction for a single action is fixed to take 25 time steps. For the push action, a dataset made up of 500 trajectories was used. For each interaction, objects were chosen randomly and placed at the center of the table. An angle for the push \(\theta\in[0,2\pi]\) was chosen randomly. The robot performed a complete push action and the resulting interaction data was recorded. The error in predicting the final position of the object was found to be around 0.02 m, as shown in Figure 6.

Figure 4: N-step error plots obtained in the LSTM approach, reused with permission [31].

\begin{table} \begin{tabular}{|c|c|} \hline **1-step (cm)** & \(0.340\pm 0.044\) \\ \hline **2-step (cm)** & \(0.352\pm 0.046\) \\ \hline **3-step (cm)** & \(0.364\pm 0.049\) \\ \hline **4-step (cm)** & \(0.375\pm 0.051\) \\ \hline **5-step (cm)** & \(0.387\pm 0.054\) \\ \hline \end{tabular} \end{table} Table 1: N-step prediction errors obtained in our model.

Similar to our analysis in the previous subsection, our system was shown to be effective in predicting object motion trajectories for push-like actions.
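A minimal sketch of the evaluation metrics used in this section is given below; the 0.1 m lift criterion is the one applied to the grasp results reported next, and the helper names are our own.

```python
import numpy as np

def final_position_error(pred_traj, true_traj):
    """Euclidean error between predicted and actual final object positions (m)."""
    return float(np.linalg.norm(pred_traj[-1] - true_traj[-1]))

def grasp_outcome(dz_true, dz_pred, lift_thresh=0.1):
    """Classify a graspability prediction with the 0.1 m height-change criterion."""
    actual, predicted = dz_true > lift_thresh, dz_pred > lift_thresh
    return {(True, True): "true positive", (False, False): "true negative",
            (False, True): "false positive", (True, False): "false negative"}[(actual, predicted)]

pred = np.array([[0.0, 0.0], [0.02, 0.0], [0.048, 0.0]])
true = np.array([[0.0, 0.0], [0.025, 0.0], [0.05, 0.0]])
print(final_position_error(pred, true))           # ~0.002 m
print(grasp_outcome(dz_true=0.25, dz_pred=0.22))  # "true positive"
```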
For the grasp action, a dataset made up of 100 trajectories was used. Objects of varying sizes were randomly chosen and placed at the center of the table. The robot then performed the grasp action, and the resulting interaction data was recorded. The results for the grasp action are provided in Table 2. In interpreting the performance and success of graspability prediction, if the change in height is larger than 0.1 meters, the grasp is counted as a success; if the change in height is less than 0.1 meters, it is a failed grasp. If the test data does not show a significant change in its z-axis coordinates but the predictions do, this is a false positive; finally, if the test data shows a significant change in z-axis coordinates but the predictions do not, it is a false negative. We found that the robot had more difficulty grasping rollable objects than non-rollable objects of equal size, most likely due to the fact that a cube and an upright cylinder are both grasped by straight surfaces, whereas large spheres and sideways cylinders are grasped by curved surfaces at points located above their center of mass, causing them to slip more easily. As shown, our system could successfully predict the graspability affordance. However, it is important to note that the average grasp action error is significantly larger than the average push action error. The mean error was relatively high because incorrect graspability predictions generated high positional errors that significantly increase the mean error value. This is potentially due to unsuccessful grasps. In the event of an unsuccessful grasp, the object may slowly slip from the robot's hand and land on the table close to its initial position, or it may topple or roll (sometimes off the table), leading to position changes that are uncertain beforehand and therefore cannot be accurately predicted.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Error (m)** & **True** & **True** & **False** & **False** \\ & **Positive (\%)** & **Negative (\%)** & **Positive (\%)** & **Negative (\%)** \\ \hline \(0.275\pm 0.044\) & 75.17 & 90.48 & 9.52 & 24.82 \\ \hline \end{tabular} \end{table} Table 2: Single grasp action prediction results on variable-sized objects placed at a fixed location.

### Planning performance

Next, our model is requested to generate plans that bring the object to goal positions that might be beyond the range of a single push or closer than a full push. The planner is therefore expected to generate sequences of actions that might include partial executions as well. A sample plan execution is shown in Figure 5, where the plan is composed of two actions. The goal is shown with a blue box, the actual object positions are shown in red, and the predictions are shown in green. Given goal positions, we run our model in three modes:

* predict one push action to reach a goal, which is maximum one step ahead,
* predict one (possibly partial) push action to reach a goal, which is maximum one step ahead,
* predict a sequence of (possibly partial) push actions to reach a goal, which is maximum three steps ahead.

After plans are made, they are executed by the robot. The distance between the goal position and the final actual position of the object is reported as the error. The results are provided in Figure 6. As shown, the object can be brought to the goal position more accurately if partial actions are considered during planning, even in the single action execution case.
Even the error obtained with multiple (potentially partial) action executions is smaller than that of a single full action execution. This shows the effectiveness of our system in making accurate plans. For the results reported in Figure 6, the single full setting was tested with 50 goals, the single partial setting with 100 goals, and the multi partial setting with 80 goals.

## 5 Conclusion

In this paper, we realized a model for multi-step action and effect prediction. While previous work's utilization of bidirectional learning is limited, our model specifically creates its latent representations using this concept and is able to make multi-step predictions that are in accordance with ground-truth manipulations. We emphasize using object-centric inputs to achieve generalizability and investigate simple affordances of several classes of objects of different sizes. By using a network for single-interaction predictions, which can be interpreted as similar to a state transition function, and pairing it with a planner with heuristics to propose goal-directed actions, the model was shown to achieve low error in reaching target positions.

Figure 5: Results of applying the model's predicted actions in a scene. Images are ordered from left to right. The top row is from the first action execution, the bottom row is from the second action execution. The blue object denotes the target position; the red object is generated from the robot's effect predictions. The blue and red objects are not interactable by the robot. The green object is interactable and is acted upon by the robot, to show the ground-truth results of the robot's predicted actions.

While the results of our experiments are promising, the model still requires verification in the real world. Our next step is gathering data and testing our implementation with a real robot. Our work uses a conditional architecture to avoid the compounding-error problems of recurrent architectures and of models that are used in a similar way by feeding their current-step output as the next-step input. This advantage of using conditional models is shown in this work against an LSTM network, and against a Multimodal Variational Autoencoder (MVAE) in [37]. Recently, transformer models have gained popularity as a good alternative to recurrent models. By using the attention mechanism they eliminate the need for recurrence, and attention can potentially be beneficial for our model as well. We plan to investigate the capabilities of such models in the future.

## 6 Disclosure statement

No potential conflict of interest was reported by the authors.

## Acknowledgement(s)

The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources). The authors would like to thank Alper Ahmetoglu for providing insightful comments on this paper.

## Funding

This research was supported by TUBITAK (The Scientific and Technological Research Council of Turkey) ARDEB 1001 program (project number: 120E274); TUBITAK BIDEB 2210-A program; and by the BAGEP Award of the Science Academy.

Figure 6: Results of applying push actions on objects at a fixed location with fixed size, under different settings.
2306.12815
Exploring kaon induced reactions for unraveling the nature of the scalar meson $a_0 (1817)$
In this study, we comprehensively investigate the production of isovector scalar meson $a_{0}(1817)$ using the effective Lagrangian approach. Specifically, we employ the Reggeized $t$ channel Born term to calculate the total and differential cross sections for the reaction $K^{-}p \rightarrow a_{0}(1817)\Lambda$. Our analysis reveals that the optimal energy range for detecting the $a_{0}(1817)$ meson lies between $W=3.4$ GeV and $W=3.6$ GeV, where the predicted total cross section reaches a minimum value of 112 nb. Notably, the $t$ channel, as predicted by the Regge model, significantly enhances the differential cross sections, particularly at extreme forward angles. Furthermore, we investigate the Dalitz processes of $2\rightarrow 3$ and discuss the feasibility of detecting the $a_{0}(1817)$ meson in experiments such as J-PARC.
Xiao-Yun Wang, Hui-Fang Zhou, Xiang Liu
2023-06-22T11:30:27Z
http://arxiv.org/abs/2306.12815v2
# Exploring kaon induced reactions for unraveling the nature of the scalar meson \(a_{0}(1817)\) ###### Abstract In this study, we comprehensively investigate the production of isovector scalar meson \(a_{0}(1817)\) using the effective Lagrangian approach. Specifically, we employ the Reggeized \(t\) channel Born term to calculate the total and differential cross sections for the reaction \(K^{-}p\to a_{0}(1817)\Lambda\). Our analysis reveals that the optimal energy range for detecting the \(a_{0}(1817)\) meson lies between \(W=3.4\) GeV and \(W=3.6\) GeV, where the predicted total cross section reaches a minimum value of \(112\) nb. Notably, the \(t\) channel, as predicted by the Regge model, significantly enhances the differential cross sections, particularly at extreme forward angles. Furthermore, we investigate the Dalitz processes of \(2\to 3\) and discuss the feasibility of detecting the \(a_{0}(1817)\) meson in experiments such as J-PARC. ## I Introduction The \(a_{0}(1817)\) meson has attracted significant attention in the field of light hadron physics, as its study provides valuable insights into the intricacies of constructing the light flavor hadron spectroscopy. Recent experimental findings have added to the intrigue surrounding this meson. The BaBar Collaboration, through the \(\eta_{c}\to\eta\pi^{+}\pi^{-}\) reaction, discovered a new state named \(a_{0}(1700)\), which has a measured mass of \(1704\pm 5(\text{stat.})\pm 2(\text{syst.})\) MeV and a width of \(\Gamma=110\pm 15(\text{stat.})\pm 11(\text{syst.})\) MeV [1]. Additionally, the BESIII Collaboration observed a state denoted as the \(a_{0}(1710)^{0}\) in the \(D_{S}^{+}\to K_{S}^{0}K_{S}^{0}\pi^{+}\) reaction. However, in this detection process, it was not possible to differentiate between the \(a_{0}(1710)^{0}\) and \(f_{0}(1710)\), leading to a generalization of both states as \(S_{0}(1710)\)[2]. It was later resolved in a subsequent article using the isospin theorem, which distinguished the isospin \(I=1\) state \(a_{0}(1710)\) from the isospin \(I=0\) state \(f_{0}(1710)\)[3]. Subsequently, the BESIII Collaboration conducted another experiment to study the \(a_{0}(1710)^{+}\) state with the quantum numbers \(I(J^{P})=1^{+}(0^{+})\). The observation of \(a_{0}(1710)^{+}\to K_{S}^{0}K^{+}\) was made through an investigation of the \(D_{S}^{+}\to K_{S}^{0}K^{+}\pi^{0}\) decay [4]. This experiment reported the mass and decay width of the newly discovered meson as \(M=1.817\pm 0.008(\text{stat.})\pm 0.020(\text{syst.})\) GeV and \(\Gamma=0.097\pm 0.022(\text{stat.})\pm 0.015(\text{syst.})\) GeV, respectively. In accordance with the designation proposed by the Lanzhou group _et al._[5], we adopt the name \(a_{0}(1817)\) for this newly discovered isovector scalar meson in our work. However, there exist discrepancies in the measured mass and decay width of the \(a_{0}(1817)\) meson as observed by the BaBar experiment [1] and the BESIII experiment [3]. Moreover, due to the limited number of relevant experiments and available experimental data, further observations of the \(a_{0}(1817)\) meson in alternative experiments are necessary. These observations would facilitate the measurement of pertinent resonance parameters and provide a more comprehensive understanding of the properties associated with the \(a_{0}(1817)\) meson. Recent research by the Lanzhou group [5] has established the significance of the \(a_{0}(1817)\) meson as a reference point in the construction of scalar meson families. 
Its primary decay channels include \(\pi\eta(1295)\), \(\pi\eta^{\prime}\), \(\pi\eta\), \(\pi\eta(1475)\), \(\pi b_{1}(1235)\), \(K\bar{K}\), and others, with specific details provided in Table 1. Theoretical conjectures propose that the \(a_{0}(1450)\) and \(a_{0}(1817)\) represent the first and second radial excitations, respectively, of the \(a_{0}(980)\) meson [6]. Additionally, it is predicted that the \(a_{0}(2115)\) serves as the third radial excitation, contributing significantly to the expanding landscape of the light hadron spectrum [5]. Understanding the composition and structure of the \(a_{0}(1817)\) is crucial, as it sheds light on the structural characteristics of scalar mesons in the realm of light quarks, while also addressing other pertinent issues currently under debate in the field of hadron physics [7; 8; 9]. Previous studies have considered the possibility of the \(a_{0}(980)\) being a tetraquark candidate [10], and the \(a_{0}(1450)\) being a mixed state combining two-quark and four-quark components [11]. The \(f_{0}(1710)\) meson, the isoscalar partner of the \(a_{0}(1817)\), has not been ruled out as a scalar glueball. However, given the limited information available on the structure of the \(a_{0}(1817)\), further resonance measurements are imperative. Consequently, the pressing task at hand involves detecting the \(a_{0}(1817)\) in other experimental settings. Upon consulting the Particle Data Group [12], we find that the \(K^{-}p\) scattering experiment is particularly noteworthy. Since the discovery of \(K\) mesons [13], kaon beams have naturally emerged as a powerful tool for exploring strange hadrons and hypernuclei [14]. Experimental facilities such as J-PARC [15] and OKA@U-70 [16] offer excellent opportunities for such investigations. Several literature sources have presented the production of the \(a_{0}(980)\) in the reaction \(K^{-}p\to\Lambda\eta\pi^{+}\pi^{-}\) [17; 18; 19; 20]. The \(\phi(1020)\) meson was discovered in the reaction \(K^{-}p\to KKn\) [21]. Moreover, the \(a_{1}(1260)\) and \(D(1285)\) mesons have been observed in the reactions \(K^{-}p\to\Sigma^{-}\pi^{+}\pi^{+}\pi^{-}\) [22] and \(K^{-}p\to\Lambda\eta\pi^{+}\pi^{-}\) [19], respectively. These examples provide further support for the possibility of observing the \(a_{0}(1817)\) in \(K^{-}p\) scattering experiments. In our previous work, we successfully calculated the production of the \(\phi(2170)\) meson via the reaction \(K^{-}p\to\phi(2170)\Lambda\) [14], the \(X_{0}(2900)\) state in \(K^{+}p\to\Sigma_{c}^{+}X_{0}(2900)\) [23], and the \(\eta_{1}(1855)\) meson through \(K^{-}p\to\eta_{1}(1855)\Lambda\) [24] using effective Lagrangian methods and the Regge trajectory model. The numerical results obtained from these calculations provide valuable insights for future experimental endeavors. In this study, we explore the production mechanism of the scalar meson \(a_{0}(1817)\) in \(K^{-}p\) scattering utilizing an effective Lagrangian approach, focusing on meson-induced reactions with \(K\)-meson exchange solely in the \(t\) channel. Detailed information regarding our methodology will be presented in the subsequent section. The calculation of both the total and differential cross sections for the \(K^{-}p\to a_{0}(1817)\Lambda\) reaction holds significant relevance for future high-precision experimental investigations in this field.
This paper is structured as follows: In Section II, we present the effective Lagrangian method and the Regge trajectory model employed for the analysis of the \(a_{0}(1817)\). The numerical results for the total and differential cross sections are presented in Section III. Finally, we summarize our findings and draw conclusions in Section IV.

## II Formalism

The production of the scalar meson \(a_{0}(1817)\) through kaon-induced reactions on a proton target, with \(t\) channel \(K^{+}\) meson exchange, is illustrated in Fig. 1. In this study, we neglect the contribution from the \(s\) channel with the nucleon pole, as it is known to be negligibly small. Typically, the contribution of the \(u\) channel with nucleon exchange is also minimal and can be neglected at low energies. Moreover, at high energies, the Reggeized treatment of the \(u\) channel renders its contribution to the total cross section small and negligible. Therefore, we do not include the contributions from nucleon resonances in the \(u\) channel in the current calculation. Studying the strong interaction at the quark-gluon level in the energy region where the resonance can be detected poses significant challenges. Therefore, in this study, we employ the effective Lagrangian method to perform the necessary calculations. In the case of kaon-induced production of the \(a_{0}(1817)\), the relevant Lagrangians for the \(t\) channel are given by [25; 26; 27; 28]

\[\mathcal{L}_{a_{0}KK} = \frac{f_{a_{0}KK}}{2m_{K}}a_{0}\,\partial_{\mu}\bar{K}\cdot\partial^{\mu}K, \tag{1}\]
\[\mathcal{L}_{KN\Lambda} = ig_{KN\Lambda}\bar{N}\gamma_{5}\Lambda K+\text{H.c.}, \tag{2}\]

where \(a_{0}\), \(K\), \(N\) and \(\Lambda\) stand for the \(a_{0}(1817)\), kaon, nucleon and \(\Lambda\) fields, respectively. The coupling constant \(g_{KN\Lambda}=-13.24\), which can be determined [25] from the SU(3) flavor symmetry relation [29; 30], plays a crucial role in our analysis. Additionally, the coupling constant \(f_{a_{0}KK}\) can be determined from the decay width \(\Gamma_{a_{0}\to K\bar{K}}\), as indicated by the calculation results using the Nijmegen potential [31],

\[\Gamma_{a_{0}\to K^{+}K^{-}} = \frac{2}{3}\Gamma_{a_{0}\to K\bar{K}} = \left(\frac{f_{a_{0}KK}}{2m_{K}}\right)^{2}\frac{(M_{a_{0}}^{2}-2m_{K}^{2})^{2}}{32\pi M_{a_{0}}^{2}}|\vec{p}_{K}^{\;\text{c.m.}}| \tag{3}\]

with

\[|\vec{p}_{K}^{\;\text{c.m.}}|=\frac{\lambda^{1/2}(M_{a_{0}}^{2},m_{K}^{2},m_{K}^{2})}{2M_{a_{0}}}. \tag{4}\]

Here, \(\lambda\) represents the Kallen function, defined as \(\lambda(x,y,z)=(x-y-z)^{2}-4yz\). \(M_{a_{0}}\) and \(m_{K}\) denote the masses of the \(a_{0}(1817)\) and the kaon, respectively. By taking the decay width \(\Gamma_{a_{0}\to K^{+}K^{-}}\) to be 5 MeV, we find that the corresponding coupling constant is \(f_{a_{0}KK}=0.52\). Based on the above Lagrangians, the amplitude for the production of the \(a_{0}(1817)\) through \(t\) channel \(K^{+}\) exchange in \(K^{-}p\) scattering can be expressed as follows:

\[\mathcal{M}_{K} = i\frac{f_{a_{0}KK}}{2m_{K}}g_{KN\Lambda}F(q^{2})\,\bar{u}_{\Lambda}(p_{2})\gamma_{5}\frac{1}{t-m_{K}^{2}}(q_{\mu}\cdot k_{1}^{\mu})\,u_{N}(p_{1}). \tag{5}\]

In the above expression, \(\bar{u}_{\Lambda}\) and \(u_{N}\) represent the Dirac spinors of the \(\Lambda\) hyperon and the nucleon, respectively. For the \(t\) channel meson exchange [27], a form factor \(F(q^{2})=(\Lambda_{t}^{2}-m_{K}^{2})/(\Lambda_{t}^{2}-q^{2})\) is utilized, where \(t=q^{2}=(k_{1}-k_{2})^{2}\) is the Mandelstam variable. The parameter \(\Lambda_{t}\), the only free parameter in the form factor, will be discussed in detail in Section III.
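As a numerical cross-check of the quoted coupling, Eqs. (3)-(4) can be inverted for \(f_{a_{0}KK}\); the sketch below does this with assumed PDG-like values for the kaon and \(a_{0}(1817)\) masses.

```python
import numpy as np

# Recover f_{a0 KK} from Gamma(a0 -> K+ K-) = 5 MeV via Eqs. (3)-(4).
M_a0, m_K = 1.817, 0.4937          # masses in GeV (assumed values)
Gamma_KpKm = 0.005                 # GeV

def kallen(x, y, z):
    return (x - y - z)**2 - 4 * y * z

p_K = np.sqrt(kallen(M_a0**2, m_K**2, m_K**2)) / (2 * M_a0)   # c.m. kaon momentum
f2 = Gamma_KpKm * 32 * np.pi * M_a0**2 * (2 * m_K)**2 \
     / ((M_a0**2 - 2 * m_K**2)**2 * p_K)
print(f"|p_K| = {p_K:.3f} GeV, f_a0KK = {np.sqrt(f2):.2f}")   # ~0.76 GeV, ~0.52
```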
\begin{table} \begin{tabular}{l c c c c} \hline \hline channel & \(\pi\eta\) & \(\pi\eta^{\prime}\) & \(\pi\eta(1295)\) & \(\pi\eta(1475)\) \\ \hline \(\Gamma\) (MeV) & 22.4\(\rightarrow\)24.9 & 27.8\(\rightarrow\)36.2 & 18.0\(\rightarrow\)47.5 & 6.2\(\rightarrow\)34.2 \\ \hline channel & \(\pi b_{1}(1235)\) & \(KK\) & \(\pi f_{1}(1285)\) & \(\rho\omega\) \\ \hline \(\Gamma\) (MeV) & 10.8\(\rightarrow\)19.2 & 7.5\(\rightarrow\)12.9 & 0.2\(\rightarrow\)9.5 & 0.1\(\rightarrow\)3.3 \\ \hline \hline \end{tabular} \end{table} Table 1: The partial decay widths of the \(a_{0}(1817)\) predicted in Ref. [5].

The Regge trajectory model has proven successful in analyzing hadron production at high energies [32; 33; 34; 35]. It provides a framework to study the spectral behavior of traditional light mesons [36]. In this model, the Reggeization procedure is performed by replacing the \(t\) channel propagator in the Feynman amplitude (Eq. (5)) with the Regge propagator, which can be expressed as follows:

\[\frac{1}{t-m_{K}^{2}}\rightarrow\left(\frac{s}{s_{scale}}\right)^{\alpha_{K}(t)}\frac{\pi\alpha_{K}^{\prime}}{\Gamma[1+\alpha_{K}(t)]\sin[\pi\alpha_{K}(t)]}. \tag{6}\]

Here, the factor \(s_{scale}\) is equal to 1 GeV. In addition, the Regge trajectory \(\alpha_{K}(t)\) reads as [25]

\[\alpha_{K}(t)=0.70(t-m_{K}^{2}). \tag{7}\]

Note that no additional free parameter is introduced by applying the Reggeized treatment.

## III Numerical results

### Cross section

In the following calculations, we determine the cross section of the \(K^{-}p\to a_{0}(1817)\Lambda\) reaction. The differential cross section in the center of mass (c.m.) frame is given by

\[\frac{d\sigma}{d\cos\theta}=\frac{1}{32\pi s}\frac{\left|\vec{k}_{2}^{\;\rm c.m.}\right|}{\left|\vec{k}_{1}^{\;\rm c.m.}\right|}\left(\frac{1}{2}\sum_{\lambda}|\mathcal{M}|^{2}\right), \tag{8}\]

where the variable \(s=(k_{1}+p_{1})^{2}\) represents the squared center of mass energy, and \(\theta\) represents the angle between the outgoing \(a_{0}(1817)\) meson and the direction of the kaon beam in the center of mass frame. \(\vec{k}_{1}^{\;\rm c.m.}\) and \(\vec{k}_{2}^{\;\rm c.m.}\) represent the three-momenta of the initial kaon beam and the final \(a_{0}(1817)\) meson, respectively. Since there is no available experimental data for the \(K^{-}p\to a_{0}(1817)\Lambda\) reaction, we provide predictions for the cross section based on our calculations, as shown in Fig. 2. The cutoff parameter in the form factor serves as the only free parameter in these calculations. In previous studies, different values of the cutoff parameter have been used in related processes. For instance, in the \(\pi^{-}p\to K^{*}\Sigma^{*}\) scattering process based on Reggeized \(t\) channel \(K^{(*)}\) exchange [28], a value of \(\Lambda_{t}=1.67\pm 0.04\) GeV was employed. In another study [33], to achieve better agreement with experimental data, a value of \(\Lambda_{t}=1.55\) GeV was chosen for the Reggeized \(t\) channel with \(K\) and \(K^{*}\) exchange. Additionally, in the kaon-induced reaction \(K^{-}p\to\eta_{1}(1855)\Lambda\), a cutoff value of \(1.6\pm 0.3\) GeV for the \(t\)-channel \(K^{+}\) exchange was considered [24]. In this work, we adopt \(\Lambda_{t}=1.6\pm 0.3\) GeV to ensure a more reliable and feasible conclusion.
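For orientation, a short numerical sketch of the quantities entering Eqs. (6)-(8) is given below: the Reggeized propagator compared with the Feynman one, and the flux/phase-space prefactor of the differential cross section. The particle masses are assumed values, and SciPy supplies the Gamma function.

```python
import numpy as np
from scipy.special import gamma

m_K, m_p, M_a0, m_L = 0.4937, 0.9383, 1.817, 1.1157   # GeV (assumed values)
alpha_p, s_scale = 0.70, 1.0                          # trajectory slope, scale

def regge_propagator(s, t):
    """Right-hand side of Eq. (6), with the kaon trajectory of Eq. (7)."""
    a = alpha_p * (t - m_K**2)
    return (s / s_scale)**a * np.pi * alpha_p / (gamma(1 + a) * np.sin(np.pi * a))

def p_cm(W, m1, m2):
    """c.m. three-momentum of a two-body state of invariant mass W."""
    return np.sqrt((W**2 - (m1 + m2)**2) * (W**2 - (m1 - m2)**2)) / (2 * W)

W, t = 3.5, -0.5
k1, k2 = p_cm(W, m_K, m_p), p_cm(W, M_a0, m_L)
prefactor = k2 / (32 * np.pi * W**2 * k1)             # 1/(32 pi s) * |k2|/|k1|
print(f"Regge: {regge_propagator(W**2, t):.3f} vs Feynman: {1/(t - m_K**2):.3f}")
print(f"|k1|={k1:.3f} GeV, |k2|={k2:.3f} GeV, prefactor={prefactor:.2e} GeV^-2")
```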
In Fig. 2, the total cross section exhibits a clear variation trend within the energy range of \(W=2\) to 10 GeV. Notably, there is a prominent peak between \(W=3.4\) and 3.6 GeV, indicating the potential for observing the \(a_{0}(1817)\) resonance through \(K^{-}p\) interactions within this energy range. The increase in the total cross section is steep leading up to the peak, with a value of 113 nb at a center-of-mass energy of 3.5 GeV. Following the peak, the downward trend is less pronounced. Taking into account the range of \(\Lambda_{t}\) as \(1.6\pm 0.3\) GeV and considering the error band, the total cross section varies by 67 nb from the value at \(W=3.5\) GeV.

Figure 2: The energy dependence of the total cross section for production of the \(a_{0}(1817)\) through the \(t\) channel with cutoff \(\Lambda_{t}=1.6\pm 0.3\) GeV. The full (red) line is for the \(K^{-}p\to a_{0}(1817)\Lambda\) reaction. The bands stand for the error band of the cutoff \(\Lambda_{t}\).

In Fig. 3, the predicted differential cross section of the \(K^{-}p\to a_{0}(1817)\Lambda\) reaction is presented based on the Regge trajectory model, using a cutoff value of \(\Lambda_{t}=1.6\pm 0.3\) GeV. It is evident that the differential cross section is highly dependent on the scattering angle \(\theta\). As the energy increases, the reaction exhibits strong forward scattering that gradually strengthens. Therefore, the Reggeized treatment can be effectively validated through forward-angle measurements.

Figure 3: The differential cross section \(d\sigma/d\cos\theta\) of \(a_{0}(1817)\) production at different center-of-mass (c.m.) energies \(W=3.5,3.6,4.0,6.0\) GeV.

Figure 4 shows the \(t\)-distribution for the \(K^{-}p\to a_{0}(1817)\Lambda\) reaction. It can be observed that the differential cross sections gradually decrease with increasing momentum transfer \(t\). However, as \(t\) becomes smaller, the differential cross section values continue to increase, and this behavior requires further experimental verification.

### Dalitz process

According to Table 1, the \(a_{0}(1817)\) meson frequently appears as an intermediate state in various decay processes. The minimum predicted decay width for \(a_{0}\to K\bar{K}\) is \(\Gamma_{a_{0}\to K\bar{K}}=7.5\) MeV, while the minimum predicted decay width for \(a_{0}\to\pi\eta\) is \(\Gamma_{a_{0}\to\pi\eta}=22.4\) MeV. In this study, we calculate the Dalitz processes \(K^{-}p\to a_{0}(1817)\Lambda\to K^{+}K^{-}\Lambda\) and \(K^{-}p\to a_{0}(1817)\Lambda\to\pi\eta\Lambda\), respectively. The Dalitz process is of great importance and can provide valuable insights for future experimental investigations. Generally, the invariant mass distribution for the Dalitz process can be defined based on the two-body process [37],

\[\frac{d\sigma_{K^{-}p\to a_{0}\Lambda\to K^{+}K^{-}\Lambda}}{dM_{K^{+}K^{-}}}\approx\frac{2M_{a_{0}}M_{K^{+}K^{-}}}{\pi}\frac{\sigma_{K^{-}p\to a_{0}\Lambda}\Gamma_{a_{0}\to K^{+}K^{-}}}{(M_{K^{+}K^{-}}^{2}-M_{a_{0}}^{2})^{2}+M_{a_{0}}^{2}\Gamma_{a_{0}}^{2}}, \tag{9}\]

\[\frac{d\sigma_{K^{-}p\to a_{0}\Lambda\to\pi\eta\Lambda}}{dM_{\pi\eta}}\approx\frac{2M_{a_{0}}M_{\pi\eta}}{\pi}\frac{\sigma_{K^{-}p\to a_{0}\Lambda}\Gamma_{a_{0}\to\pi\eta}}{(M_{\pi\eta}^{2}-M_{a_{0}}^{2})^{2}+M_{a_{0}}^{2}\Gamma_{a_{0}}^{2}}. \tag{10}\]

Here, the total width of the \(a_{0}\) meson, denoted as \(\Gamma_{a_{0}}\), is 97 MeV.
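A minimal sketch evaluating the Breit-Wigner shape of Eqs. (9)-(10), with the production cross section and partial widths fixed to the representative values used in this section:

```python
import numpy as np

M_a0, Gamma_a0 = 1.817, 0.097          # GeV: a0(1817) mass and total width
sigma_prod = 110e-9                    # barn: sigma(K- p -> a0 Lambda), ~110 nb
Gamma_pieta = 0.0224                   # GeV: partial width used in the text

def dsigma_dM(M, Gamma_partial):
    """Eqs. (9)-(10): invariant-mass distribution of the Dalitz process."""
    return (2 * M_a0 * M / np.pi) * sigma_prod * Gamma_partial \
           / ((M**2 - M_a0**2)**2 + M_a0**2 * Gamma_a0**2)

for M in (1.70, 1.817, 1.95):          # the peak sits at the resonance mass
    print(f"M = {M:.3f} GeV: dsigma/dM = {dsigma_dM(M, Gamma_pieta):.3e} barn/GeV")
```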
For the partial width \(\Gamma_{a_{0}\to K^{+}K^{-}}\), we consider a value of 5 MeV, and for \(\Gamma_{a_{0}\to\pi\eta}\), we use the value of 22.4 MeV. Based on these parameters, we calculate the invariant-mass distributions \(d\sigma_{K^{-}p\to a_{0}\Lambda\to K^{+}K^{-}\Lambda}/dM_{K^{+}K^{-}}\) and \(d\sigma_{K^{-}p\to a_{0}\Lambda\to\pi\eta\Lambda}/dM_{\pi\eta}\) for center-of-mass energies ranging from \(W=3.5\) GeV to \(W=6\) GeV. The results are shown in Fig. 5 and Fig. 6. It can be observed from these figures that there is a peak near the invariant mass of approximately 1.82 GeV, which has direct implications for the experimental detection of the \(a_{0}(1817)\). To further assess the feasibility of detecting the \(a_{0}(1817)\) in \(K^{-}p\) interactions, we calculate the ratio \(\sigma(K^{-}p\to a_{0}(1817)\Lambda\to K^{+}K^{-}\Lambda)/\sigma(K^{-}p\to K^{+}K^{-}\Lambda)\). In Fig. 2, the cross section for \(a_{0}(1817)\) production in \(K^{-}p\) scattering is estimated to be approximately 110 nb at \(W=3.37\) GeV. Assuming a branching ratio of \(BR(a_{0}(1817)\to K^{+}K^{-})\approx 5.2\%\), we obtain a total cross section of \(\sigma_{K^{-}p\to a_{0}(1817)\Lambda\to K^{+}K^{-}\Lambda}\approx 5.72\) nb at \(W=3.37\) GeV. In Ref. [38], a total cross section of 35 \(\mu\)b is reported for \(\sigma(K^{-}p\to K^{+}K^{-}\Lambda)\) at \(W=3.37\) GeV. Based on this value, the ratio at \(W=3.37\) GeV can be calculated as follows:

\[\frac{\sigma(K^{-}p\to a_{0}(1817)\Lambda\to K^{+}K^{-}\Lambda)}{\sigma(K^{-}p\to K^{+}K^{-}\Lambda)}\approx 0.016\%. \tag{11}\]

Considering the current experimental landscape, we are optimistic about the potential of the J-PARC experiment for detecting the \(a_{0}(1817)\) in \(K^{-}p\) scattering [15; 39]. The experimental conditions at J-PARC are well suited for this purpose. Based on the specifications of the J-PARC experiment, it is estimated that approximately 42,000 \(K^{+}K^{-}\Lambda\) events can be generated in 100 days, among which several events are expected to involve the \(a_{0}(1817)\). By performing the same calculation, we find that at \(W=3.37\) GeV the cross section for \(K^{-}p\to a_{0}(1817)\Lambda\to\pi\eta\Lambda\) is approximately 25.40 nb, considering a branching ratio of \(BR(a_{0}(1817)\to\pi\eta)\approx 23.1\%\). The corresponding events can be reliably detected at J-PARC every 100 days, with dozens of events specifically related to the \(a_{0}(1817)\). Consequently, the \(a_{0}(1817)\) can be confidently observed via the \(K^{-}p\to a_{0}(1817)\Lambda\to\pi\eta\Lambda\) reaction under the current experimental conditions. Therefore, with the future upgrade of J-PARC, there is a promising opportunity to discover and study the \(a_{0}(1817)\) in greater detail.

## IV Summary

In the past two years, significant progress has been made in the study of the isovector scalar meson \(a_{0}(1817)\) by the BaBar and BESIII Collaborations. However, the measured resonance parameters of the \(a_{0}(1817)\) differ between these experiments, and there is still a lack of sufficient data to fully understand its structure [12]. In order to further investigate the intrinsic properties of the \(a_{0}(1817)\), we propose to explore its characteristics through \(K^{-}p\) scattering. There are several reasons for choosing the \(K^{-}p\) interaction to search for the \(a_{0}(1817)\). Firstly, the ground state particle of the \(a_{0}(1817)\), the \(a_{0}(980)\), has already been observed in the \(K^{-}p\) scattering process.
Secondly, the decay channel \(a_{0}(1817)\to K^{+}K^{-}\) plays a significant role in the overall decay of the \(a_{0}(1817)\). By employing the effective Lagrangian method and the Regge trajectory model in quantum field theory, we calculate the total and differential cross sections of \(K^{-}p\to a_{0}(1817)\Lambda\). Our results indicate that the total cross section exhibits a peak at a center-of-mass energy of \(W=3.4\sim 3.6\) GeV, suggesting that this energy range is ideal for detecting the \(a_{0}(1817)\) through the \(K^{-}p\to a_{0}(1817)\Lambda\) reaction. Moreover, the differential cross section is highly sensitive to the scattering angle \(\theta\) and the minimum momentum transfer \(t\). Therefore, high-precision data from experimental facilities such as J-PARC, OKA@U-70, and SPS@CERN, which provide suitable kaon beams, are eagerly awaited. Based on the current experimental conditions, the detection of the \(a_{0}(1817)\) from \(K^{-}p\to\pi\eta\Lambda\) is considered more feasible compared to \(K^{-}p\to K^{+}K^{-}\Lambda\). The theoretical insights obtained in this study will provide valuable information for future experiments aimed at identifying and characterizing the \(a_{0}(1817)\) state. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grants No. 12065014, No. 12047501 and No. 12247101, and by the Natural Science Foundation of Gansu province under Grant No. 22JR5RA266. We acknowledge the West Light Foundation of The Chinese Academy of Sciences, Grant No. 21JR7RA201. X. L. is also supported by the China National Funds for Distinguished Young Scientists under Grant No. 11825503, National Key Research and Development Program of China under Contract No. 2020YFA0406400, the 111 Project under Grant No. B20063, the Fundamental Research Funds for the Central Universities, and the project for top-notch innovative talents of Gansu province.
2308.04503
Aspects of Machian Gravity (I): A Mathematical Formulation for Mach's Principle
Einstein formulated the general theory of relativity (GR) with an aim to mathematically incorporate Mach's principle. Despite early hopes, it became evident that GR did not follow Mach's proposition. Nevertheless, due to its accurate explanation of various observational results, Einstein refrained from further attempts to formulate Mach's principle. Over time, multiple researchers attempted to develop gravity theories aligned with the Machian model of inertia. However, each of these theories possessed its own strengths and weaknesses. This paper presents a novel theory of gravity that fully embraces Mach's principle. This metric-based theory, termed as Machian Gravity (MG), can be derived from the action principle, ensuring compliance with all conservation laws. The theory demonstrates its efficacy by providing precise explanations for galactic rotation curves. Moreover, it effectively resolves the discrepancy between dynamic mass and photometric mass in galaxy clusters without resorting to dark matter. It also presents a resolution for the expansion history of the universe without requiring any dark matter and dark energy. Consequently, MG presents a viable and compelling alternative to the standard gravity theory.
Santanu Das
2023-08-08T18:11:44Z
http://arxiv.org/abs/2308.04503v1
# Aspects of Machian Gravity (I): A mathematical formulation for Mach's Principle ###### Abstract Einstein formulated the general theory of relativity (GR) with an aim to mathematically incorporate Mach's principle. Despite early hopes, it became evident that GR did not follow Mach's proposition. Nevertheless, due to its accurate explanation of various observational results, Einstein refrained from further attempts to formulate Mach's principle. Over time, multiple researchers attempted to develop gravity theories aligned with the Machian model of inertia. However, each of these theories possessed its own strengths and weaknesses. This paper presents a novel theory of gravity that fully embraces Mach's principle. This metric-based theory, termed as Machian Gravity (MG), can be derived from the action principle, ensuring compliance with all conservation laws. The theory demonstrates its efficacy by providing precise explanations for galactic rotation curves. Moreover, it effectively resolves the discrepancy between dynamic mass and photometric mass in galaxy clusters without resorting to dark matter. It also presents a resolution for the expansion history of the universe without requiring any dark matter and dark energy. Consequently, MG presents a viable and compelling alternative to the standard gravity theory. ###### Contents * 1 Introduction * 2 Mach's principle and gravity theory * 2.1 Relativity of motions * 2.1.1 General theory of relativity * 2.1.2 General Relativity and its reference frame * 2.2 Sciama's attempt to incorporate Mach's principle in gravity * 2.2.1 Limitations of the theory * 2.3 Variation of the gravitation constant * 2.3.1 Brans Dicke theory * 3 Developing the theory of gravitation and Mach's principle * 3.1 Motion in a curved spacetime * 3.2 The Einstein's tensor * 4 Static, Spherically symmetric, Vacuum solution for weak gravitation field * 4.1 The Tully-Fisher relation * 5 Testing theory against observations * 5.1 Galactic cluster mass * 5.2 Spiral Galactic rotation curves * 6 Machian Gravity in presence of the source terms * 7 Cosmological solution from a generalized metric * 7.1 Cosmology in a Robertson Walker metric * 8 Discussion and Conclusion * A A brief discussion about Kaluza-Klein mechanism * A.1 Cristoffel Symbols * A.2 Kaluza-Klein mechanism * B Understanding the Hoyle-Narlikar's argument with C field * C Calculations for Cosmology * C.1 Calculating the components for a diagonal metric * C.2 Calculating the stress-energy tensor ## 1 Introduction Newtonian gravity can provide a very accurate description of gravity, provided the gravitational field is weak, not time-varying and the concerned velocities are much less than the speed of light. It can accurately describe the motions of planets and satellites in the solar system. Einstein formulated the general theory of relativity (GR) to provide a complete geometric approach to gravity. GR is designed to follow Newtonian gravity at a large scale. It can precisely describe the motion of planets in our solar system. It can explain the perihelion precession of Mercury's orbit and the bending of light by the Sun, which were never realized before, using Newtonian mechanics. Over the years, numerous predictions of GR, such as the existence of black holes, gravitational waves, etc. have been observed. This makes GR one of the most well-accepted theories of gravity. However, the drawbacks of GR come to light when GR is applied on the galactic and cosmological scale. 
It fails to reproduce the galactic velocity profiles when the calculations consider only the visible matter in the galaxy. This led researchers to postulate a new form of weakly interacting matter named dark matter. Earlier, people assumed that dark matter (DM) consists of the particles emerging from supersymmetry theory [1]. However, the lack of evidence for these particles from the Large Hadron Collider (LHC) strengthens the proposition of other candidates, such as axions, ultra-light scalar field dark matter, etc. [2; 3]. A further mysterious puzzle is dark energy (DE), because it requires gravity to produce a repulsive force. The cosmological constant, or \(\Lambda\)-term, provides an excellent solution for this. However, as the observations get more precise, multiple inconsistencies come to light [4; 5; 6; 7; 8; 9; 10; 11]. There can be two ways to resolve the dark sector of the universe. Firstly, we can assume that there is indeed some type of matter that does not interact with standard model particles, giving us dark matter, together with some form of energy with a negative pressure, providing dark-energy-like behavior. While this can indeed be the case, the possibility that GR fails to capture the true nature of gravity on kiloparsec scales can also not be overlooked. In such a case, we need an alternate theory of gravity that can replicate GR on relatively smaller scales while deviating from it on galactic scales. Several theories have been put forward in recent decades to explain DM and DE. Empirical theories like MOND, proposed by [12; 13; 14; 15], can explain the galactic velocity profiles extremely well but violate momentum conservation principles. Therefore, if a mathematically sound theory is developed that can mimic MOND empirically, then that can explain the dark matter. Bekenstein proposed AQUAL [16; 17; 18] to provide a physical grounding for MOND. Other theories, such as modified gravity [19; 20; 21; 22], TeVeS [23], massive gravity [24; 25; 26; 27], etc., have also been proposed to match the galactic velocity profiles without dark matter. Other higher-dimensional theories, such as induced matter theory [28; 29; 30; 31; 32], have also been proposed by researchers. However, all these theories came from the natural desire to explain the observational data and were not built on a solid logical footing. In the early 20th century, Ernst Mach hypothesized that the inertial properties of matter must depend on the distant matter of the universe. Einstein was intrigued by Mach's principle and tried to provide a mathematical construct of it through the general theory of relativity (GR). However, Einstein soon realized that his field equations imply that a test particle in an otherwise empty universe has inertial properties, which contradicts Mach's argument. Intrigued by the overwhelming success of GR in explaining different observational data, he did not make any further attempt to incorporate Mach's principle. In view of this, it is worthwhile searching for a theory which implies that matter has inertia only in the presence of other matter. Several theories that abide by Mach's principle have been postulated in the last century. Amongst these, the most prominent are Sciama's vector potential theory [33], the Brans-Dicke theory, or scalar-tensor theory of gravity [34; 35; 36], and the Hoyle-Narlikar theory [37; 38; 39; 40; 41], etc. In this article, we address the issue of the dark sector of the universe and propose a theory of gravity based on Mach's principle.
It is based on the following premises.

* Action principle: The theory should be derived from an action principle to guarantee that it does not violate conservation laws.
* Equivalence principle: Various research groups have tested the Weak Equivalence Principle (WEP) to exquisite precision. Therefore, any theory must follow the weak equivalence principle. However, the strong equivalence principle has not been tested on a large scale. Therefore, if the ratio of the inertial mass and the gravitational mass changes over spacetime (on a galactic or cosmological scale), then that does not violate the results of our local measurements. In accordance with Mach's principle, the inertial properties of matter come from all the distant matter of the universe. As the matter distribution at different parts of the universe is different, the theory may not follow the strong equivalence principle.
* Departure from Newtonian gravity: Newtonian gravity and GR provide excellent results on the solar-system scale. However, the deviation from GR happens only at the galactic scale. Therefore, the proposed theory should follow GR on the solar-system scale and deviate from it only at the galactic scale.

The paper is organized as follows. The second section briefly discusses the background and previous developments in gravity theory aimed at explaining Mach's principle. In the next section, we explain how Mach's principle can be realized, discuss the mathematical tools used to formulate the theory, and present the source-free field equations. The static, spherically symmetric vacuum solution of the theory in the weak-field approximation is presented in the fourth section. We show that the solution follows Newtonian gravity and GR at smaller scales but deviates from them at large scales. The fifth section provides some examples of galactic rotation curves and galaxy-cluster mass distributions to show that the theory can provide results that match the observations accurately. The source term of the theory is described in the sixth section. The next section explains how we can explain cosmology without additional dark matter and dark energy. The final section contains the conclusion and discussion. We have also added three appendices, where we describe the nitty-gritty of the calculations. In Appendix B, we explain how Hoyle and Narlikar, with their C-field, correctly identified the issue with Mach's principle but failed to provide a correct mathematical solution.

## 2 Mach's principle and gravity theory

Despite being one of the most fundamental concepts in physics, mass lacks a well-defined definition. However, to do any physics, we need some working concept of mass. Mass can be defined in two different ways. The mass defined from the inertial properties of matter is called the inertial mass. On the other hand, the mass defined from the gravitational properties of matter is called the gravitational mass. The gravitational mass can again be of two types, namely active gravitational mass and passive gravitational mass; however, this distinction can be ignored for our discussion here. Much research has been conducted to measure the ratio between the inertial and passive gravitational mass, and it has come out to be the same for all materials. This is termed the weak equivalence principle. However, the inertial mass of a particle is measured based on its motion in an inertial coordinate frame.
Therefore, determining the inertial coordinate system is important for measuring the inertial mass. However, it is difficult to determine a perfect inertial coordinate system because there is no external reference frame against which to measure its acceleration. Ernst Mach postulated that the inertial frame can be determined by measuring the motions of the distant objects in the universe. This implies that the distant objects in the universe actually determine the inertial properties of matter, which is the famous Mach's principle. Therefore, if two identical objects are kept at two different locations in the universe, then, depending on their backgrounds, the inertial masses of those two particles may differ. We elaborate on this concept below.

### Relativity of motions

The velocity or the acceleration of a particle are relative quantities, i.e., they are always measured with respect to some reference frame [42]. The velocity or the acceleration of a running train is measured with respect to the surface of the Earth. However, the Earth is orbiting the Sun, which is in turn circling our galaxy. The galaxy also has some random motion within its galaxy cluster, and so on. Therefore, if the origin of the coordinate system is chosen to be at the center of the galaxy, then the velocity and the acceleration of the train will be completely different. As the acceleration of a particle is related to the force exerted on it, the force is also associated with the coordinate system.

Let us consider a stone tied to a string and whirled around in a circle. We define two reference frames: one with its origin at the center of the circle, which is fixed with respect to us, and the other fixed to the stone. In the reference frame fixed at the center of the circle, we can analyze the forces on the stone using Newton's law. If \(m_{i}\) is the inertial mass, \(v\) is the velocity of the stone, \(r\) is the radius of the circle, and \(T\) is the tension in the string, then using Newton's law we can write

\[m_{i}\frac{v^{2}}{r}=T\,. \tag{1}\]

On the other hand, in the reference frame fixed to the stone, the stone has no velocity, \(v=0\). Therefore, the left-hand side of Eq. (1) becomes zero, while the right-hand side, i.e., the tension in the string towards the center, remains \(T\). The equality of Eq. (1) therefore does not hold in this frame. Newton's law is not applicable in this reference frame, which is called a non-inertial reference frame. To balance the equation in such frames, we need additional fictitious forces, known as inertial forces. In this example, the fictitious force is the centrifugal force, and it is equal and opposite to the tension, i.e., \(-T\). The source of these inertial forces is still unknown. Interestingly, in the second reference frame, the rest of the universe, i.e., the distant stars, galaxies, etc., is rotating. Therefore, Mach [43] hypothesized that inertial forces are generated by the rotation of all the distant stars in the non-inertial coordinate system; this is the so-called Mach's principle.

#### 2.1.1 General theory of relativity

The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. The general theory of relativity (GR) was proposed to extend this philosophy. The logic behind Einstein's GR can be described as follows.
According to Newtonian gravity, if we place a particle in a gravitational field, then

\[\text{Inertial mass}\times\text{acceleration}=\text{passive gravitational mass}\times\text{gravitational field}\,. \tag{2}\]

Therefore, if we consider the inertial mass to be equal to the passive gravitational mass, then in a small region of spacetime it is impossible to distinguish between an acceleration and a gravitational field. To derive GR, Einstein then showed that spacetime becomes curved in an accelerated reference frame, using a simple thought experiment. He considered two systems, \(K\) and \(K^{\prime}\), whose \(z\) axes are aligned, where \(K\) is an inertial frame and \(K^{\prime}\) is a noninertial frame rotating with constant angular velocity with respect to the \(K\) frame. If we take a circle on the \(x^{\prime}y^{\prime}\) plane of the \(K^{\prime}\) reference frame and fill its diameter and perimeter with small rigid rods of length \(l\), then in the \(K\) frame the perimeter-to-diameter ratio is \(\pi\). However, in the \(K^{\prime}\) frame the ratio is greater than \(\pi\), due to the length contraction of special relativity. He used Riemannian geometry to show that this spacetime curvature can account for the acceleration. As acceleration and gravitational field cannot be distinguished within a small enough region of spacetime, in GR the spacetime curvature is related to the stress-energy tensor to formulate the theory of gravitation.

#### 2.1.2 General relativity and its reference frame

Even though GR was proposed based on the philosophy that "the laws of physics must be of such a nature that they apply to all systems of reference", GR does not fully follow this principle. To see that, we can consider a very simple example. Let us consider a hypothetical universe with no particles except two buckets full of water, one rotating with respect to the other. As there is no external reference frame, there is no way to know which bucket is rotating. So which one of these buckets will experience the centripetal force? In other words, the water level of which bucket will be curved? There is probably no correct answer to this question, because in this hypothetical universe there are no other distant objects that can be used to determine the inertial reference frame. So probably the answer is that neither of the water surfaces will get curved, because, according to Mach's principle, the inertia of a particle is determined by the distant objects in the universe. In the real world, however, the distant stars create the background star field, and an inertial or noninertial frame is determined with respect to that star field. In Einstein's thought experiment given in the previous section, \(K\) is an inertial reference frame. An observer in the \(K\) reference frame will see that the background field created by the distant stars, galaxies, etc., is static. However, an observer in the \(K^{\prime}\) reference frame finds that the background stars, galaxies, etc., are rotating. Therefore, somehow this distant star field is holding the space, and we are measuring motions with respect to that space. If the \(K^{\prime}\) reference frame rotates with respect to \(K\) at an angular velocity \(\omega\), then all the distant stars in the \(K^{\prime}\) frame will rotate at an angular velocity \(\omega\).
We can extend the above thought experiment and consider that the entire background star field is set rotating so that the observer in the \(K^{\prime}\) reference frame finds herself static with respect to the distant stars. The observer in the \(K\) reference frame will now see the background stars rotating in her reference frame. Therefore, even though the observers in the \(K\) and \(K^{\prime}\) reference frames have done nothing, their definitions of the inertial reference frame get flipped, because it is now the entire spacetime that is rotating. The general theory of relativity has no term that can account for this effect. Therefore, a proper theory of gravity should contain all these effects.

### Sciama's attempt to incorporate Mach's principle in gravity

There have been several attempts to incorporate Mach's principle into gravity. According to Mach's principle, as the rest of the universe determines the inertial frames, inertia cannot be an intrinsic property of matter; rather, it should arise from the interaction of matter with the universe. This raises the question of how Newton's laws of motion can be so accurate despite their complete lack of reference to the physical properties of the universe, such as its energy density. In 1954, Sciama proposed a model of gravity in which he postulated that "in the rest-frame of any body the total gravitational field at the body arising from all the other matter in the universe is zero" [33; 44; 45]. He derived a vector potential from gravity and the velocity of a particle and showed that, in a noninertial reference frame, the universe can provide the inertial forces, such as the Coriolis force or the centrifugal force. To give a brief overview of Sciama's theory: he considers that the universe creates a potential on the test particle. If the universe has a uniform density \(\rho\) with respect to a test particle, then the total potential of the universe on that particle will be

\[\Phi=-G\int_{V}\frac{\rho}{r}dV. \tag{3}\]

Here, \(G\) is the gravitational constant and \(dV\) is the volume element of the universe between distances \(r\) and \(r+dr\) from the test particle. If the particle moves with a velocity \(-\vec{v}\) with respect to the smoothed-out universe, then in the rest frame of the test particle the universe moves with a velocity \(\vec{v}\). So the vector potential exerted by the universe on the particle is given by

\[\vec{A}=-G\int_{V}\frac{\vec{v}\rho}{cr}dV=-G\left(\frac{\vec{v}}{c}\right)\int_{V}\frac{\rho}{r}dV=\left(\frac{\vec{v}}{c}\right)\Phi. \tag{4}\]

Here we take the velocity outside the integral because \(\vec{v}\) is independent of \(r\). Relativistic effects are not considered, i.e., the \(\left(\frac{v^{2}}{c^{2}}\right)\) terms are neglected. In the relativistic limit, we need to use the four-velocity, which gives the four-potential, exactly as in electromagnetism. This construction actually solves the problem of inertia related to Mach's principle. To understand that, we consider two cases.

#### Linear motion

First, let us consider linear motion. In the rest frame of the test particle, the universe moves with a linear velocity \(\vec{v}\).
Therefore, the gravitational field that the universe creates due to the motion of the particle is given by

\[\vec{E}=-\vec{\nabla}\Phi-\frac{1}{c}\frac{\partial\vec{A}}{\partial t}=-\frac{1}{c^{2}}\Phi\frac{\partial\vec{v}}{\partial t}\,. \tag{5}\]

Here \(\vec{\nabla}\Phi=0\), as we are considering an isotropic universe, and the variation of \(\Phi\) with respect to \(t\) is negligibly small, hence \(\frac{\partial\Phi}{\partial t}=0\). So, in the rest frame of the particle, an observer will experience this kind of gravitational field, and if the passive gravitational mass of the test particle is \(m_{p}\), then the particle will see a force \(\vec{F}=m_{p}\vec{E}\) coming from the universe. It is important to note that if we define \(m_{i}=m_{p}\frac{\Phi}{c^{2}}\), then Newton's law is recovered. Therefore, we do not need to consider any additional pseudo-forces to balance Newton's laws. Exactly as in electromagnetism, we can also introduce a gravitomagnetic field given by \(\vec{B}=\vec{\nabla}\times\vec{A}\); in this case, however, \(\vec{\nabla}\times\vec{A}=0\).

#### Circular motion

When the test particle is static with respect to the universe, the gravitational potentials on the test particle from the universe are

\[\vec{A}=0\qquad\mbox{and}\qquad\Phi=-G\int_{V}\frac{\rho}{r}dV\,. \tag{6}\]

If the test particle rotates, then in the test particle's reference frame the universe rotates. Therefore, the rotating universe should create a potential on the test particle. If the particle rotates at an angular velocity \(\omega\) in the \(x\)-\(y\) plane, then the four-potential can be written as

\[A_{x}=\omega yI\,,\qquad A_{y}=-\omega xI\,,\qquad A_{z}=0\,,\qquad\Phi=-[1+\omega^{2}r^{2}]^{\frac{1}{2}}I\,, \tag{7}\]

where \(r^{2}=x^{2}+y^{2}\) and \(I=G\int_{V}\frac{\rho}{r}dV\). As before, we can calculate the gravitational field of the universe as

\[\vec{E}=-\vec{\nabla}\Phi-\frac{1}{c}\frac{\partial\vec{A}}{\partial t}=\frac{\omega^{2}r}{(1+\omega^{2}r^{2})^{\frac{1}{2}}}I\approx\omega^{2}rI\,,\qquad\mbox{for}\qquad\omega r\ll 1\,. \tag{8}\]

Therefore, in the rest frame of the universe, the test particle follows the standard Newtonian equations with the centripetal force. In the rest frame of the particle, however, the universe provides a gravitational field \(\vec{E}\) that has the same expression as the centripetal force and acts as the centrifugal force. Therefore, we do not require any pseudo-forces. Here the gravitomagnetic field is also nonzero and is given by

\[\vec{B}=\vec{\nabla}\times\vec{A}=2\vec{\omega}I\,. \tag{9}\]

The test particle is not moving in its rest frame and will not experience any force from this field. However, if a second test particle moves at a velocity \(\vec{v}\) in this frame, it will experience a force due to this gravitomagnetic effect, which, in analogy with electromagnetism, is given by

\[\vec{v}\times\vec{B}=2\vec{v}\times\vec{\omega}I\,. \tag{10}\]

This field corresponds to the Coriolis field of Newtonian theory. Finally, we can also recover the Euler force for an accelerated rotation using straightforward calculations.

#### 2.2.1 Limitations of the theory

Even though Sciama's theory brilliantly incorporates Mach's principle, it has multiple limitations.

1. The theory has been derived for a vector potential from Newtonian mechanics. However, the field equations of gravity are given by the variation of \(g_{\mu\nu}\).
Therefore, a vector field equation cannot provide a complete theory of gravity; we need a tensor field equation.
2. In [33], it is assumed that the integral \(I\to 1\), which leads to the correct value of the inertial forces. However, the value of the integral depends on the content of the universe and on the distribution of matter in it. The gravitational force depends on the passive gravitational mass of the test particle, and the inertial force depends on its inertial mass. Therefore, we need the integral to be unity to achieve equality between the passive gravitational mass and the inertial mass, i.e., the \(\Omega_{m}\), \(\Omega_{r}\), and \(\Omega_{\Lambda}\) of the universe need to be such that the integral is unity. In fact, at different redshifts the integral will be different. Therefore, the ratio of the gravitational to the inertial mass will differ, i.e., it will be a function of space-time.
3. Most of the contribution to the inertia comes from the distant objects of the universe. Local objects have little influence on the inertia of the test particle: the contribution to the integral from objects on the galactic or cluster scale is only about \(10^{-7}-10^{-9}\) of the full contribution. However, if the inertial reaction is mediated by gravity, the gravitational action must be instantaneous, so there may be issues with causality.

### Variation of the gravitational constant

According to Mach's principle, the only meaningful motion is that relative to the rest of the matter in the universe, and the inertial reaction relative to the distant matter of the universe may be interpreted equivalently as a gravitational force acting on a fixed laboratory due to the presence of the distant accelerated matter (which we will refer to as the background matter henceforth). However, the mass distribution of the universe is nonuniform; e.g., the density of the universe at two different redshifts, \(z_{1}\) and \(z_{2}\), is different. Therefore, if we place two particles at \(z_{1}\) and calculate the inertial reaction on one of them due to their mutual gravitational attraction, the result should differ from that obtained when they are kept at redshift \(z_{2}\). Newtonian gravitation, however, does not show this property. Brans and Dicke [34] tried to address this particular aspect of Mach's principle. They showed that the gravitational constant should then change as a function of space and time.

#### 2.3.1 Brans–Dicke theory

The Brans–Dicke theory, instead of addressing Mach's principle in its entirety, focused on a particular aspect of it: how the gravitational constant must change if Mach's principle is correct. The interpretation that inertia is a gravitational interaction with distant objects in the universe can be given an interesting implementation. Consider a test body falling towards the Sun. In a coordinate system in which the object is not accelerating, the Sun's gravitational pull may be considered balanced by another gravitational pull of the rest of the universe, which we call the inertial reaction. Now, if we double all the gravitational forces, the balance is not disturbed. Thus the acceleration is determined by the gravitational pull of the universe but is independent of the strength of the gravitational interaction.
If the mass of the Sun is \(m_{s}\) and the distance of the test particle from the Sun is \(r\), then the acceleration of the test particle due to the Sun's gravity will be \(a=Gm_{s}/r^{2}\). As the acceleration is independent of the gravitational constant \(G\), dimensional analysis lets us write \(a\sim m_{s}R_{U}c^{2}/M_{U}r^{2}\), where \(M_{U}\) is the mass of the visible universe and \(R_{U}\) is the Hubble radius near the test particle. These two equations can be combined to get

\[GM_{U}/R_{U}c^{2}\sim 1\,. \tag{11}\]

As the mass distribution of the universe is not the same at all places and times, \(G\) will vary over space and time. If we take \(\hbar\) and \(c\) to be constant, we can define a characteristic mass

\[(\hbar c/G)^{\frac{1}{2}}=2.16\times 10^{-5}\mbox{g}\qquad\cdots\qquad\mbox{(in our neighbourhood)} \tag{12}\]

and this mass can be used as a reference against which other particles in that part of the universe are measured. It should be noted that \(\hbar\) and \(c\) are defined to be constant here. The above effect can be accommodated by letting \(G\) vary, or the inertial or gravitational masses, or any combination of these. For simplicity, Brans and Dicke considered the masses to be constant and \(G\) to be varying. It can be assumed that \(G\) is a function of a scalar field \(\phi\), with \(G^{-1}\sim\phi\), and this leads to the action

\[S=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}\left(\phi R-\frac{w_{D}}{\phi}\partial_{\alpha}\phi\partial^{\alpha}\phi\right)+\int d^{4}x\sqrt{-g}{\cal L}_{\rm M}\,, \tag{13}\]

where \(w_{D}\) is a dimensionless constant known as the Dicke coupling constant, \(R\) is the Ricci scalar, and \({\cal L}_{\rm M}\) is the Lagrangian of the matter content. This gives the field equations of the Brans–Dicke theory as

\[G_{\alpha\beta} = \frac{8\pi}{\phi}T_{\alpha\beta}+\frac{w_{D}}{\phi^{2}}\left(\partial_{\alpha}\phi\partial_{\beta}\phi-\frac{1}{2}g_{\alpha\beta}\partial_{c}\phi\partial^{c}\phi\right)+\frac{1}{\phi}\left(\nabla_{\alpha}\nabla_{\beta}\phi-g_{\alpha\beta}\Box\phi\right), \tag{14}\]
\[\Box\phi = \frac{8\pi}{3+2w_{D}}T\,, \tag{15}\]

where \(g_{\alpha\beta}\) is the metric tensor, \(G_{\alpha\beta}=R_{\alpha\beta}-\frac{1}{2}Rg_{\alpha\beta}\) is the Einstein tensor, \(T_{\alpha\beta}\) is the stress-energy tensor, \(T=T_{\alpha}^{\alpha}\) is its trace, \(\phi\) is the scalar field, and \(\Box\phi=(\sqrt{-g})^{-1}(\sqrt{-g}g^{\alpha\beta}\partial_{\beta}\phi)_{;\alpha}\).

Many researchers have measured the variation of the gravitational constant. Ref. [46] measured orbital period derivatives of pulsars and set a limit of \(|\dot{G}/G|=23\times 10^{-12}\,\mathrm{yr}^{-1}\); other such limits from pulsar timing are studied in [47; 48; 49]. From white dwarf cooling, an upper bound of \(\dot{G}/G=-1.8\times 10^{-12}\,\mathrm{yr}^{-1}\) has been set [50; 51], and from white dwarf pulsations a limit of \(\dot{G}/G=-1.3\times 10^{-10}\,\mathrm{yr}^{-1}\) [51; 52; 53]. The variation of the gravitational constant has also been constrained from lunar laser ranging to be \(\dot{G}/G=(4\pm 9)\times 10^{-13}\,\mathrm{yr}^{-1}\) [54]. Current planetary radar experiments have measured a significant linear increase of \(dAU/dt=0.15\pm 0.04\,\mathrm{m\,yr}^{-1}\), which may imply \(\dot{G}/G=(-10\pm 3)\times 10^{-13}\,\mathrm{yr}^{-1}\) [55]. Constraints have also been put forward from Type Ia supernovae (SN Ia).
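Before quoting the corresponding dimensionless bound, it is useful to see how such yearly drift rates compare with the Hubble rate. The following is a quick numerical conversion of our own (the value \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) is an assumed input, not a value taken from this paper):

```python
# Convert measured Gdot/G drift limits (in yr^-1) into the dimensionless
# combination Gdot/(G H0). Illustrative only; H0 = 70 km/s/Mpc is assumed.
KM_PER_MPC = 3.0857e19   # kilometres per megaparsec
SEC_PER_YR = 3.156e7     # seconds per year

H0 = 70.0 / KM_PER_MPC            # Hubble parameter in s^-1
H0_per_yr = H0 * SEC_PER_YR       # about 7.2e-11 yr^-1

limits = {"lunar laser ranging": 4e-13,
          "pulsar timing": 23e-12,
          "white-dwarf cooling": 1.8e-12}
for label, rate in limits.items():
    print(f"{label}: |Gdot/(G H0)| ~ {rate / H0_per_yr:.3f}")
```

With these numbers, the lunar-laser-ranging limit corresponds to \(|\dot{G}/(GH_{0})|\sim 0.006\), i.e., well inside the dimensionless bound quoted next.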
An equivalent dimensionless limit is \(-0.5<\dot{G}/(GH_{0})<1\), where \(H_{0}\) is the Hubble parameter at present.

The problem with the Brans–Dicke theory is that the gravitational constant is taken as a variable and treated as a scalar field. Firstly, this destroys the beauty of general relativity, in which gravity originates purely from the geometry of space-time. Secondly, as the gravitational constant is a coupling constant, treating it as a scalar field is ad hoc: it is unclear why it should behave as a scalar field and what that physically means.

## 3 Developing the theory of gravitation and Mach's principle

In the previous section, we discussed how the Brans–Dicke theory explains a particular aspect of Mach's principle through the variation of \(G\). Of course, within their logic, it makes no sense to fix which parameter is constant and which one varies: we can get the same effect by keeping \(G\) constant and varying the masses, or \(\hbar\), or \(c\); some parameters are defined as constants merely to simplify the equations. However, choosing \(G\) to vary may not always be a good idea. Let us assume that there are two particles with gravitational masses \(m_{1}\) and \(m_{2}\) and charges \(q_{1}\) and \(q_{2}\). They are placed at a distance \(r\), and their gravitational attraction and electric repulsion balance each other, i.e.,

\[\frac{Gm_{1}m_{2}}{r^{2}}=\frac{1}{4\pi\epsilon_{0}}\frac{q_{1}q_{2}}{r^{2}}\,. \tag{3.1}\]

Suppose we change the background of these particles (i.e., change all the matter distribution around them, or take the system and place it in another part of the universe). According to the Brans–Dicke theory, \(G\) will change. Therefore, the system will become unbalanced unless \(q_{1}q_{2}/\epsilon_{0}\) also changes accordingly to counter the change in \(G\). The charges and the gravitational masses are intrinsic properties of matter, and it is not logical for the forces to become unbalanced as a result of a background change; so we would also need to change \(\epsilon_{0}\). However, instead of changing the coupling constant of each force, an easier way to fix this is to give up the strong equivalence principle. The inertial mass of a particle reflects its inertial properties and hence depends on the background. Therefore, we can consider that the ratio between the inertial and the gravitational mass changes; this does not violate any of the logic of the Brans–Dicke formulation.

Suppose two identical particles are kept at two different places in the universe such that the backgrounds at the two positions are different. In accordance with the previous discussion, the inertial properties of the two particles are different at those two locations. If the special theory of relativity is valid and those masses are completely converted to energy at those two places, then the amounts of energy released will be different. This would violate the energy conservation principle. Therefore, either some modification is needed to save the energy conservation principle, or some technique is needed that can relate the masses at the two places. If the total energy of the particle for the first configuration of the background is \(E_{1}\) and that of the second is \(E_{2}\), then according to the special theory of relativity the energies are related to the inertial masses as

\[E_{1}^{2}=m_{1}^{2}c^{4}+p_{1}^{2}c^{2}\qquad\text{and}\qquad E_{2}^{2}=m_{2}^{2}c^{4}+p_{2}^{2}c^{2}\,. \tag{3.2}\]
Here \(p_{1}\) and \(p_{2}\) are the momenta of the particle in these two background configurations, and they can be taken to be 0. Assuming that the background masses are far away from the particle, we can take the 4-dimensional spacetime to be completely flat according to general relativity. Therefore, there is no reason to believe that the energies of the particle at these two places are different. However, according to Mach's principle, \(m_{1}\neq m_{2}\). We can add an extra term to the equation to save energy conservation and assume that this extra energy comes from a fifth dimension:

\[E^{2}=m^{2}c^{4}+p^{2}c^{2}+E_{m}^{2}\,.\]

Now, as we have an extra term measuring the contribution from the background, we can, without loss of generality, redefine the coordinate system and use some intrinsic quantity \(m\) in the equations, such as the gravitational mass, instead of the inertial mass. This simplifies the equations by making the mass constant, and all the effects due to the background variation are captured by the \(E_{m}\) term. We can divide the equation by \(m\) to get the line element

\[ds^{2}=c^{2}dt^{2}-dx^{2}-dy^{2}-dz^{2}-\frac{\hbar^{2}}{4}d\zeta^{2}\,. \tag{3.3}\]

The last term comes from a new dimension \(\zeta\), which we refer to as the background dimension (or mass dimension); it has dimension \([M^{-1}L^{-1}T]\). The Planck constant \(\hbar\) is inserted to match the dimensions [56]. We can show that this formulation not only solves the energy equation in flat spacetime, but also solves the reference frame problem and the Brans–Dicke type of problem in a 5D curved spacetime.

### Motion in a curved spacetime

The five-dimensional line element can be written as \(ds^{2}=\tilde{g}_{AB}dx^{A}dx^{B}\), where \(\tilde{g}_{AB}\) is the 5-dimensional metric. We use roman uppercase indices, \(A,B,C,\ldots\), for the \(5D\) coordinate system; they run from 0 to 4. We use a tilde to denote quantities in the \(5D\) frame. The Greek indices \(\alpha,\beta,\gamma,\ldots\) are used for the \(4D\) spacetime and run from 0 to 3. A five-dimensional metric can be written in 4+1 dimensional form as

\[\widetilde{g}_{AB}=\left(\begin{array}{cc}g_{\alpha\beta}+\phi^{2}A_{\alpha}A_{\beta}&\phi^{2}A_{\alpha}\\ \phi^{2}A_{\beta}&\phi^{2}\end{array}\right)\qquad\qquad\text{or}\qquad\qquad\widetilde{g}^{AB}=\left(\begin{array}{cc}g^{\alpha\beta}&-A^{\alpha}\\ -A^{\beta}&A^{\beta}A_{\beta}+\frac{1}{\phi^{2}}\end{array}\right)\,, \tag{3.4}\]

where \(g_{\alpha\beta}\) is the 4-dimensional metric, \(A_{\alpha}\) is a 4-dimensional vector field, and \(\phi\) is a scalar field. We can show that this metric can explain various aspects of Mach's principle.

#### Producing the inertial forces

First, let us understand how we can get the different inertial or fictitious forces from our universe. For this, we consider the motion of a particle in a small enough region of spacetime. As the distances concerned are not very large, we can safely consider that the background of the particle remains roughly the same. Therefore, we can take all the partial derivatives with respect to \(x^{4}\) as 0 and \(\phi\) as constant. We have calculated the 5-dimensional Christoffel symbols in terms of the 4D variables; they are shown in Appendix A.
Replacing the partial derivatives with respect to \(x^{4}\) by 0 and taking \(\phi\) constant, we get

\[\widetilde{\Gamma}^{4}_{44}=0\,,\qquad\widetilde{\Gamma}^{4}_{4\nu}=\frac{1}{2}A^{\alpha}\phi^{2}F_{\alpha\nu}\,,\qquad\widetilde{\Gamma}^{4}_{\alpha\nu}=-A_{\beta}\Gamma^{\beta}_{\alpha\nu}+A^{\beta}A_{\nu}\phi^{2}F_{\beta\alpha}+\frac{1}{2}\left(\partial_{\alpha}A_{\nu}+\partial_{\nu}A_{\alpha}\right)\,,\]
\[\widetilde{\Gamma}^{\nu}_{44}=0\,,\qquad\widetilde{\Gamma}^{\nu}_{4\alpha}=\frac{1}{2}g^{\nu\mu}\left(\phi^{2}F_{\alpha\mu}\right)\,,\qquad\widetilde{\Gamma}^{\beta}_{\mu\nu}=\Gamma^{\beta}_{\mu\nu}+\frac{1}{2}g^{\beta\alpha}\left(A_{\nu}\phi^{2}F_{\mu\alpha}+A_{\mu}\phi^{2}F_{\nu\alpha}\right)\,. \tag{3.5}\]

The motion of a particle is given by the five-dimensional geodesic equation

\[\frac{\mathrm{d}^{2}x^{A}}{\mathrm{d}\tau^{2}}+\widetilde{\Gamma}^{A}_{BC}\frac{\mathrm{d}x^{B}}{\mathrm{d}\tau}\frac{\mathrm{d}x^{C}}{\mathrm{d}\tau}=0\,. \tag{3.6}\]

If we substitute the Christoffel symbols from Eq. 3.5, then for the spacetime coordinates we get (this is the standard Kaluza–Klein mechanism; for the full calculation, see Appendix A)

\[\frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}}+\Gamma^{\mu}_{\nu\lambda}\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\frac{\mathrm{d}x^{\lambda}}{\mathrm{d}\tau}=F^{\mu}_{\nu}\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\,. \tag{3.7}\]

Provided we consider \(A^{\mu}\) to be the velocity of the coordinate system with respect to the background, this equation is the same as the one we discussed in Sciama's theory in Sec. 2.2, and we can get all the fictitious forces from the right-hand side of the equation. For illustration, assume again that a particle is whirling in a circular path. One reference frame is fixed at the center; in this reference frame, the background has no velocity, i.e., \(A^{\mu}=0\), and the right-hand side of the equation is zero. Suppose the particle rotates in a circle of radius \(r\) with angular velocity \(\omega\). We can write the line element as \(ds^{2}=dt^{2}-dr^{2}-r^{2}d\phi^{2}\). As the particle rotates in a circular orbit, we can write \(d\phi=d\phi^{\prime}+\omega dt\). Substituting this in the line element, we get

\[ds^{2}=\left(1-\omega^{2}r^{2}\right)dt^{2}-2\omega r^{2}dtd\phi^{\prime}-dr^{2}-r^{2}d\phi^{\prime 2}\,. \tag{3.8}\]

For this line element, we can calculate the Christoffel symbols

\[\Gamma^{r}_{tt}=-\omega^{2}r\,,\qquad\Gamma^{r}_{\phi^{\prime}\phi^{\prime}}=-r\,,\qquad\Gamma^{r}_{t\phi^{\prime}}=\Gamma^{r}_{\phi^{\prime}t}=-\omega r\,, \tag{3.9}\]

and the geodesic equation for \(r\) becomes

\[\ddot{r}-\omega^{2}r\dot{t}\dot{t}-2\omega r\dot{t}\dot{\phi}^{\prime}-r\dot{\phi}^{\prime}\dot{\phi}^{\prime}=0\,. \tag{3.10}\]

The overdot represents the derivative with respect to the line element \(ds\), which is roughly equal to \(dt\) in the non-relativistic case. As \(\dot{\phi}^{\prime}=0\) and \(\dot{t}\approx 1\), we get \(\ddot{r}=\omega^{2}r\). This is the standard centripetal acceleration of Newtonian mechanics. Now, what happens in the particle's reference frame? In this reference frame, the particle has no velocity, i.e., \(\omega=0\) and \(\dot{\phi}=0\). Therefore, according to Newtonian mechanics, there is no centripetal force, and we need to introduce a centrifugal force of unknown origin to balance the tension in the string. In our theory, however, the background now has a velocity \(A^{\phi}=r\omega\) and \(A^{t}=(1+r^{2}\omega^{2})^{\frac{1}{2}}\).
The right-hand side of the equation then gives

\[\ddot{r}=\frac{\partial A^{t}}{\partial r}=\frac{\omega^{2}r}{(1+\omega^{2}r^{2})^{\frac{1}{2}}}\approx\omega^{2}r\,, \tag{3.11}\]

where we have taken \(\dot{t}\approx 1\). This is the centrifugal force, which originates from the background rotation in the particle's reference frame. Therefore, this theory solves the reference-frame-related issues of Mach's principle.

### The Einstein tensor

The vacuum field equation of the theory is \(\widetilde{G}_{AB}=0\), where \(\widetilde{G}_{AB}\) is the 5-dimensional Einstein tensor. A few straightforward calculations show that \(\widetilde{G}_{AB}=0\) in 5 dimensions translates into the following equations in four dimensions:

\[G_{\alpha\beta}=\frac{\phi^{2}}{2}\left(g_{\alpha\beta}F_{\gamma\delta}F^{\gamma\delta}/4-F_{\alpha}^{\gamma}F_{\beta\gamma}\right)-\frac{1}{\phi}\left[\nabla_{\alpha}\left(\partial_{\beta}\phi\right)-g_{\alpha\beta}\Box\phi\right]+P_{\alpha\beta}\,, \tag{3.12}\]
\[\nabla^{\alpha}F_{\alpha\beta}=-3\frac{\partial^{\alpha}\phi}{\phi}F_{\alpha\beta}+Q_{\beta}\,, \tag{3.13}\]
\[\Box\phi=\frac{\phi^{3}}{4}F_{\alpha\beta}F^{\alpha\beta}+U\,. \tag{3.14}\]

Here \(F_{\alpha\beta}=A_{\alpha;\beta}-A_{\beta;\alpha}\) is the field tensor; \(P_{\alpha\beta}\), \(Q_{\beta}\), and \(U\) are terms containing the derivatives of the metric elements with respect to the fifth dimension, \(\zeta\); and \(G_{\alpha\beta}\) is the four-dimensional Einstein tensor. We can see that the 4-dimensional Einstein tensor comes with terms on the right-hand side even in the absence of any matter. These extra terms in fact behave in the same way as matter and curve the space-time [28]. They can be interpreted as extra energy coming from the background (i.e., from distant stars, galaxies, etc., due to their motion with respect to the chosen reference frame). If the background of a particle changes, the terms on the right-hand side change. Therefore, any object sitting in that part of space-time will feel a force originating from the background, as shown before, and there is no need to introduce any fictitious force by hand. In the bucket thought experiment explained in Sec. 2.1.2, the distant objects are rotating in the \(K^{\prime}\) reference frame. Therefore, according to Eq. 3.12, \(G_{\alpha\beta}\) is nonzero, which gives rise to the inertial forces required to balance the equation.

The need for these additional terms on the right-hand side of Einstein's field equations was realized much earlier by Hoyle and Narlikar. They added an ad hoc scalar field with negative energy, which they termed the \(C\)-field (see Appendix B). This became the basis for energy conservation in steady-state cosmology. However, as we can see, the required terms are much more complicated than a simple scalar field.

#### Variation of the gravitational field

Another interesting aspect of Eq. 3.12 is that, provided the field tensor is 0, i.e., in a standard inertial reference frame where the background has no acceleration, \(G_{\alpha\beta}\) acquires a term similar to that of the Brans–Dicke scalar field. This is a special case of the Brans–Dicke equation with Dicke coupling constant \(w_{D}=0\). The equation of motion for the scalar field \(\phi\), given by Eq. 3.14, is also slightly different. In addition, there are the terms \(P_{\alpha\beta}\), \(Q_{\beta}\), and \(U\), which involve derivatives with respect to the \(\zeta\) dimension.
However, the important point is that we get a scalar field from the theory, which appears in a manner similar to the Brans–Dicke equation and can always be treated as a variation of \(G^{-1}\) or of some other parameter, as explained in their paper. So, while our theory is completely different from the Brans–Dicke theory, it provides a similar scalar field.

## 4 Static, spherically symmetric vacuum solution for a weak gravitational field

In vacuum, the field equation of the theory is \(\widetilde{G}_{AB}=0\), which after some rearrangement can be written as \(\widetilde{R}_{AB}=0\), where \(\widetilde{R}_{AB}\) is the Ricci tensor. If there is no gravitational field and the reference frame has no acceleration with respect to the background created by the distant stars, then the metric is given by the 5-dimensional Minkowski spacetime. If some nearby object creates a weak gravitational field, then the metric is perturbed. Let us denote the perturbation of the metric due to the gravitational field by \(\widetilde{\gamma}_{AB}\). In this weak-field limit, only \(\widetilde{R}_{00}=\widetilde{R}_{0C0}^{C}\) is important, where the term on the right-hand side is the Riemann tensor; the remaining terms of the Ricci tensor are \({\rm O}(1/c)\) or \({\rm O}(\hbar/c)\) smaller and can be ignored. We can expand the Riemann tensor as

\[\widetilde{R}^{B}_{0A0}=\partial_{A}\widetilde{\Gamma}^{B}_{00}-\partial_{0}\widetilde{\Gamma}^{B}_{A0}+\widetilde{\Gamma}^{B}_{AC}\widetilde{\Gamma}^{C}_{00}-\widetilde{\Gamma}^{B}_{0C}\widetilde{\Gamma}^{C}_{A0}\,. \tag{4.1}\]

The second term is a time derivative, which vanishes for static fields. The third and fourth terms are of the form \((\widetilde{\Gamma})^{2}\), and since \(\widetilde{\Gamma}\) is first order in the metric perturbation, they contribute only at second order and can be neglected, giving

\[\widetilde{R}_{00}=\widetilde{R}^{A}_{0A0}=\partial_{A}\left(\frac{1}{2}\widetilde{g}^{AC}\left(\partial_{0}\widetilde{g}_{C0}+\partial_{0}\widetilde{g}_{0C}-\partial_{C}\widetilde{g}_{00}\right)\right)=-\frac{1}{2}\widetilde{g}^{AB}\partial_{A}\partial_{B}\widetilde{\gamma}_{00}\,. \tag{4.2}\]

For the static solution, the time derivatives vanish, and the equation becomes

\[\partial_{\zeta}^{2}\widetilde{\gamma}_{00}+\partial_{x}^{2}\widetilde{\gamma}_{00}+\partial_{y}^{2}\widetilde{\gamma}_{00}+\partial_{z}^{2}\widetilde{\gamma}_{00}=0\,. \tag{4.3}\]

Under the assumption of spherical symmetry of the spatial part, this can be written as

\[\partial_{\zeta}^{2}(r\widetilde{\gamma}_{00})+\partial_{r}^{2}(r\widetilde{\gamma}_{00})=0\,. \tag{4.4}\]

Using separation of variables, with \((r\widetilde{\gamma}_{00})=R(r)\chi(\zeta)\), we get

\[\frac{1}{R}\frac{\partial^{2}R}{\partial r^{2}}=-\frac{1}{\chi}\frac{\partial^{2}\chi}{\partial\zeta^{2}}=\lambda^{2}\,, \tag{4.5}\]

where \(\lambda\) is a constant. This gives

\[R=P_{1}e^{\lambda r}+P_{2}e^{-\lambda r}\,,\qquad\qquad\chi=Q_{1}\cos(\lambda\zeta)+Q_{2}\sin(\lambda\zeta)\,, \tag{4.6}\]

where \(P_{1}\), \(P_{2}\), \(Q_{1}\), and \(Q_{2}\) are constants. Under the weak-field approximation, \(\widetilde{\gamma}_{00}\) plays the role of the Newtonian potential and hence cannot increase exponentially with distance. Therefore, taking \(P_{1}=0\), we get

\[(r\widetilde{\gamma}_{00})=S+S_{1}r+P_{2}e^{-\lambda r}\left(Q_{1}\cos(\lambda\zeta)+Q_{2}\sin(\lambda\zeta)\right)\,, \tag{4.7}\]

where \(S+S_{1}r\) is the complementary function of the differential equation.
The \(S_{1}\) term only adds a constant to the potential and does not affect any of the calculations; therefore, we ignore it in the rest of the paper. If we consider that over a given region (of the order of a galaxy) the background is almost the same, then the change in \(\zeta\) is very small; therefore, we may take \(\lambda\zeta\sim 0\). There is also a constant factor of \(\hbar\) multiplying \(\zeta\), which is very small. In this limit, \(\cos(\lambda\zeta)\to 1\) and \(\sin(\lambda\zeta)\to 0\). The equation for a geodesic path is given by

\[\frac{d^{2}x^{A}}{ds^{2}}+\widetilde{\Gamma}^{A}_{BC}\frac{dx^{B}}{ds}\frac{dx^{C}}{ds}=0\,. \tag{4.8}\]

In the weak-field limit, \(ds\approx dt\), giving \(\frac{d^{2}x^{A}}{dt^{2}}=\frac{1}{2}\partial_{A}\widetilde{\gamma}_{00}\). Relating this to Newtonian gravity, we get \(\widetilde{\gamma}_{00}=2\varphi\), where \(\varphi\) is the Newtonian potential of the gravitational field. Substituting these limiting values in Eq. (4.7), with \(P_{2}Q_{1}=-2GKM\) and \(S=2G(1+K)M\), and using \(\widetilde{\gamma}_{00}=2\varphi\), we get the potential

\[\Phi=\frac{GM}{r}\left[1+K\left(1-e^{-\lambda r}\right)\right]\,. \tag{4.9}\]

Here, \(M\) is the mass at the center and \(G\) is Newton's gravitational constant; \(\lambda\) and \(K\) are background-dependent quantities. According to our calculations, \(K\) and \(\lambda\) are independent of \(r\). Also, if there are two particles \(A\) and \(B\), then they exert gravitational forces on each other, and according to Newton's third law of motion these forces should be equal and opposite. Therefore, if \(K\) and \(\lambda\) depended on the masses of the objects, they would have to be symmetric, i.e., functions of both masses. However, that would violate the weak equivalence principle. For example, assume that \(B\) is falling towards \(A\). According to the weak equivalence principle, the acceleration of \(B\) with respect to \(A\) should not depend on the mass of \(B\). However, if \(K\) and \(\lambda\) were functions of \(m_{B}\), then different objects would accelerate towards \(A\) differently, which is not possible. Therefore, \(K\) and \(\lambda\) cannot be functions of the individual masses. However, these parameters can depend on the surrounding mass distribution without contradicting any of these arguments: if there is a mass \(C\) near the two objects, then the gravitational force between \(A\) and \(B\) will be influenced by \(C\). This is what Mach's principle claims.

Galactic velocity profiles show that in galaxies \(\lambda^{-1}\) is of the order of a few kpc. When \(r\) is small, \(e^{-\lambda r}\sim 1\), and \(\Phi\) takes the form of the Newtonian potential, \(\Phi=\frac{GM}{r}\); this gives the Newtonian gravitational equation on the solar-system scale. In the asymptotic limit \(r\rightarrow\infty\), the exponential term goes to 0. Hence, for large values of \(r\), the potential becomes \((1+K)\) times the Newtonian potential and can provide additional gravitational force in large gravitationally bound systems, such as galaxies. A similar form of the potential has previously been used by other groups to explain the galactic velocity profiles correctly [19; 20; 21; 22; 57].
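To make the small-\(r\) behavior explicit (a short expansion added here for clarity; it is not part of the original derivation), we can Taylor-expand Eq. (4.9) for \(\lambda r\ll 1\):

\[\Phi\approx\frac{GM}{r}\left[1+K\left(\lambda r-\frac{\lambda^{2}r^{2}}{2}+\ldots\right)\right]=\frac{GM}{r}+GMK\lambda-\frac{1}{2}GMK\lambda^{2}r+\ldots\]

The leading correction to the Newtonian potential is the constant \(GMK\lambda\), which exerts no force; the first correction to the acceleration is therefore a nearly constant term of order \(GMK\lambda^{2}\), reminiscent of the constant acceleration scale \(a_{0}\) that appears in the galaxy fits of Sec. 5.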
### The Tully–Fisher relation

As the potential due to a static, spherically symmetric gravitational field is given by Eq. (4.9), we can calculate the acceleration due to the gravitational field as

\[\frac{\partial\Phi}{\partial r}=-\frac{GM}{r^{2}}\left[1+K\left(1-e^{-\lambda r}\left(1+\lambda r\right)\right)\right]\,. \tag{4.10}\]

Suppose a particle orbits the mass \(M\) in a circular orbit of radius \(r\) with orbital velocity \(v\). The gravitational field should be equal to the centripetal acceleration of the particle, giving

\[v^{2}=\frac{GM}{r}\left[1+K\left(1-e^{-\lambda r}\left(1+\lambda r\right)\right)\right]\,. \tag{4.11}\]

This is an interesting equation. For small \(r\), i.e., \(\lambda r\ll 1\), it tends to the Keplerian velocity of a particle in a circular orbit. When \(\lambda r\gg 1\), \(\exp(-\lambda r)\to 0\), and it gives

\[v^{2}=\frac{GM}{r}(1+K)\,, \tag{4.12}\]

which is again a Keplerian velocity with a multiplicative constant. The large virial masses of galaxy clusters would otherwise require additional dark matter components; this additional \((1+K)\) factor can help us explain the missing mass in galaxy clusters, and we use it in the following section to explain the observational data.

Figure 1: The plot illustrates that for various values of \(\alpha\), there exists a range of \(\lambda r\) where the curve flattens out.

However, the velocity in the outer parts of spiral galaxies (rotationally bound systems) does not decrease with increasing radius, as a Keplerian velocity would suggest; in fact, it is almost independent of the radius \(r\). Interestingly, Eq. 4.11 also has this attractive property: for a range of values of \(r\) and \(K\), the velocity becomes almost independent of \(r\). This was first explained in [58; 59]. Writing

\[v^{2}=\frac{GM(1+K)\lambda}{\lambda r}\left[1-\alpha e^{-\lambda r}\left(1+\lambda r\right)\right]\,, \tag{4.13}\]

where \(\alpha=\frac{K}{1+K}\), we find that for \(\alpha\in(0.92,0.95)\) and \(\lambda r\in(0.4,2.5)\) the velocity is almost independent of \(r\). This can be seen in Fig. 1. Therefore, this expression can explain the velocities of spiral galaxies too. From the range of \(\alpha\), we can derive the range of \(K\) to be \((11,19)\). In this range of \(r\), the velocity of the test particle behaves as \(v^{2}\sim GM(1+K)\lambda\). According to the Tully–Fisher relation, the mass of a spiral galaxy is linked to its asymptotic velocity as \(M\sim v^{\gamma}\), where \(\gamma\in(3.5,4)\). If we assume that \(M\sim v^{4}\), then we can take

\[(1+K)\propto\frac{1}{\sqrt{M}}\qquad\implies\qquad K=\sqrt{\frac{M_{c}}{M}}-1\,, \tag{4.14}\]

where \(M_{c}\) is some constant mass. For other systems, e.g., elliptical galaxies or clusters, the value of \(K\) can be different. Putting everything together, the expression for the final velocity becomes

\[v^{2}=\frac{GM}{r}\left[1+\left(\sqrt{\frac{M_{c}}{M}}-1\right)\left(1-e^{-\lambda r}\left(1+\lambda r\right)\right)\right]\,. \tag{4.15}\]

Therefore, for a mass distribution similar to a spiral galaxy, this equation follows the Newtonian orbital velocity for \(\lambda r\ll 1\). For \(\lambda r\in(0.4,2.5)\), the velocity becomes constant and follows the Tully–Fisher relation, i.e., \(v^{4}\sim M\). Finally, for \(\lambda r\gg 2.5\), it behaves as \(v^{2}\sim\frac{\sqrt{M}}{r}\). For other types of mass distribution, the shape of \(K\) may be different.
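As a quick numerical sanity check of this flattening (a minimal sketch of our own, not part of the original analysis), we can evaluate the dimensionless bracket of Eq. (4.13) over the quoted range:

```python
# Check the flattening of Eq. (4.13): f(x) = [1 - alpha * exp(-x) * (1 + x)] / x,
# with x = lambda * r, is proportional to v^2 / (G M (1 + K) lambda).
# A nearly constant f over x in (0.4, 2.5) means a nearly flat rotation curve.
import numpy as np

alpha = 0.93                      # inside the quoted range (0.92, 0.95)
x = np.linspace(0.4, 2.5, 500)    # the quoted flat range of lambda * r
f = (1.0 - alpha * np.exp(-x) * (1.0 + x)) / x

v = np.sqrt(f)                    # velocity, up to a constant prefactor
spread = 100.0 * (v.max() / v.min() - 1.0)
print(f"velocity varies by only ~{spread:.1f}% across this range")
```

For \(\alpha=0.93\), the velocity varies by only about 5% across this range, confirming the behavior shown in Fig. 1.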
## 5 Testing the theory against observations

Observational results from extended systems, such as galaxies and galaxy clusters, have shown that the baryonic matter inferred from the luminosity of these objects is not enough to explain their dynamical properties; additional matter, known as dark matter, is needed. Several relations connect the dynamical properties of galaxies and galaxy clusters with their observed luminosity. The Tully–Fisher relation for spiral galaxies and the Faber–Jackson relation for elliptical galaxies are among the earliest; since then, several further relations have been put forward to relate the dynamical properties of massive extended bodies to their observed luminosity [60; 61; 62; 63; 64]. In this section, we briefly examine whether the Machian gravity model can explain the observations for galaxy clusters and galaxies. The detailed analysis is discussed in [65; 66].

### Galactic cluster mass

The density distribution of hot gas in a cluster is well described by the King \(\beta\)-model [67; 21; 68],

\[\rho(r)=\rho_{0}\left[1+\left(\frac{r}{r_{c}}\right)^{2}\right]^{-3\beta/2}\,, \tag{5.1}\]

where \(\rho_{0}\) is the central density and \(r_{c}\) is a core radius. By fitting such a model to the mean radial profile of the X-ray surface brightness in clusters, the quantity \(\beta\) can be found; it is typically of the order of \(\frac{2}{3}\). The baryonic mass of the cluster can be calculated by integrating the density profile,

\[M_{b}(r)=4\pi\int_{0}^{r}\rho(r^{\prime})r^{\prime 2}dr^{\prime}\,, \tag{5.2}\]

where \(M_{b}(r)\) is the total baryonic mass contained within a sphere of radius \(r\). For \(r\gg r_{c}\), we can write the baryonic mass density as \(\rho(r)\approx\rho_{0}\left(\frac{r}{r_{c}}\right)^{-3\beta}\), which gives

\[M_{b}(r)\approx\frac{4\pi\rho_{0}r_{c}^{3}}{3(1-\beta)}\left(\frac{r}{r_{c}}\right)^{3(1-\beta)}\,. \tag{5.3}\]

As evident from the above equation, the gas mass gradually diverges for \(\beta\leq 1\). Hence, to assign a total gas mass to the cluster, it becomes necessary to assume an outer radius. The density profile given in Eq. (5.1) is generally valid up to a certain point, beyond which the cluster's intensity diminishes against the background. To address this, in practical applications an outer radius for the cluster is selected where the density from Eq. (5.1) drops to a few hundred times the cosmological baryon density. In our analysis, we take the outer radius of the cluster to be the point where the density decreases to approximately \(10^{-28}\,\mathrm{g/cm^{3}}\), or equivalently 250 times the mean cosmological density of baryons. The basic assumption made in calculating the gravitational mass of a cluster is hydrostatic equilibrium,

\[\frac{dP(r)}{dr}=-\frac{\rho GM_{d}(r)}{r^{2}}\,, \tag{5.4}\]

where \(P(r)\) represents the gas pressure and \(M_{d}(r)\) is the dynamic (gravitating) mass of the cluster inside a radius \(r\). With the ideal gas equation,

\[P=\frac{k_{B}}{\mu m_{p}}\rho T\,, \tag{5.5}\]

this leads to

\[M_{d}(r)=-\frac{k_{B}Tr^{2}}{\mu m_{p}G}\left(\frac{1}{\rho(r)}\frac{d\rho(r)}{dr}+\frac{1}{T(r)}\frac{dT(r)}{dr}\right)\,, \tag{5.6}\]

where \(T\) is the temperature of the cluster gas, \(m_{p}\) is the mass of the proton, \(k_{B}\) is the Boltzmann constant, and \(G\) is Newton's gravitational constant.
\(\mu\) is the mean atomic weight of the contents of the intra-cluster gas, which contains hydrogen, helium, and electrons, giving \(\mu\approx 0.609\) [69]. For isothermal clusters, we can take \(\frac{dT(r)}{dr}\approx 0\), giving the dynamical mass of the cluster as

\[M_{d}(r)=\frac{3\beta k_{B}T}{\mu m_{p}G}\left(\frac{r^{3}}{r^{2}+r_{c}^{2}}\right)\,. \tag{5.7}\]

In Fig. 2, we show the mass of 4 clusters as a function of radius \(r\). The solid yellow curve shows the baryonic mass calculated using Eq. (5.2). The solid red curve shows the Newtonian dynamic mass calculated using Eq. (5.7). We can therefore see that a significant amount of additional matter, i.e., dark matter, is needed to explain the dynamical properties of galaxy clusters. However, under the Machian gravity theory proposed in this paper, Eq. (5.7) takes the form

\[M_{b}(r)\left[1+K\left(1-\exp(-\lambda r)\left(1+\lambda r\right)\right)\right]=\frac{3\beta k_{B}T}{\mu m_{p}G}\left(\frac{r^{3}}{r^{2}+r_{c}^{2}}\right)\,. \tag{5.8}\]

Here we have taken \(K\) to be of the form given in Eq. (4.14); a detailed study is presented in [65]. The best-fit values of \(\sqrt{M_{c}}\) and \(\lambda^{-1}\) are calculated for each of the clusters shown in Fig. 2 through an MCMC analysis, and the distributions of these two parameters for each galaxy cluster are shown in the figure. The yellow dot-dashed lines in the plots show the quantity \(M_{b}(r)\left[1+K\left(1-\exp(-\lambda r)\left(1+\lambda r\right)\right)\right]\). It can be seen that it matches the solid red line, which gives the right-hand side of Eq. (5.8); so we do not require any additional dark matter components. The black dashed curve shows the same quantity for the average values of \(\sqrt{M_{c}}\) and \(\lambda^{-1}\) calculated from the 106 clusters given in [69]. The values of the different parameters are shown in Table 1.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline Cluster & \(T\) & \(\rho_{0}\) & \(\beta\) & \(r_{c}\) & \(r_{\rm out}\) & \(M_{c}^{\frac{1}{2}}\) & \(\lambda^{-1}\) \\
 & [keV] & \([10^{-25}\,{\rm g/cm^{3}}]\) & & [kpc] & [kpc] & [\(10^{7}M_{\odot}\)] & [\(r_{c}\)] \\
\hline A0085 & \(6.90^{+0.40}_{-0.40}\) & 0.34 & \(0.532^{+0.004}_{-0.004}\) & \(58.5^{+3.3}_{-3.9}\) & \(2241.0^{+139.0}_{-162.0}\) & \(3.91^{+0.01}_{-0.01}\) & \(0.47^{+0.003}_{-0.003}\) \\
\hline A0133 & \(3.80^{+2.00}_{-0.90}\) & 0.42 & \(0.530^{+0.004}_{-0.004}\) & \(31.7^{+1.9}_{-2.3}\) & \(1417.0^{+96.0}_{-109.0}\) & \(2.62^{+0.04}_{-0.04}\) & \(0.43^{+0.009}_{-0.009}\) \\
\hline A0262 & \(2.15^{+0.06}_{-0.06}\) & 0.16 & \(0.443^{+0.018}_{-0.017}\) & \(29.6^{+8.5}_{-7.2}\) & \(1334.0^{+432.0}_{-386.0}\) & \(1.93^{+0.01}_{-0.01}\) & \(0.49^{+0.009}_{-0.009}\) \\
\hline A0400 & \(2.31^{+0.14}_{-0.14}\) & 0.04 & \(0.534^{+0.014}_{-0.013}\) & \(108.5^{+7.8}_{-8.8}\) & \(1062.0^{+97.0}_{-108.0}\) & \(2.49^{+0.01}_{-0.01}\) & \(0.41^{+0.003}_{-0.003}\) \\
\hline
\end{tabular}
\end{table}

Table 1: Properties of the four X-ray clusters: the temperature (\(T\)), central density (\(\rho_{0}\)), \(\beta\), and the core radius \(r_{c}\). \(r_{\rm out}\) is the radius where the gas density of the cluster falls below 250 times the mean cosmological baryonic density. \(M_{c}^{\frac{1}{2}}\) and \(\lambda^{-1}\) are the results from the MCMC run.

Figure 2: The plot displays the matter distribution in 4 galaxy clusters. The lime-colored solid curve represents the baryonic mass distribution calculated using the King \(\beta\)-profile (Eq. 5.2). The solid red curve shows the Newtonian dynamic mass calculated using Eq. (5.7). The yellow dot-dashed line illustrates the quantity on the left-hand side of Eq. (5.8), where \(\sqrt{M_{c}}\) and \(\lambda^{-1}\) are chosen based on the best-fit values obtained from the MCMC analysis. Additionally, the right-side plots depict the distributions of \(\sqrt{M_{c}}\) and \(\lambda^{-1}\) from the MCMC analysis. The black dashed line shows the same quantity but with the average \(\sqrt{M_{c}}\) and \(\lambda^{-1}\) values calculated by analyzing 106 clusters.

### Spiral galactic rotation curves

Spiral galactic rotation curves provide another essential test of the theory. The circular velocities of stars and gas far from the nucleus of a spiral galaxy generally do not decline following the widely expected Keplerian fall-off. Observations confirm that galaxy rotation curves are primarily flat, with some galaxies showing modestly declining and some rising circular velocities farther from the nucleus. The most widely accepted explanation of this phenomenon is the existence of non-baryonic dark matter. However, it is observed that for every feature in the luminosity profile there is a corresponding feature in the rotation curve and vice versa, which is known as Renzo's rule. Explaining this is impossible if dark matter is an independent component unrelated to the baryonic matter in the galaxy. Several modified gravity models have been suggested; the most well known is Milgrom's MOND, which provides a phenomenological model for the galactic velocity curve. This section checks how the Machian gravity model can explain the galactic velocity profiles.

In Eq. (4.15), we have shown how the velocity profile in a gravitationally bound galaxy is related to the mass; we therefore use this equation to check whether it can explain the galactic velocity profiles. We use the SPARC (Spitzer Photometry and Accurate Rotation Curves) data for our analysis [70]. The dataset contains accurate rotation curves for 175 spiral galaxies: it provides the observed rotational velocity and its error bars as a function of radius from the galaxy's center, as well as the surface brightness of the galaxy disk and bulge, which can be converted to mass by multiplying with the mass-to-light ratio. Once we know the enclosed mass within a radius, we can easily calculate the theoretical velocity using Eq. (4.15). In Fig. 3, we show the observed and theoretical velocity profiles for nine galaxies; the full details for all 175 galaxies are given in [66].

Figure 3: The graph displays the rotation curves of 9 galaxies taken from the SPARC dataset. The blue dotted curve represents the velocity profile calculated using Newtonian gravity based on the baryonic matter (measured from the surface brightness); it is evident that the baryonic mass alone is insufficient to explain the observed velocity profile. The solid red curve illustrates the velocity profile calculated using Machian gravity, which accurately accounts for each feature of the velocity profile and provides a much better fit than Newtonian gravity.

The dotted blue curves show the galactic velocity profiles calculated from Newtonian mechanics using the baryonic matter. The plots show that the Newtonian velocity using baryonic matter is much lower than the observed velocities of the galaxies. However, in all the plots, we can see that the Newtonian velocity for all the galaxies is somehow related to the observed velocity.
For example, the observed velocities grow with radius where the Newtonian velocity grows with radius; similarly, the observed velocity falls off where the Newtonian velocity starts falling. In the galaxy F583-4, we can see a feature at 4 kpc, and the same feature appears in the Newtonian velocity too. Therefore, while the galactic velocity profiles can be fitted using dark matter that is independent of the baryons, such dark matter cannot explain these baryon-correlated features. Our analysis has two parameters, \(\sqrt{M_{c}}\) and \(\lambda\), which we fit for the different galaxies. The red curves show the best-fit curves using Eq. (4.15). We can see that all the features are reproduced exceptionally well by Machian gravity. Interestingly, our analysis shows an extremely strong correlation between the parameters \(\sqrt{M_{c}}\) and \(\lambda\), giving \(\sqrt{M_{c}}\propto\lambda^{-1}\). It also shows \(GM_{c}\lambda^{2}\sim a_{0}\), where \(a_{0}\) is a constant acceleration for each galaxy. Therefore, we only need to set a single parameter, \(a_{0}\), for each of these galaxies. On top of that, \(a_{0}\) for all the galaxies lies within the range of the acceleration scale of the universe. The detailed analysis of these data is beyond the scope of the present paper and is discussed in detail in [66].

## 6 Machian gravity in the presence of source terms

Machian gravity provides the same form of field equations as GR, except in 5 dimensions. Therefore, following GR, we can take the field equation for MG in the presence of source terms as \(\widetilde{G}_{AB}=8\pi\widetilde{T}_{AB}\), where \(\widetilde{T}_{AB}\) is now a 5-dimensional stress-energy tensor. To calculate \(\widetilde{T}_{AB}\), we first consider the field equation in a frame that is at rest with respect to the distant objects, so that all the fictitious/inertial forces are 0. Also, if we consider that the background is almost the same everywhere, then all the derivatives with respect to \(x^{4}\) become 0 and \(\phi\) is constant. Under these circumstances, MG tends to GR; therefore, we must have \(\widetilde{T}_{\alpha\beta}=T_{\alpha\beta}\). For a perfect fluid, we should have \(\widetilde{T}_{\alpha\beta}=(\rho+p)\widetilde{u}_{\alpha}\widetilde{u}_{\beta}-p\widetilde{g}_{\alpha\beta}\), where \(\rho\) and \(p\) are the density and the pressure of the fluid. Even when the above conditions are not met, the fifth dimension is similar to any other spatial dimension, so there is no reason to believe that the stress-energy tensor takes a different form. Following the general theory of relativity, the stress-energy tensor of a fluid can therefore be defined as

\[\widetilde{T}_{AB}=(\rho+p)\widetilde{u}_{A}\widetilde{u}_{B}-p\widetilde{g}_{AB}\,. \tag{6.1}\]

As \(\widetilde{g}_{\alpha 4}\) and \(\widetilde{g}_{44}\) contain \(\hbar\), these terms are significantly small and can be taken to be \(O(\hbar)\) and \(O(\hbar^{2})\), respectively; in any classical scenario, they can be set to 0. Similarly, \(\widetilde{u}_{4}=\widetilde{g}_{A4}\widetilde{u}^{A}\); therefore, \(\widetilde{u}_{4}\) again contains a factor of \(\hbar\) and is of \(O(\hbar)\). Consequently, the \(\widetilde{T}_{4\alpha}\) components of the stress-energy tensor are of order \(O(\hbar)\) and \(\widetilde{T}_{44}\) is of order \(O(\hbar^{2})\).
In the limit \(\hbar\to 0\), these \(\widetilde{T}_{4A}\) terms can be approximated as 0. In that case, the stress-energy tensor becomes equivalent to the general relativistic stress-energy tensor. The five conservation equations, i.e., \(\widetilde{T}_{;B}^{AB}=0\), along with the relation between \(p\) and \(\rho\) and the normalization of the five-velocity, i.e., \(\widetilde{g}_{AB}\widetilde{u}^{A}\widetilde{u}^{B}=1\), provide a complete solution for the motion of the fluid, provided \(g_{AB}\) is given. This is because there are a total of 7 equations for 7 unknowns, namely \(p\), \(\rho\), \(u_{0}\), \(\ldots\), \(u_{4}\). Interestingly, as we can notice, the conservation equation is given by \(\widetilde{T}^{A\beta}_{;\beta}+\widetilde{T}^{A4}_{;4}=0\). The additional 5-dimensional term is very small, of order \(O(\hbar)\), and may correspond to some quantum phenomenon; we are investigating this effect, and it will be discussed in future work. If we ignore this component, we recover the usual conservation equation. If the \(g_{AB}\) are also unknown, then the field equation, along with the normalization condition \(\sqrt{-\widetilde{g}}=1\), can be brought in. This provides 16 equations for fixing the 15 independent components of \(\widetilde{g}_{AB}\). Therefore, the equations may appear over-determined. However, it should also be noted that there are 5 identities, \(\widetilde{G}_{;B}^{AB}=0\), which \(\widetilde{G}^{AB}\) must satisfy. Therefore, there are essentially 11 independent equations in the field equation, from which we need to determine the 15 independent components of \(\widetilde{g}_{AB}\). This gives us 4 degrees of freedom to choose the five-dimensional coordinate system. ## 7 Cosmological solution from a generalized metric We can write the metric for a homogeneous and isotropic space as \[ds^{2}=e^{\omega}dt^{2}-e^{\kappa}dr^{2}-R^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)-e^{\mu}d\zeta^{2}\,. \tag{108}\] The exponentials are taken to ensure that these quantities can't be negative. For this line element, we can calculate \(\widetilde{G}_{AB}\) and express its components in terms of the 4-dimensional Einstein tensor \(G_{\alpha\beta}\) along with additional terms. For the full calculation, check Appendix C.
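The component expressions that follow are lengthy, and it is easy to introduce sign errors when reproducing them by hand. They can be checked mechanically with a computer algebra system. The following is a minimal sympy sketch (our own illustration, not part of the original derivation; all function and variable names are our choices) that builds the 5-dimensional metric of the line element above, computes the Christoffel symbols and the Ricci tensor in the same convention as Eq. (125) of Appendix C, and prints the mixed Einstein tensor component \(\widetilde{G}^{0}_{0}\) for comparison with the expressions below:

```python
import sympy as sp

t, r, th, ph, zeta = sp.symbols('t r theta phi zeta')
X = [t, r, th, ph, zeta]

omega = sp.Function('omega')(t, r, zeta)
kappa = sp.Function('kappa')(t, r, zeta)
mu    = sp.Function('mu')(t, r, zeta)
R     = sp.Function('R')(t, r, zeta)

# 5D metric of the line element above:
# ds^2 = e^omega dt^2 - e^kappa dr^2 - R^2 dOmega^2 - e^mu dzeta^2
g = sp.diag(sp.exp(omega), -sp.exp(kappa), -R**2,
            -R**2 * sp.sin(th)**2, -sp.exp(mu))
ginv = g.inv()
n = 5

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gam = [[[sp.Rational(1, 2) * sum(ginv[a, d] * (sp.diff(g[d, b], X[c])
          + sp.diff(g[d, c], X[b]) - sp.diff(g[b, c], X[d])) for d in range(n))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor, same index convention as Eq. (125)
def ricci(a, b):
    return sp.simplify(sum(
        sp.diff(Gam[c][a][b], X[c]) - sp.diff(Gam[c][a][c], X[b])
        + sum(Gam[c][a][b] * Gam[d][c][d] - Gam[d][a][c] * Gam[c][d][b]
              for d in range(n))
        for c in range(n)))

Ric = sp.Matrix(n, n, ricci)
Rs = sp.simplify(sum(ginv[a, b] * Ric[a, b]
                     for a in range(n) for b in range(n)))

# Mixed Einstein tensor G^a_b = g^{ac} R_{cb} - (1/2) delta^a_b R
G = (ginv * Ric - sp.eye(n) * Rs / 2).applyfunc(sp.simplify)
print(G[0, 0])   # compare with the expression for G~^0_0 below
```

The simplification is slow but mechanical; the point is only that each component below can be verified by machine rather than by hand.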
\[\widetilde{G}_{0}^{0} = G_{0}^{0}+e^{-\omega}\Bigg{(}\frac{\dot{\mu}\dot{\kappa}}{4}+\frac{\dot{\mu}\dot{R}}{R}\Bigg{)}-e^{-\kappa}\Bigg{(}\frac{R^{\prime}\mu^{\prime}}{R}-\frac{\kappa^{\prime}\mu^{\prime}}{4}+\frac{\mu^{\prime\prime}}{2}+\frac{\mu^{\prime 2}}{2}\Bigg{)}-e^{-\mu}\left(\frac{\kappa^{**}}{2}+\frac{\kappa^{*2}}{4}\right.\] \[\left.-\frac{\kappa^{*}\mu^{*}}{4}+\frac{R^{*}}{R}\left(\kappa^{*}-\mu^{*}\right)+\frac{R^{*2}}{R^{2}}+\frac{2R^{**}}{R}\right)\,,\] \[\widetilde{G}_{0}^{1} = G_{0}^{1}+e^{-\kappa}\left(\frac{\dot{\mu}^{\prime}}{2}+\frac{\dot{\mu}\mu^{\prime}}{4}-\frac{\omega^{\prime}\dot{\mu}}{4}-\frac{\dot{\kappa}\mu^{\prime}}{4}\right)\,,\] \[\widetilde{G}_{1}^{1} = G_{1}^{1}+e^{-\omega}\left(\frac{\ddot{\mu}}{2}+\frac{\dot{\mu}^{2}}{4}-\frac{\dot{\omega}\dot{\mu}}{4}+\frac{\dot{R}\dot{\mu}}{R}\right)-e^{-\kappa}\left(\frac{\mu^{\prime}\omega^{\prime}}{4}+\frac{\mu^{\prime}R^{\prime}}{R}\right)-e^{-\mu}\left(\frac{\omega^{**}}{2}+\frac{\omega^{*2}}{4}\right.\] \[\left.+\frac{R^{*2}}{R^{2}}+\frac{2R^{**}}{R}+\frac{R^{*}}{2R}\left(\omega^{*}-\mu^{*}\right)-\frac{\mu^{*}\omega^{*}}{4}\right)\,,\] \[\widetilde{G}_{2}^{2} = G_{2}^{2}+e^{-\omega}\left(\frac{\dot{R}\dot{\mu}}{2R}-\frac{\dot{\omega}\dot{\mu}}{4}+\frac{\dot{\mu}\dot{\kappa}}{4}+\frac{\ddot{\mu}}{2}+\frac{\dot{\mu}^{2}}{4}\right)-e^{-\kappa}\Bigg{(}\frac{R^{\prime}\mu^{\prime}}{2R}+\frac{\mu^{\prime\prime}}{2}+\frac{\mu^{\prime 2}}{4}+\frac{\omega^{\prime}\mu^{\prime}}{4}\] \[-\frac{\mu^{\prime}\kappa^{\prime}}{4}\Bigg{)}-e^{-\mu}\Bigg{(}\frac{R^{**}}{R}+\frac{R^{*}\omega^{*}}{2R}+\frac{R^{*}\kappa^{*}}{2R}-\frac{R^{*}\mu^{*}}{2R}+\frac{\omega^{**}}{2}+\frac{\omega^{*2}}{4}+\frac{\kappa^{**}}{2}+\frac{\kappa^{*2}}{4}\] \[+\frac{\kappa^{*}\omega^{*}}{4}-\frac{\kappa^{*}\mu^{*}}{4}-\frac{\mu^{*}\omega^{*}}{4}\Bigg{)}\,,\] \[\widetilde{G}_{3}^{3} = \widetilde{G}_{2}^{2}\,.\] Here \((\,\dot{\ldots})\), \((\ldots)^{\prime}\) and \((\ldots)^{*}\) represent the derivatives with respect to \(t\), \(r\) and \(\zeta\), respectively. According to the field equation, \(\widetilde{G}_{AB}=\widetilde{T}_{AB}\). Assuming that \(\widetilde{T}_{A4}=\widetilde{T}_{4A}\approx 0\) and \(\widetilde{T}_{\alpha\beta}\approx T_{\alpha\beta}\), we can write \[\widetilde{G}_{\beta}^{\alpha}=T_{\beta}^{\alpha}\,,\qquad\qquad G_{\beta}^{\alpha}=T_{\beta}^{\alpha}+Q_{\beta}^{\alpha} \tag{109}\] Here the \(Q_{\beta}^{\alpha}\) are the additional terms appearing on the right-hand sides of the expressions above. Although purely geometric, these terms behave as if there were some additional matter component, and they contribute to the 4-dimensional Einstein tensor; \(Q_{\beta}^{\alpha}\) can therefore be treated as the stress-energy tensor of these geometric components. To simplify these components, we associate a density \(\rho_{g}\) and a pressure \(p_{g}\) with them. For a time-dependent spherically symmetric system, the usual stress-energy tensor in 4 dimensions is given as \[Q_{\beta}^{\alpha}=(\rho_{g}+p_{g})u^{\alpha}u_{\beta}+p_{g}g_{\beta}^{\alpha}\,, \tag{110}\] where \(u^{\alpha}\) is the four-velocity of the fluid. In our case \(u^{0}\neq 0\), \(u^{1}\neq 0\) and \(u^{2}=u^{3}=0\). Also, we have \(u^{\alpha}u_{\alpha}=1\). Putting these values in the above expression, one can obtain \[\rho_{g}=Q_{0}^{0}+Q_{1}^{1}-Q_{2}^{2}\,,\qquad\qquad p_{g}=-Q_{2}^{2}\,. \tag{111}\] The subscript \(g\) indicates that these terms are purely geometric.
Using the expressions above, we can get a simplified expression for \(\rho_{g}\); the expression for \(p_{g}\), however, remains complex. To simplify it, we use the fact that \(\widetilde{R}_{4}^{4}\approx 0\): since \(\widetilde{R}_{4}^{4}\approx 0\), we can add it to the expression for \(p_{g}\) without changing it. (For the detailed calculation, please refer to Appendix C.) After a few algebraic manipulations, we can obtain \[\rho_{g} = \frac{3}{2}\left(\frac{e^{-\kappa}\mu^{\prime}R^{\prime}}{R}-\frac{e^{-\omega}\dot{\mu}\dot{R}}{R}\right)-\frac{3}{2}e^{-\mu}\left(\frac{R^{*}\mu^{*}}{R}-\frac{2R^{**}}{R}\right)+e^{-\mu}\frac{R^{*2}}{R^{2}}-e^{-\mu}\left(\frac{\omega^{*}\kappa^{*}}{4}\right) \tag{111}\] \[+e^{-\mu}\frac{R^{*}}{2R}\left(\kappa^{*}+\omega^{*}\right)\,,\] \[p_{g} = \frac{1}{2}\left(\frac{e^{-\kappa}\mu^{\prime}R^{\prime}}{R}-\frac{e^{-\omega}\dot{\mu}\dot{R}}{R}\right)-\frac{1}{2}e^{-\mu}\left(\frac{R^{*}\mu^{*}}{R}-\frac{2R^{**}}{R}\right)-e^{-\mu}\left(\frac{\omega^{*}\kappa^{*}}{4}\right)\] (112) \[-e^{-\mu}\frac{R^{*}}{2R}\left(\kappa^{*}+\omega^{*}\right)\,.\] These expressions show that the geometric pressure and density contain four distinct types of components, i.e. \[\rho_{gr} = 3p_{gr}=\frac{3}{2}\left(\frac{e^{-\kappa}\mu^{\prime}R^{\prime}}{R}-\frac{e^{-\omega}\dot{\mu}\dot{R}}{R}\right)-\frac{3}{2}e^{-\mu}\left(\frac{R^{*}\mu^{*}}{R}-\frac{2R^{**}}{R}\right)\,, \tag{113}\] \[\rho_{gd} = e^{-\mu}\frac{R^{*2}}{R^{2}}\,,\] (114) \[\rho_{gs} = p_{gs}=-e^{-\mu}\left(\frac{\omega^{*}\kappa^{*}}{4}\right)\,,\] (115) \[\rho_{g\Lambda} = -p_{g\Lambda}=e^{-\mu}\frac{R^{*}}{2R}\left(\kappa^{*}+\omega^{*}\right)\,. \tag{116}\] The pressure and density \(p_{gr}\) and \(\rho_{gr}\) given by Eq.(113) follow the relation \(p=\frac{\rho}{3}\). Therefore, they behave exactly as photons or massless neutrinos in the universe and can be treated as the dark radiation of standard cosmology. The second component, given by Eq.(114), behaves as non-relativistic matter with zero pressure; this fulfills all the criteria for cold dark matter. The third component, i.e., Eq.(115), is another interesting component, where pressure and density are equal. This is the stiffest equation of state that a fluid can have, because beyond it the speed of sound inside the fluid would exceed the speed of light, violating causality. This kind of fluid was once proposed by Zeldovich, who named it stiff matter. The last component, given by Eq.(116), satisfies \(\rho=-p\); thus it behaves as the dark energy of the standard cosmological model. Our equations show that, in Machian gravity, the dark components emerge automatically from geometry. Thus, the theory can provide a cosmological model exactly similar to the standard cosmological model without demanding any external dark matter or dark energy. ### Cosmology in a Robertson-Walker metric The above solution was proposed by Wesson in his induced-matter hypothesis and has been explored by various researchers [71; 72; 73; 74]. It is interesting to see that the densities given by Eq. 113 - Eq. 116 depend mostly on the \(\zeta\)-derivatives of the variables, except for some terms in the radiation part which depend on the spatial and temporal derivatives. Now, on a constant-\(\zeta\) hypersurface, we want the solution to satisfy Weyl's postulate, i.e., the world lines of the particles are perpendicular to the spatial hypersurface. Therefore, \(\omega\) cannot be a function of \(r\).
If we don't want any stiff matter, then we can assume \(\kappa\) to be independent of \(\zeta\). Also, if we don't want any dark-radiation type of component, then we can take \(\mu\) to be a function of \(\zeta\) only. This makes the first part of the radiation component in Eq. 113 equal to 0. To make the second part of Eq. 113 zero, we need \[\frac{\partial}{\partial\zeta}\left[R^{*}e^{-\frac{\mu}{2}}\right]=0\qquad\Longrightarrow\qquad R^{*}e^{-\frac{\mu}{2}}=h(t,r)\qquad\Longrightarrow\qquad R^{*}=h(t,r)e^{\frac{\mu(\zeta)}{2}} \tag{117}\] Here \(h(t,r)\) is some function of \(t\) and \(r\). If we consider the scale factor to be \(a(t)\), then Eq. 117 leads us to \(\frac{R^{*}}{R}=\frac{e^{\mu(\zeta)/2}}{a^{3/2}}\). This gives \[R=R_{0}(t,r)\exp\left[\frac{\int e^{\mu(\zeta)/2}d\zeta}{a^{3/2}}\right]\qquad\text{and}\qquad R^{*}=R_{0}(t,r)\frac{e^{\mu(\zeta)/2}}{a^{3/2}}\exp\left[\frac{\int e^{\mu(\zeta)/2}d\zeta}{a^{3/2}}\right] \tag{7.13}\] Provided the integral is significantly small (whenever we integrate over \(\zeta\), a factor of \(\hbar\) is attached), \(\exp\left[\frac{\int e^{\mu(\zeta)/2}d\zeta}{a^{3/2}}\right]\to 1\) in all cases except when \(a\to 0\). Therefore, we recover the relation \(\frac{R^{*}}{R}=\frac{e^{\mu(\zeta)/2}}{a^{3/2}}\) assumed above. Finally, from the dark-energy component, Eq. 116, with \(\kappa^{*}=0\) we get \[e^{-\mu}\frac{R^{*}}{2R}\omega^{*}=\Lambda\qquad\implies\qquad\omega^{*}=2\Lambda a^{3/2}e^{\mu(\zeta)/2}\qquad\implies\qquad\omega=2\Lambda a^{3/2}\int e^{\mu(\zeta)/2}d\zeta\,. \tag{7.14}\] Again, provided the integration tends to \(0\), \(\omega\to 0\) and \(\exp(\omega)\to 1\). Assuming \(R_{0}(t,r)=a(t)r\) and putting everything back into the line element, we get \[ds^{2}=e^{\left[2\Lambda a^{3/2}\int e^{\mu(\zeta)/2}d\zeta\right]}dt^{2}-a^{2}(t)\left[dr^{2}+\left(\exp\left[\frac{\int e^{\mu(\zeta)/2}d\zeta}{a^{3/2}}\right]\right)^{2}r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\right]-e^{\mu(\zeta)}d\zeta^{2}\,. \tag{7.15}\] On a constant-\(\zeta\) hypersurface, where the integrals can be ignored, we recover the standard FRW line element, i.e. \[ds^{2}=dt^{2}-a^{2}(t)\left[dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\right]\,. \tag{7.16}\] Therefore, we recover the FRW line element along with dark matter and dark energy components originating from the geometry of the 5-dimensional space-time. ## 8 Discussion and Conclusion A new theory of gravitation based on Mach's principle is introduced. This metric theory can be derived from the action principle, ensuring compliance with all conservation principles. Unlike the General Theory of Relativity or Newtonian gravity, which are only directly applicable to inertial observers, the proposed Machian Gravity theory is valid in all reference frames, including non-inertial or accelerated ones. It accounts for the effects of acceleration of the reference frames and generates the inertial forces caused by the motion of distant objects in the universe. Remarkably, this theory successfully explains galactic rotation curves and the mass profile of galaxy clusters without invoking any additional dark matter candidates. Moreover, the theory exhibits behavior akin to dark matter and dark energy in cosmology, offering an explanation for the expansion history of the universe without requiring any extra dark matter or dark energy components.
While the Machian Gravity theory shows promise in explaining various observational results, it is still at a preliminary stage, necessitating further in-depth analysis with additional observational data. In particular, the bullet cluster data is recognized as the most compelling evidence for the existence of dark matter, and testing the theory against this data is crucial. Furthermore, the efficacy of the theory needs to be tested against other significant cosmological phenomena, such as inflation and Big Bang Nucleosynthesis (BBN), as well as other observational data. These comprehensive tests will ultimately provide a definitive assessment of the theory's validity and its potential to advance our understanding of gravity and the fundamental workings of the universe. ## Appendix A A brief discussion about the Kaluza-Klein mechanism In this work, we have extensively used the Kaluza-Klein mechanism for projecting the 5-dimensional geometry onto a 4-dimensional metric together with a scalar and a vector field. Therefore, in this section, we show the step-by-step calculations of the Kaluza-Klein equations. These complete mathematical expressions are used for various calculations in the paper. The 5-dimensional metric can be expressed in terms of a 4-dimensional metric as \[\widetilde{g}_{AB}=\left(\begin{array}{cc}g_{\alpha\beta}+\phi^{2}A_{\alpha}A_{\beta}&\phi^{2}A_{\alpha}\\ \phi^{2}A_{\beta}&\phi^{2}\end{array}\right)\quad\text{ or }\qquad\widetilde{g}^{AB}=\left(\begin{array}{cc}g^{\alpha\beta}&-A^{\alpha}\\ -A^{\beta}&A^{\beta}A_{\beta}+\frac{1}{\phi^{2}}\end{array}\right)\,. \tag{108}\] ### Christoffel Symbols We can calculate the Christoffel symbols as \(\widetilde{\Gamma}^{C}_{AB}=\frac{1}{2}\widetilde{g}^{CD}\left(\partial_{B}\widetilde{g}_{DA}+\partial_{A}\widetilde{g}_{DB}-\partial_{D}\widetilde{g}_{AB}\right)\) and write them in terms of the 4-dimensional Christoffel symbols as \[\widetilde{\Gamma}^{4}_{44} = -\frac{1}{2}\partial_{\alpha}\widetilde{g}_{44}\widetilde{g}^{4\alpha}+\left[\partial_{4}\widetilde{g}_{4\alpha}\widetilde{g}^{4\alpha}+\frac{1}{2}\partial_{4}\widetilde{g}_{44}\widetilde{g}^{44}\right] \tag{109}\] \[= \frac{1}{2}A^{\nu}\partial_{\nu}\phi^{2}+\left[-\phi^{2}A^{\alpha}\partial_{4}A_{\alpha}+\frac{1}{2}\partial_{4}(\ln\phi^{2})-\frac{1}{2}A^{\alpha}A_{\alpha}\partial_{4}\phi^{2}\right],\] \[\widetilde{\Gamma}^{4}_{4\nu} = \frac{1}{2}\widetilde{g}^{4\alpha}\left(\partial_{\nu}\widetilde{g}_{\alpha 4}-\partial_{\alpha}\widetilde{g}_{4\nu}\right)+\frac{1}{2}\widetilde{g}^{44}\partial_{\nu}\widetilde{g}_{44}+\left[\frac{1}{2}\partial_{4}\widetilde{g}_{\nu\alpha}\widetilde{g}^{4\alpha}\right]\] (110) \[= \frac{1}{2}\phi^{2}A^{\alpha}F_{\alpha\nu}+\frac{1}{2\phi^{2}}\partial_{\nu}\phi^{2}+\frac{1}{2}A_{\nu}A^{\alpha}\partial_{\alpha}\phi^{2}+\] \[\left[-\frac{1}{2}A^{\alpha}\partial_{4}g_{\nu\alpha}-\frac{1}{2}A_{\nu}A_{\alpha}A^{\alpha}\partial_{4}\phi^{2}-\frac{1}{2}A_{\alpha}A^{\alpha}\partial_{4}A_{\nu}\phi^{2}-\frac{1}{2}A_{\nu}A^{\alpha}\partial_{4}A_{\alpha}\phi^{2}\right]\,,\] \[\widetilde{\Gamma}^{\nu}_{44} = \widetilde{g}^{\nu\alpha}\partial_{4}\widetilde{g}_{4\alpha}+\left[\frac{1}{2}\widetilde{g}^{4\nu}\partial_{4}\widetilde{g}_{44}-\frac{1}{2}\widetilde{g}^{\nu\alpha}\partial_{\alpha}\widetilde{g}_{44}\right]\] (111) \[= -\frac{1}{2}g^{\nu\alpha}\partial_{\alpha}\phi^{2}+\left[\frac{1}{2}\partial_{4}\phi^{2}A^{\nu}+\phi^{2}g^{\nu\alpha}\partial_{4}A_{\alpha}\right]\] \[\widetilde{\Gamma}^{4}_{\alpha\nu} =
\frac{1}{2}\widetilde{g}^{4\beta}\left(\partial_{\alpha}\widetilde{g}_{\beta\nu}+\partial_{\nu}\widetilde{g}_{\beta\alpha}-\partial_{\beta}\widetilde{g}_{\alpha\nu}\right)+\frac{1}{2}\widetilde{g}^{44}\left(\partial_{\alpha}\widetilde{g}_{4\nu}+\partial_{\nu}\widetilde{g}_{4\alpha}\right)-\left[\frac{1}{2}\partial_{4}\widetilde{g}_{\alpha\nu}\widetilde{g}^{44}\right]\] (112) \[= -A_{\beta}\Gamma^{\beta}_{\alpha\nu}+\frac{1}{2}A^{\beta}A_{\nu}\phi^{2}F_{\beta\alpha}+\frac{1}{2}A_{\alpha}A^{\beta}\phi^{2}F_{\beta\nu}+\frac{1}{2}A_{\alpha}A^{\beta}A_{\nu}\partial_{\beta}\phi^{2}+\frac{1}{2}\left(\partial_{\alpha}A_{\nu}+\partial_{\nu}A_{\alpha}\right)\] \[+\frac{1}{2\phi^{2}}\left(A_{\nu}\partial_{\alpha}\phi^{2}+A_{\alpha}\partial_{\nu}\phi^{2}\right)-\left[\frac{1}{2}(\phi^{-2}+A^{\beta}A_{\beta})\partial_{4}(g_{\alpha\nu}+\phi^{2}A_{\alpha}A_{\nu})\right]\] \[\widetilde{\Gamma}^{\nu}_{4\alpha} = \frac{1}{2}\widetilde{g}^{\nu\mu}\left(\partial_{\alpha}\widetilde{g}_{\mu 4}-\partial_{\mu}\widetilde{g}_{4\alpha}\right)+\frac{1}{2}\widetilde{g}^{\nu 4}\partial_{\alpha}\widetilde{g}_{44}+\left[\frac{1}{2}\widetilde{g}^{\nu\beta}\partial_{4}\widetilde{g}_{\alpha\beta}\right]\] (113) \[= \frac{1}{2}g^{\nu\mu}\left(\phi^{2}F_{\alpha\mu}-A_{\alpha}\partial_{\mu}\phi^{2}\right)+\left[\frac{1}{2}g^{\nu\beta}\partial_{4}(g_{\alpha\beta}+\phi^{2}A_{\alpha}A_{\beta})\right],\] \[\widetilde{\Gamma}^{\beta}_{\mu\nu} = \frac{1}{2}\widetilde{g}^{\beta\alpha}\left(\partial_{\mu}\widetilde{g}_{\alpha\nu}+\partial_{\nu}\widetilde{g}_{\alpha\mu}-\partial_{\alpha}\widetilde{g}_{\mu\nu}\right)+\left[\frac{1}{2}\widetilde{g}^{\beta 4}\left(\partial_{\mu}\widetilde{g}_{4\nu}+\partial_{\nu}\widetilde{g}_{4\mu}\right)-\frac{1}{2}\partial_{4}\widetilde{g}_{\mu\nu}\widetilde{g}^{4\beta}\right]\] (114) \[= \Gamma^{\beta}_{\mu\nu}+\frac{1}{2}g^{\beta\alpha}\left(\phi^{2}A_{\mu}F_{\nu\alpha}+\phi^{2}A_{\nu}F_{\mu\alpha}-\partial_{\alpha}\phi^{2}A_{\mu}A_{\nu}\right)\] \[+\left[\frac{1}{2}A^{\beta}(\partial_{4}g_{\mu\nu}+\partial_{4}\phi^{2}A_{\mu}A_{\nu})+\frac{1}{2}\phi^{2}A^{\beta}\partial_{4}(A_{\mu}A_{\nu})\right]\] ### Kaluza-Klein mechanism In Sec. 3.1, we have shown that inertial forces like the centrifugal force, the Coriolis force, the Euler force, etc., can be produced by the motion of distant objects in the universe. To produce these forces, we use the Kaluza-Klein theory. We consider \(\phi\) to be constant and all the variables to be independent of \(x^{4}\). Under these conditions, the Christoffel symbols are given by Eq. 3.5. A straightforward calculation shows that \[\frac{d}{d\tau}\left(A_{\lambda}\frac{dx^{\lambda}}{\mathrm{d}\tau}+\frac{dx^{4}}{\mathrm{d}\tau}\right)=-\widetilde{\Gamma}^{4}_{AB}\frac{dx^{A}}{d\tau}\frac{dx^{B}}{d\tau}-A_{\alpha}\widetilde{\Gamma}^{\alpha}_{AB}\frac{dx^{A}}{d\tau}\frac{dx^{B}}{d\tau}+\partial_{A}A_{B}\frac{dx^{A}}{d\tau}\frac{dx^{B}}{d\tau}=0\,.\] (A.8) Here we have simply substituted the geodesic equations. Also, \(A_{4}=1\), and hence its derivative is \(0\). This makes \(\left(A_{\lambda}\frac{dx^{\lambda}}{\mathrm{d}\tau}+\frac{dx^{4}}{\mathrm{d}\tau}\right)=K\), where \(K\) is a constant of integration. Without loss of generality, we can take \(K=1/\phi^{2}\) by redefining the coordinate system. Note that in this case, we assume that \(\phi\) is a constant; if it is not, then this redefinition does not work.
We can use the above condition to get \[\frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}}+\widetilde{\Gamma}^{\mu}_{BC}\frac{\mathrm{d}x^{B}}{\mathrm{d}\tau}\frac{dx^{C}}{\mathrm{d}\tau}=\frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}}+\widetilde{\Gamma}^{\mu}_{\nu\lambda}\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\frac{dx^{\lambda}}{\mathrm{d}\tau}+2\widetilde{\Gamma}^{\mu}_{\nu 4}\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\frac{dx^{4}}{\mathrm{d}\tau}+\widetilde{\Gamma}^{\mu}_{44}\frac{\mathrm{d}x^{4}}{\mathrm{d}\tau}\frac{dx^{4}}{\mathrm{d}\tau}\] \[= \frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}}+\left(\Gamma^{\mu}_{\nu\lambda}+\frac{1}{2}g^{\mu\alpha}\left(A_{\nu}\phi^{2}F_{\lambda\alpha}+A_{\lambda}\phi^{2}F_{\nu\alpha}\right)\right)\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\frac{dx^{\lambda}}{\mathrm{d}\tau}+2\left(\frac{1}{2}g^{\mu\beta}\left(\phi^{2}F_{\nu\beta}\right)\right)\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\frac{dx^{4}}{\mathrm{d}\tau}\] \[= \frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}}+\Gamma^{\mu}_{\nu\lambda}\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\frac{dx^{\lambda}}{\mathrm{d}\tau}+g^{\mu\alpha}\phi^{2}\left[\left(A_{\lambda}\frac{dx^{\lambda}}{\mathrm{d}\tau}+\frac{dx^{4}}{\mathrm{d}\tau}\right)F_{\nu\alpha}\right]\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\] \[= \frac{\mathrm{d}^{2}x^{\mu}}{\mathrm{d}\tau^{2}}+\Gamma^{\mu}_{\nu\lambda}\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\frac{dx^{\lambda}}{\mathrm{d}\tau}+g^{\mu\alpha}F_{\nu\alpha}\frac{\mathrm{d}x^{\nu}}{\mathrm{d}\tau}\] (A.9) Therefore, we get an extra vector field in the geodesic equation, coming from the 5-dimensional coordinate system. If we assume that this vector field represents the velocity of the background (distant stars, galaxies, etc.) with respect to the coordinate system, then we can use this vector field to generate all the pseudo-inertial forces. ## Appendix B Understanding the Hoyle-Narlikar argument with the C-field In this section, I discuss the argument put forward by Hoyle and Narlikar in [75] to explain Mach's principle and how the Machian gravity model satisfies their view. Newton, in his work, discussed his experiment with a rotating water-filled bucket suspended from a twisted thread. The crucial point was that whenever rotation occurred relative to some particular reference frame, the surface of the water became depressed, an absolute effect, not a relative one. It was also clear that the reference frame relative to which inertial forces were observed coincided, within experimental error, with the frame in which distant objects in the universe were non-rotating. More accurate later experiments have confirmed this coincidence. Since the coincidence can scarcely be accidental, it is necessary to attempt an explanation of it. Mach suggested that the correlation between the water curvature in Newton's bucket and the rotation of distant matter in the universe can be addressed if we consider that the distant matter of the universe affects the inertial properties of local matter. The standard model of cosmology is based on general relativity along with two postulates, namely, 1. the Weyl postulate, which says that the world lines of matter form a geodesic congruence normal to the spacelike hypersurfaces, leading us to a line element of the form \(ds^{2}=dt^{2}-g_{ij}dx^{i}dx^{j}\); and 2. the cosmological principle, which says that on a \(t=constant\) hypersurface, the universe is isotropic and homogeneous. This leads us to the Robertson-Walker line element.
Since \(\theta\) and \(\phi\) are not changed by the transformation (isotropy), we can fix the \(\theta\) and \(\phi\) coordinates by looking at a distant galaxy. Then we can use Einstein's equation \[G^{\mu\nu}+\Lambda g^{\mu\nu}=T^{\mu\nu}\,. \tag{104}\] Under the previous conditions, we can show that the stress-energy tensor takes the form \[T^{\mu\nu}=(\rho+p)\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}+pg^{\mu\nu} \tag{105}\] where \(\rho\) and \(p\) are the density and the pressure of the matter content. However, this is in direct contradiction to Mach's principle. Mach's principle requires us to read Eq. 104 from the right, i.e., given \(T^{\mu\nu}\), is it possible to get some unique line element from the equations? If the answer is affirmative, only then can we say that the theory explains the observations related to the rotating frame. However, Gödel (1949) [76] showed that for the normal form of \(T^{\mu\nu}\), the answer is not affirmative. Gödel obtained an explicit solution in which the line element is of the form \[\mathrm{d}s^{2}=\mathrm{d}t^{2}+2\mathrm{e}^{x^{1}}\ \mathrm{d}t\ \mathrm{d}x^{2}-\left(\mathrm{d}x^{1}\right)^{2}+\frac{1}{2}\mathrm{e}^{2x^{1}}\left(dx^{2}\right)^{2}-\left(\mathrm{d}x^{3}\right)^{2} \tag{106}\] and where \(T^{ik}\) is given by Eq. 105 with \(u=p=0,\rho=1/\kappa,\Lambda=-\frac{1}{2}\kappa\rho\). This solution is fundamentally different from the Robertson-Walker line element, i.e., it cannot be obtained from it by a coordinate transformation. The importance of Gödel's solution is that it exhibits a vorticity of matter. So, in general relativity, the most that we can do is to take a spacelike surface and define the coordinate systems on it. This simply removes the arbitrariness of the coordinate systems. On top of this, we define the matter and the kinematical situation and the quantities \[g_{\mu\nu},\frac{\partial g_{\mu\nu}}{\partial x^{\mu}},\frac{\partial^{2}g_{\mu\nu}}{\partial x^{\mu}\partial x^{\nu}} \tag{107}\] consistently with Eq. 104. Then Eq. 104 allows one, in principle, to calculate both the dynamical situation and the form of the metric tensor at points of the initial surface. While the specifications can be made such that we get the Robertson-Walker line element, we need to put conditions on \(g^{\mu\nu}\). In other words, Newton's absolute space has been replaced in the GR theory by initial boundary conditions on the metric tensor. To avoid setting these initial conditions, Hoyle and Narlikar [75] introduced an additional scalar field with negative density. By doing so, we don't need to fix the initial conditions independently; in other words, \(G^{\mu\nu}\) and \(T^{\mu\nu}\) can be anything, and the rest can be absorbed into the \(C\)-field. They use the action principle to deduce the field equations as \[R^{\mu\nu}-\frac{1}{2}g^{\mu\nu}R=-\kappa\left[T^{\mu\nu}-f\left\{C^{\mu}C^{\nu}-\frac{1}{2}g^{\mu\nu}C^{\kappa}C_{\kappa}\right\}\right] \tag{108}\] \[C^{\mu}_{;\mu}=(1/f)j^{\mu}_{;\mu},\quad T^{\mu\nu}_{;\nu}=fC^{\mu}C^{\nu}_{;\nu}, \tag{109}\] where \(C\) is a scalar field and \(f\) is a coupling constant that determines the expansion rate of the universe. \(j^{\mu}=\rho\frac{dx^{\mu}}{ds}\) is the mass current. While their logic seems correct, the choice of the scalar \(C\)-field is completely arbitrary. They also failed to explain how this additional field recovers all of Mach's ideas, e.g., the different pseudo forces.
In the Machian Gravity theory, instead of an ad hoc single field, there is a scalar field and a vector field, as well as several additional terms with derivatives with respect to \(\zeta\). We have discussed before how these terms give rise to all the pseudo forces. These terms also complement the logic put forward by Hoyle and Narlikar, and they don't require any special boundary condition at the beginning. ## Appendix C Calculations for Cosmology In section 7 we describe the field equations involved in the cosmology calculation. This section describes the detailed calculations involved in projecting the five-dimensional field equation onto four-dimensional space [28; 29; 30]. Any generalized five-dimensional metric can be written in terms of a four-dimensional metric, a scalar field, and a vector field, as in Appendix A. To simplify the calculations, we can choose a coordinate system such that the off-diagonal terms corresponding to the fifth dimension become \(0\). We can write the metric as \[\widetilde{g}_{AB}=\begin{pmatrix}&&&0\\ &g_{\alpha\beta}&&0\\ &&&0\\ &&&0\\ 0&0&0&0&g_{44}\end{pmatrix}\,. \tag{123}\] Here \(g_{\alpha\beta}\) is the 4-dimensional metric; hence, it doesn't have any \(g_{44}\) component. We use the term \(g_{44}\) for notational simplicity; it is the same as \(g_{44}=\widetilde{g}_{44}\). The nonzero components of the Christoffel symbols are given by \[\widetilde{\Gamma}^{4}_{44} =\frac{g^{44}\partial_{4}g_{44}}{2}\,, \widetilde{\Gamma}^{4}_{4\nu} =\frac{g^{44}\partial_{\nu}g_{44}}{2}\,, \widetilde{\Gamma}^{\nu}_{44} =-\frac{1}{2}g^{\nu\mu}\partial_{\mu}g_{44}\,,\] \[\widetilde{\Gamma}^{4}_{\alpha\beta} =-\frac{1}{2}g^{44}\partial_{4}g_{\alpha\beta}\,, \widetilde{\Gamma}^{\alpha}_{\beta 4} =\frac{1}{2}g^{\alpha\gamma}\partial_{4}g_{\beta\gamma}\,, \widetilde{\Gamma}^{\alpha}_{\mu\nu} =\Gamma^{\alpha}_{\mu\nu}\,. \tag{124}\] The five-dimensional Ricci tensor, i.e., \(\widetilde{R}_{AB}\), can be written in terms of the Christoffel symbols as \[\widetilde{R}_{AB}=\left(\widetilde{\Gamma}^{C}_{AB}\right)_{,C}-\left(\widetilde{\Gamma}^{C}_{AC}\right)_{,B}+\widetilde{\Gamma}^{C}_{AB}\widetilde{\Gamma}^{D}_{CD}-\widetilde{\Gamma}^{C}_{AD}\widetilde{\Gamma}^{D}_{CB}\,.
\tag{125}\] Separating the indices \((A,B,\ldots)\) into \((\alpha,\beta,\ldots)\) and \((4)\), we can get the Ricci tensor as \[\widetilde{R}_{\alpha\beta} = \left(\widetilde{\Gamma}^{\gamma}_{\alpha\beta}\right)_{,\gamma}-\left(\widetilde{\Gamma}^{\gamma}_{\alpha\gamma}\right)_{,\beta}+\widetilde{\Gamma}^{\gamma}_{\alpha\beta}\widetilde{\Gamma}^{\delta}_{\gamma\delta}-\widetilde{\Gamma}^{\gamma}_{\alpha\delta}\widetilde{\Gamma}^{\delta}_{\gamma\beta} \tag{126}\] \[+\left(\widetilde{\Gamma}^{4}_{\alpha\beta}\right)_{,4}-\left(\widetilde{\Gamma}^{4}_{\alpha 4}\right)_{,\beta}+\widetilde{\Gamma}^{4}_{\alpha\beta}\widetilde{\Gamma}^{\delta}_{4\delta}+\widetilde{\Gamma}^{\gamma}_{\alpha\beta}\widetilde{\Gamma}^{4}_{\gamma 4}-\widetilde{\Gamma}^{4}_{\alpha\delta}\widetilde{\Gamma}^{\delta}_{4\beta}-\widetilde{\Gamma}^{\gamma}_{\alpha 4}\widetilde{\Gamma}^{4}_{\gamma\beta}\] \[= R_{\alpha\beta}+\left(\widetilde{\Gamma}^{4}_{\alpha\beta}\right)_{,4}-\left(\widetilde{\Gamma}^{4}_{\alpha 4}\right)_{,\beta}+\widetilde{\Gamma}^{4}_{\alpha\beta}\widetilde{\Gamma}^{\delta}_{4\delta}+\widetilde{\Gamma}^{\gamma}_{\alpha\beta}\widetilde{\Gamma}^{4}_{\gamma 4}-\widetilde{\Gamma}^{4}_{\alpha\delta}\widetilde{\Gamma}^{\delta}_{4\beta}-\widetilde{\Gamma}^{\gamma}_{\alpha 4}\widetilde{\Gamma}^{4}_{\gamma\beta}\] Note that the 4-dimensional \(\Gamma\) matrices are the same as their 5-dimensional counterparts for the spacetime components, as shown in Eq. 124. Replacing the values of the Christoffel symbols from Eq. 124, we can obtain the Ricci tensor as \[\widetilde{R}_{\alpha\beta} = R_{\alpha\beta}-\frac{\partial_{4}g^{44}\partial_{4}g_{\alpha\beta}}{2}-\frac{\partial_{4}\partial_{4}g^{44}g_{\alpha\beta}}{2}-\frac{g^{44}_{,\beta}g_{44,\alpha}}{2}-\frac{g^{44}g_{44,\alpha\beta}}{2}+\frac{g^{44}g_{44,\lambda}\Gamma^{\lambda}_{\alpha\beta}}{2}\] \[-\frac{\partial_{4}g^{\mu\nu}g_{\mu\nu}\partial_{4}g^{44}g_{\alpha\beta}}{4}-\frac{\left(g^{44}\right)^{2}\partial_{4}g_{\alpha\beta}\partial_{4}g_{44}}{4}+\frac{g^{\lambda\mu}g^{44}\partial_{4}g_{\alpha\lambda}\partial_{4}g_{\beta\mu}}{2}-\frac{\left(g^{44}\right)^{2}g_{44,\alpha}g_{44,\beta}}{4}\,,\] \[\widetilde{R}_{44} = -\frac{g^{\beta\beta}}{2}\frac{g^{\lambda\beta}g_{44,\beta}}{2}-\frac{g_{4}g^{\beta\lambda}g_{44,\beta\lambda}}{2}-\frac{\partial_{4}g^{\beta\lambda}g_{4,\beta\beta}}{2}-\frac{\partial_{4}\partial_{4}g^{\lambda\beta}g_{\lambda\beta}}{2}-\frac{g^{\lambda\beta}g_{44,\beta}g^{\mu\sigma}g_{\mu\sigma,\lambda}}{4}\] \[+\frac{g^{44}\partial_{4}g_{44}g^{\lambda\beta}\partial_{4}g_{\lambda\beta}}{4}-\frac{g^{\mu\beta}\partial_{4}g_{\lambda\beta}g^{\lambda\alpha}\partial_{4}g_{\mu\sigma}}{4}+\frac{g^{44}g_{44,\beta}g^{\lambda\beta}g_{44,\beta}}{4}\,,\] \[\widetilde{R}_{4\alpha} = \sqrt{g_{44}}P^{\beta}_{\alpha;\beta}\,. \tag{127}\] In the last equation, '\(;\)' represents the covariant derivative. \(P^{\beta}_{\alpha}\) is a 2nd-rank tensor, given by \(P^{\beta}_{\alpha}=\frac{1}{2\sqrt{g_{44}}}\left(g^{\beta\lambda}\partial_{4}g_{\lambda\alpha}-\delta^{\beta}_{\alpha}g^{\mu\nu}\partial_{4}g_{\mu\nu}\right)\). Earlier, we discussed the stress-energy tensor for the 5-dimensional equation. The terms corresponding to the 5th dimension are significantly small, as they contain a factor \(\hbar\) (for the off-diagonal terms) or \(\hbar^{2}\) (for the diagonal term). This virtually sets the terms corresponding to the 5th dimension to 0, giving \(\widetilde{T}^{\alpha\beta}\approx T^{\alpha\beta}\).
This gives \[\widetilde{R}^{4\alpha}=\widetilde{R}^{44}=0 \tag{104}\] As we can see, the 4-dimensional Ricci tensor contains additional terms. Therefore, the field equation, when projected into 4 dimensions, gives \[R_{\alpha\beta}=T_{\alpha\beta}-g_{\alpha\beta}T+\text{additional geometric terms}. \tag{105}\] If we can show that these terms have the same properties as dark matter and dark energy, then our theory can predict everything in the same way as standard cosmology without demanding any form of dark matter or dark energy. ### Calculating the components for a diagonal metric The most general line element for explaining cosmology should satisfy two postulates, the Weyl postulate and the isotropy condition. Therefore, similar to the Robertson-Walker line element, we can choose the line element as \[ds^{2}=e^{\omega}dt^{2}-e^{\kappa}dr^{2}-R^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)+\epsilon e^{\mu}d\zeta^{2} \tag{106}\] The exponentials are taken to make sure that these quantities can't be negative. The extra parameter \(\epsilon\) gives us the liberty of changing the signature of the background dimension, and we can also put \(\epsilon=0\) to get the four-dimensional components of the Ricci tensor. The nonzero Christoffel symbols for this metric can be calculated as \[\Gamma^{0}_{00} =\frac{\dot{\omega}}{2}, \Gamma^{1}_{00} =\frac{\omega^{\prime}}{2}e^{\omega-\kappa}, \Gamma^{4}_{00} =-\frac{\epsilon}{2}\omega^{*}e^{\omega-\mu}, \Gamma^{0}_{01} =\frac{\omega^{\prime}}{2},\] \[\Gamma^{1}_{01} =\frac{\dot{\kappa}}{2}, \Gamma^{2}_{02} =\frac{\dot{R}}{R}, \Gamma^{3}_{03} =\frac{\dot{R}}{R}, \Gamma^{0}_{04} =\frac{\omega^{*}}{2},\] \[\Gamma^{0}_{11} =\frac{\dot{\kappa}}{2}e^{\kappa-\omega}, \Gamma^{1}_{11} =\frac{\kappa^{\prime}}{2}, \Gamma^{4}_{11} =\frac{\epsilon}{2}\kappa^{*}e^{\kappa-\mu}, \Gamma^{2}_{12} =\Gamma^{3}_{13}=\frac{R^{\prime}}{R},\] \[\Gamma^{1}_{14} =\frac{\kappa^{*}}{2}, \Gamma^{4}_{41} =\frac{\mu^{\prime}}{2}, \Gamma^{0}_{22} =R\dot{R}e^{-\omega}, \Gamma^{1}_{22} =-RR^{\prime}e^{-\kappa},\] \[\Gamma^{4}_{22} =\epsilon RR^{*}e^{-\mu}, \Gamma^{3}_{23} =\cot\theta, \Gamma^{2}_{24} =\Gamma^{3}_{34}=\frac{R^{*}}{R}, \Gamma^{0}_{33} =R\dot{R}e^{-\omega}\sin^{2}\theta,\] \[\Gamma^{1}_{33} =-RR^{\prime}e^{-\kappa}\sin^{2}\theta, \Gamma^{2}_{33} =-\sin\theta\cos\theta, \Gamma^{4}_{04} =\frac{\dot{\mu}}{2}\,, \Gamma^{4}_{33} =\epsilon RR^{*}e^{-\mu}\sin^{2}\theta\,,\] \[\Gamma^{0}_{44} =-\frac{\epsilon}{2}\dot{\mu}e^{\mu-\omega}, \Gamma^{1}_{44} =\frac{\epsilon}{2}\mu^{\prime}e^{\mu-\kappa}, \Gamma^{4}_{44} =\frac{\mu^{*}}{2}\,. \tag{107}\] In these expressions, \(\dot{x}\) represents the derivative with respect to the time \(t\), \(x^{\prime}\) represents the derivative with respect to the radial coordinate \(r\), and, finally, \(x^{*}\) represents the derivative with respect to the fifth-dimensional coordinate \(\zeta\).
Now that we know the Christoffel symbols, the components of the 5D Ricci tensor for this metric can be calculated; they are given by \[\widetilde{R}_{00} = -\frac{\ddot{\kappa}}{2}-\frac{\ddot{\mu}}{2}-2\frac{\ddot{R}}{R}+\frac{\dot{\omega}\dot{\kappa}}{4}+\frac{\dot{\omega}\dot{\mu}}{4}+\frac{\dot{\omega}\dot{R}}{R}-\frac{\dot{\kappa}^{2}}{4}-\frac{\dot{\mu}^{2}}{4}+e^{\omega-\kappa}\left(\frac{\omega^{\prime\prime}}{2}+\frac{\omega^{\prime 2}}{4}-\frac{\omega^{\prime}\kappa^{\prime}}{4}+\frac{\omega^{\prime}\mu^{\prime}}{4}+\frac{\omega^{\prime}R^{\prime}}{R}\right) \tag{111}\] \[+\epsilon e^{\omega-\mu}\left(-\frac{\omega^{**}}{2}-\frac{\omega^{*2}}{4}+\frac{\omega^{*}\mu^{*}}{4}-\frac{\omega^{*}\kappa^{*}}{4}-\frac{\omega^{*}R^{*}}{R}\right)\,,\] \[\widetilde{R}_{01} = -\frac{\dot{\mu}^{\prime}}{2}-\frac{\dot{\mu}\mu^{\prime}}{4}+\frac{\omega^{\prime}\dot{\mu}}{4}+\frac{\dot{\kappa}\mu^{\prime}}{4}+\frac{\dot{\kappa}R^{\prime}}{R}+\frac{\omega^{\prime}\dot{R}}{R}-\frac{2\dot{R}^{\prime}}{R}\,,\] (112) \[\widetilde{R}_{04} = -\frac{\dot{\kappa}^{*}}{2}-\frac{\dot{\kappa}\kappa^{*}}{4}+\frac{\dot{\kappa}\omega^{*}}{4}+\frac{\kappa^{*}\dot{\mu}}{4}+\frac{\dot{\mu}R^{*}}{R}+\frac{\omega^{*}\dot{R}}{R}-\frac{2\dot{R}^{*}}{R}\,,\] (113) \[\widetilde{R}_{11} = -\frac{\omega^{\prime\prime}}{2}-\frac{\mu^{\prime\prime}}{2}-\frac{\omega^{\prime 2}}{4}-\frac{\mu^{\prime 2}}{4}+\frac{\kappa^{\prime}\omega^{\prime}}{4}+\frac{\kappa^{\prime}\mu^{\prime}}{4}+\frac{\kappa^{\prime}R^{\prime}}{R}-\frac{2R^{\prime\prime}}{R}+e^{\kappa-\omega}\left(\frac{\ddot{\kappa}}{2}+\frac{\dot{\kappa}^{2}}{4}-\frac{\dot{\kappa}\dot{\omega}}{4}+\frac{\dot{\kappa}\dot{\mu}}{4}+\frac{\dot{\kappa}\dot{R}}{R}\right)\] (114) \[+\epsilon e^{\kappa-\mu}\left(\frac{\kappa^{**}}{2}+\frac{\kappa^{*2}}{4}+\frac{\kappa^{*}\omega^{*}}{4}-\frac{\kappa^{*}\mu^{*}}{4}+\frac{\kappa^{*}R^{*}}{R}\right)\,,\] \[\widetilde{R}_{14} = -\frac{\omega^{\prime*}}{2}-\frac{\omega^{\prime}\omega^{*}}{4}+\frac{\kappa^{*}\omega^{\prime}}{4}+\frac{\mu^{\prime}\omega^{*}}{4}+\frac{\kappa^{*}R^{\prime}}{R}+\frac{\mu^{\prime}R^{*}}{R}-\frac{2R^{\prime*}}{R}\,,\] (115) \[\widetilde{R}_{22} = 1+R^{2}e^{-\omega}\left(\frac{\dot{R}^{2}}{R^{2}}+\frac{\ddot{R}}{R}-\frac{\dot{R}}{2R}(\dot{\omega}-\dot{\kappa}-\dot{\mu})\right)-R^{2}e^{-\kappa}\left(\frac{R^{\prime 2}}{R^{2}}+\frac{R^{\prime\prime}}{R}+\frac{R^{\prime}}{2R}\left(\omega^{\prime}-\kappa^{\prime}+\mu^{\prime}\right)\right)\] (116) \[+\epsilon R^{2}e^{-\mu}\left(\frac{R^{*2}}{R^{2}}+\frac{R^{**}}{R}+\frac{R^{*}}{2R}\left(\omega^{*}+\kappa^{*}-\mu^{*}\right)\right)\,,\] \[\widetilde{R}_{33} = \widetilde{R}_{22}\sin^{2}\theta\,.\] (117) \[\widetilde{R}_{44} = \left(-\frac{\omega^{**}}{2}-\frac{\omega^{*2}}{4}-\frac{\kappa^{**}}{2}-\frac{\kappa^{*2}}{4}+\frac{\mu^{*}\omega^{*}}{4}+\frac{\mu^{*}\kappa^{*}}{4}+\frac{\mu^{*}R^{*}}{R}-\frac{2R^{**}}{R}\right)-\epsilon e^{\mu-\omega}\left(\frac{\ddot{\mu}}{2}+\frac{\dot{\mu}^{2}}{4}\right.\] (118) \[\left.-\frac{\dot{\mu}\dot{\omega}}{4}+\frac{\dot{\mu}\dot{\kappa}}{4}+\frac{\dot{\mu}\dot{R}}{R}\right)+\epsilon e^{\mu-\kappa}\left(\frac{\mu^{\prime\prime}}{2}+\frac{\mu^{\prime 2}}{4}+\frac{\mu^{\prime}\omega^{\prime}}{4}-\frac{\mu^{\prime}\kappa^{\prime}}{4}+\frac{\mu^{\prime}R^{\prime}}{R}\right)\,.\] Using the components of the 5D Ricci tensor, the Ricci scalar can be calculated as \[\widetilde{R} = -\frac{2}{R^{2}}-e^{-\omega}\left(\ddot{\kappa}+\frac{\dot{\kappa}^{2}}{2}+\ddot{\mu}+\frac{\dot{\mu}^{2}}{2}-\frac{\dot{\omega}\dot{\kappa}}{2}
-\frac{\dot{\omega}\dot{\mu}}{2}-\frac{2\dot{R}}{R}(\dot{\omega}-\dot{\kappa}-\dot{\mu})+\frac{\dot{\mu}\dot{\kappa}}{2}+\frac{2\dot{R}^{2}}{R^{2}}+\frac{4\ddot{R}}{R}\right) \tag{119}\] \[+e^{-\kappa}\left(\omega^{\prime\prime}+\frac{\omega^{\prime 2}}{2}+\mu^{\prime\prime}+\frac{2R^{\prime}}{R}\left(\omega^{\prime}-\kappa^{\prime}+\mu^{\prime}\right)+\frac{\mu^{\prime 2}}{2}-\frac{\omega^{\prime}\kappa^{\prime}}{2}+\frac{\omega^{\prime}\mu^{\prime}}{2}-\frac{\mu^{\prime}\kappa^{\prime}}{2}+\frac{2R^{\prime 2}}{R^{2}}+\frac{4R^{\prime\prime}}{R}\right)\] \[-\epsilon e^{-\mu}\left(\omega^{**}+\frac{\omega^{*2}}{2}+\kappa^{**}+\frac{\kappa^{*2}}{2}+\frac{\kappa^{*}\omega^{*}}{2}-\frac{\kappa^{*}\mu^{*}}{2}+\frac{2R^{*}}{R}\left(\omega^{*}+\kappa^{*}-\mu^{*}\right)\right.\] \[\left.-\frac{\mu^{*}\omega^{*}}{2}+\frac{2R^{*2}}{R^{2}}+\frac{4R^{**}}{R}\right).\] The above expressions give the Ricci tensor and Ricci scalar in a five-dimensional universe. To get the 4-dimensional components of the Ricci tensor and Ricci scalar, we can set all the derivatives with respect to \(\zeta\) to 0. We can also set \(\epsilon=0\). This gives \[R_{00} = -\frac{\ddot{\kappa}}{2}-2\frac{\ddot{R}}{R}+\frac{\dot{\omega}\dot{\kappa}}{4}+\frac{\dot{\omega}\dot{R}}{R}-\frac{\dot{\kappa}^{2}}{4}+e^{\omega-\kappa}\left(\frac{\omega^{\prime\prime}}{2}+\frac{\omega^{\prime 2}}{4}-\frac{\omega^{\prime}\kappa^{\prime}}{4}+\frac{\omega^{\prime}R^{\prime}}{R}\right) \tag{120}\] \[R_{01} = \frac{\dot{\kappa}R^{\prime}}{R}+\frac{\omega^{\prime}\dot{R}}{R}-\frac{2\dot{R}^{\prime}}{R} \tag{121}\] \[R_{11} = -\frac{\omega^{\prime\prime}}{2}-\frac{\omega^{\prime 2}}{4}+\frac{\kappa^{\prime}\omega^{\prime}}{4}+\frac{\kappa^{\prime}R^{\prime}}{R}-\frac{2R^{\prime\prime}}{R}+e^{\kappa-\omega}\left(\frac{\ddot{\kappa}}{2}+\frac{\dot{\kappa}^{2}}{4}-\frac{\dot{\kappa}\dot{\omega}}{4}+\frac{\dot{\kappa}\dot{R}}{R}\right) \tag{104}\] \[R_{22} = 1+R^{2}e^{-\omega}\left(\frac{\dot{R}^{2}}{R^{2}}+\frac{\ddot{R}}{R}-\frac{\dot{R}}{2R}(\dot{\omega}-\dot{\kappa})\right)-R^{2}e^{-\kappa}\left(\frac{R^{\prime 2}}{R^{2}}+\frac{R^{\prime\prime}}{R}+\frac{R^{\prime}}{2R}\left(\omega^{\prime}-\kappa^{\prime}\right)\right)\] (105) \[R_{33} = R_{22}\sin^{2}\theta \tag{106}\] From these four-dimensional Ricci tensor components, the Ricci scalar can be calculated as \[R = -\frac{2}{R^{2}}-e^{-\omega}\left(\ddot{\kappa}+\frac{\dot{\kappa}^{2}}{2}-\frac{\dot{\omega}\dot{\kappa}}{2}-\frac{2\dot{R}}{R}(\dot{\omega}-\dot{\kappa})+\frac{2\dot{R}^{2}}{R^{2}}+\frac{4\ddot{R}}{R}\right) \tag{107}\] \[+e^{-\kappa}\left(\omega^{\prime\prime}+\frac{\omega^{\prime 2}}{2}-\frac{\omega^{\prime}\kappa^{\prime}}{2}+\frac{2R^{\prime}}{R}\left(\omega^{\prime}-\kappa^{\prime}\right)+\frac{2R^{\prime 2}}{R^{2}}+\frac{4R^{\prime\prime}}{R}\right).\] We can use the above expressions and a few algebraic manipulations to obtain the expressions for \(G_{\nu}^{\mu}\) used in Eq. 103. ### Calculating the stress-energy tensor According to the field equations, \(\widetilde{G}_{AB}=\widetilde{T}_{AB}\). The 4-dimensional part of the stress-energy tensor for a fluid in equilibrium can be written as \[\widetilde{T}_{\mu\nu}=(\rho+p)\widetilde{u}_{\mu}\widetilde{u}_{\nu}-p\widetilde{g}_{\mu\nu}\,. \tag{108}\] The 5th components of the stress-energy tensor, i.e., \(\widetilde{T}_{44}\) and \(\widetilde{T}_{4\mu}\), are assumed to be 0 for the cosmological calculations. Therefore, the field equation gives us \(\widetilde{R}_{44}\sim 0\).
As \(\widetilde{g}_{AB}\) is a diagonal matrix, raising or lowering a component of a tensor is the same as multiplying by the respective diagonal component of the metric, i.e., \(\widetilde{R}_{4}^{4}=\widetilde{R}_{44}\widetilde{g}^{44}\). So from the field equation we can write \(\widetilde{G}_{\nu}^{\mu}=T_{\nu}^{\mu}\), which gives \(G_{\nu}^{\mu}=T_{\nu}^{\mu}+Q_{\nu}^{\mu}\), where \(Q_{\nu}^{\mu}=G_{\nu}^{\mu}-\widetilde{G}_{\nu}^{\mu}\) are the additional geometric terms arising when the 5-dimensional Einstein tensor is projected into the 4-dimensional form. As we discussed earlier, if we assume that these geometric quantities, \(Q_{\nu}^{\mu}\), are generated by some geometric fluid, we can define the density and the pressure of this geometric fluid as \(\rho_{g}\) and \(p_{g}\). These can be calculated from \(Q_{\nu}^{\mu}\) as \[\rho_{g}=Q_{0}^{0}+Q_{1}^{1}-Q_{2}^{2}\,,\qquad\qquad\text{and}\qquad\qquad p_{g}=-Q_{2}^{2}\,. \tag{109}\] While the above expression provides a simplified equation for \(\rho_{g}\), the components of \(p_{g}\) are not simplified. However, we know \(\widetilde{R}_{44}\sim 0\). If we add the expression for \(\widetilde{R}_{4}^{4}\) to \(Q_{2}^{2}\), it will not change \(p_{g}\), but it can simplify the equations, giving the expressions for the density and pressure as \[\rho_{g} = \frac{3}{2}\left(\frac{e^{-\kappa}\mu^{\prime}R^{\prime}}{R}-\frac{e^{-\omega}\dot{\mu}\dot{R}}{R}\right)+\frac{3}{2}\epsilon e^{-\mu}\left(\frac{R^{*}\mu^{*}}{R}-\frac{2R^{**}}{R}\right)-\epsilon e^{-\mu}\frac{R^{*2}}{R^{2}}+\epsilon e^{-\mu}\left(\frac{\omega^{*}\kappa^{*}}{4}\right)\] \[-\epsilon e^{-\mu}\frac{R^{*}}{2R}\left(\kappa^{*}+\omega^{*}\right),\] \[p_{g} = \frac{1}{2}\left(\frac{e^{-\kappa}\mu^{\prime}R^{\prime}}{R}-\frac{e^{-\omega}\dot{\mu}\dot{R}}{R}\right)+\frac{1}{2}\epsilon e^{-\mu}\left(\frac{R^{*}\mu^{*}}{R}-\frac{2R^{**}}{R}\right)+\epsilon e^{-\mu}\left(\frac{\omega^{*}\kappa^{*}}{4}\right)+\epsilon e^{-\mu}\frac{R^{*}}{2R}\left(\kappa^{*}+\omega^{*}\right)\] These equations show that the pressure and density consist of four clearly defined components: a radiation-like component, a matter-like component, a stiff-matter-like component, and a dark-energy-like component.
2302.07051
Adversarial Path Planning for Optimal Camera Positioning
The use of visual sensors is flourishing, driven among others by the several applications in detection and prevention of crimes or dangerous events. While the problem of optimal camera placement for total coverage has been solved for a decade or so, that of the arrangement of cameras maximizing the recognition of objects "in-transit" is still open. The objective of this paper is to attack this problem by providing an adversarial method of proven optimality based on the resolution of Hamilton-Jacobi equations. The problem is attacked by first assuming the perspective of an adversary, i.e. computing explicitly the path minimizing the probability of detection and the quality of reconstruction. Building on this result, we introduce an optimality measure for camera configurations and perform a simulated annealing algorithm to find the optimal camera placement.
Gaia Carenini, Alexandre Duplessis
2023-02-14T14:06:00Z
http://arxiv.org/abs/2302.07051v2
# Adversarial Path Planning for Optimal Camera Positioning ###### Abstract The use of visual sensors is flourishing, driven among others by the several applications in detection and prevention of crimes or dangerous events. While the problem of optimal camera placement for total coverage has been solved for a decade or so, that of the arrangement of cameras maximizing the recognition of objects "in-transit" is still open. The objective of this paper is to attack this problem by providing an adversarial method of proven optimality based on the resolution of Hamilton-Jacobi equations. The problem is attacked by first assuming the perspective of an adversary, i.e. computing explicitly the path minimizing the probability of detection and the quality of reconstruction. Building on this result, we introduce an optimality measure for camera configurations and perform a simulated annealing algorithm to find the optimal camera placement. ## I Introduction Networks of cameras, or more generally of visual sensors, are widely used in industrial processes, in the detection and prevention of crimes or dangerous events, for military purposes, and so forth. The vast availability of different types of cameras and the decreasing cost of the associated hardware, together with an increasing need for such systems, are among the reasons attracting more and more researchers to this field. A crucial concern raised by the design of a camera network is how to position the individual visual sensors optimally under a set of given constraints. In fact, the resulting visual measurements can be made significantly more accurate by selecting a suitable configuration established through a proper mathematical model. It is clear that different visual tasks have fairly distinct requirements: e.g., a multi-view reconstruction task stands in need of a minimum number of video sensors with predetermined ranges of angular separation, an aggregate video sensor network must be fault-tolerant to camera drop-out, and, in any case, layouts of sensors in video sensor networks should assure a minimum level of image quality in order to obtain sufficient resolution, depth of field, etc. Independently of the specific task, resolution is always a fundamental and primary information bottleneck for vision applications. The problem of automating the camera network design process for attaining highly accurate measurements has received comparatively little attention given its practical importance. Our goal is to address the problem of camera placement to optimize the aggregate observability. One possible application of this research is the development of a design tool for surveillance camera placement in areas of high traffic, where each subject may take a different path through the area. This work assumes the cameras are statically mounted to view an area. Optimizing the observability of such a system means jointly maximizing the power of observation of the cameras for the area of interest. Related work\(\rightarrow\) Suitable camera placement for the purpose of optimizing the sensors' ability to capture information about a desired environment or task has been studied extensively. In [2], O'Rourke provides an in-depth theoretical analysis of the problem of maximizing camera coverage of an area, where the camera fields of view do not overlap (the so-called "art gallery" problem). Several other results have been published in this direction, e.g., [1, 3].
In recent years, research has sought to extend the framework to include limited field-of-view cameras and to incorporate resolution metrics into the formulation (see [22, 5, 27]). More specifically, in [34], the art gallery framework is refined by introducing a resolution quality metric. Moreover, in [35], the formulation of the minimum guard coverage art gallery problem was extended in order to incorporate minimum-set cover. In the same work, reduced upper bounds were derived for two cases of exterior visibility in two and three dimensions. Contribution\(\rightarrow\) Our method differs from the art gallery framework in various salient aspects. First, we study the best attacks on visibility conditioned on a given camera configuration and destination, i.e., we find the paths minimizing the observability of the object "in-transit" between a pair of selected positions. This is done by modelling this task as an optimal motion planning problem that can be solved by applying the technique presented in [36]. We then derive an optimality measure that can be used to assess a configuration. We finally use a simulated-annealing-based algorithm to find a near-optimal camera placement according to this measure. ## II Preliminaries We start by reducing the design of "attacks to observability" to a problem of optimal motion planning in a space presenting an anisotropic field of velocities; there, the goal is to reach a final position \((x_{f},y_{f})\) from a start position \((x_{0},y_{0})\) in minimum time, while avoiding obstacles and minimizing the risk of being recognized. For the sake of simplicity, we assume that all the cameras are punctiform, and we consider a finite 2D environment defined as \(\mathcal{R}\subset[a,b]\times[c,d]\) with \(a,b,c,d\in\mathbb{R}\). We define an obstacle, \(obs\), as a proper subset of \(\mathcal{R}\), and we consider the set of obstacles \(\mathcal{O}\). We define each camera as a tuple \(C_{i}=(p_{i},\alpha_{i},r_{i})\) in which \(p_{i}\) is of the form \((a_{i},b_{i},\beta_{i})\), where \((a_{i},b_{i})\) are the coordinates of the camera \(C_{i}\), \(\beta_{i}\) is the angle that the first ray of the vision field of \(C_{i}\) makes with the vertical according to the standard reference system, \(\alpha_{i}\) is the angular opening of the camera, and \(r_{i}\) is a function describing the resolution that the camera has of an object as the distance from it changes. The vision field is the space swept by a vector moving from the position \(\beta_{i}\) to the position \(\beta_{i}+\alpha_{i}\); we call it \(\mathcal{F}_{i}\). For the model proposed below, we assume that recognition is inversely proportional to the distance from the camera and directly proportional to the time spent in the field of view of the camera. Other assumptions could be added in a straightforward manner to the model presented, which is kept simple for the sake of clarity. Fixing the start at \((x_{0},y_{0})\in\mathcal{R}\), the destination at \((x_{f},y_{f})\in\mathcal{R}\), and \(N\) cameras \(\mathcal{C}=\{C_{i}\}_{i=1}^{N}\), we model the cameras' visual fields as an anisotropic speed field \(\overrightarrow{W}\) that depends on \((x_{f},y_{f})\) as follows: for all \((x,y)\in\mathcal{R}\), \(\overrightarrow{w}(x,y)\) is 0 if \((x,y)\notin\bigcup_{i\in[1,N]}\mathcal{F}_{i}\); otherwise, it is the vector whose direction is opposite to the line joining \((x,y)\) and \((x_{f},y_{f})\) and whose module is \(1/dist((a_{i},b_{i}),(x,y))^{2}\), where \(C_{i}\) is the camera observing \((x,y)\).
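To make the construction concrete, the following minimal Python sketch (our own illustration; the function names, the angle convention measured from the x-axis, the handling of multiple cameras, and the omission of obstacle occlusion are all assumptions of the sketch) evaluates \(\overrightarrow{w}\) at a point:

```python
import numpy as np

def in_field_of_view(cam, p):
    """True if p lies in the angular sector of camera cam = (a, b, beta, alpha).

    For simplicity, beta is measured here from the x-axis, and occlusion by
    obstacles is ignored."""
    a, b, beta, alpha = cam
    ang = np.arctan2(p[1] - b, p[0] - a)
    return (ang - beta) % (2 * np.pi) <= alpha

def speed_field(p, p_f, cameras):
    """w(x, y): zero outside every field of view; otherwise a vector directed
    away from the destination p_f with module 1 / dist(camera, p)^2.

    If several cameras see p, the first one found is used (one possible
    convention; summing the contributions would be another). The destination
    point itself is assumed not to be queried."""
    p, p_f = np.asarray(p, float), np.asarray(p_f, float)
    for cam in cameras:
        if in_field_of_view(cam, p):
            d = p - p_f
            u = d / np.linalg.norm(d)            # unit vector away from (x_f, y_f)
            r2 = float(np.sum((p - np.asarray(cam[:2], float)) ** 2))
            return u / r2
    return np.zeros(2)
```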
Under this assumption, the speed of the motion that we consider is defined by the equations: \[\begin{cases}\dot{x}(t)=(V_{c}+w(x,y))\sin(\theta(t))\\ \dot{y}(t)=(V_{c}+w(x,y))\cos(\theta(t))\end{cases} \tag{1}\] where \((x,y)\) is the position of the mobile, \(\theta\) is the heading angle relative to the north direction, and \(V_{c}\) is the constant module of the own speed of the object "in transit". With the formalism introduced above, we can define the resulting optimization problem, which can be written as: \[\begin{cases}\min&(t_{f}-t_{0})\\ s.t.&\dot{x}(t)=(V_{c}+w(x,y))\sin(\theta(t))\\ &\dot{y}(t)=(V_{c}+w(x,y))\cos(\theta(t))\\ &(x(t_{0}),y(t_{0}))=(x_{0},y_{0})\\ &(x(t_{f}),y(t_{f}))=(x_{f},y_{f})\end{cases} \tag{2}\] We observe that the control parameter of (2) is the heading angle \(\theta\), and solving the optimization problem therefore amounts to finding the \(\theta\) that minimizes the total travel time. For the sake of simplicity, we consider as control variable the unit vector naturally associated with it, i.e., \(\overrightarrow{a}(t)=(\sin(\theta(t)),\cos(\theta(t)))\). In this case, the problem can be restated simply as: \[\begin{cases}\min_{\overrightarrow{a}}\quad(t_{f}-t_{0})\\ s.t.&\dot{X}=f(X(t),\overrightarrow{a}(t))\\ &X(t_{0})=X_{0}\wedge X(t_{f})=X_{f}\end{cases} \tag{3}\] where \(X\) is the position of the object "in-transit" and \(f(X(t),\overrightarrow{a}(t))\) is the real speed of the mobile at time \(t\). The optimal control problem (3) is classical, and the corresponding Hamilton-Jacobi equation is given by: \[\max_{\overrightarrow{a}\in A}\{-\langle\nabla u(X),f(X,\overrightarrow{a})\rangle\}=1 \tag{4}\] where \(u(X)\) represents the minimum time to reach the destination starting from the point \(X\). ## III Motion planning In this section, we discuss the resolution of (4). The same issue has been solved in [36] in the context of optimal motion planning in the presence of wind. For the sake of completeness, we outline the method below. The idea behind the resolution is to decompose the problem recursively into linked sub-problems, as in dynamic programming. More specifically, the resolution can be seen as a front expansion problem, where the wavefront represents the minimum time to reach the arrival point. The computation is based on the classical Huygens principle, which states that "every point reached by a wavefront becomes a source of a spherical wavefront". The evolution of the wavefront is given by: \[||\nabla u(X)||F\left(X,\frac{\nabla u(X)}{||\nabla u(X)||}\right)=1 \tag{5}\] where \(F(X,\overrightarrow{n})\) is the front speed in the direction \(\overrightarrow{n}\) of the outward unit vector normal to the front at point \(X\). Through some algebraic manipulation (see [36]), the optimal path problem can therefore be written as a front expansion problem where the speed of the wavefront is given by: \[F(X,\overrightarrow{n})=\max_{\overrightarrow{a}}\{-\overrightarrow{n}\cdot f(X,\overrightarrow{a})\} \tag{6}\] To design the optimal path between the departure point and the final point, it is sufficient to exploit the characteristics of the Hamilton-Jacobi PDE. Several methods exist for finding an approximation of the solution of these Hamilton-Jacobi equations; in this context, we apply the so-called _ordered upwind algorithm_.
Presented in [38], this technique was proven to converge to a weak solution of the PDE, in particular to the _viscosity_ solution. Its basic principle is to avoid useless iterations, common in Dijkstra-like methods, thanks to a careful use of the information about the characteristic directions of the PDE. The first step consists in computing the value function, \(u\), on a 2D nonregular triangular mesh. The method applied is described extensively in [36]. Other methods exist, e.g., semi-Lagrangian and Eulerian discretizations [38]; however, both have disadvantages, requiring multiple local minimizations and finding the roots of a non-linear equation, respectively. In our case, the speed of the wavefront \(F\) has a closed form, and the value can be computed using a finite-differences upwind formula of the Hamilton-Jacobi equation. By modelling the speed of the object "in-transit" as \(f(\mathrm{X},a)=V_{a}a+W\), the speed of the wavefront is equal to: \[F(\mathrm{X},n)=V_{a}-\langle n,W\rangle \tag{7}\] For more details, see [37]. Given this definition of the wavefront speed, an upwind finite-difference discretization on the simplex \((\mathrm{X},\mathrm{X}_{j},\mathrm{X}_{k})\) is applied to the problem. The associated Hamilton-Jacobi equation becomes: \[\left\|P^{-1}w(\mathrm{X})\right\|^{2}V_{a}^{2}=\left(1+\left\langle P^{-1}w(\mathrm{X}),W\right\rangle\right)^{2} \tag{8}\] where the vector \(P^{-1}w(\mathrm{X})\) is the discretization of \(\nabla u(\mathrm{X})\) obtained from the directional derivatives of \(u\) in the directions defined by the edges of the simplex \((\mathrm{X},\mathrm{X}_{j},\mathrm{X}_{k})\). This equation is quadratic and has the following form: \[Av_{\mathrm{X}_{j}\mathrm{X}_{k}}^{2}(\mathrm{X})+Bv_{\mathrm{X}_{j}\mathrm{X}_{k}}(\mathrm{X})+C=0 \tag{9}\] where the coefficients are given by: \[A =V_{a}^{2}\left\langle P^{-1}\alpha,P^{-1}\alpha\right\rangle-\left\langle P^{-1}\alpha,W\right\rangle^{2}\] \[B =2V_{a}^{2}\left\langle P^{-1}\alpha,P^{-1}\beta\right\rangle-2\left\langle P^{-1}\alpha,W\right\rangle\left(\left\langle P^{-1}\beta,W\right\rangle+1\right)\] \[C =V_{a}^{2}\left\langle P^{-1}\beta,P^{-1}\beta\right\rangle-\left[\left\langle P^{-1}\beta,W\right\rangle+1\right]^{2}\] The value \(v_{\mathrm{X}_{j}\mathrm{X}_{k}}\) is then computed using the classical quadratic formula. To ensure that \(v_{\mathrm{X}_{j}\mathrm{X}_{k}}\) is a good approximation of the value function \(u\) at the point \(\mathrm{X}\), the characteristic direction for the mesh point \(\mathrm{X}\) needs to lie inside the simplex \((\mathrm{X},\mathrm{X}_{j},\mathrm{X}_{k})\). The optimal trajectory is built by moving from the initial point to the destination point along the characteristic direction determined by: \[\frac{d\mathrm{X}}{dt}=-V_{a}\frac{\nabla u(\mathrm{X})}{\left\|\nabla u(\mathrm{X})\right\|}+W(\mathrm{X}) \tag{10}\] The computational complexity of this algorithm is \(O(\gamma N\log N)\), where \(N\) is the number of mesh points and \(\gamma\) the anisotropy ratio (see [38]). Obstacle avoidance\(\rightarrow\) Two approaches can be used. The first one, derived from [36], consists in decreasing the speed of propagation of the wavefront in the parts of the environment corresponding to obstacles. In this way, the value function \(u\) grows there, which penalizes passage through these areas. We define a map of values \(\xi\) as a function of the obstacles \(\mathcal{O}\), with \(\xi\) between 0 and an upper bound \(\xi_{max}\); these values are then exploited to slow down the wavefront speed as \((1-\xi)F(X,n)\). The maximum value \(\xi_{max}\) needs to be less than 1 to keep the wavefront speed strictly positive.
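As an illustration of this first approach, the sketch below (our own; the exponential fall-off of \(\xi\) and the representation of obstacles as sampled point clouds are illustrative choices, not the paper's) scales a given front speed \(F\) by \((1-\xi)\):

```python
import numpy as np

def xi_map(p, obstacle_points, xi_max=0.9, decay=5.0):
    """Penalty xi in [0, xi_max), with xi_max < 1, decaying with the distance
    from p to the nearest sampled obstacle point. `decay` is a free choice
    of this sketch controlling how quickly the penalty falls off."""
    d = min(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
            for q in obstacle_points)
    return xi_max * np.exp(-decay * d)

def slowed_front_speed(F, p, n, obstacle_points):
    """Wavefront speed (1 - xi) F(X, n): strictly positive everywhere, but
    heavily penalized inside and near obstacle regions."""
    return (1.0 - xi_map(p, obstacle_points)) * F(p, n)
```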
Obstacle avoidance\(\rightarrow\) Two approaches can be used. The first one, derived from [36], consists in decreasing the speed of propagation of the wavefront in the parts of the environment corresponding to obstacles. In this way, we register a growth of the value function \(u\) that penalizes the passage through these areas. We define a map of values \(\xi\) as a function of the obstacles, \(\mathcal{O}\). The values \(\xi\) lie between 0 and an upper bound \(\xi_{max}\). The scaled values \(\xi\) are then exploited to slow down the wavefront speed as follows: \((1-\xi)F(X,n)\). The maximum value \(\xi_{max}\) needs to be less than 1 to keep the wavefront speed strictly positive. Another approach, which we find simpler (especially from a computational viewpoint), is based on the fact that the resolution method for the Hamilton-Jacobi equation proposed above can be seen as a shortest-path algorithm in the graph whose vertices are the mesh points and whose edges represent the neighborhood relationship. Taking obstacles into account can thus be done by simply deleting the obstacles' vertices from the graph. DL approach\(\rightarrow\) The formalization of the problem proposed leads to the resolution of a Hamilton-Jacobi equation. Problems involving these kinds of equations have been studied in several areas of mathematics and numerical computing, and more recently in deep learning, where several networks have been designed, including [25, 24]. In this work, we do not pursue this direction, having decided to privilege convergence guarantees rather than speed.

## IV Optimal camera placement

In this section, given a fixed camera configuration \(C=\{C_{i}\}_{i=1}^{N}\), we provide a measure for assessing how effective this configuration is at preventing an object "in transit" from going unnoticed. For the sake of simplicity, we start by assuming that the adversary has a fixed starting position \((x_{0},y_{0})\) and final destination \((x_{f},y_{f})\). A fairly immediate generalization consists in averaging the optimality measure, defined below, over every pair of positions \(((x_{0},y_{0}),(x_{f},y_{f}))\) that an adversary could take (or, more realistically, over a sampling of them). We define our objective so that it takes into account the integral of the "portion" of the path, returned by applying the ordered upwind algorithm (from now on, called algorithm \(\mathcal{A}\)), that intersects at least one camera's field of view. A detail to notice is that we normalize this quantity over the distance between the initial position and the end position, since this distance should not influence the complexity of an adversarial path. Therefore, for any parametrization \(\gamma:[0,1]\rightarrow\mathcal{R}\) of a valid path (i.e. a path avoiding obstacles), we define: \[\mathcal{L}_{(C_{i})_{1\leq i\leq N}}(\gamma)=\int_{0}^{1}\left[1+\eta\,\mathbf{1}\left(\gamma(t)\in\bigcup_{1\leq i\leq N}\text{scope}(C_{i})\right)\right]\text{d}t \tag{11}\] where \(\eta\) controls the tradeoff between path length and camera visibility, and \(\text{scope}(C_{i})\subset\mathcal{R}\) is the set of points of \(\mathcal{R}\) that a camera \(C_{i}\) has in its field of view; more formally, it is defined as: \[\text{scope}(C_{i})=\left\{(x,y)\in\mathcal{R}\ \text{ s.t. }\ \beta_{i}\leq\arctan\left(\frac{y-b_{i}}{x-a_{i}}\right)\leq\beta_{i}+\alpha_{i}\ \text{ and }\ ](a_{i},b_{i}),(x,y)[\ \cap\ \mathcal{O}=\emptyset\right\} \tag{12}\] where we have used the natural extension of \(\arctan\) to \(\overline{\mathbb{R}}\), and \(](a_{i},b_{i}),(x,y)[\) denotes the open segment between the camera position \((a_{i},b_{i})\) and the point \((x,y)\), so that the second condition expresses an unobstructed line of sight.
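To make (11) and (12) concrete, the following Python sketch evaluates scope membership and the path cost for a polyline path. Circular obstacles and the use of `arctan2` in place of the extended \(\arctan\) are illustrative assumptions, not choices made in the text.

```python
import numpy as np

def segment_hits_disc(p0, p1, center, radius):
    """True if the segment p0-p1 meets the disc (center, radius); discs
    stand in for the obstacle set O purely for illustration."""
    p0, p1, c = map(np.asarray, (p0, p1, center))
    d = p1 - p0
    t = np.clip(np.dot(c - p0, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(p0 + t * d - c) < radius

def in_scope(p, cam, obstacles):
    """Membership test for scope(C_i) as in (12): p lies in the camera's
    angular sector [beta, beta + alpha] and the camera has line of sight to p."""
    a, b, beta, alpha = cam                  # camera position (a, b) and sector angles
    ang = np.arctan2(p[1] - b, p[0] - a)
    if (ang - beta) % (2.0 * np.pi) > alpha:
        return False
    return not any(segment_hits_disc((a, b), p, c, r) for c, r in obstacles)

def path_cost(path, cams, obstacles, eta):
    """Discretized objective (11) for a polyline path, normalized by the
    start-to-destination distance as described in the text."""
    path = np.asarray(path, dtype=float)
    seen = [any(in_scope(p, cam, obstacles) for cam in cams) for p in path[:-1]]
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    cost = float(np.sum(seg * (1.0 + eta * np.asarray(seen, dtype=float))))
    return cost / float(np.linalg.norm(path[-1] - path[0]))
```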
Simulated Annealing\(\rightarrow\) Given (11), the initial problem of making the adversary's best path as costly as possible can be restated as follows: \[\max_{(x_{i},y_{i},\theta_{i})_{1\leq i\leq N}}\mathcal{L}(\mathcal{A}(\{(x_{i},y_{i},\theta_{i})\}_{i})) \tag{13}\] for a given number of cameras \(N\). An important remark is that we actually do not need to compute \(\mathcal{L}(\mathcal{A}(\{(x_{i},y_{i},\theta_{i})\}_{i}))\) explicitly, since a similar measure is already computed during the path computation, namely \(u(x_{f})\). Therefore, in practice, the simulated annealing algorithm is performed using \(u(x_{f})\) as the optimality measure. The optimization required by (13) cannot be performed using classical methods, e.g. gradient descent, both because of the difficulty of estimating gradients and because of the existence of several local minima (see [23]). For this reason, we decided to use simulated annealing (SA, see [6]), an effective method for approximating global optima. The principle behind this technique was inspired by annealing in metallurgy and consists in proposing a new, potentially optimizing candidate at each step and accepting it always if it scores better, but still with some probability \(p\) otherwise. In our framework, at each step, SA randomly selects one of the \(N\) cameras and sets its parameters randomly. The new configuration obtained, \(C^{t+1}\), is then evaluated thanks to our optimality measure. If the new configuration scores better than the previous one, \(C^{t}\), we accept it. Otherwise, we still accept the new configuration with some probability. This is equivalent to accepting the new configuration with probability: \[p(\text{accept}\,|\,C^{t},C^{t+1},T)=\min\left\{1,\ \exp\left(\frac{\mathcal{L}(\mathcal{A}(C^{t+1}))-\mathcal{L}(\mathcal{A}(C^{t}))}{T}\right)\right\} \tag{14}\] where \(T\) is called the annealing temperature, and is typically chosen high at the beginning and then decreased gradually. In this work, we adopt a linear schedule for \(T\). The complete pseudocode for the algorithm is the following.

```
procedure SA(T0)
    C ← RANDOM()
    for k = 0 to K do
        T ← max(0, T0 · (1 − (k+1)/K))
        Cprop ← PERTURB(C)            ▷ re-sample the parameters of one random camera
        if p(accept | C, Cprop, T) > random(0, 1) then
            C ← Cprop
        end if
    end for
    return C
end procedure
```

Concerning optimality, SA is guaranteed to converge (in probability) to the global optimum if the candidates and the temperature satisfy some well-known weak conditions [39]. From a practical point of view, since the search space is infinite, convergence to the global optimum may be slow, but results are still very satisfying and may be further improved by using local optimization methods.
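For concreteness, here is a minimal Python rendering of the procedure. The callable `evaluate` stands for the optimality measure (e.g. \(u(x_{f})\) computed by algorithm \(\mathcal{A}\)); the environment bounds and camera parametrization are illustrative assumptions.

```python
import math
import random

def simulated_annealing(evaluate, n_cams, K=5000, T0=1.0, bounds=(0.0, 10.0)):
    """Maximize the placement measure of (13) with the SA scheme of the text:
    linear temperature schedule, one randomly re-sampled camera per step,
    and the acceptance rule (14)."""
    def random_camera():
        # Hypothetical parametrization (x, y, theta) of a camera.
        return (random.uniform(*bounds), random.uniform(*bounds),
                random.uniform(0.0, 2.0 * math.pi))

    config = [random_camera() for _ in range(n_cams)]
    score = evaluate(config)
    for k in range(K):
        T = max(1e-9, T0 * (1.0 - (k + 1) / K))               # linear schedule
        proposal = list(config)
        proposal[random.randrange(n_cams)] = random_camera()  # perturb one camera
        new_score = evaluate(proposal)
        # Accept improving moves always; worse moves with prob. exp(dScore / T).
        if new_score >= score or random.random() < math.exp((new_score - score) / T):
            config, score = proposal, new_score
    return config, score
```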
## V Results

After a sequence of tests, the model developed achieves fairly good performance. We discuss the results below, distinguishing between path planning and camera placement. Path planning\(\rightarrow\) We have implemented the path planning algorithm fully described in Section III. For the sake of clarity, we show the results in a specific case, where the norm of the vector field \(w\) does not depend on the distance to the camera position; this is motivated by the wish to easily visualize the trade-off between minimizing the path length and avoiding cameras. Figures 1 and 2 show an example of the output of our algorithm in a complex environment. Figure 3 shows how the tradeoff between path length and camera visibility influences the optimal path. The results can be interpreted very intuitively¹, since the more we privilege camera avoidance (compared to path length minimization), the more the agent tries to spend less time in the camera scope, and thus either makes a detour around the obstacle or (in the intermediate case) passes closer to the camera position².

Footnote 1: Because we allow exclusively four moves from each position in the grid, straight lines in the continuous space are either parallel to an axis or to a bisector of the axes; the latter are achieved with two straight segments, giving a longer path. This is a minor issue that does not affect the placement algorithm for a fixed number of cameras. However, a potential improvement to the current implementation could consist in post-processing the paths by trying to directly join points not separated by a camera scope (provided the joining segments do not intersect an obstacle).

Footnote 2: Due to the discretization, the number of points in the scope of the camera is not an increasing function of the distance to the camera. This is a minor problem that affects neither the general behavior of the algorithm nor the subsequent camera placement, especially when using high-resolution grids.

## VI Conclusion

This work proposes a new multi-camera placement modeling set-up to support the design of networks of visual sensors. The model allows for taking into account important constraints that are involved in computer-vision applications operating on the cameras' recordings. In fact, the anisotropic speed field is easily adaptable by changing the definition of \(w\) (in equation 1), so that it explicitly accounts for resolution or any other desired property. The formalism reduces the camera placement problem to a coupled optimization that builds partially on prior work in the field of motion planning. Furthermore, in addition to an optimal camera placement algorithm, we provide a way to assess the optimality of a given camera network. Limitations and Further Work\(\rightarrow\) This work presents a few limitations, which we list below (in decreasing order of importance) to be addressed in future work. First of all, we have not taken into account the triangulation constraint. This could be easily solved by adapting the SA algorithm. A second improvement would be to allow a non-fixed number of cameras during the optimal placement process. This could be achieved by jointly optimizing the number of cameras and their parameters through transdimensional SA [40]. One last limitation of our work is the computation speed, which is quite modest; the algorithm could be considerably accelerated (for instance by discretizing the positions of the cameras with a step size chosen according to the size of the mesh and a "characteristic size" of the obstacles). However, given the type of application, a high computing speed is not necessary, since the computation must be done only once for a given environment.
2307.13822
Classification of finite depth objects in bicommutant categories via anchored planar algebras
In our article [arXiv:1511.05226], we studied the commutant $\mathcal{C}'\subset \operatorname{Bim}(R)$ of a unitary fusion category $\mathcal{C}$, where $R$ is a hyperfinite factor of type $\rm II_1$, $\rm II_\infty$, or $\rm III_1$, and showed that it is a bicommutant category. In other recent work [arXiv:1607.06041, arXiv:2301.11114] we introduced the notion of a (unitary) anchored planar algebra in a (unitary) braided pivotal category $\mathcal{D}$, and showed that they classify (unitary) module tensor categories for $\mathcal{D}$ equipped with a distinguished object. Here, we connect these two notions and show that finite depth objects of $\mathcal{C}'$ are classified by connected finite depth unitary anchored planar algebras in $\mathcal{Z}(\mathcal{C})$. This extends the classification of finite depth objects of $\operatorname{Bim}(R)$ by connected finite depth unitary planar algebras.
André Henriques, David Penneys, James Tener
2023-07-25T21:27:39Z
http://arxiv.org/abs/2307.13822v1
# Classification of finite depth objects in bicommutant categories via anchored planar algebras ###### Abstract In our article [arXiv:1511.05226], we studied the commutant \(\mathcal{C}^{\prime}\subset\mathrm{Bim}(R)\) of a unitary fusion category \(\mathcal{C}\), where \(R\) is a hyperfinite factor of type \(\mathrm{II}_{1}\), \(\mathrm{II}_{\infty}\), or \(\mathrm{III}_{1}\), and showed that it is a bicommutant category. In other recent work [arXiv:1607.06041, arXiv:2301.11114] we introduced the notion of a (unitary) anchored planar algebra in a (unitary) braided pivotal category \(\mathcal{D}\), and showed that they classify (unitary) module tensor categories for \(\mathcal{D}\) equipped with a distinguished object. Here, we connect these two notions and show that finite depth objects of \(\mathcal{C}^{\prime}\) are classified by connected finite depth unitary anchored planar algebras in \(\mathcal{Z}(\mathcal{C})\). This extends the classification of finite depth objects of \(\mathrm{Bim}(R)\) by connected finite depth unitary planar algebras. ###### Contents * 1 Introduction * 2 Background * 2.1 Unitary anchored planar algebras and unitary module tensor categories * 2.2 Bicommutant categories * 3 Relative tensor product of module tensor categories * 3.1 The ladder category model * 3.2 Canonical centralizing structure * 3.3 The anchored planar algebra model for \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) * 3.4 The unitary setting * 4 Classification of finite depth objects in \(\mathcal{C}^{\prime}\) * 4.1 From finite depth objects to anchored planar algebras * 4.2 From anchored planar algebras to finite depth objects * 4.3 Uniqueness of \(\mathcal{M}\to\mathcal{C}^{\prime}\) ## 1 Introduction Let \(R\) be a hyperfinite factor which is either of type \(\mathrm{II}_{1}\), \(\mathrm{II}_{\infty}\), or \(\mathrm{III}_{1}\), and let \(X\in\mathsf{Bim}(R)\) be a finite index (=dualizable) self-dual bimodule. Recall that \(X\) is said to have _finite depth_ if the subcategory of \(\mathsf{Bim}(R)\) that it generates is a fusion category: the object \(X\) has finite depth iff the total number of isomorphism classes of irreducible \(R\)-\(R\)-bimodules which appear as summands of \(X^{\boxtimes n}\), \(n\in\mathbb{N}\), is finite. The bimodule \(X\) is called _symmetrically self-dual_ if it comes equipped with a unitary isomorphism \(r:X\stackrel{{\approx}}{{\to}}\bar{X}\), which is fixed under the involution \(r\mapsto\bar{r}^{*}\) of \(\mathrm{Hom}(X,\bar{X})\). For \(R\) as above, there is a well-known correspondence (see [12, 13, 14, 15], or [16, §3.2] for a review) between conjugacy classes of finite depth symmetrically self-dual bimodules, and isomorphism classes of unitary connected finite depth planar algebras: \[\left\{\begin{matrix}\text{Finite depth, symm. self-dual}\\ \text{bimodules }X\in\mathsf{Bim}(R)\end{matrix}\right\}\ \cong\ \left\{\begin{matrix}\text{Unitary connected finite}\\ \text{depth planar algebras}\end{matrix}\right\}\] [The remainder of the introduction is garbled in this extraction.] ## 2 Background ### Unitary anchored planar algebras and unitary module tensor categories We rapidly recall the notion of unitary anchored planar algebra from [11, 12]. Let \(\mathcal{V}\) be a unitary ribbon fusion category. Its unique unitary spherical structure induces a twist denoted \(\theta_{v}:v\to v\). **Definition 2.1**.: A unitary anchored planar algebra \((\mathcal{P},r,\psi)\) over \(\mathcal{V}\) consists of: 1. A sequence of _box objects_ \(\mathcal{P}[n]\) of \(\mathcal{V}\) for each \(n\in\mathbb{N}_{\geq 0}\), together with a map \(Z(T)\) in \(\mathcal{V}\) from the tensor product of the input box objects to the output box object for each anchored planar tangle \(T\).
For example: \[Z\left(\,[\text{tangle diagram omitted}]\,\right):\mathcal{P}[3]\otimes\mathcal{P}[5]\to\mathcal{P}[6].\] This data should satisfy: * (identity) the identity anchored tangle acts as the identity morphism * (composition) if \(S\) and \(T\) are composable anchored planar tangles at input disk \(i\) of \(S\), then \(Z(S\circ_{i}T)=Z(S)\circ_{i}Z(T)\). * (anchor dependence) the following relations hold: \[-\,\text{(braiding)}\qquad Z\left(\,[\text{diagram omitted}]\,\right)=Z\left(\,[\text{diagram omitted}]\,\right)\circ\beta_{\mathcal{P}[j],\mathcal{P}[i+k]}\] \[-\,\text{(twist)}\qquad\qquad Z\left(\,[\text{diagram omitted}]\,\right)=\theta_{\mathcal{P}[n]}.\] (Here, an \(n\) next to a string indicates \(n\) parallel strings.) 2. Real structures \(r_{n}:\mathcal{P}[n]\to\overline{\mathcal{P}[n]}\) satisfying, for every anchored planar tangle \(T\), \[\overline{Z(T)}\circ(r\otimes\cdots\otimes r)=r\circ Z(\overline{T}),\] where \(\overline{T}\) is the reflection of \(T\). 3. A morphism \(\psi_{\mathcal{P}}:\mathcal{P}[0]\to 1_{\mathcal{V}}\) satisfying compatibility axioms expressed by diagrams that are garbled in this extraction. We now rapidly recall the notion of a unitary module fusion category from [11, 12]. Let \(\mathcal{V}\) be as above. **Definition 2.2**.: A _unitary module fusion category_ \((\mathcal{C},\Phi^{Z})\) consists of a unitary fusion category \(\mathcal{C}\) together with a pivotal braided unitary tensor functor \(\Phi^{Z}:\mathcal{V}\to Z(\mathcal{C})\). A _pointing_ of \((\mathcal{C},\Phi^{Z})\) consists of a real object \(c\in\mathcal{C}\) such that \(c\) and \(\Phi^{Z}(\mathcal{V})\) generate \(\mathcal{C}\) under taking tensor products, orthogonal direct sums, and orthogonal direct summands. Our main theorems in [11, 12] established an equivalence of categories \[\left\{\begin{aligned} &\text{Connected finite depth unitary}\\ &\text{anchored planar algebras in $\mathcal{V}$}\end{aligned}\right\}\ \cong\ \left\{\begin{aligned} &\text{Pointed unitary module}\\ &\text{fusion categories over $\mathcal{V}$}\end{aligned}\right\}.\] ### Bicommutant categories The first author introduced the notion of bicommutant category in [11], and examples were constructed from unitary fusion categories and from conformal nets in [11] and [12], respectively. We refer the reader to [10, 11, 12] for the basics of \(\mathrm{C}^{*}\)-tensor categories. **Definition 2.3**.: A _bi-involutive_ tensor category is a \(\mathrm{C}^{*}\)-tensor category \(\mathcal{C}\) equipped with a covariant anti-linear unitary functor \(\overline{\,\cdot\,}:\mathcal{C}\to\mathcal{C}\) called the _conjugate_. There are coherence natural isomorphisms \(\varphi_{c}:c\to\overline{\overline{c}}\), \(\nu_{a,b}:\overline{a}\otimes\overline{b}\to\overline{b\otimes a}\), and \(r:1\to\overline{1}\) satisfying monoidal coherences (see [11, Def. 2.3]). Basic examples include \(\mathsf{Hilb}\), \(\mathsf{Bim}(R)\) for a von Neumann algebra \(R\), and unitary tensor categories, a.k.a. semisimple rigid \(\mathrm{C}^{*}\)-tensor categories with simple unit object. We refer the reader to [11, p.5] or [12, §3.5] for more details on this last example.
**Definition 2.4**.: A _bi-involutive tensor functor_ \(F:\mathcal{A}\to\mathcal{B}\) between bi-involutive tensor categories is a unitary tensor functor equipped with a unitary natural isomorphism \(\chi_{a}:F(\overline{a})\to\overline{F(a)}\) for \(a\in\mathcal{A}\) satisfying monoidal and involutive coherences (see [11, Def. 2.5] or [12, Def. 3.35]). A _representation_ of a bi-involutive category \(\mathcal{C}\) is a von Neumann algebra \(R\) together with a bi-involutive tensor functor \(\alpha:\mathcal{C}\to\mathsf{Bim}(R)\). In [11, §5], we introduced a certain extra structure on \(\mathsf{Bim}(R)\) which we called a _positive structure_. These positive structures play no role in the present paper, as representations of unitary fusion categories uniquely extend to positive representations by [11, Thm. A], so we will not emphasize them here. **Definition 2.5**.: Suppose \((\mathcal{C},\alpha)\) is a bi-involutive tensor category equipped with a representation into \(\mathsf{Bim}(R)\). The _commutant category_ \(\mathcal{C}^{\prime}\) is the unitary relative center of \(\alpha(\mathcal{C})\) inside \(\mathsf{Bim}(R)\) [11, §2.3]. It has: * objects: bimodules \(X\in\mathsf{Bim}(R)\) equipped with half-braidings \(e_{X}=\{e_{X,c}:X\boxtimes\alpha(c)\to\alpha(c)\boxtimes X\}_{c\in\mathcal{C}}\) satisfying the usual hexagon relation and naturality relation. Our graphical convention for the half-braiding \(e_{X}\) is a crossing diagram (picture omitted). [A span of text is garbled in this extraction; it contained the remaining structure of \(\mathcal{C}^{\prime}\) and the statement of the proposition proved next, which asserts that for a bicommutant category \(\mathcal{B}\subset\mathsf{Bim}(R)\) there is a canonical braided tensor equivalence \(Z(\mathcal{B})^{\mathrm{rev}}\to Z(\mathcal{B}^{\prime})\) fitting into a commutative diagram (1).] Proof.: To construct a functor \(Z(\mathcal{B})^{\mathrm{rev}}\to Z(\mathcal{B}^{\prime})\), an object \[\underline{X}:=(X,e_{X}=\{e_{X,Y}:X\otimes Y\to Y\otimes X\}_{Y\in\mathcal{B}})\] in \(Z(\mathcal{B})^{\mathrm{rev}}\) maps to \[\underline{\underline{X}}:=(\underline{X},e_{\underline{X}})=((X,e_{X}),e_{\underline{X}})\] in \(Z(\mathcal{B}^{\prime})\), where for \(\underline{Y}:=(Y,e_{Y})\in\mathcal{B}^{\prime}\) we have \(e_{\underline{X},\underline{Y}}=e_{Y,X}^{-1}\). This is evidently a strict tensor functor.
It is a braided tensor functor because the braiding on \(Z(\mathcal{B})^{\mathrm{rev}}\) is given by \[\beta^{Z(\mathcal{B})^{\mathrm{rev}}}_{\underline{X},\underline{Y}}=\big(\beta^{Z(\mathcal{B})}_{\underline{Y},\underline{X}}\big)^{-1}=e_{Y,X}^{-1},\] which agrees with the braiding on \(Z(\mathcal{B}^{\prime})\): \[\beta^{Z(\mathcal{B}^{\prime})}_{\underline{X},\underline{Y}}=e_{\underline{X},\underline{Y}}=e_{Y,X}^{-1}.\] The diagram (1) visibly commutes. It remains to show the top arrow is an equivalence. The same construction replacing \(\mathcal{B}\) with \(\mathcal{B}^{\prime}\) gives a strict braided tensor functor \(Z(\mathcal{B}^{\prime})^{\mathrm{rev}}\to Z(\mathcal{B}^{\prime\prime})\). Hence we also get a strict braided tensor functor \(Z(\mathcal{B}^{\prime})\to Z(\mathcal{B}^{\prime\prime})^{\mathrm{rev}}\) by taking the reverse braiding on each side. The composite map \(Z(\mathcal{B})^{\mathrm{rev}}\to Z(\mathcal{B}^{\prime})\to Z(\mathcal{B}^{\prime\prime})^{\mathrm{rev}}\) is the map induced from the equivalence \(\mathcal{B}\to\mathcal{B}^{\prime\prime}\). Again, replacing \(\mathcal{B}\) with \(\mathcal{B}^{\prime}\), the composite map \(Z(\mathcal{B}^{\prime})\to Z(\mathcal{B}^{\prime\prime})^{\mathrm{rev}}\to Z(\mathcal{B}^{\prime\prime\prime})\) is also the map induced from the equivalence \(\mathcal{B}^{\prime}\to\mathcal{B}^{\prime\prime\prime}\). This completes the proof. We now prove some useful results in the case that \(\mathcal{C}\) is a unitary fusion category fully faithfully embedded in \(\mathsf{Bim}(R)\), for a von Neumann factor \(R\) not of type I. In [10], we proved that both \(\mathcal{C}^{\prime\prime}=\mathsf{Hilb}(\mathcal{C})=\mathcal{C}\otimes_{\mathsf{Hilb}_{\mathrm{fd}}}\mathsf{Hilb}\) and \(\mathcal{C}^{\prime}\) are bicommutant categories. **Lemma 2.8**.: _If \(\mathcal{C}\) is a unitary fusion category, then \(Z(\mathsf{Hilb}(\mathcal{C}))\cong\mathsf{Hilb}(Z(\mathcal{C}))\)._ Proof.: The canonical functor \(\mathsf{Hilb}(Z(\mathcal{C}))\to Z_{\mathcal{C}}(\mathsf{Hilb}(\mathcal{C}))=Z(\mathsf{Hilb}(\mathcal{C}))\) is visibly fully faithful, but it is not obvious that it is essentially surjective. The induction functor \(I:\mathcal{C}\to Z(\mathcal{C})\) (adjoint to the forgetful functor) is such that every object \((X,e_{X})\in Z(\mathcal{C})\) is a direct summand of \(I(X)\). The same property holds true for the corresponding functor \(\mathsf{Hilb}(I):\mathsf{Hilb}(\mathcal{C})\to\mathsf{Hilb}(Z(\mathcal{C}))\). Every \((X,e_{X})\in Z(\mathsf{Hilb}(\mathcal{C}))\) is a direct summand of \(\mathsf{Hilb}(I)(X)\in\mathsf{Hilb}(Z(\mathcal{C}))\), hence lives in \(\mathsf{Hilb}(Z(\mathcal{C}))\). **Proposition 2.9**.: _The functors \(Z(\mathcal{C})\to\mathcal{C}^{\prime}\) and \(Z(\mathcal{C}^{\prime})\to\mathcal{C}^{\prime}\) are fully faithful._ Proof.: We first show that \(Z(\mathcal{C})\to\mathcal{C}^{\prime}\) is fully faithful. Let \((X,e_{X}),(Y,e_{Y})\in Z(\mathcal{C})\). Every morphism \((X,e_{X})\to(Y,e_{Y})\) in \(\mathcal{C}^{\prime}\) is a morphism \(X\to Y\) in \(\mathcal{C}\subset\mathsf{Bim}(R)\) compatible with the half-braidings. This is exactly the definition of a morphism in \(Z(\mathcal{C})\). By Lemma 2.8, it follows that \(\mathsf{Hilb}(Z(\mathcal{C}))\cong Z(\mathsf{Hilb}(\mathcal{C}))\to\mathcal{C}^{\prime}\) is also fully faithful. Using that \(\mathsf{Hilb}(\mathcal{C})\) is a bicommutant category, the result now follows by commutativity of (1) for \(\mathcal{B}=\mathsf{Hilb}(\mathcal{C})\).
We will need the following corollary in our construction later on. **Corollary 2.10**.: _Suppose \(\mathcal{M}\subset\mathcal{C}^{\prime}\) is a full tensor subcategory containing the image of \(Z(\mathcal{C})\) in \(\mathcal{C}^{\prime}\). Then there is a fully faithful braided tensor functor \(Z(\mathcal{C})^{\mathrm{rev}}\to Z(\mathcal{M})\) such that the following diagram commutes (diagram omitted)._ Proof.: The image of \(Z(\mathcal{C})\) in \(\mathcal{C}^{\prime}\) lifts to a commutative diagram (omitted) in which the horizontal arrows are braided tensor functors. The image of \(Z(\mathcal{C})^{\mathrm{rev}}\hookrightarrow Z(\mathcal{C}^{\prime})\) lies in \(Z(\mathcal{M})\) because a half-braiding with \(\mathcal{C}^{\prime}\) restricts to a half-braiding with \(\mathcal{M}\). Finally, the braided tensor functor \(Z(\mathcal{C})^{\mathrm{rev}}\to Z(\mathcal{M})\) is automatically fully faithful as \(Z(\mathcal{C})^{\mathrm{rev}}\) is modular [13, Cor. 3.26]. ## 3 Relative tensor product of module tensor categories In this section, we analyze the relative tensor product \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) of a \(\mathcal{V}\)-module tensor category \((\mathcal{D},\Phi^{Z}:\mathcal{V}\to Z(\mathcal{D}))\) and a \(\mathcal{V}^{\mathrm{rev}}\)-module tensor category \((\mathcal{C},\Psi^{Z}:\mathcal{V}\to Z(\mathcal{C}))\). Here, \(\Psi^{Z}\) is a reverse-braided functor, meaning that it satisfies \(\Psi^{Z}(\beta_{u,v})=\beta_{\Psi^{Z}(v),\Psi^{Z}(u)}^{-1}\) for \(u,v\in\mathcal{V}\). We shall use the notational convention \(\Phi:=\mathrm{Forget}\circ\Phi^{Z}\) and \(\Psi:=\mathrm{Forget}\circ\Psi^{Z}\). Throughout this section, we assume \(\mathcal{V}\) is semisimple with simple unit object. The monoidal category \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) is defined via the universal property which states that for every tensor category \(\mathcal{E}\), the data of a \(\mathcal{V}\)-balanced tensor functor \(B:\mathcal{C}\boxtimes\mathcal{D}\to\mathcal{E}\) is equivalent to the data of a tensor functor \(B^{\prime}:\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\to\mathcal{E}\), via a commutative diagram (omitted). Here, a tensor functor \(\mathcal{C}\boxtimes\mathcal{D}\to\mathcal{E}\) is called \(\mathcal{V}\)-_balanced_ if it comes with monoidal natural isomorphisms \[\eta_{c,v,d}:B\big((c\otimes\Psi v)\boxtimes d\big)\to B\big(c\boxtimes(\Phi v\otimes d)\big)\] for \(c\in\mathcal{C}\), \(d\in\mathcal{D}\), and \(v\in\mathcal{V}\), satisfying the coherence which states that passing \(v_{1}\otimes v_{2}\) from one side to the other is the same as first passing \(v_{2}\) and then passing \(v_{1}\).
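Schematically, suppressing the tensorators of \(\Phi\) and \(\Psi\) together with all associators (a paraphrase of the coherence, not its precise form), this condition reads: \[\eta_{c,\,v_{1}\otimes v_{2},\,d}\;=\;\eta_{c,\,v_{1},\,\Phi v_{2}\otimes d}\,\circ\,\eta_{c\otimes\Psi v_{1},\,v_{2},\,d}.\]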
Note that the monoidal coherence involves the two half-braidings (the bottom vertical arrows below, which move \(\Psi v_{1}\) past \(c_{2}\) on the left and \(\Phi v_{2}\) past \(d_{1}\) on the right; the top vertical arrows are the tensorators of \(B\)): \[\begin{CD}B((c_{1}\otimes\Psi v_{1})\boxtimes d_{1})\otimes B((c_{2}\otimes\Psi v_{2})\boxtimes d_{2})@>{\eta\otimes\eta}>>B(c_{1}\boxtimes(\Phi v_{1}\otimes d_{1}))\otimes B(c_{2}\boxtimes(\Phi v_{2}\otimes d_{2}))\\ @VVV@VVV\\ B((c_{1}\otimes\Psi v_{1}\otimes c_{2}\otimes\Psi v_{2})\boxtimes(d_{1}\otimes d_{2}))@.B((c_{1}\otimes c_{2})\boxtimes(\Phi v_{1}\otimes d_{1}\otimes\Phi v_{2}\otimes d_{2}))\\ @VVV@VVV\\ B((c_{1}\otimes c_{2}\otimes\Psi v_{1}\otimes\Psi v_{2})\boxtimes(d_{1}\otimes d_{2}))@>{\eta}>>B((c_{1}\otimes c_{2})\boxtimes(\Phi v_{1}\otimes\Phi v_{2}\otimes d_{1}\otimes d_{2}))\end{CD} \tag{2}\] [A span of text is garbled in this extraction; it contained the ladder category model \(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D}\) for the relative tensor product (§3.1), Remark 3.1, and the statement of Lemma 3.3, which asserts: (1) if \(\Phi:\mathcal{V}\to\mathcal{D}\) is fully faithful, then the canonical functor \(F:\mathcal{C}\to\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\), \(F(a)=a\boxtimes_{\mathcal{V}}1_{\mathcal{D}}\), is fully faithful; (2) if \(\Psi:\mathcal{V}\to\mathcal{C}\) is fully faithful, then the analogous functor \(G:\mathcal{D}\to\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) is fully faithful.] Proof.: We only prove (1) as (2) is similar. We use the ladder category model \(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D}\) for \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\). Observe that \[(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D})\big(F(a)\to F(b)\big)=(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D})\big(a\boxdot_{\mathcal{V}}1_{\mathcal{D}}\to b\boxdot_{\mathcal{V}}1_{\mathcal{D}}\big)=\bigoplus_{v\in\operatorname{Irr}(\mathcal{V})}\mathcal{C}\big(a\to b\otimes\Psi v\big)\otimes\mathcal{D}\big(\Phi v\to 1_{\mathcal{D}}\big).\] If \(\Phi\) is fully faithful, the only \(v\in\operatorname{Irr}(\mathcal{V})\) with \(\mathcal{D}(\Phi v\to 1_{\mathcal{D}})\neq 0\) is \(1_{\mathcal{V}}\), and moreover, \(\mathcal{D}(\Phi 1_{\mathcal{V}}\to 1_{\mathcal{D}})=\mathcal{D}(\Phi 1_{\mathcal{V}}\to\Phi 1_{\mathcal{V}})\cong\mathcal{V}(1_{\mathcal{V}}\to 1_{\mathcal{V}})=\mathbb{C}\). The result follows. **Corollary 3.4**.: _Suppose \(\mathcal{C},\mathcal{D},\mathcal{V}\) are all fusion. If \(\Psi:\mathcal{V}\to\mathcal{C}\) or \(\Phi:\mathcal{V}\to\mathcal{D}\) is fully faithful, then \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) is fusion._ Proof.: When \(\mathcal{C},\mathcal{D},\mathcal{V}\) are all fusion, then \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) is always multifusion as it is \(1\)-composition in the \(4\)-category of braided fusion categories (see Remark 3.1). When \(\Psi:\mathcal{V}\to\mathcal{C}\) or \(\Phi:\mathcal{V}\to\mathcal{D}\) is fully faithful, then \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) has simple unit object by Lemma 3.3, and is thus fusion. ### Canonical centralizing structure We now discuss the notion of _centralizing structure_ for two tensor categories \(\mathcal{A},\mathcal{B}\) equipped with tensor functors to some other tensor category \(\mathcal{C}\) (see [1, Def. 3.24]). **Definition 3.5**.: Let \(\mathcal{A},\mathcal{B},\mathcal{C}\) be tensor categories, and suppose we have tensor functors \(F:\mathcal{A}\to\mathcal{C}\) and \(G:\mathcal{B}\to\mathcal{C}\). A _centralizing structure_ is a family of natural isomorphisms \(\{\sigma_{a,b}:F(a)\otimes G(b)\to G(b)\otimes F(a)\}_{a\in\mathcal{A},b\in\mathcal{B}}\) satisfying the following conditions, where coherence isomorphisms have been suppressed: * For \(a\in\mathcal{A}\) and \(b,b^{\prime}\in\mathcal{B}\), \((\operatorname{id}_{G(b)}\otimes\sigma_{a,b^{\prime}})\circ(\sigma_{a,b}\otimes\operatorname{id}_{G(b^{\prime})})=\sigma_{a,b\otimes b^{\prime}}\) * For \(a,a^{\prime}\in\mathcal{A}\) and \(b\in\mathcal{B}\), \((\sigma_{a,b}\otimes\operatorname{id}_{F(a^{\prime})})\circ(\operatorname{id}_{F(a)}\otimes\sigma_{a^{\prime},b})=\sigma_{a\otimes a^{\prime},b}\).
A centralizing structure for \(F:\mathcal{A}\to\mathcal{C}\) and \(G:\mathcal{B}\to\mathcal{C}\) is equivalent to the data needed to promote \(F\boxtimes G:\mathcal{A}\boxtimes\mathcal{B}\to\mathcal{C}\) to a tensor functor. **Construction 3.6**.: By universality of \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) for \(\mathcal{V}\)-balanced functors out of \(\mathcal{C}\boxtimes\mathcal{D}\), \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) comes equipped with a canonical centralizing structure for the canonical functors from Lemma 3.3. Indeed, since \[(c\boxtimes 1)\otimes(1\boxtimes d)=c\boxtimes d=(1\boxtimes d)\otimes(c\boxtimes 1)\] in \(\mathcal{C}\boxtimes\mathcal{D}\) for all \(c\in\mathcal{C}\) and \(d\in\mathcal{D}\), we get a canonical isomorphism \(\sigma_{c,d}:F(c)\otimes G(d)\to G(d)\otimes F(c)\) from the tensorator of \(-\boxtimes_{\mathcal{V}}-:\mathcal{C}\boxtimes\mathcal{D}\to\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\): \[(c\boxtimes_{\mathcal{V}}1)\otimes(1\boxtimes_{\mathcal{V}}d)\xrightarrow{\cong}c\boxtimes_{\mathcal{V}}d\xrightarrow{\cong}(1\boxtimes_{\mathcal{V}}d)\otimes(c\boxtimes_{\mathcal{V}}1).\] The centralizing axioms are satisfied by associativity of the tensorator. Moreover, we have the following additional property of this centralizing structure: whenever \(c\) or \(d\) is in the image of \(\mathcal{V}\), the centralizing structure \(\sigma\) is compatible with the half-braiding coming from the image of \(\mathcal{V}\) in \(Z(\mathcal{C})\) or \(Z(\mathcal{D})\), respectively. For example, when \(c=\Psi v\), a diagram (omitted) commutes: its top square is naturality of \(\eta\) (recall \(\zeta_{v}=\eta_{1,v,1}\)), the left triangle is the definition of \(\sigma\), and the bottom right pentagon is (2) for \(B=-\boxtimes_{\mathcal{V}}-\), setting \(c_{1}=c_{2}=1_{\mathcal{C}}\), \(v_{1}=1_{\mathcal{V}}\), \(v_{2}=v\), \(d_{1}=d\), and \(d_{2}=1_{\mathcal{D}}\). Similarly, when \(d=\Phi v\), an analogous diagram (3) commutes (omitted). **Lemma 3.7**.: _If \(\Psi^{Z}:\mathcal{V}\to Z(\mathcal{C})\) is fully faithful, then the canonical functor from \(\mathcal{D}\) to the relative commutant \(Z_{\mathcal{C}}(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D})\) of \(\mathcal{C}\) inside \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) is fully faithful._ Proof.: Using the ladder category model \(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D}\), \[(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D})(1\boxdot d_{1}\to 1\boxdot d_{2})=\bigoplus_{v\in\operatorname{Irr}(\mathcal{V})}\mathcal{C}\big(1\to 1\otimes\Psi v\big)\otimes\mathcal{D}\big(\Phi v\otimes d_{1}\to d_{2}\big).\] A morphism \(f:1\boxdot d_{1}\to 1\boxdot d_{2}\) lies in \(Z_{\mathcal{C}}(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D})\) exactly when it is compatible with the half-braidings, which are induced from the centralizing structure.
Writing \(f=\sum_{v\in\operatorname{Irr}(\mathcal{V}),i}g_{v,i}\otimes h_{v,i}\) as a sum of its \(v\)-components, where \(g_{v,i}:1_{\mathcal{C}}\to\Psi(v)\) and \(h_{v,i}:\Phi(v)\otimes d_{1}\to d_{2}\), and where the \(h_{v,i}\) form a basis of \(\mathcal{D}\big(\Phi v\otimes d_{1}\to d_{2}\big)\), this compatibility reduces via (3) to \[e_{\Psi v,c}\circ(g_{v,i}\otimes\operatorname{id}_{c})=\operatorname{id}_{c}\otimes g_{v,i}\qquad\forall\,c\in\mathcal{C},\ \forall\,v\in\operatorname{Irr}(\mathcal{V}).\] This implies that \(g_{v,i}\in\mathcal{C}(1\to\Psi v)\) is actually a morphism in \(Z(\mathcal{C})\), and thus \(g_{v,i}\in Z(\mathcal{C})(1\to\Psi^{Z}v)\). Since \(\Psi^{Z}\) was assumed to be fully faithful, \(g_{v,i}=0\) unless \(v=1_{\mathcal{V}}\), in which case each \(g_{1,i}:1_{\mathcal{C}}\to 1_{\mathcal{C}}\) is a scalar. We conclude that \(Z_{\mathcal{C}}(\mathcal{C}\boxdot_{\mathcal{V}}\mathcal{D})(1\boxdot d_{1}\to 1\boxdot d_{2})\cong\mathcal{D}(d_{1}\to d_{2})\). **Theorem 3.8**.: _Suppose \(\mathcal{C}\) is a spherical fusion category, \(\mathcal{E}\) is any tensor category with simple unit, and \(\mathcal{C}\hookrightarrow\mathcal{E}\) is a fully-faithful embedding. The tensor product map \(\otimes:Z_{\mathcal{C}}(\mathcal{E})\boxtimes_{Z(\mathcal{C})}\mathcal{C}\to\mathcal{E}\) is fully faithful._ Proof.: We must show that whenever \(c_{1},c_{2}\in\mathcal{C}\) and \(e_{1},e_{2}\in Z_{\mathcal{C}}(\mathcal{E})\), we have \[\bigoplus_{z\in\operatorname{Irr}(Z(\mathcal{C}))}Z_{\mathcal{C}}(\mathcal{E})(e_{1}\to e_{2}\otimes z)\otimes\mathcal{C}(z\otimes c_{1}\to c_{2})\quad\cong\quad\mathcal{E}(e_{1}\otimes c_{1}\to e_{2}\otimes c_{2}).\] Using rigidity and semisimplicity of \(\mathcal{C}\), we may assume \(c_{1}=1\) and \(c_{2}=X:=\bigoplus_{c\in\operatorname{Irr}(\mathcal{C})}c\). Our question becomes: \[\bigoplus_{z\in\operatorname{Irr}(Z(\mathcal{C}))}Z_{\mathcal{C}}(\mathcal{E})(e_{1}\to e_{2}\otimes z)\otimes\mathcal{C}(z\to X)\quad\stackrel{{?}}{{\cong}}\quad\mathcal{E}(e_{1}\to e_{2}\otimes X).\] The right hand side \(\mathcal{E}(e_{1}\to e_{2}\otimes X)\) carries a canonical action of Ocneanu's tube algebra \(\mathsf{Tube}(\mathcal{C})\) (picture omitted). We can decompose the above representation into irreps of \(\mathsf{Tube}(\mathcal{C})\). Recall from [13, 14] that \(\mathsf{Rep}(\mathsf{Tube}(\mathcal{C}))\cong Z(\mathcal{C})^{\operatorname{op}}\), and every irrep is of the form \(H_{z}=\mathcal{C}(z\to X)\) where \(z\in\operatorname{Irr}(Z(\mathcal{C}))\), which carries a similar \(\mathsf{Tube}(\mathcal{C})\)-action as above. Hence we can decompose \[\mathcal{E}(e_{1}\to e_{2}\otimes X)\cong\bigoplus_{z\in\operatorname{Irr}(Z(\mathcal{C}))}M_{z}\otimes H_{z}=\bigoplus_{z\in\operatorname{Irr}(Z(\mathcal{C}))}M_{z}\otimes\mathcal{C}(z\to X)\] where \(M_{z}\) is a multiplicity space. It remains to identify \(M_{z}\) with \(Z_{\mathcal{C}}(\mathcal{E})(e_{1}\to e_{2}\otimes z)\). Since \(\mathcal{C}\hookrightarrow\mathcal{E}\) fully faithfully, \(H_{z}=\mathcal{C}(z\to X)\cong\mathcal{E}(z\to X)\), so \[M_{z}\cong\mathsf{Rep}(\mathsf{Tube}(\mathcal{C}))(\mathcal{E}(z\to X)\to\mathcal{E}(e_{1}\to e_{2}\otimes X)).\] Observe that \(\mathcal{E}(z\to-)\) and \(\mathcal{E}(e_{1}\to e_{2}\otimes-)\) are both functors \(\mathcal{E}\to\mathsf{Vec}\).
By the Yoneda Lemma, \(\operatorname{Hom}(\mathcal{E}(z\to-)\to\mathcal{E}(e_{1}\to e_{2}\otimes-)) \cong\mathcal{E}(e_{1}\to e_{2}\otimes z)\) canonically. Since maps of \(\operatorname{\mathsf{Tube}}(\mathcal{C})\) representations are maps between the underlying vector spaces which intertwine the \(\operatorname{\mathsf{Tube}}(\mathcal{C})\)-actions, we see that \(M_{z}\) is exactly the subspace of \(\mathcal{E}(e_{1}\to e_{2}\otimes z)\) which intertwines the two \(\operatorname{\mathsf{Tube}}(\mathcal{C})\)-actions, i.e., \(Z_{\mathcal{C}}(\mathcal{E})(e_{1}\to e_{2}\otimes z)\). Using the fact that \(\mathcal{C}^{\prime}=Z_{\mathcal{C}}(\operatorname{\mathsf{Bim}}(R))\), we have the following immediate corollary. **Corollary 3.9**.: \(\operatorname{\mathsf{Bim}}(R)\cong\mathcal{C}^{\prime}\boxtimes_{Z(\mathcal{ C})}\mathcal{C}\)_._ Proof.: After applying Theorem 3.8, the only thing that remains to show is that the functor \(\mathcal{C}^{\prime}\boxtimes_{Z(\mathcal{C})}\mathcal{C}\to\operatorname{ \mathsf{Bim}}(R)\) is dominant. Indeed, for any object \(X\in\operatorname{\mathsf{Bim}}(R)\), \(X\) is a summand of \(\bigoplus_{c\in\operatorname{Irr}(\mathcal{C})}c\otimes X\otimes\overline{c}\in \mathcal{C}^{\prime}\) by [13, Lem. 6.3]. ### The anchored planar algebra model for \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) In this section, we give a model \(\mathcal{E}\) for \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) using anchored planar algebras in the setting that \(\Phi:\mathcal{V}\to\mathcal{D}\) admits a right adjoint (so that there is an anchored planar algebra associated to \(\mathcal{D}\)). We begin by defining a full subcategory \(\mathcal{E}_{0}\), and \(\mathcal{E}\) is the (unitary) Cauchy completion. * Objects in \(\mathcal{E}_{0}\) are formal symbols \(c\otimes x^{i}\) for \(c\in\mathcal{C}\) and \(i\geq 0\). By convention, \(x^{0}=1\). * Morphism spaces are defined by \[\mathcal{E}_{0}(a\otimes x^{i}\to b\otimes x^{j}):=\mathcal{C}(a\to b \otimes\Psi\mathcal{P}[j+i]),\] where \(\mathcal{P}\) is the anchored planar algebra in \(\mathcal{V}\) corresponding to \((\mathcal{D},\Phi^{Z}:\mathcal{V}\to Z(\mathcal{D}),x)\). 
We represent morphisms in the graphical calculus by \[\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)* {\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0)*{\xy(0,0*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xy(0,0*{\xyxy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{ \xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xy(0,0)*{ \xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{ \xyxy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{\xy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{ \xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{\xy(0,0)*{\xyxy(0,0)*{ \xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0,0)*{ \xyxy(0,0)*{\xyxy(0,0)*{\xyxy(0)*{\xyxy(0)*{\xyxy(0)*{\xyxy(0)*{\xyxy(0)*{\xy(0)*{\xyxy(0)*{\xyxy(0)*{\xyxy(0)*{\xyxy(0)*{\xyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0*{\xyxyxy(0)*{\xyxyxy(0)*{\xyxyxy(0*{\xyxyxy(0*{\bullet we have 
\[\includegraphics[]{figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figure
s/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/f
igures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figures/figu
**Remark 3.10**.: When \(\Phi:\mathcal{V}\to\mathcal{D}\) is fully faithful and \(\mathcal{C}\) has simple unit object, then the unit of \(\mathcal{E}_{0}\) is also
simple. Indeed, in that case, \(\mathcal{P}[0]=1_{\mathcal{V}}\), and hence \[\operatorname{End}_{\mathcal{E}_{0}}(1_{\mathcal{C}}\otimes x^{0})=\mathcal{C}( 1_{\mathcal{C}}\to 1_{\mathcal{C}}\otimes\Psi\mathcal{P}[0])\cong\mathcal{C}(1_{ \mathcal{C}}\to 1_{\mathcal{C}})\cong\mathbb{C}.\] **Remark 3.11**.: When \(\Psi:\mathcal{V}\to\mathcal{C}\) is dominant and admits a left adjoint \(I\), as in [10], \(\mathcal{E}\) can be identified with \(\mathsf{coMod}_{\mathcal{D}}(\Phi A)\) for \(A=I(1_{\mathcal{C}})\). Indeed, \[\mathcal{E}_{0}\big{(}\Psi u\otimes x^{i}\to\Psi v\otimes x^{j} \big{)} =\mathcal{C}\big{(}\Psi u\to\Psi v\otimes\Psi\mathcal{P}[j+i]\big{)}\] \[\cong\mathcal{C}\big{(}1_{\mathcal{C}}\to\Psi(\overline{u} \otimes v\otimes\mathcal{P}[j+i])\big{)}\] \[\cong\mathcal{V}\big{(}A\to\overline{u}\otimes v\otimes\mathrm{ Tr}_{\mathcal{V}}(x^{j+i})\big{)}\] \[\cong\mathcal{V}\big{(}A\to\mathrm{Tr}_{\mathcal{V}}(x^{i} \otimes\Phi\overline{u}\otimes\Phi v\otimes x^{j})\big{)}\] \[=\mathcal{V}\big{(}A\to\underline{\mathrm{Hom}}_{\mathcal{D}}( \Phi u\otimes x^{i}\to\Phi v\otimes x^{j})\big{)}\] \[=\mathcal{D}\big{(}\Phi u\otimes x^{i}\otimes\Phi A\to\Phi v \otimes x^{j}\big{)}\] \[\cong\mathsf{coMod}_{\mathcal{D}}(A)\big{(}\Phi u\otimes x^{i} \otimes\Phi A\to\Phi v\otimes x^{j}\otimes\Phi A\big{)}\] where the \(x\) which appears in the left hand side is a formal symbol, and the \(x\) which appears in the right hand side is our chosen generator of \(\mathcal{D}\). **Proposition 3.12**.: _The monoidal category \(\mathcal{E}\) constructed above is canonically equivalent, as monoidal category, to the balanced tensor product \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\)._ Proof.: The category \(\mathcal{E}\) is the idempotent completion of \(\mathcal{E}_{0}\), and the balanced tensor product \(\mathcal{C}\boxtimes_{\mathcal{V}}\mathcal{D}\) is canonically equivalent to the idempotent completion of the ladder category \(\mathcal{C}\square_{\mathcal{V}}\mathcal{D}_{0}\), where \(\mathcal{D}_{0}\subset\mathcal{D}\) is the full subcategory on the objects of the form \(\Phi v\otimes x^{i}\). So it is enough to construct an equivalence \(F_{0}\) of monoidal categories from \(\mathcal{E}_{0}\) to \(\mathcal{C}\boxplus_{\mathcal{V}}\mathcal{D}_{0}\). We define \(F_{0}\) by \[F_{0}(c\otimes x^{i}):=c\boxplus x^{i}.\] On morphism spaces, we define \(F_{0}\) by the following sequence of isomorphisms: \[\mathcal{E}_{0}(a\otimes x^{i}\to b\otimes x^{j}) =\mathcal{C}(a\to b\otimes\Psi\mathcal{P}[j+i])\] \[\cong\bigoplus_{v\in\mathrm{Irr}(\mathcal{V})}\mathcal{C}(a\to b \otimes\Psi v)\otimes\mathcal{V}(v\to\mathcal{P}[j+i])\] \[=\bigoplus_{v\in\mathrm{Irr}(\mathcal{V})}\mathcal{C}(a\to b \otimes\Psi v)\otimes\mathcal{V}(v\to\mathrm{Tr}_{\mathcal{V}}(x^{j+i}))\] \[\cong\bigoplus_{v\in\mathrm{Irr}(\mathcal{V})}\mathcal{C}(a\to b \otimes\Psi v)\otimes\mathcal{D}(\Phi v\to x^{j+i})\] \[\cong\bigoplus_{v\in\mathrm{Irr}(\mathcal{V})}\mathcal{C}(a\to b \otimes\Psi v)\otimes\mathcal{D}(\Phi v\otimes x^{i}\to x^{j})\] \[=(\mathcal{C}\boxplus_{\mathcal{V}}\mathcal{D}_{0})\big{(}a\boxplus x ^{i}\to b\boxplus x^{j}\big{)}.\] The first isomorphism in the second line above uses the fact that each object \(w\in\mathcal{V}\) can be canonically decomposed into simple objects as \(w=\bigoplus_{v\in\mathrm{Irr}(\mathcal{V})}\mathcal{V}(v\to w)\otimes v\). It is a somewhat tricky exercise to show that \(F_{0}\) is a tensor functor, which is then automatically fully faithful. 
Essential surjectivity follows since \(a\boxplus(\Phi v\otimes x^{i})\cong(a\otimes\Psi v)\boxplus x^{i}\).

### The unitary setting

We now consider the case that \(\mathcal{V}\) is a unitary tensor category equipped with a unitary dual functor \(\vee_{\mathcal{V}}\). Suppose we have a pointed unitary \(\mathcal{V}\)-module multitensor category \((\mathcal{D},\Phi^{Z}:\mathcal{V}\to Z(\mathcal{D}),x,\vee_{\mathcal{D}},\psi_{\mathcal{D}})\) and a unitary \(\mathcal{V}^{\text{rev}}\)-module multitensor category \((\mathcal{C},\Psi^{Z}:\mathcal{V}\to Z(\mathcal{C}),\vee_{\mathcal{C}},\psi_{\mathcal{C}})\) where \(\Psi^{Z}\) is a reverse-braided functor. We further assume that \(\Phi:\mathcal{V}\to\mathcal{D}\) admits a unitary adjoint [11, §2.1]. As in [11, §5.2.1], we can promote \(\mathcal{E}\) to a unitary multitensor category equipped with a unitary dual functor and canonical state. This amounts to the following tasks:

(E1) construct a \(\dagger\)-structure on \(\mathcal{E}_{0}\) under which it is a unitary multitensor category,

(E2) construct a faithful state \(\psi_{\mathcal{E}}\) on \(\operatorname{End}_{\mathcal{E}}(1_{\mathcal{E}})\), and

(E3) check \(\vee_{\mathcal{E}}\) is unitary, and the canonical unitary pivotal structure induced by \(\vee_{\mathcal{E}}\) is \(\varphi^{\mathcal{E}}\).

To accomplish (E1) above, we define \(\dagger\) on \(\mathcal{E}_{0}\) as in [11, (25)], but we apply \(\Psi\) when necessary to push objects and morphisms from \(\mathcal{V}\) into \(\mathcal{C}\) (the defining diagram is not reproduced here). The proof that this defines a dagger structure on \(\mathcal{E}_{0}\) is entirely similar to the one in [11]. Since \(\Psi\) is a unitary pivotal tensor functor between unitary multitensor categories equipped with unitary dual functors, it automatically preserves the unitary pivotal structure, and thus \(\Psi\operatorname{coev}^{\dagger}_{\mathcal{P}[i+j]}=\operatorname{coev}^{\dagger}_{\Psi\mathcal{P}[i+j]}\). Thus by [11, Lem. 5.9], the sesquilinear form on \(\mathcal{E}_{0}(a\otimes x^{i}\to b\otimes x^{j})\) is positive definite. This immediately implies as in [11, Prop. 5.10] that the \(2\times 2\) linking algebras \[L:=\begin{pmatrix}\mathcal{E}_{0}(a\otimes x^{i}\to a\otimes x^{i})&\mathcal{E}_{0}(b\otimes x^{j}\to a\otimes x^{i})\\ \mathcal{E}_{0}(a\otimes x^{i}\to b\otimes x^{j})&\mathcal{E}_{0}(b\otimes x^{j}\to b\otimes x^{j})\end{pmatrix}\] are finite dimensional \(\mathrm{C}^{*}\)-algebras, proving \(\mathcal{E}_{0}\) is \(\mathrm{C}^{*}\). We have thus established (E1).

The construction of a faithful state \(\psi_{\mathcal{E}}\) on \(\operatorname{End}_{\mathcal{E}}(1_{\mathcal{E}})\) is similar to [11, Cor.
5.12], where \(\psi_{\mathcal{E}}\) is given by the analogous diagrammatic formula (not reproduced here), establishing (E2).
Finally, that \(\vee_{\mathcal{E}}\) is unitary and induces \(\varphi^{\mathcal{E}}\) follows as in [12, Prop. 5.13], adding \(\Psi\)s when necessary, establishing (E3). By the same argument as [12, Prop. 5.15], when \(\vee_{\mathcal{V}}\) and \(\psi_{\mathcal{P}}\) are spherical, then so is \(\psi_{\mathcal{E}}\). In fact, instead of demanding that \(\psi_{\mathcal{P}}\) be spherical, a weaker condition (stated diagrammatically in the original and omitted here) is sufficient, in addition to sphericality of \(\vee_{\mathcal{V}}\).

## 4 Classification of finite depth objects in \(\mathcal{C}^{\prime}\)

Let \(\mathcal{C}\) be a unitary fusion category fully faithfully embedded in \(\mathsf{Bim}(R)\) where \(R\) is a hyperfinite factor of type II\({}_{1}\), II\({}_{\infty}\), or III\({}_{1}\). We denote by \(\mathcal{C}^{\prime}\) its commutant category defined in §2.2 above. Recall that our main goal is to prove Conjecture 1.1 in the case that \(\mathcal{B}=\mathcal{C}^{\prime}\): finite depth real (symmetrically self-dual) objects of \(\mathcal{C}^{\prime}\) up to conjugation by invertible elements of \(\mathcal{C}^{\prime}\) are in bijective correspondence with connected finite depth unitary anchored planar algebras in \(Z(\mathcal{C})^{\mathrm{rev}}\) up to isomorphism.

### From finite depth objects to anchored planar algebras

We first fix a symmetrically self-dual object \(m\in\mathcal{C}^{\prime}\), and we construct a connected unitary anchored planar algebra \(\mathcal{P}\) in \(Z(\mathcal{C})^{\mathrm{rev}}\cong Z(\mathcal{C}^{\prime})\). Since \(m\in\mathcal{C}^{\prime}\), it is equipped with a half-braiding, which we'll call \(\{\eta_{m,d}:m\otimes d\to d\otimes m\}_{d\in\mathcal{C}}\). Let \(\mathcal{M}:=\langle m,Z(\mathcal{C})\rangle\) be the tensor C\({}^{*}\)-subcategory of \(\mathcal{C}^{\prime}\) generated by \(m\) and the image of \(Z(\mathcal{C})\) in \(\mathcal{C}^{\prime}\) (under the operations of tensor product, orthogonal direct sums, and orthogonal direct summands). By Corollary 2.10, there is a canonical \(Z(\mathcal{C})^{\mathrm{rev}}\)-module tensor category structure on \(\mathcal{M}\). We equip \(\mathcal{M}\) with its unique spherical unitary dual functor. Finally, we obtain a unitary anchored planar algebra \(\mathcal{P}\) from the unitary pivotal \(Z(\mathcal{C})\)-module tensor category \((\mathcal{M},m)\) by [12]. Observe that when \(m\) has finite depth, then \(\mathcal{M}\) is fusion by Lemma 4.1 below. In this case, \(\mathcal{P}\) is finite depth.

**Lemma 4.1**.: _Let \(\mathcal{M}\) be a semisimple rigid tensor category with simple unit object, and suppose \(\mathcal{C},\mathcal{D}\subset\mathcal{M}\) are fusion subcategories that generate \(\mathcal{M}\) under tensor products and subobjects. If \(c\otimes d\cong d\otimes c\) for every \(c\in\mathcal{C}\) and \(d\in\mathcal{D}\), then \(\mathcal{M}\) is again fusion._

Proof.: A complete set of simples for \(\mathcal{M}\) is obtained by taking all distinct simple summands of the finitely many objects of the form \(c\otimes d\) where \(c\in\mathrm{Irr}(\mathcal{C})\) and \(d\in\mathrm{Irr}(\mathcal{D})\).

### From anchored planar algebras to finite depth objects

Suppose \(\mathcal{P}\) is a connected unitary anchored planar algebra in \(Z(\mathcal{C})^{\mathrm{rev}}\). By the main result of our previous paper [12], \(\mathcal{P}\) corresponds to a pointed unitary module tensor category \((\mathcal{M},m)\).
By the construction from §3, noting that the forgetful functor \(Z(\mathcal{C})^{\mathrm{rev}}\to\mathcal{C}\) is reverse braided, we can then form the balanced tensor product \(\mathcal{E}:=\mathcal{C}\boxtimes_{Z(\mathcal{C})^{\mathrm{rev}}}\mathcal{M}\). It will be convenient to also think of \(\mathcal{E}=\mathcal{M}\boxtimes_{Z(\mathcal{C})}\mathcal{C}\) during this construction. By Construction 3.6, \(\mathcal{E}\) can be equipped with a canonical centralizing structure \(\{\sigma_{m,c}:m\otimes c\to c\otimes m\}_{m\in\mathcal{M},c\in\mathcal{C}}\). Since \(\mathcal{C}\) is fusion, it has simple unit object. Since \(\mathcal{P}\) is connected, \(\Phi:Z(\mathcal{C})\to\mathcal{M}\) is fully faithful. Thus \(\mathcal{E}\) has simple unit object and \(\mathcal{C}\hookrightarrow\mathcal{E}=\mathcal{M}\boxtimes_{Z(\mathcal{C})}\mathcal{C}\) is fully faithful by Lemma 3.3. When \(\mathcal{P}\) has finite depth, \(\mathcal{M}\) is fusion, and thus \(\mathcal{E}\) is also fusion by Corollary 3.4. As reviewed in [10, §3.2], by [11, 12, 13], there is a fully faithful unitary tensor functor \(\alpha:\mathcal{E}\to\mathsf{Bim}(R)\), which is unique up to conjugation by an invertible object of \(\mathsf{Bim}(R)\). The restriction of \(\alpha|_{\mathcal{C}}:\mathcal{C}\to\mathsf{Bim}(R)\) is similarly unique up to conjugation by an invertible object of \(\mathsf{Bim}(R)\). Since \(\mathcal{C}\) was already given to us as a full subcategory of \(\mathsf{Bim}(R)\), conjugating by a suitable invertible object of \(\mathsf{Bim}(R)\), we may assume that \(\alpha|_{\mathcal{C}}\) agrees with our initial presentation of \(\mathcal{C}\) (that is, \(\alpha|_{\mathcal{C}}=\mathrm{id}_{\mathcal{C}}\)). Such tensor functors \(\alpha:\mathcal{E}\to\mathsf{Bim}(R)\) satisfying \(\alpha|_{\mathcal{C}}=\mathrm{id}_{\mathcal{C}}\) are unique up to conjugation by an invertible object of \(\mathcal{C}^{\prime}\). Since \(\mathcal{C}\hookrightarrow\mathcal{E}\) is fully faithful, our centralizing structure \(\sigma\) canonically promotes the image of each object \(n\in\mathcal{M}\) in \(\mathcal{E}\) to the relative center \(Z_{\mathcal{C}}(\mathcal{E})\), i.e., objects in \(\mathcal{E}\) which are equipped with half-braidings with objects in \(\mathcal{C}\). Taking the image of \(\sigma\) in \(\mathsf{Bim}(R)\), we immediately get a unitary tensor functor \(\mathcal{M}\to\mathcal{C}^{\prime}\) such that the evident diagram over \(\mathsf{Bim}(R)\) commutes (diagram not reproduced here). Now the image of \(m\) in \(\mathcal{C}^{\prime}\) is our desired symmetrically self-dual finite depth object. The functor \(\mathcal{M}\hookrightarrow Z_{\mathcal{C}}(\mathcal{E})\) is fully faithful by Lemma 3.7. It follows that \(\mathcal{M}\to\mathcal{C}^{\prime}\) is also fully faithful.

### Uniqueness of \(\mathcal{M}\to\mathcal{C}^{\prime}\)

Let \(\mathcal{M}\) be a unitary \(Z(\mathcal{C})^{\mathrm{rev}}\)-module tensor category such that the composite unitary tensor functor \(\Phi:Z(\mathcal{C})\to\mathcal{M}\) is fully faithful (in terms of the associated anchored planar algebra, this is the condition that \(\mathcal{P}[0]\) is connected). Our next goal is to show that given two fully faithful \(Z(\mathcal{C})^{\mathrm{rev}}\)-central unitary tensor functors \(\mathcal{M}\to\mathcal{C}^{\prime}\) (a.k.a. functors of \(Z(\mathcal{C})^{\mathrm{rev}}\)-module tensor categories), there exists an invertible object of \(\mathcal{C}^{\prime}\) that conjugates one into the other. Consider \(\mathcal{C}\subset\mathsf{Bim}(R)\).
We claim that the unitary tensor functor \(\mathcal{M}\hookrightarrow\mathcal{C}^{\prime}\to\mathsf{Bim}(R)\) and the inclusion \(\mathcal{C}\subset\mathsf{Bim}(R)\) assemble to a fully faithful unitary tensor functor \(\beta:\mathcal{E}=\mathcal{M}\boxtimes_{Z(\mathcal{C})}\mathcal{C}\to\mathsf{Bim}(R)\) such that \(\beta|_{\mathcal{C}}=\mathrm{id}_{\mathcal{C}}\). Observe that the underlying \(R-R\) bimodule of an object of \(Z(\mathcal{C})^{\mathrm{rev}}\) lives in \(\mathcal{C}\subset\mathsf{Bim}(R)\). Moreover, our tensor functor \(G:\mathcal{M}\to\mathcal{C}^{\prime}\) is a morphism of unitary \(Z(\mathcal{C})^{\mathrm{rev}}\)-module tensor categories. This means that we have a unitary action-coherence morphism \(\gamma:\Phi_{2}\Rightarrow G\Phi_{1}\), where \(\Phi_{1}:Z(\mathcal{C})^{\mathrm{rev}}\to\mathcal{M}\) and \(\Phi_{2}:Z(\mathcal{C})^{\mathrm{rev}}\to\mathcal{C}^{\prime}\), satisfying the coherence conditions from [11, Def. 3.2] (see also [11, Def. 3.3]). Since Forget \(\circ\Phi_{2}\) is identically the forgetful functor \(Z(\mathcal{C})\to\mathcal{C}\), the external square of the relevant diagram (not reproduced here) commutes on the nose. Thus the images of \(Z(\mathcal{C})^{\mathrm{rev}}\) inside the images of \(\mathcal{C}\) and \(\mathcal{M}\) in \(\mathsf{Bim}(R)\) are identical. This equips the canonical map \(\mathcal{C}\boxtimes\mathcal{M}\to\mathsf{Bim}(R)\) (which is the identity on \(\mathcal{C}\)) with a \(Z(\mathcal{C})^{\mathrm{rev}}\)-balancing structure. This map thus descends to a map \(\beta:\mathcal{C}\boxtimes_{Z(\mathcal{C})^{\mathrm{rev}}}\mathcal{M}\to\mathsf{Bim}(R)\) (which is still the identity on \(\mathcal{C}\)) by the universal property of the relative tensor product. As in the previous section, we identify \(\mathcal{C}\boxtimes_{Z(\mathcal{C})^{\mathrm{rev}}}\mathcal{M}=\mathcal{M}\boxtimes_{Z(\mathcal{C})}\mathcal{C}=\mathcal{E}\). Since the inclusion \(\mathcal{M}\to\mathcal{C}^{\prime}\) is fully faithful, \(\mathcal{E}\) is a full subcategory of \(\mathcal{C}^{\prime}\boxtimes_{Z(\mathcal{C})}\mathcal{C}\), the latter of which is equivalent to \(\mathsf{Bim}(R)\) by Corollary 3.9. We conclude the map \(\beta:\mathcal{E}\to\mathsf{Bim}(R)\) is fully faithful. We now have two fully faithful unitary tensor functors \(\alpha,\beta:\mathcal{E}\to\mathsf{Bim}(R)\). As reviewed in [11, §3.2], by [11, 12, 13], there is an isomorphism of representations \((\Phi,\phi):\alpha\to\beta\), i.e., an invertible \(\Phi\in\mathsf{Bim}(R)\) equipped with a natural family of unitary isomorphisms \[\big{\{}\phi_{e}:\Phi\boxtimes\alpha(e)\to\beta(e)\boxtimes\Phi\big{\}}_{e\in\mathcal{E}}\] satisfying the coherence condition [11, (7)] (\((\Phi,\phi)\) is a natural transformation between \(2\)-functors). Since \(\alpha|_{\mathcal{C}}=\beta|_{\mathcal{C}}=\mathrm{id}_{\mathcal{C}}\), we see that \(\phi|_{\mathcal{C}}\) promotes \(\Phi\) to an object of \(\mathcal{C}^{\prime}\). We also claim each \(\phi_{m}\) is a morphism in \(\mathcal{C}^{\prime}\). Indeed, representing the canonical centralizing structure for objects \(c\in\mathcal{C}\) and \(m\in\mathcal{M}\) by a crossing, we have the required intertwining identity (diagrammatic computation omitted). We conclude that the two fully faithful unitary tensor functors \(\mathcal{M}\hookrightarrow\mathcal{C}^{\prime}\) are conjugate by an invertible object in \(\mathcal{C}^{\prime}\).
Using the equivalence between unitary anchored planar algebras and pointed unitary module tensor categories, the above argument shows that the composite \[\left\{\begin{array}{l}\text{Finite depth}\\ \text{objects of }\mathcal{C}^{\prime}\end{array}\right\}\Bigg{/}\text{conj.}\ \to\ \ \left\{\begin{array}{l}\text{Connected finite depth}\\ \text{unitary APAs in }Z(\mathcal{C})^{\text{rev}}\end{array}\right\}\Bigg{/}\text{iso.}\ \to\ \ \left\{ \begin{array}{l}\text{Finite depth}\\ \text{objects of }\mathcal{C}^{\prime}\end{array}\right\}\Bigg{/}\text{conj.}\] is the identity. The other composite, from unitary anchored planar algebras to finite depth objects of \(\mathcal{C}^{\prime}\) back to unitary anchored planar algebras produces an isomorphic unitary anchored planar algebra because unitarily equivalent pointed unitary module tensor categories give equivalent unitary anchored planar algebras by [12, Thm. A].
2305.03290
Semicubic cages and small graphs of even girth from voltage graphs
A \emph{$(3,m;g)$ semicubic graph} is a graph of girth $g$ in which every vertex has degree either $3$ or $m$. In this paper, we construct families of semicubic graphs of even girth and small order using two different techniques. The first technique generalizes a previous construction which glues cubic cages of girth $g$ together at remote vertices (vertices at distance at least $g/2$). The second technique, the main content of this paper, produces bipartite semicubic $(3,m; g)$-graphs with fixed even girth $g = 4t$ or $4t+2$ using voltage graphs over $\mathbb{Z}_{m}$. When $g = 4t+2$, the graphs have two vertices of degree $m$, while when $g = 4t$ they have exactly three vertices of degree $m$ (the remaining vertices are of degree $3$ in both cases). Specifically, we describe infinite families of semicubic graphs $(3,m; g)$ for $g = \{6, 8, 10, 12\}$ for infinitely many values of $m$. The cases $g = \{6,8\}$ include the unique $6$-cage and the unique $8$-cage when $m = 3$. The families obtained in this paper for girth $g=\{10,12\}$ include examples with the best known bounds for semicubic graphs $(3,m; g)$.
Flor Aguilar, Gabriela Araujo-Pardo, Leah Bermann
2023-05-05T05:44:01Z
http://arxiv.org/abs/2305.03290v1
# Semicubic cages and small graphs of even girth from voltage graphs

###### Abstract

A \((3,m;g)\) _semicubic graph_ is a graph of girth \(g\) in which every vertex has degree either \(3\) or \(m\). In this paper, we construct families of semicubic graphs of even girth and small order using two different techniques. The first technique generalizes a previous construction which glues cubic cages of girth \(g\) together at remote vertices (vertices at distance at least \(g/2\)). The second technique, the main content of this paper, produces bipartite semicubic \((3,m;g)\)-graphs with fixed even girth \(g=4t\) or \(4t+2\) using voltage graphs over \(\mathbb{Z}_{m}\). When \(g=4t+2\), the graphs have two vertices of degree \(m\), while when \(g=4t\) they have exactly three vertices of degree \(m\) (the remaining vertices are of degree \(3\) in both cases). Specifically, we describe infinite families of semicubic graphs \((3,m;g)\) for \(g=\{6,8,10,12\}\) for infinitely many values of \(m\). The cases \(g=\{6,8\}\) include the unique \(6\)-cage and the unique \(8\)-cage when \(m=3\). The families obtained in this paper for girth \(g=\{10,12\}\) include examples with the best known bounds for semicubic graphs \((3,m;g)\).

## 1 Introduction

In this paper, we work with simple and finite graphs. We study a generalization of the _Cage Problem_, which has been widely studied since cages were introduced by Tutte [24] in 1947 and after Erdős and Sachs [10] proved their existence in 1963. An \((r,g)\)_-graph_ is an \(r\)-regular graph in which the shortest cycle has length equal to \(g\); that is, it is an \(r\)-regular graph with girth \(g\). An \((r,g)\)_-cage_ is an \((r,g)\)-graph with the smallest possible number of vertices among all \((r,g)\)-graphs. The Cage Problem consists of finding \((r,g)\)-cages; it is well known that the cages themselves are known only for very limited sets of parameter pairs \((r,g)\). In the case where the orders of the \((r,g)\)-cages match a simple lower bound due to Moore [13], the cages are called _Moore cages_. Biregular graphs, denoted as \((r,m;g)\)_-graphs_, generalize \((r,g)\)-graphs, with cages generalizing to _biregular cages_. Specifically, given three positive integers \(r,m,g\) with \(2\leq r<m\), an \((r,m;g)\)_-graph_ is a graph of girth \(g\) in which every vertex has degree \(r\) or \(m\). We denote the number of vertices of an \((r,m;g)\)-graph as \(n(r,m;g)\), and a _biregular cage_ is an \((r,m;g)\)-graph in which \(n(r,m;g)\) is as small as possible. Biregular graphs have been studied by many authors (see [1, 2, 3, 4, 8, 9, 15, 17, 25, 26]) since Chartrand, Gould and Kapoor [8] proved their existence. A biregular graph with \(r=3\) is often called a _semicubic graph_, and naturally, a semicubic \((3,m;g)\)-graph with a minimal number of vertices for a fixed \(m>3\) and fixed \(g\) is called a semicubic cage. In this paper, we construct families of semicubic graphs of even girth and small order using two different techniques. The first technique generalizes a construction used in [2, 3] in which cubic cages of girth \(g\) are glued together at _remote vertices_, that is, vertices at distance at least \(g/2\). The second technique, which is the main content of this paper, consists of constructing semicubic graphs of even girth using voltage graphs. With this technique, we improve the graphs given using the "identifying remote vertices" technique for girth \(g=\{6,8,10,12\}\).
However, graphs with the same orders as those from our voltage graph construction were obtained previously for girth \(g=\{6,8\}\) (in [3, 4, 15]) using different techniques. Our principal contribution is for graphs of girth \(g=\{10,12\}\), where we find new graphs with orders between the lower bounds given in [4] and the upper bounds given in this paper found by identifying remote vertices. The voltage graph construction gives us, naturally, the Heawood graph or \((3;6)\)-cage and the Tutte graph or \((3;8)\)-cage, for \(m=3\) and \(g=\{6,8\}\), respectively. These graphs occur as part of our constructions of families of \((3,m;6)\)-cages of order \(4m+2\) and \((3,m;8)\)-graphs of order \(9m+3\). We will detail how our constructions generalize the constructions given in [3, 15] in the corresponding sections. As the authors state in [3], for girth \(8\) and \(m\in\{4,5,6,7\}\) these graphs, and also ours, are cages, while for the rest of the values of \(m\) they are close to the lower bound \(n(3,m;8)\geq\lceil\frac{25m}{3}\rceil+5\) given for \(m\geq 7\).

In [4], the authors proved that for \(m\) much larger than \(r\) and even girth \(g\equiv 2\mod 4\) there exist infinite families of \((r,m;g)\)-graphs with few vertices, with order close to the lower bound also given in that paper. Specifically, for girth \(g=6\), the graphs described are biregular cages, because they attain the lower bound given in [26]. However, in that paper, the authors did not give an explicit construction of these graphs; they only proved their existence using a strong result about Hamiltonian graphs and girths given by Sachs in 1963 ([23]). In particular, for girth \(10\) the exact order of a semicubic cage remains unknown for small values of \(m\).

For girth \(g=10\), using the identifying remote vertices technique, we obtain graphs of order greater than \(22m+2m/3\) (see Section 2). With the voltage graph construction, we give explicit constructions for two different infinite families of \((3,m;10)\)-graphs. The first construction produces \((3,m;10)\)-graphs for \(m\geq 4\) of order \(24m+2\), with \(2\) vertices of degree \(m\) and \(24m\) vertices of degree \(3\). This number of vertices coincides with the parameter obtained in [4] for \(m\) much larger than \(3\). The second construction produces graphs of order \(20m+2\) for \(m\geq 7\) with \(2\) vertices of degree \(m\) and \(20m\) vertices of degree \(3\). This second family clearly improves the upper bound for \((3,m;10)\)-cages given in Section 2 and has a difference of less than \(3m\) to the lower bound \(n(3,m;10)\geq\lceil\frac{53m}{3}\rceil+9\) given in Lemma 3.4 in [4].

Finally, for girth \(12\), using the identifying remote vertices technique, we obtain graphs of order greater than \(41m+m/3\) (see Section 2). Using voltage graphs, we give explicit constructions of two different infinite families of \((3,m;12)\)-graphs. The first gives us \((3,m;12)\)-graphs for \(m\geq 9\) of order \(49m+3\) with \(3\) vertices of degree \(m\) and \(49m\) vertices of degree \(3\). This construction gives new \((3,m;12)\)-graphs, but their order is bigger than the upper bounds from the identifying remote vertices construction. However, it is based on a general structure that will be considered throughout the paper, and for this reason we consider it important to include it among our results.
We also present a second family of semicubic graphs of girth \(12\) using voltage graphs, giving us \((3,m;12)\)-graphs for \(m\geq 10\) of order \(41m+3\) with \(3\) vertices of degree \(m\) and \(41m\) vertices of degree \(3\). This family improves the upper bound given in Section 2 and produces graphs with a difference of less than \(5m\) to the lower bound \(n(3,m;12)\geq\lceil\frac{109m}{3}\rceil+17\) given in Lemma 3.4 in [4].

This paper is organized as follows. In Section 2, we construct semicubic graphs with few vertices by identifying remote vertices; these give upper bounds for semicubic cages of even girth, generalizing the constructions of [2, 3]. In Section 3 we give some definitions of voltage graphs and derived graphs that we will use in the rest of the paper, including introducing a new definition of _pinned_ vertices. In Section 4, we describe a general construction for graphs that are of girth at most \(4t+2\), with two vertices of degree \(m\) and many vertices of degree \(3\), via lifting certain types of voltage graphs over \(\mathbb{Z}_{m}\). We provide constructions for graphs with girths exactly \(6\) and \(10\), including producing the Heawood graph as the \(m=3\) case of the girth \(6\) family. In Section 7, we similarly describe a general construction for graphs with girth at most \(4t\) with three vertices of degree \(m\) (this general construction is used in [6], a paper in progress, to construct biregular graphs of even girth \(g=4t\)). We provide explicit voltage graphs whose lifts form infinite families of semicubic graphs of girths equal to \(8\) and \(12\), including producing the Tutte \(8\)-cage as a member of the girth \(8\) family, where \(m=3\).

## 2 Constructing upper bounds for semicubic cages of even girth by identifying remote vertices

In this section we generalize Theorem 3, given in [2], in which the authors identify copies of \((r;g)\)-cages at _remote vertices_, which are vertices at distance at least \(g/2\). These techniques are also used in [3] to construct biregular graphs of even girth. The results obtained in [2, 3], identifying remote vertices on graphs of girth \(8\), are better than the results given in Theorem 1 for girth \(8\). In particular, for girth \(g=8\), there exist results obtained using the properties of generalized quadrangles that produce five remote vertices in the \((3,8)\)-cage. Using this fact, Corollary 7 in [2] states that the \((\{3,m\};8)\)-cages have order \(8m+\frac{m}{3}+5\) for \(m=3k\) and \(k\geq 1\), and Corollary 3.3 in [3] states that \(n(\{3,m\};8)\leq 8m+\frac{m}{3}-\frac{16}{3}t+21\) for \(m=3k+t\) and \(t\in\{1,2\}\). Consequently, we will use Theorem 1 only to produce bounds on the order of semicubic cages of girth \(g=\{10,12\}\), which are the parameters that we improve in this paper. The constructions described in the proof are illustrated in Figure 1.

**Theorem 1**.: _Let \(G\) be a \((3;g)\)-graph of even girth and order \(n_{g}\) with at least two vertices at distance \(g/2\). If \(m=3k+t\), we obtain \((3,m;g)\)-graphs of order:_

\[k(n_{g}-2)+\begin{cases}2&\text{ if }t=0\\ n_{g}+2&\text{ if }t=1\\ n_{g}&\text{ if }t=2\end{cases}\]

Proof.: We divide the proof into three cases.

1. Let \(m=3k\), and let \(G_{1}\) and \(G_{2}\) be two copies of a \((3,g)\)-graph of even girth \(g\) and order \(n_{g}\). Let \(x_{1}\) and \(y_{1}\) be two vertices at distance at least \(g/2\) (remote vertices) in \(G_{1}\) and let \(x_{2}\) and \(y_{2}\) be two remote vertices in \(G_{2}\).
Construct a graph \(G\) by taking \(G_{1}\) and \(G_{2}\) and identifying \(x_{1}\) with \(x_{2}\) (call this new vertex \(x\)) and \(y_{1}\) with \(y_{2}\) (call it \(y\)). It is easy to see that the shortest cycle that passes through two vertices of \(G_{1}\), with at least one of them different from either \(x\) or \(y\), is totally contained in \(G_{1}\) and thus has length at least \(g\) (and analogously for \(G_{2}\)). If the cycle contains both \(x\) and \(y\), then, since the distance between \(x\) and \(y\) is at least \(g/2\), the cycle is given by two disjoint paths (in \(G_{1}\) or \(G_{2}\)) between \(x\) and \(y\), each of them with length at least \(g/2\), so together they form a cycle of length at least \(g\). Now let \(G\) be a graph formed by identifying \(k\) copies, where the \(i\)-th copy is labelled \(G_{i}\), at pairs of remote vertices \(x_{i}\) and \(y_{i}\) in \(G_{i}\), calling the identified vertices \(x\) and \(y\) as before. Since each of the graphs \(G_{i}\) is \(3\)-regular, the identified vertices \(x\) and \(y\) have degree \(m=3k\). Applying the same shortest cycle analysis as above to each pair \((G_{i},G_{j})\), it follows that the girth of \(G\) is also at least \(g\), and \(G\) has order \(kn_{g}-2(k-1)=k(n_{g}-2)+2\), with two vertices of degree \(m\) and \(k(n_{g}-2)\) vertices of degree \(3\).

2. Suppose that \(m=3k+2\). Take \(k+1\) copies of a \((3,g)\)-graph of order \(n_{g}\) and even girth \(g\), and label the \(i\)-th copy as \(G_{i}\). In \(G_{1}\), let \(x_{1}y_{1}\) be any edge. Delete \(x_{1}y_{1}\), and call this new graph \(G_{1}^{\prime}\). Notice that all the vertices in \(G_{1}^{\prime}\) have degree \(3\) except \(x_{1}\) and \(y_{1}\), which have degree \(2\), and since \(G_{1}\) has girth \(g\), \(x_{1}\) and \(y_{1}\) are now at distance at least \(g-1\). Now, suppose that \(x_{i}\) and \(y_{i}\) are two vertices at distance at least \(g/2\) in \(G_{i}\), for \(i\in\{2,\ldots,k+1\}\). Construct a new graph \(G\) by identifying all the vertices \(x_{i}\), calling the new vertex \(x\), and all the vertices \(y_{i}\), calling the new vertex \(y\). As in the previous case, we obtain a graph of girth \(g\), but in this case, the \(n_{g}-2\) vertices in each copy other than \(x\) and \(y\) have degree \(3\), and \(x\) and \(y\) have degree \(m=3k+2\). It follows that \(G\) is a \((3,m;g)\)-graph of order \((n_{g}-2)(k+1)+2=k(n_{g}-2)+n_{g}\).

3. Suppose that \(m=3k+1\). Take \(k+1\) copies of a \((3,g)\)-graph of order \(n_{g}\) and even girth \(g\), and label the \(i\)-th copy as \(G_{i}\). As before, choose any edge \(x_{1}y_{1}\) in \(G_{1}\) and delete it. Now add two vertices to \(G_{1}\), one of them a neighbor of \(x_{1}\), called \(x\), and the other a neighbor of \(y_{1}\), called \(y\). Notice that these two vertices are at a distance of at least \(g+1\) in \(G_{1}\). As in the previous cases, construct a graph \(G\) identifying \(x\) and \(y\) with two remote vertices \(x_{i}\) and \(y_{i}\) in each of the \(k\) remaining graphs \(G_{i}\). This graph \(G\) has two vertices of degree \(3k+1\) and \(k(n_{g}-2)+n_{g}\) vertices of degree \(3\), for a total order of \(k(n_{g}-2)+n_{g}+2\).
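To make the gluing operation concrete, the following sketch implements the case \(m=3k\) of Theorem 1 in Python and checks the degrees and girth of the output. The helper `bfs_girth` is a standard BFS-based exact girth computation; the input graph \(K_{3,3}\) mirrors Figure 1, and all function and vertex names are our own illustrative choices rather than anything fixed by the paper.

```python
from collections import deque

def bfs_girth(adj):
    """Exact girth of a simple undirected graph {v: set of neighbours}:
    run a BFS from every vertex and record the shortest cycle closed by
    a non-tree edge (standard, and exact for undirected graphs)."""
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif w != parent[u]:           # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

def glue_at_remote_vertices(G, x, y, k):
    """Case m = 3k of Theorem 1: take k disjoint copies of G and identify
    all copies of x (resp. y) into a single vertex x* (resp. y*)."""
    name = lambda i, v: "x*" if v == x else ("y*" if v == y else (i, v))
    adj = {}
    for i in range(k):
        for v, nbrs in G.items():
            adj.setdefault(name(i, v), set()).update(name(i, w) for w in nbrs)
    return adj

# K_{3,3}, the unique (3;4)-cage; ("a", 0) and ("a", 1) are remote,
# being at distance 2 = g/2.
K33 = {("a", i): {("b", j) for j in range(3)} for i in range(3)}
K33.update({("b", j): {("a", i) for i in range(3)} for j in range(3)})

H = glue_at_remote_vertices(K33, ("a", 0), ("a", 1), 3)
print(len(H), sorted({len(nbrs) for nbrs in H.values()}), bfs_girth(H))
# expected: 14 [3, 9] 4, a (3,9;4)-graph as in Figure 1(a)
```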
Taking into account that each of the three \((3;10)\)-cages has order \(70\) (recall that the girth-\(10\) cages are the Balaban cage and two others [21]), and that it is easy to find two remote vertices at distance \(5\) in the Balaban cage, for example, we obtain the following corollary:

**Corollary 2**.: _There exist \((3,m;10)\)-graphs of order:_

* \(22m+\frac{2m}{3}+2\) _for_ \(m=3k\)
* \(22m+\frac{2m}{3}+49+\frac{1}{3}\) _for_ \(m=3k+1\)
* \(22m+\frac{2m}{3}+24+\frac{2}{3}\) _for_ \(m=3k+2\)

And also, taking into account that the Moore \((3;12)\)-cage has order \(126\) and is the incidence graph of the generalized hexagon of order \(2\), which also has two vertices at distance \(6\), it follows that:

**Corollary 3**.: _There exist \((3,m;12)\)-graphs of order:_

* \(41m+\frac{m}{3}+2\) _for_ \(m=3k\)
* \(41m+\frac{m}{3}+86+\frac{2}{3}\) _for_ \(m=3k+1\)
* \(41m+\frac{m}{3}+43+\frac{1}{3}\) _for_ \(m=3k+2\)

Figure 1: Illustrating the construction in Theorem 1, which produces semicubic \((3,m;g)\) graphs of girth \(g\) beginning with input graphs with \(n_{g}\) vertices. For the purposes of illustration, the construction is shown using \(K_{3,3}\), which is the unique \((3,4)\)-cage; (a) is a \((3,9;4)\) graph with \(3(6-2)+2=14\) vertices, (b) is a \((3,8;4)\) graph with \(2(6-2)+6=14\) vertices; (c) is a \((3,7;4)\) graph with \(2(6-2)+6+2=16\) vertices.

Finally, we would like to calculate these graphs for girth \(14\) using the smallest \((3;14)\)-graph currently known (the _record_ graph), given by Exoo in [11], which has order \(348\). The current lower bound for the order of a \((3;14)\)-cage is \(258\), given by McKay et al. in [19]. From this construction, it follows that:

**Corollary 4**.: _There exist \((3,m;14)\)-graphs of order:_

* \(115m+\frac{m}{3}+2\) _for_ \(m=3k\)
* \(115m+\frac{m}{3}+234+\frac{2}{3}\) _for_ \(m=3k+1\)
* \(115m+\frac{m}{3}+117+\frac{1}{3}\) _for_ \(m=3k+2\)

## 3 Preliminaries on voltage graphs and derived graphs

In this section, we present definitions and preliminary results about voltage graphs, and we exhibit some voltage and derived graphs constructed with them. Following standard references (e.g., [7, 14, 22]), a voltage graph \(G\) is a labeled directed multigraph, often including loops and parallel edges, along with a group \(\Gamma\); the labels on the edges are elements of \(\Gamma\). Throughout this paper, \(\Gamma\) is a cyclic group \(\mathbb{Z}_{m}\) with addition as the group operation. The _derived_ graph \(G_{m}\), also called the _lift_ graph, for a voltage graph with voltage group \(\mathbb{Z}_{m}\) is formed from \(G\) as follows: each vertex \(v\) in \(G\) corresponds to \(m\) vertices in \(G_{m}\), labelled \(v^{0},\cdots,v^{m-1}\). An arrow in \(G\) from \(v\) to \(w\) labelled \(a\) means that vertex \(v^{i}\) and vertex \(w^{i+a}\) are connected by an edge in \(G_{m}\), with all indices throughout the paper taken modulo \(m\).1 Note that we could also have drawn an arrow from \(w\) to \(v\) labeled \(-a\) and produced the same edges in the lift. If vertex \(v\) is incident with a loop labeled \(a\) in \(G\), then in \(G_{m}\), vertices \(v^{i}\) and \(v^{i+a}\) are adjacent. Figure 2 shows the standard drawing of the Heawood graph, the \((3,6)\)-cage, as a \(\mathbb{Z}_{7}\) lift of a voltage graph on two vertices.
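Concretely, the derived graph is a few lines of Python. The sketch below (reusing `bfs_girth` from the sketch in Section 2) represents a voltage graph as a list of arcs \((v,w,a)\) and rebuilds the Heawood graph; since Figure 2 itself is not reproduced in this text, we assume the standard two-vertex voltage description with three parallel arcs of voltages \(0\), \(1\), \(3\) (a perfect difference set mod \(7\)).

```python
def lift(arcs, m):
    """Z_m lift (derived graph) of a voltage graph.  `arcs` lists triples
    (v, w, a): an arrow from v to w with voltage a, so vertex (v, i) is
    joined to (w, (i + a) % m) for every i in Z_m."""
    adj = {}
    for v, w, _ in arcs:
        for u in (v, w):
            for i in range(m):
                adj.setdefault((u, i), set())
    for v, w, a in arcs:
        for i in range(m):
            adj[(v, i)].add((w, (i + a) % m))
            adj[(w, (i + a) % m)].add((v, i))
    return adj

# Heawood graph: two vertices p ("point") and l ("line") joined by arcs
# with voltages 0, 1, 3, lifted over Z_7 (our assumed reading of Figure 2).
heawood = lift([("p", "l", 0), ("p", "l", 1), ("p", "l", 3)], 7)
print(len(heawood), bfs_girth(heawood))   # expected: 14 6
```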
Footnote 1: Note that often, elements of a single orbit of elements in the lift graph are labeled with subscripts, but here, we are using superscripts to label the elements in the orbit, and reserving subscripts to label vertices in the voltage graph. See Figure 3 for an example of this indexing.

Figure 2: The Heawood graph, the \((3,6)\)-cage, may be drawn with 7-fold rotational symmetry as on the right. It can be drawn naturally as a \(\mathbb{Z}_{7}\) lift of the voltage graph shown to the left; \(\mathbb{Z}_{7}\)-orbits are indicated with color, and the 0th element of each symmetry class is shown larger/thicker.

The collection of directed edges and loops in a voltage graph, along with their labels, is called a _voltage assignment_. Similarly, if \(S\) is a subgraph of \(G\), the voltage assignment \(v(S)\) is the set of edges and labels in \(S\). Throughout the paper, an unlabeled edge in a voltage graph is assumed to have voltage assignment \(0\), which we also draw as undirected. A final modification of the voltage graph construction, which as far as we know is new to this paper, is the notion of a _pinned vertex_, which is a special vertex of degree 1 in the voltage graph. We indicate this construction in the voltage graph using the symbol \(*\) (also indicated with \(\square\) in figures). Specifically, a pinned vertex \(v^{*}\) joined by an edge to a vertex \(w\) in a voltage graph over \(\mathbb{Z}_{m}\) indicates that in the lift graph there is a single vertex labeled \(v^{*}\), which is connected to each of the vertices \(w^{i}\), \(i=0,\ldots,m-1\). It follows that in a derived graph, when the voltage group is \(\mathbb{Z}_{m}\), the degree of a pinned vertex is \(m\). Figure 3 shows an example of a voltage graph with two pinned vertices and the corresponding derived graph over \(\mathbb{Z}_{3}\).

Figure 3: An example of a voltage graph with pinned vertices. In this and subsequent figures, each unpinned vertex \(v\) in the voltage graph becomes the collection of vertices \(v^{0},v^{1},\ldots,v^{m-1}\) in the lift, whereas pinned vertices \(x^{*}\) and \(y^{*}\) lift to single vertices.

**Observation 1**.: _Given a voltage graph \(G\) with voltage group \(\mathbb{Z}_{m}\) and a collection of pinned vertices \(v_{1}^{*},\ldots,v_{s}^{*}\), in which all vertices have degree \(r\) except the pinned vertices, which are of degree 1, the derived graph \(G_{m}\) is an \((r,m)\)-biregular graph with \(s\) vertices of degree \(m\)._

In what follows, we analyze the girth of graphs derived from voltage graphs with pinned vertices but no loops. We begin with the following well-known result (see, e.g., [14, §2.1.3–4]). A _non-reversing closed walk_ in a voltage graph is a closed walk in which no edge is repeated consecutively, although the same edge or vertex might be traversed more than once later in the walk. Note that constructing the (directed) walk in the voltage graph may require reversing some of the displayed arrows in the voltage graph (and consequently negating the labels) in order to have the walk have a consistent direction. For example, the red walk in Figure 7(a) repeats a vertex, and the walk in Figure 7(c) repeats an edge, but not consecutively.

**Lemma 5**.: _If the sum of the labels in a non-reversing closed walk \(W\) in a voltage graph with voltage group \(\mathbb{Z}_{m}\) is congruent to \(0\bmod m\), and no smaller sub-walk has voltage sum congruent to \(0\bmod m\), then the lift of \(W\) forms a cycle._

Proof.: This follows from [14, §2.1.3–4].
It is clear that the lift is a closed walk, since the starting and ending vertices in \(G_{m}\) have the same index. To see that it is a cycle, suppose we begin the walk \(W\) at some vertex \(v\), and suppose the lift walk had also intersected itself at some vertex \(w^{j}\) (in \(G_{m}\)). That would mean that in the voltage graph, the vertex \(w\) was used twice in the voltage walk, and moreover, the voltages along the part of the walk between those encounters with \(w\) summed to \(0\). This contradicts the assumption that no sub-walk had net voltage \(0\).

Of particular utility to us is that if the sum \(s\) of the labels along a cycle in the voltage graph divides \(m\), so that \(m=sd\), then the closed walk formed by going \(d\) times around the cycle in the voltage graph lifts to a cycle in the derived graph. However, there are other closed walks in the voltage graph that can also lift to short cycles, which we discuss in subsection 3.1.

When we are considering voltage graphs with pinned vertices, however, there are additional cycles in the lift graph which pass through the pinned vertices that do not come from lifting closed non-reversing walks in the voltage graph whose voltages sum to \(0\). See Figure 4 for a collection of specific examples, and Figures 5 and 6 for general schematics.

Figure 4: Examples of various walks in a voltage graph and their \(\mathbb{Z}_{3}\) lifts. Note that traveling forward by \(2\) is the same as traveling backwards by \(1\) when \(m=3\).

**Lemma 6**.: _Let \(P\) be a path of length \(\ell\) in a voltage graph \(G\) that begins at a pinned vertex \(x^{*}\) and ends at a (different) pinned vertex \(y^{*}\). Then the lift graph contains a cycle of length \(2\ell\) passing through \(x^{*}\) and \(y^{*}\)._

Proof.: Assume that in \(P\), \(x^{*}\) is adjacent to \(x\) and \(y^{*}\) is adjacent to \(y\), and let \(P^{\prime}\) be the subpath of \(P\) in \(G\) that starts at \(x\) and ends at \(y\). Construct two lifts of path \(P^{\prime}\): let \((P^{\prime})^{0}\) start at \(x^{0}\) and end at some \(y^{j}\); then \((P^{\prime})^{1}\) will start at \(x^{1}\) and end at \(y^{j+1}\). By construction, these paths are disjoint. Extending these paths back to \(x^{*}\) and \(y^{*}\) and joining them results in a cycle of length \(2\ell\). (See Figure 5.)

Figure 5: A schematic illustrating that a path in \(G\) of length \(\ell\) between pinned vertices lifts to a cycle in \(G_{m}\) of length \(2\ell\).

**Lemma 7**.: _If \(H\) is a "lollipop" subgraph in a voltage graph \(G\) composed of a path of length \(p\) from a pinned vertex \(x^{*}\) to a non-pinned vertex \(v\) and a cycle of length \(q\) containing \(v\), such that the sum of the non-zero voltages along the cycle is not congruent to \(0\pmod{m}\), then \(H\) lifts to a cycle of length \(2p+q\) in the derived graph \(G_{m}\)._

Proof.: Suppose the sum of the voltages along the path \((x^{*},x,\cdots,v)\) equals \(A\) and the sum of the voltages along the cycle equals \(B\). In the lift, first travel from \(x^{*}\) to \(x^{0}\) to \(v^{A}\) via the voltage instructions on the path in \(G\). Next, travel along the lift of the cycle from \(v^{A}\) to \(v^{A+B}\), which is different from \(v^{A}\) since \(B\not\equiv 0\bmod m\). By [14, Theorem 2.1.2] the lift of this cycle is a path in \(G_{m}\). Finally, travel backwards along the path from \(v^{A+B}\) to \(x^{B}\) (subtracting a total of \(A\) voltages) and then to \(x^{*}\). Since \(B\not\equiv 0\bmod m\), \(x^{B}\) is different from \(x^{0}\), so this closed walk does not self-intersect except at \(x^{*}\) and thus is a cycle. (See Figure 6.)

Figure 6: A schematic illustrating that a lollipop in \(\mathcal{G}\) with path of length \(p\) and cycle of length \(q\) between pinned vertices lifts to a cycle of length \(2p+q\).
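Pinned vertices, and quick numerical checks of Lemmas 6 and 7, can be added to the sketch from earlier in this section as follows; the two toy voltage graphs below are our own minimal examples, chosen only to exercise the two lemmas.

```python
def lift_with_pins(arcs, pins, m):
    """Z_m lift with pinned vertices: each pair (p, w) in `pins` adds a
    single vertex p joined to all m lifts (w, i), so that deg(p) = m."""
    adj = lift(arcs, m)
    for p, w in pins:
        adj[p] = {(w, i) for i in range(m)}
        for i in range(m):
            adj[(w, i)].add(p)
    return adj

# Lemma 6: the path x* - x - y - y* of length 3 (voltage 0 on x - y)
# lifts to a cycle of length 2 * 3 = 6.
G = lift_with_pins([("x", "y", 0)], [("x*", "x"), ("y*", "y")], 5)
print(bfs_girth(G))   # expected: 6

# Lemma 7: a lollipop with path length p = 1 (x* - x) and a 2-cycle of
# length q = 2 (parallel arcs x => u with voltages 0 and 1, so B = 1)
# lifts to a cycle of length 2p + q = 4.
G = lift_with_pins([("x", "u", 0), ("x", "u", 1)], [("x*", "x")], 5)
print(bfs_girth(G))   # expected: 4
```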
### Non-cycle closed walks that lift to short cycles

In subsequent sections, we are interested in graphs of girths 6, 8, 10, and 12 that are formed by adding non-zero labeled arcs to the leaves of trees of particular heights. To analyze the girth, we need to analyze short closed walks in the voltage graph that lift to short cycles. In the previous section, we looked at cycles, lollipop walks, and paths between pinned vertices that lift to cycles. Here, we analyze possible non-cycle closed walks formed by joining cycles whose voltage sums we know. Some example walks are shown in Figure 7; the context is that these are assumed to be parts of some voltage graph over \(\mathbb{Z}_{m}\).

Figure 7: Examples of short closed walks formed by traversing joined cycles in various ways.

First, note that joining two short cycles at a vertex results in a short walk around both cycles whose length is the sum of the lengths of the cycles. Figure 7(a) shows an example walk (following the red arrows) where the sums around the cycles are the same, caused by going forward around the first cycle (in the direction of the arcs in the cycle) and going backwards along the second cycle (opposite the direction of the arcs in the cycle). Joining two cycles with a path leads to the construction of a non-reversing closed walk formed by traversing the first cycle, going along the path, going around the second cycle, and going back along the path to the starting point (Figure 7(c)). If we think about joining two 4-cycles along a shared edge, we can construct a closed walk by going down the shared edge, around the first cycle, back down the shared edge, and around the second cycle. This is a non-reversing walk of length 8 as well.

There are limited ways to construct non-cycle closed walks of lengths 6, 8, or 10. It suffices to look at the following types of joined cycles; if these types of joined cycles exist in the graph, then the voltage sums are analyzed separately along the walks.

**Observation 2**.: _Non-cycle, non-reversing, non-lollipop closed walks of length 6, 8, or 10 in bipartite graphs have as their underlying structure the following forms:_

* _A 4-cycle and a 2-cycle joined at a vertex may lift to a 6-cycle;_
* _A 4-cycle and a 2-cycle joined at a shared edge may lift to a 6-cycle;_
* _pairs of cycles of length 4 joined at a vertex may lift to an 8-cycle;_
* _pairs of cycles of length 4 joined at a shared edge may lift to an 8-cycle;_
* _a 4-cycle and a 6-cycle joined at a vertex may lift to a 10-cycle;_
* _two 4-cycles joined by a path of length 1 may lift to a 10-cycle;_
* _a 4-cycle and a 6-cycle joined at a shared edge may lift to a 10-cycle._

Other walks formed by going along cycles joined by shared edges or along cycles joined by paths have lengths larger than 10. Note that the voltage graphs we consider later in this work that potentially have girth more than 6 do not have 2-cycles.

In the rest of the paper we will construct families of semicubic graphs of girth \(g\in\{6,8,10,12\}\). To obtain these graphs, we begin by describing two different families of voltage graphs called \(\mathcal{G}_{4t}\), \(t\geq 2\), and \(\mathcal{G}_{4t+2}\) for \(t\geq 1\).
We use these graphs for \(t\in\{2,3\}\) in the first case and for \(t\in\{1,2\}\) in the second case to obtain families of semicubic graphs with girth \(g=4t\) and \(g=4t+2\).

## 4 A family of semicubic graphs of girth \(4t+2\)

In this section we will use a family of voltage graphs \(\mathcal{G}_{4t+2}\) to construct a family of \((\{3,m\};4t+2)\)-biregular graphs called \((\mathcal{G}_{4t+2};m)\). First of all, we describe the construction of one of the graphs of this family, which we will call \(G_{4t+2}\). This voltage graph begins with two trees \(X_{t}\) and \(Y_{t}\), which are identical except for their name. Graph \(X_{t}\) begins with a vertex \(x^{*}\). We join this vertex to a single vertex \(x\) and construct a binary tree from \(x\). Vertex \(x\) has two children, \(x_{0}\) and \(x_{1}\); vertex \(x_{i}\), \(i=0,1\) has two children \(x_{i0}\) and \(x_{i1}\), and in general, for each given bit string \(b\) of length \(\ell\), \(1\leq\ell\leq 2t-2\), vertex \(x_{b}\) has children \(x_{b0}\) and \(x_{b1}\). Including \(x^{*}\) and \(x\), the total height of this tree is \(2t\). Next, we delete the left child \(x_{a0}\) of vertex \(x_{a}\), where \(a=\underbrace{0\cdots 0}_{t-1}\), together with all of its descendants; that is, we delete all the vertices whose bit strings are of the form \(\underbrace{0\cdots 0}_{t}\ell\) for a (possibly empty) bit string \(\ell\). For example, when \(t=2\), we delete \(x_{00}\) and its two children, where \(x_{a}=x_{0}\) has a bit string of length \(1\). The remaining leaves of the tree have bit strings of length \(2t-1\). Observe that by construction, the distance from \(x^{*}\) to \(x_{a}\) is \(t\) and the distance from \(x_{a}\) to the level of the leaves is also \(t\). The vertex \(x^{*}\) and the \(2^{2t-1}-2^{t-1}=2^{t-1}(2^{t}-1)\) remaining leaves all have degree \(1\); the remaining internal vertices have degree \(3\), except \(x_{a}\), which now has degree \(2\). This pruned tree is called \(X_{t}\). To continue, we take a copy of \(X_{t}\), flip it vertically, change the labels from \(x\) to \(y\), and call it \(Y_{t}\). Finally, we join the trees \(X_{t}\) and \(Y_{t}\) using a zero-labeled edge between \(x_{a}\) and \(y_{a}\), which gives \(x_{a}\) and \(y_{a}\) degree \(3\). This tree is called \(T_{4t+2}\), where the "\(4t+2\)" indicates the height of the tree. See Figure 8.

Next, we construct a family of voltage graphs \(\mathcal{G}_{4t+2}\) for \(t=1,2\) by adding edge-labeled arcs to \(T_{4t+2}\) between the leaves of \(X_{t}\) and \(Y_{t}\) in such a way that each leaf (excluding \(x^{*}\) and \(y^{*}\)) is incident with one edge of \(T_{4t+2}\) and two introduced arcs; that is, so that all the vertices, except the pinned vertices \(x^{*}\) and \(y^{*}\), have degree \(3\). A particular element in this family (a particular assignment of arcs and voltages) will be denoted \(G_{4t+2}\). Note that this family contains voltage graphs with different arc assignments, and, for each arc assignment, different labels. For a specific choice of \(G_{4t+2}\) (that is, a particular choice of arcs and labels), see Figure 10.

We let \((G_{4t+2},m)\) denote the \(\mathbb{Z}_{m}\) lift of a graph \(G_{4t+2}\). The roots \(x^{*}\) and \(y^{*}\) in \(G_{4t+2}\) are pinned vertices \(x^{*}\) and \(y^{*}\) in \((G_{4t+2};m)\), and they have degree \(m\). The graph \((G_{4t+2};m)\) has \(2+2m(\sum_{i=0}^{2t-1}2^{i}-\sum_{i=0}^{t-1}2^{i})=2+2m(\sum_{i=t}^{2t-1}2^{i})\) vertices, found by counting the two pinned vertices \(x^{*}\) and \(y^{*}\), summing the vertices in the pruned binary trees \(X_{t}\) and \(Y_{t}\), and then multiplying those vertices by \(m\) from the lift.

By choosing certain arcs and certain voltage assignments in \((\mathcal{G}_{4t+2},m)\), we obtain specific families of \((\{3,m\};4t+2)\)-biregular graphs for \(t=1\) and \(t=2\). Recall that \(T_{4t+2}\) is a voltage tree obtained by joining the binary trees \(X_{t}\) and \(Y_{t}\) by the edge \((x_{a}y_{a})\), and all the edges, including \((x_{a}y_{a})\), have voltage assignments equal to \(0\). Let \((T_{4t+2};m)\) be the derived graph of \(T_{4t+2}\), with \(x^{*}\) and \(y^{*}\) as pinned vertices. Notice that this derived graph has the same order as \((G_{4t+2};m)\), that is, \(n(T_{4t+2};m)=n(G_{4t+2};m)=2+2m(\sum_{j=t}^{2t-1}2^{j})\).

Figure 8: The trees \(X_{2}\) and \(T_{4t+2}\), for \(t=2\)
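The pruned trees and \(T_{4t+2}\) are straightforward to generate programmatically. In the sketch below (reusing the lift helpers from Section 3), vertices are named by prefixing their bit strings with `"x"` or `"y"`, with `"x"` itself playing the role of the vertex \(x\); the naming scheme is our own. The final lines check the vertex count above and the girth claimed in Lemma 8 below for \(t=2\).

```python
def tree_arcs(t, prefix):
    """Voltage-0 edges of the pruned tree X_t: a binary tree whose leaves
    have bit strings of length 2t - 1, minus the subtree rooted at the
    left child of x_a (every bit string beginning with t zeros)."""
    arcs, dead, stack = [], "0" * t, [""]
    while stack:
        b = stack.pop()
        for c in "01":
            child = b + c
            if len(child) <= 2 * t - 1 and not child.startswith(dead):
                arcs.append((prefix + b, prefix + child, 0))
                stack.append(child)
    return arcs

def T_arcs_and_pins(t):
    """The voltage tree T_{4t+2}: X_t and Y_t joined by x_a -- y_a for
    a = 0^(t-1), with pinned roots x* and y*."""
    a = "0" * (t - 1)
    arcs = tree_arcs(t, "x") + tree_arcs(t, "y") + [("x" + a, "y" + a, 0)]
    return arcs, [("x*", "x"), ("y*", "y")]

t, m = 2, 5
arcs, pins = T_arcs_and_pins(t)
G = lift_with_pins(arcs, pins, m)
print(len(G) == 2 + 2 * m * sum(2**i for i in range(t, 2 * t)))  # True
print(bfs_girth(G))   # expected: 4t + 2 = 10, anticipating Lemma 8
```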
By choosing certain arcs and certain voltage assignments in \((\mathcal{G}_{4t+2},m)\), we obtain specific families of \((\{3;m\};4t+2)\)-biregular graphs for \(t=1\) and \(t=2\). Recall that \(T_{4t+2}\) is a voltage tree obtained by joining the binary trees \(X_{t}\) and \(Y_{t}\) by the edge \((x_{a}y_{a})\), and all the edges, including \((x_{a}y_{a})\), have voltage assignments equal to \(0\). Let \((T_{4t+2};m)\) be the derived graph of \(T_{4t+2}\), with \(x^{*}\) and \(y^{*}\) as pinned vertices. Notice that this derived graph has the same order as \((G_{4t+2};m)\), that is, \(n(T_{4t+2};m)=n(G_{4t+2};m)=2+2m(\sum_{j=t}^{2t-1}2^{j})\). Figure 8: The trees \(X_{2}\) and \(T_{4t+2}\), for \(t=2\) Applying Lemma 6 we can conclude the following: **Lemma 8**.: \((T_{4t+2};m)\) _has girth \(4t+2\)._ Proof.: By construction, since the distance in \(T_{4t+2}\) between \(x^{*}\) and \(x_{a}\) is \(t\), there exists a path \(P=(x^{*},\ldots,x_{a},y_{a},\ldots,y^{*})\) of length \(2t+1\) between the two roots in the voltage graph \(T_{4t+2}\). This path lifts to a cycle of length \(4t+2\) in \((T_{4t+2};m)\). No other shorter paths in \(T_{4t+2}\) lift to cycles, since \(T_{4t+2}\) is a tree. Therefore, the girth of \((T_{4t+2};m)\) is equal to \(4t+2\). Figure 4 shows an example of such a cycle when \(t=1\). **Lemma 9**.: _The graph \((G_{4t+2};m)\) has girth at most \(4t+2\)._ Proof.: The graph \((T_{4t+2},m)\) is a subgraph of \((G_{4t+2},m)\). **Lemma 10**.: _The graph \(\mathcal{G}_{4t+2}\) is bipartite._ Proof.: Let \(\mathcal{X}_{j}\) be the set of vertices of the pruned binary tree \(X_{t}\) contained in the voltage graph \(G_{4t+2}\) at distance \(j\) from \(x^{*}\): \(\mathcal{X}_{0}=x^{*}\), \(\mathcal{X}_{1}=x\), \(\mathcal{X}_{2}=\{x_{0},x_{1}\}\), \(\mathcal{X}_{3}=\{x_{00},x_{01},x_{10},x_{11}\}\) and so on. Analogously, let \(\mathcal{Y}_{j}\) be the set of vertices of the pruned binary tree \(Y_{t}\) at distance \(j\) of \(y^{*}\). Observe that \(B_{\mathcal{X}^{\prime}}=\{\mathcal{X}_{0},\mathcal{X}_{2},...,\mathcal{X}_{ 2t}\}\) and \(B_{\mathcal{X}^{\prime\prime}}=\{\mathcal{X}_{1},\mathcal{X}_{3},...,\mathcal{ X}_{2t-1}\}\) are two disjoint subsets of vertices of \(X_{t}\) whose union is \(V(X_{t})\). Analogously, \(B_{\mathcal{Y}^{\prime}}\) and \(B_{\mathcal{Y}^{\prime\prime}}\) are two disjoint subsets of the vertices of \(Y_{t}\) which union is \(V(Y_{t})\). The bipartite classes are \(\mathcal{B}_{1}=B_{\mathcal{X}^{\prime}}\cup B_{\mathcal{Y}^{\prime\prime}}\) and \(\mathcal{B}_{2}=B_{\mathcal{X}^{\prime\prime}}\cup B_{\mathcal{Y}^{\prime}}\). Consequently, by Lemma 10, we conclude that all derived graphs \((G_{4t+2};m)\) in the family of graphs \((\mathcal{G}_{4t+2},m)\) are also bipartite graphs. ## 5 A family of semicubic cages of girth \(6\) In this section, we obtain a family of semicubic cages of girth \(6\), by introducing a family of voltage graphs \(G_{6}\), shown in Figure 9, for which the \((G_{6},m)\)-graphs attain the lower bound given in [15]. If the voltage assignments of the graphs as described in Theorem 11 are \(\alpha=1\) and \(\beta=m-1\) we obtain the graphs given in [15]. **Theorem 11**.: _The family of graphs \((G_{6},m)\) given by the family of voltage graphs \(G_{6}\) with voltage assignments \((x_{0}y_{0})=\alpha\), \((x_{0}y_{0})=\beta\), \(\alpha\neq\beta\), \(\alpha\neq 0\) and \(\beta\neq 0\), and \(2(\alpha-\beta)\not\equiv 0\mod m\) for \(m\geq 3\) has girth \(6\). 
Moreover, it is a \((\{3,m\};6)\)-graph with two vertices of degree \(m\) and \(4m\) vertices of degree \(3\)._ Proof.: We analyze the following subgraphs of the voltage graph: paths that contain two pinned vertices, "lollipop" walks containing a single cycle joined to a path, cycles in \(G_{6}\) without pinned vertices, and non-reversing non-cycle closed walks whose total length is \(6\). 1. A minimum path between \(x^{*}\) and \(y^{*}\) in the voltage graph \(G_{6}\) is the path \((x^{*}xyy^{*})\), which lifts to the \(6\)-cycle \((x^{*}x^{0}y^{0}y^{*}y^{1}x^{1}x^{*})\), so the girth is at most \(6\). 2. All the lollipops in \(G_{6}\) with some edges with non-zero voltage assignments are essentially of two types: one has a path of length 1 and one 4-cycle, like \((x^{*},x,y,y_{0},x_{0},x,x^{*})\), and one has a path of length 2 with a 2-cycle, like \((x^{*},x,x_{0},y_{0},x_{0},x,x^{*})\). By Lemma 7 both induce cycles of length at least \(6\) in \((G_{6};m)\). 3. In considering the behavior of lifts of cycles without pinned vertices, note that since the graphs in \((\mathcal{G}_{6};m)\) are bipartite, it is sufficient to show that every cycle of length 2 or 4 in \(G_{6}\) lifts to a cycle of length at least 6 in \((G_{6};m)\). Inspection shows that there are 2 different 4-cycles, one of the form \((x,\underbrace{x_{0},y_{0}}_{\alpha},y,x)\) and one of the form \((x,\underbrace{x_{0},y_{0}}_{\beta},y,x)\), where the bracing indicates the voltage label on the directed edge \((x_{0},y_{0})\) (and which arrow we are choosing). The walk formed by traversing this cycle once lifts to a 4-cycle only when \(\alpha\equiv 0\bmod m\) or \(\beta\equiv 0\bmod m\). There is a single 2-cycle, using both directed arrows (reversing one): the closed walk \((x_{0},y_{0},x_{0})\) that follows the \(\alpha\)-arc forwards and the \(\beta\)-arc backwards, with voltage sum \(\alpha-\beta\) (note that we negate the voltage since we are traveling backwards along the arrow). Once around this 2-cycle lifts to a 2-cycle when \(\alpha\equiv\beta\), and twice around the 2-cycle lifts to a 4-cycle when \(2(\alpha-\beta)\equiv 0\bmod m\), since in these cases the voltage sum for the closed walk equals 0. 4. The final option for walks that could lift to cycles consists of cycles joined by vertices or paths, or that share an edge or path. However, since the only available cycles in \(G_{6}\) are 2-cycles or 4-cycles, such walks are all of length greater than or equal to 6, so the lifts of these walks do not decrease the girth. The order of the graph for \(m\geq 3\) is equal to \(4m+2\). This family is isomorphic to the family constructed in [3], and it also appears in [4] in the general Theorem for \(g\equiv 2\pmod{4}\). ## 6 A family of semicubic graphs of girth \(10\) In this section, we will give a voltage assignment for a particular voltage graph \(G_{10}\) in \(\mathcal{G}_{4t+2}\) for \(t=2\), shown in Figure 10, and we will prove that its \(\mathbb{Z}_{m}\) lifts form a family of semicubic graphs of girth \(10\). The arcs between leaves of \(\mathcal{G}_{10}\) and the voltage assignments on those arcs that produce the voltage graph \(G_{10}\) are given in Table 1.
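The girth claim of Theorem 12 below can be checked numerically before proving it. The sketch here reuses the `lift`, `girth`, and `tree_edges` helpers from the sketch in Section 4, and the arcs follow Table 1 as corrected below (including the two arcs out of \(x_{110}\)); the expected output, per Theorem 12, is girth \(10\) for every \(m\geq 4\) except \(m=6\), where a doubled \(4\)-cycle yields an \(8\)-cycle.

```python
# The twelve labeled arcs of G_10 between the leaves of T_10 (Table 1).
arcs = [("x010", "y010", 1), ("x010", "y111", 2), ("x011", "y011", 2),
        ("x011", "y110", 1), ("x100", "y010", 2), ("x100", "y101", 1),
        ("x101", "y011", 1), ("x101", "y100", 2), ("x110", "y110", 2),
        ("x110", "y111", 1), ("x111", "y100", 3), ("x111", "y101", 0)]
G10 = tree_edges("x", 2) + tree_edges("y", 2) + [("x0", "y0", 0)] + arcs
for m in range(4, 10):
    print(m, girth(lift(G10, {"x*", "y*"}, m)))  # girth 10, except 8 at m = 6
```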
\begin{table} \begin{tabular}{c|c c c c c c c c c c c c} Starting leaf & \(x_{010}\) & \(x_{010}\) & \(x_{011}\) & \(x_{011}\) & \(x_{100}\) & \(x_{100}\) & \(x_{101}\) & \(x_{101}\) & \(x_{110}\) & \(x_{110}\) & \(x_{111}\) & \(x_{111}\) \\ Ending leaf & \(y_{010}\) & \(y_{111}\) & \(y_{011}\) & \(y_{110}\) & \(y_{010}\) & \(y_{101}\) & \(y_{011}\) & \(y_{100}\) & \(y_{110}\) & \(y_{111}\) & \(y_{100}\) & \(y_{101}\) \\ Voltage & \(1\) & \(2\) & \(2\) & \(1\) & \(2\) & \(1\) & \(1\) & \(2\) & \(2\) & \(1\) & \(3\) & \(0\) \\ \end{tabular} \end{table} Table 1: Labelled arcs between leaves of \(\mathcal{G}_{10}\) forming a voltage graph \(G_{10}\). The \(\mathbb{Z}_{m}\) lifts \((G_{10},m)\) have girth \(10\) for \(m\neq 1,2,3,6\). Figure 10: The voltage graph \(G_{10}\). Unlabeled edges have voltage \(0\). Edges with the same voltage assignment are colored the same color for convenience. The two \(4\)-cycles in the graph are highlighted in yellow. **Theorem 12**.: _The derived graphs \((G_{10},m)\) associated with the voltage graph \(G_{10}\) (shown in Figure 10) have girth 10 for all \(m\geq 4\) except for \(m=6\). Moreover, the graphs \((G_{10},m)\) are \((\{3,m\};10)\)-graphs with \(2\) vertices of degree \(m\) and \(24m\) vertices of degree \(3\)._ Proof.: It suffices to consider the sums of voltages on non-reversing walks in the voltage graph \(G_{10}\); a walk lifts to a cycle in \((G_{10},m)\) if and only if the sum of the voltages along the walk is congruent to \(0\pmod{m}\). Observe that by Lemma 9, the girth of \((G_{10},m)\) is at most \(10\). To show that the girth is equal to \(10\), we need to show that there are no cycles of length \(2\), \(4\), \(6\), or \(8\) (there are no odd cycles because \(G_{10}\) is bipartite, by Lemma 10). We consider the following walks to determine if they lift to cycles: (1) cycles; (2) walks formed by combining cycles; (3) lollipops. 1. There are no non-reversing closed walks of length \(2\) in \(G_{10}\); the only such walk would be going along an edge and immediately returning back along the same edge, which is a reversing walk. 2. Analysis in Mathematica [18] of the voltage graph \(G_{10}\) (with all edges doubled, to allow for travel in either direction along the edges) gives \(82\) cycles of length \(4\), \(6\), or \(8\). Summing the voltages along all these cycles results in voltage sums in the set \(\{-1,-2,-3,1,2,3\}\). Since \(m>3\), one time around each of these individual cycles lifts to a path, not a closed cycle, since the voltage sums are not congruent to \(0\mod m\). However, an inspection of the two \(4\)-cycles shows that twice around the \(4\)-cycle \((x_{111},y_{100},y_{10},y_{101},x_{111})\), which has voltage sum \(3\), lifts to an \(8\)-cycle when \(m=6\), since \(2\cdot 3\equiv 0\pmod{6}\). Thus, we conclude that \((G_{10},6)\) has girth at most \(8\), so \(m=6\) is excluded. 3. Now we consider walks that are not themselves cycles, formed by joining cycles at vertices, edges, or paths, or by joining two cycles by a path. There are no such walks of length \(4\) (all walks of length \(4\) in \(G_{10}\) are cycles). Following the discussion in Observation 2, note that walks of length \(6\) must be formed by joining \(2\)-cycles and \(4\)-cycles, but there are no \(2\)-cycles. Closed walks of length \(8\) whose voltages sum to \(0\) could be formed by joining two \(4\)-cycles at a vertex, or by joining a \(4\)-cycle and a \(4\)-cycle along a (possibly non-zero-labeled) edge (see Figure 7). However, there are only two \(4\)-cycles in \(G_{10}\) and they are disjoint (see Figure 10, where the two \(4\)-cycles are highlighted in yellow), so there are no such walks.
Joining disjoint cycles with a path, as in Figure 6(c), results in even longer cycles in the lift. 4. Finally, we consider lollipop walks formed by joining a pinned vertex to a cycle by a path. None of these walks lift to short cycles, because the pinned vertices are too far from the cycles. For example, a shortest path to a cycle is \((x^{*},x,x_{0})\) joined to the \(6\)-cycle \((x_{0},y_{0},y_{01},\overbrace{y_{010},x_{010}}^{-1},x_{01},x_{0})\), which will lift to a cycle of length \(10\). Similarly, a path of length \(4\) joined to a \(4\)-cycle will lift to a cycle of length \(12\). In all cases, as long as \(m\neq 1,2,3,6\), there are no short walks in \(G_{10}\) that lift to short cycles (with length at most \(8\)) in the derived graph \((G_{10},m)\), so the girth of \((G_{10},m)\) equals \(10\). As we said before, the tree used in Theorem 12 was also given in [4], and the order of the voltage graph \(G_{10}\) is \(26\), so the derived graph \((G_{10},m)\) gives us a family of \((\{3,m\};10)\)-graphs with \(24m+2\) vertices. Next, we describe a new voltage graph \(H_{10}\) whose order is \(22\) (see Figure 11). The derived graphs \((H_{10},m)\) produce a family of \((\{3,m\};10)\)-graphs with \(20m+2\) vertices, which obviously is an improvement. The derived graphs \((H_{10},m)\) have girth \(10\) for \(m\geq 6\). Figure 11: The voltage graph \(H_{10}\), with voltage group \(\mathbb{Z}_{m}\). Unlabeled edges all have voltage assignment \(0\), and the boxed vertices \(\boxed{x^{*}}\) and \(\boxed{y^{*}}\) are pinned vertices. Lifts of this graph are biregular graphs with two vertices of degree \(m\) and \(20m\) vertices of degree \(3\), which are all of girth \(10\) for \(m\geq 6\). The two \(4\)-cycles in the graph are highlighted. The graph \(H_{10}\) is formed by pruning the tree \(T_{10}\), by deleting the leaves \(x_{101}\) and \(y_{101}\) and their incident edges, and then adding an arc between \(x_{10}\) and \(y_{10}\), in addition to arcs between the remaining leaves, as in the construction of \(G_{10}\). **Theorem 13**.: _The graph \((H_{10};m)\) formed as a \(\mathbb{Z}_{m}\) lift of the voltage graph shown in Figure 11 has girth 10 for \(m\geq 6\). Moreover, the graphs \((H_{10},m)\) give us \((\{3,m\};10)\)-graphs with \(2\) vertices of degree \(m\) and \(20m\) vertices of degree \(3\)._ Proof.: By construction, a shortest path between the pinned vertices lifts to a 10-cycle, so the girth of \((H_{10},m)\) is at most 10. As before, to show that the girth is equal to 10 for \(m\geq 6\), we analyze short cycles, short non-cycle closed walks, and short lollipop walks in the voltage graph, and argue that none of these lifts to a cycle of length less than \(10\) in the derived graph. Again using Mathematica [18], we considered all possible circuits in the voltage graph with doubled, oriented edges. There are 94 cycles possible: two disjoint 4-cycles (which can be oriented in either direction), highlighted in Figure 11, and a collection of 90 6- or 8-cycles. (Since \(H_{10}\) is bipartite, there are no odd cycles, and all cycles of length 2 are reversing walks.) Summing the voltages along each of the cycles resulted in all of the voltage sums lying in the set \(\{-5,-4,-3,-2,-1,1,2,3,4,5\}\). Thus, for \(m\geq 6\), none of the walks formed by going once around a cycle has a voltage sum of 0, so none of these cycles directly lifts to a cycle in \((H_{10},m)\).
(It is perhaps interesting to note that the red edges, oriented appropriately, form one of the 8-cycles, but the sum of the voltages along that 8-cycle, like the sums of the voltages along the other 8-cycles, is not congruent to \(0\bmod m\) for \(m\geq 6\).) Next, we looked at the 4-cycles individually. The 4-cycle \((x_{101},y_{101},y_{10},x_{10},x_{101})\) has voltage sum 2 (or -2, depending on orientation), so twice around that 4-cycle lifts to an 8-cycle when \(m=4\). The other 4-cycle sums to 1 (or -1) and so only lifts to larger cycles. (Since twice around a 6-cycle would lift to a 12-cycle, there is no need to consider cycles longer than 4 in this case.) After that, we considered non-cycle closed walks which might lift to cycles, described in Observation 2. Since the two 4-cycles are disjoint, there are no 8-cycles that could be formed by traversing around those cycles in that way. All other joined cycles form walks that are at least length 10. Finally, we considered lollipop walks. Inspection of \(H_{10}\) shows that the shortest lollipop walks consist of a path of length 3 joined to a cycle of length 4 (e.g., \((x^{*},x,x_{1},x_{10})\) joined to the cycle \((x_{10},x_{101},y_{101},y_{10},x_{10})\), which lifts to a cycle of length \(2\cdot 3+4=10\)), or a path of length 2 joined to a cycle of length 6 (e.g., \((x^{*},x,x_{0})\) joined to \((x_{0},x_{00},x_{000},y_{000},y_{00},y_{0},x_{0})\), which lifts to a cycle of length \(2\cdot 2+6=10\)). Notice that the voltage assignment of \(G_{10}\) produces graphs of girth 10 for \(m\geq 4\) except \(m=6\), and for the voltage graph \(H_{10}\) the voltage assignment produces graphs of girth 10 for \(m\geq 6\). Thus, the only examples we know for \(m=4,5\) use \(G_{10}\). For \(m\geq 6\), while both \((G_{10},m)\) and \((H_{10},m)\) have girth 10, \((H_{10},m)\) has fewer vertices than \((G_{10},m)\). It is possible that other choices of arcs or other choices of labels could produce a variant of \(H_{10}\) that has girth \(10\) for smaller values of \(m\), but so far, such a construction has eluded us. It is also interesting to note that, as we said before, graphs with the same order as \((G_{10},m)\) and with \(T_{10}\) as base were also given in [4], but in that paper, the authors give general constructions for values of \(m\) that are very large in relation to \(3\). Also in that paper, they constructed a \((\{3,4\};10)\)-graph with \(82\) vertices, which is exactly the order of a graph \((H_{10},4)\). However, we have not yet found a voltage assignment for \(H_{10}\) that produces a girth \(10\) graph for \(m=4\). Moreover, the \((\{3,4\};10)\)-graph constructed in [4] has \(4\) vertices of degree \(4\) and \(78\) vertices of degree \(3\), so clearly it is not a graph in our family. ## 7 Families of semicubic graphs of girth \(4t\) In this section, as in the previous one, we will use a family of voltage graphs \({\cal G}_{4t}\) to construct a family of \((\{3,m\};4t)\)-biregular graphs called \((G_{4t};m)\). In this case, the voltage graph construction begins by constructing three pruned binary trees, each extended by connecting a pinned vertex to the root. We begin by constructing a new pruned binary tree \(X^{\prime}_{t}\), which has height \(2t-1\), including the vertices \(x^{*}\) and \(x\). As in the previous section, we begin with a binary tree rooted at a vertex \(x\), whose vertices are indexed by bit strings of length at most \(2t-2\). We extend the tree upwards by a single pinned vertex \(x^{*}\), which becomes the new root of the tree.
We identify the vertex \(x_{a}\) where \(a=\underbrace{0\cdots 0}_{t-1}\), which is at distance \(t\) from the root \(x^{*}\). We then delete all the children of \(x_{a}\), that is, all the vertices in the binary tree indexed by bit strings of length at least \(t\) that begin with \(t-1\) zeroes. Note that unlike the tree \(X_{t}\) from the previous section, \(X^{\prime}_{t}\) has height \(2t-1\) rather than \(2t\), and by construction, the distance from \(x^{*}\) to \(x_{a}\) is \(t\), and the distance from the level of \(x_{a}\) to the level of the other leaves is \(t-1\). Moreover, the vertex \(x_{a}\) is itself a leaf, unlike in the construction of \(X_{t}\), where it had degree \(2\). Notice that the bit strings of the leaves, except \(x_{a}\), have length \(2t-2\). We then take the pruned tree \(Y_{t-1}\) that was constructed in the previous section, which contains a vertex \(y_{a}\) where \(a=\underbrace{0\cdots 0}_{t-2}\) (when \(t=2\) we define \(y_{a}\) to be the vertex \(y\), indexed by the empty bitstring). In this tree, the distance from \(y^{*}\) to \(y_{a}\) is \(t-1\), and the distance from \(y_{a}\) to the level of the leaves of the tree is also \(t-1\). We define \(Z_{t-1}\) identically, changing the name only to keep track of the second copy. Here the bit strings of the leaves have lengths \(2t-3\). To complete the construction of the tree \(T_{4t}\), we join \(X^{\prime}_{t}\), \(Y_{t-1}\) and \(Z_{t-1}\) by adding edges \((x_{a},y_{a})\) and \((x_{a},z_{a})\) with label \(0\). An example for \(t=3\) is shown in Figure 12. The total number of vertices in \(T_{4t}\) is given by the following analysis: the number of vertices of \(X^{\prime}_{t}\), excluding the vertex \(x^{*}\), is \(\sum_{i=0}^{2t-2}2^{i}-\sum_{i=1}^{t-1}2^{i}=1+\sum_{i=t}^{2t-2}2^{i}=1+2^{2t-2}+\sum_{i=t}^{2t-3}2^{i}\); the number of vertices of \(Y_{t-1}\) (and also \(Z_{t-1}\)) is \(\sum_{i=0}^{2t-3}2^{i}-\sum_{i=0}^{t-2}2^{i}=\sum_{i=t-1}^{2t-3}2^{i}\). Thus, the total number of vertices of the derived graph is \(3+m\{1+2^{2t-2}+2^{t}+3\sum_{i=t}^{2t-3}2^{i}\}\), where \(3\) of the vertices are the pinned vertices \(x^{*}\), \(y^{*}\), \(z^{*}\). As in the previous section, an element of the family \(\mathcal{G}_{4t}\) is any voltage graph formed by adding arcs and voltage assignments to \(T_{4t}\) so that all vertices other than the pinned vertices have degree \(3\); in this paper, we restrict our additions so that each leaf in \(X^{\prime}_{t}\) has one arc going to some leaf in \(Y_{t-1}\) and one arc going to some leaf in \(Z_{t-1}\). As usual, \((G_{4t},m)\) is the \(\mathbb{Z}_{m}\) lift of the voltage graph \(G_{4t}\). As in the previous section, we prove the following results. **Lemma 14**.: _The graph \((T_{4t};m)\) has girth \(4t\)._ Proof.: By construction, since the distance in \(X^{\prime}_{t}\) between \(x^{*}\) and \(x_{a}\) is \(t\), and the distance in \(Y_{t-1}\) between \(y^{*}\) and \(y_{a}\) is \(t-1\) (analogously in \(Z_{t-1}\) between \(z^{*}\) and \(z_{a}\)), there exists a minimal path \(P=(x^{*},\ldots,x_{a},y_{a},\ldots,y^{*})\) of length \(2t\) between the two pinned vertices \(x^{*}\) and \(y^{*}\) (and similarly a path between \(x^{*}\) and \(z^{*}\)). Also, there exists another minimal path \(P^{\prime}=(y^{*},\ldots,y_{a},x_{a},z_{a},\ldots,z^{*})\) between \(y^{*}\) and \(z^{*}\) of length \(2(t-1)+2=2t\). By Lemma 6, the paths \(P\) and \(P^{\prime}\) lift to cycles of length \(4t\) in \((T_{4t},m)\). No other shorter paths in \(T_{4t}\) lift to cycles, since \(T_{4t}\) is a tree.
Consequently, the girth of \((T_{4t};m)\) is \(4t\). As \((T_{4t};m)\) is a subgraph of \((G_{4t};m)\), as in the previous section, we have the following result: **Lemma 15**.: _The graph \((G_{4t};m)\) has girth at most \(4t\)._ Now, we will prove that: **Lemma 16**.: _Every voltage graph in the family \(\mathcal{G}_{4t}\) is bipartite._ Proof.: As in the proof of Lemma 10, let \(\mathcal{X}_{j}\) be the set of vertices of the pruned binary tree \(X^{\prime}_{t}\) contained in the voltage graph \(\mathcal{G}_{4t}\) at distance \(j\) from \(x^{*}\), and the same for the trees \(Y_{t-1}\) and \(Z_{t-1}\). Observe that \(B_{\mathcal{X}^{\prime}}=\{\mathcal{X}_{0},\mathcal{X}_{2},...,\mathcal{X}_{2t-2}\}\) and \(B_{\mathcal{X}^{\prime\prime}}=\{\mathcal{X}_{1},\mathcal{X}_{3},...,\mathcal{X}_{2t-1}\}\) are two disjoint subsets of vertices of \(X^{\prime}_{t}\) whose union is \(V(X^{\prime}_{t})\). Similarly, \(B_{\mathcal{Y}^{\prime}}=\{\mathcal{Y}_{0},\mathcal{Y}_{2},...,\mathcal{Y}_{2t-2}\}\) and \(B_{\mathcal{Y}^{\prime\prime}}=\{\mathcal{Y}_{1},\mathcal{Y}_{3},...,\mathcal{Y}_{2t-3}\}\) are two disjoint subsets of the vertices of \(Y_{t-1}\) whose union is \(V(Y_{t-1})\), and analogously for \(Z_{t-1}\). Thus, the bipartite classes of \(\mathcal{G}_{4t}\) are \(\mathcal{B}_{1}=B_{\mathcal{X}^{\prime}}\cup B_{\mathcal{Y}^{\prime}}\cup B_{\mathcal{Z}^{\prime}}\) and \(\mathcal{B}_{2}=B_{\mathcal{X}^{\prime\prime}}\cup B_{\mathcal{Y}^{\prime\prime}}\cup B_{\mathcal{Z}^{\prime\prime}}\). Figure 12: The trees \(X^{\prime}_{t}\), \(Y_{t-1}\), and the tree \(T_{4t}\), for \(t=3\). (The light gray subgraphs have been pruned.) #### 7.0.1 A family of semicubic graphs of girth \(8\) We will give a voltage assignment of \(G_{4t}\) for \(t=2\) and we will prove that we can obtain a family of semicubic graphs of girth \(8\). These graphs are a generalization of the graphs given in [3]; specifically, they are obtained if, in the following Theorem, we have that \(\alpha=3\), \(\beta=1\), \(\gamma=2\), and \(\delta=3\). Figure 13: An assignment of arcs and voltages \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) to \(\mathcal{G}_{8}\) that corresponds to a family of semi-regular graphs of girth \(8\), when \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) are constrained as in Theorem 17. **Theorem 17**.: _The graphs \((G_{8};m)\) given by the voltage graph \(G_{8}\) with voltage and arc assignment \((x_{10}y_{1})=\alpha\), \((x_{10}z_{1})=\beta\), \((x_{11}y_{1})=\gamma\) and \((x_{11}z_{1})=\delta\) with \(\alpha\neq\beta\neq 0\), \(\gamma\neq\delta\neq 0\) and \((\alpha-\beta)-(\gamma-\delta)\not\equiv 0\mod m\) have girth \(8\) for \(m\geq 3\). Moreover, each is a \((\{3,m\};8)\)-graph with three vertices of degree \(m\) and \(9m\) vertices of degree \(3\)._ Proof.: First, note that by Observation 2, since \(G_{8}\) has no \(2\)-cycles, there are no closed non-cycle non-reversing walks that lift to cycles of length \(4\) or \(6\). Thus, since \((G_{8};m)\) is bipartite, it is enough to show that every cycle of length \(4\) or \(6\) in \(G_{8}\) lifts to a cycle of length at least \(8\) in \((G_{8};m)\). We enumerated all (directed) cycles of length \(4\) and \(6\) in \(G_{8}\) directly, reversing arrows as needed, and summed their voltages: there are \(18\) such cycles, six \(4\)-cycles and twelve \(6\)-cycles. For example, \((x_{10},z_{1},x_{11},y_{1},x_{10})\) is a \(4\)-cycle with voltage sum \(\beta-\delta+\gamma-\alpha=-\alpha+\beta+\gamma-\delta\).
Summing the voltages in each cycle, we observed that the collection of possible voltage sums is the following set: \(\{-\alpha+\beta+\gamma-\delta,\alpha-\beta-\gamma+\delta,\delta-\beta,\gamma-\alpha,\beta-\delta,\alpha-\gamma,\gamma-\delta,\alpha-\beta,\delta-\gamma,\beta-\alpha,\delta,\gamma,\beta,\alpha,-\delta,-\beta,-\gamma,-\alpha\}\). Thus, if none of these sums is congruent to \(0\pmod{m}\) (in particular, if \(\alpha\neq\beta\neq 0\), \(\gamma\neq\delta\neq 0\), \(\alpha\not\equiv\gamma\) and \(\beta\not\equiv\delta\pmod{m}\), and \((\alpha-\beta)-(\gamma-\delta)\not\equiv 0\pmod{m}\)), then these short cycles do not lift to short cycles in \((G_{8},m)\). By Lemma 15, we have that the girth of \((G_{8},m)\) is at most \(8\). Observe that, for example, \((x^{*},x^{0},x^{0}_{0},y^{0},y^{*},y^{1},x^{1}_{0},x^{1},x^{*})\) is an \(8\)-cycle. Thus, since \((G_{8},m)\) has no cycles of length less than \(8\), the girth of \((G_{8},m)\) is exactly \(8\). In particular, \(\alpha=\delta=1\) and \(\beta=\gamma=2\) produces a family of graphs of girth \(8\). #### 7.0.2 Two families of semicubic graphs of girth \(12\) Now, we will give a voltage assignment to produce a graph \(G_{4t}\) for \(t=3\) that lifts to a family of semicubic graphs of girth \(12\). The additional arcs and voltages are listed in Table 2 and the graph itself is shown in Figure 15. These arc assignments and voltages were found by trial-and-error, with the assumption that each leaf-vertex in \(X^{\prime}_{3}\) needed to be connected to one vertex in \(Y_{2}\) and one vertex in \(Z_{2}\) in such a way as to avoid creating \(4\)-cycles. **Theorem 18**.: _The graph \((G_{12};m)\in\mathcal{G}_{4t}\) for \(t=3\) given by the voltage graph \(G_{12}\) formed by adding the arcs and voltage assignments to \(T_{12}\) given in Table 2 has girth \(12\) for \(m\geq 9\). Moreover, it is a \((\{3,m\};12)\)-graph with three vertices of degree \(m\) and \(49m\) vertices of degree \(3\)._ Proof.: Since \((G_{12};m)\) is bipartite, it is enough to show that every even circuit in the voltage graph \(G_{12}\) of length smaller than \(12\) lifts to a cycle of length at least \(12\) in \((G_{12};m)\). It is easy to find \(12\)-cycles in \((G_{12};m)\), so the girth is at most \(12\). As before, we analyzed all cycles of length at most \(10\) ("short cycles" in the voltage graph), using _Mathematica_ [18], considering the directed voltage graph in which all single-directed arcs are replaced with a pair of oppositely oriented arcs (labeling with the negative voltage for the oppositely oriented arc) to allow traveling along the arc in either direction. There are \(252\) short cycles that are not formed by going back and forth along a single edge. We summed the voltages along each (directed) cycle, and observed that the absolute values of the voltage sums along the cycles all lie in the set \(\{1,2,3,4,5,6,7,8\}\). Thus, when \(m\leq 8\), there is a short cycle in the voltage graph that lifts to a short cycle in the lift graph, dropping the girth for those values of \(m\), but for all other values of \(m\), no short cycle in the voltage graph lifts to a short cycle in \((G_{12};m)\). By construction of \(T_{12}\), all lollipop walks lift to long cycles. Next, we need to consider non-reversing non-cycle walks in the voltage graph that may lift to short cycles in the derived graph. From Observation 2, any such walk must involve at least one \(4\)-cycle. However, analysis of the cycles in \(G_{12}\) shows that, in fact, there are no \(4\)-cycles (by construction), so there are no short walks of this sort that lift to short cycles.
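Theorem 18 can also be confirmed numerically. The sketch below again reuses the `lift` and `girth` helpers from Section 4 together with the tree generator `tree_edges`, adds a generator for the tree \(X^{\prime}_{t}\) of this section, and uses the arcs of Table 2 as reconstructed just below (the column for \(x_{1011}\) is restored there by the degree count); per Theorem 18, the expected output is girth \(12\) for \(m\geq 9\) and a shorter girth at \(m=8\).

```python
def xprime_edges(p, t):
    """Zero-voltage edges of the pruned tree X'_t of this section: every
    proper descendant of x_a, a = 0^(t-1), is deleted."""
    es = [(p + "*", p, 0)]
    a = "0" * (t - 1)
    for level in range(2 * t - 2):
        for bits in product("01", repeat=level):
            s = "".join(bits)
            for c in "01":
                child = s + c
                if child.startswith(a) and len(child) > len(a):
                    continue  # pruned vertex
                es.append((p + s, p + child, 0))
    return es

# The arcs of G_12 (Table 2): one arc into Y_2 and one into Z_2 per X'-leaf.
y_arcs = [("x0100", "y010", 1), ("x0101", "y011", 2), ("x0110", "y100", 1),
          ("x0111", "y101", 2), ("x1000", "y010", 2), ("x1001", "y011", 1),
          ("x1010", "y110", 1), ("x1011", "y111", 2), ("x1100", "y100", 2),
          ("x1101", "y101", 1), ("x1110", "y110", 3), ("x1111", "y111", -1)]
z_arcs = [("x0100", "z110", 2), ("x0101", "z111", 1), ("x0110", "z100", -1),
          ("x0111", "z101", 3), ("x1000", "z010", -1), ("x1001", "z011", 3),
          ("x1010", "z100", 1), ("x1011", "z101", -1), ("x1100", "z010", -3),
          ("x1101", "z011", -2), ("x1110", "z110", -1), ("x1111", "z111", 2)]
G12 = (xprime_edges("x", 3) + tree_edges("y", 2) + tree_edges("z", 2)
       + [("x00", "y0", 0), ("x00", "z0", 0)] + y_arcs + z_arcs)
for m in (8, 9, 10):
    print(m, girth(lift(G12, {"x*", "y*", "z*"}, m)))  # below 12 at m = 8; 12 for m >= 9
```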
\begin{table} \begin{tabular}{c|c c c c c c c c c c c c} Starting leaf & \(x_{0100}\) & \(x_{0101}\) & \(x_{0110}\) & \(x_{0111}\) & \(x_{1000}\) & \(x_{1001}\) & \(x_{1010}\) & \(x_{1011}\) & \(x_{1100}\) & \(x_{1101}\) & \(x_{1110}\) & \(x_{1111}\) \\ Ending leaf & \(y_{010}\) & \(y_{011}\) & \(y_{100}\) & \(y_{101}\) & \(y_{010}\) & \(y_{011}\) & \(y_{110}\) & \(y_{111}\) & \(y_{100}\) & \(y_{101}\) & \(y_{110}\) & \(y_{111}\) \\ Voltage & \(1\) & \(2\) & \(1\) & \(2\) & \(2\) & \(1\) & \(1\) & \(2\) & \(2\) & \(1\) & \(3\) & \(-1\) \\ \hline Starting leaf & \(x_{0100}\) & \(x_{0101}\) & \(x_{0110}\) & \(x_{0111}\) & \(x_{1000}\) & \(x_{1001}\) & \(x_{1010}\) & \(x_{1011}\) & \(x_{1100}\) & \(x_{1101}\) & \(x_{1110}\) & \(x_{1111}\) \\ Ending leaf & \(z_{110}\) & \(z_{111}\) & \(z_{100}\) & \(z_{101}\) & \(z_{010}\) & \(z_{011}\) & \(z_{100}\) & \(z_{101}\) & \(z_{010}\) & \(z_{011}\) & \(z_{110}\) & \(z_{111}\) \\ Voltage & \(2\) & \(1\) & \(-1\) & \(3\) & \(-1\) & \(3\) & \(1\) & \(-1\) & \(-3\) & \(-2\) & \(-1\) & \(2\) \\ \end{tabular} \end{table} Table 2: Labelled arcs between leaves of \(T_{12}\) forming a voltage graph \(G_{12}\). The \(\mathbb{Z}_{m}\) lifts \((G_{12},m)\) have girth \(12\) for \(m\geq 9\). Figure 15: The voltage graph \(G_{12}\). The colors/styles on the labeled arcs show the two 12-cycles (appropriately directing the arrows) formed by the labeled arcs between the leaves of \(T_{12}\). The smallest element of this family is a semiregular graph with \(3\) vertices of degree \(9\) and \(441=49\cdot 9\) vertices of degree \(3\). As in Section 6, we modified the graph \(T_{12}\) and added additional labeled arcs to produce a new voltage graph \(H_{12}\) with fewer vertices than \(G_{12}\). The \(\mathbb{Z}_{m}\) lifts of \(H_{12}\) produce graphs of girth \(12\) with \(41m+3\) vertices, which is a significant improvement. Specifically, we modified \(T_{12}\) by deleting the eight leaf vertices \(x_{1000}\), \(x_{1001}\), \(x_{1100}\), \(x_{1101}\), \(y_{100}\), \(y_{111}\), \(z_{100}\), \(z_{111}\) and their incident edges. These deleted vertices and edges are shown in light gray in Figure 16. We then added new labeled directed arcs from \(x_{100}\) to \(y_{10},z_{10}\) and from \(x_{110}\) to \(y_{11}\) and \(z_{11}\), and then searched for voltage assignments on those arcs and on arcs added between the remaining leaves of \(T_{12}\) that would lift to graphs of girth \(12\). Figure 16: The voltage graph \(H_{12}\). The colors/styles of the arcs between the leaves of \(T_{12}\) identify disjoint 8-cycles (if arrows are directed appropriately). The leaves deleted from \(T_{12}\) to allow the formation of \(H_{12}\) are shown in light gray. **Theorem 19**.: _The graph \((H_{12};m)\) given by the voltage graph \(H_{12}\) shown in Figure 16 with voltages given in Table 3 has girth \(12\) for \(m\geq 10\). Moreover, it is a \((\{3,m\};12)\)-graph with three vertices of degree \(m\) and \(41m\) vertices of degree \(3\)._ Proof.: It is easy to find \(12\)-cycles in \((H_{12};m)\), so the girth is at most 12. As previously, we use Mathematica to analyze the voltage sums along short cycles (length at most \(10\)) in the graph. By construction, the graph has no \(4\)-cycles, so all non-cycle closed walks lift to cycles of length at least \(12\), by Observation 2. Moreover, all lollipop walks lift to cycles of at least length \(12\), by construction.
(For example, there is a new lollipop walk formed by connecting the path \((x^{*},x,x_{1})\) to the cycle \((x_{1},x_{10},x_{100},y_{10},y_{1},y_{11},x_{110},x_{11},x_{1})\) that uses two of the newly-introduced arcs that connect vertices that were not leaves of \(T_{12}\), but even this lollipop lifts to a cycle of length \(12\), by Lemma 7.) Thus, to determine the girth, it suffices to analyze the voltage sums of all cycles of length \(6\), \(8\), or \(10\). There are \(254\) such cycles. Analyzing the voltage sums along all these cycles as before shows that for each cycle, the absolute value of the voltage sum lies in the set \(\{1,2,3,4,5,6,7,8,9\}\). Thus, for \(1\leq m\leq 9\), there is a short cycle in the voltage graph that lifts to a short cycle in the derived graph, but for \(m>9\) no short cycle lifts to a short cycle in \((H_{12},m)\). \begin{table} \begin{tabular}{c|c c c c c c c c} Starting leaf & \(x_{100}\) & \(x_{100}\) & \(x_{110}\) & \(x_{110}\) & & & & \\ Ending leaf & \(y_{10}\) & \(z_{10}\) & \(y_{11}\) & \(z_{11}\) & & & & \\ Voltage & \(a=1\) & \(b=-1\) & \(c=-1\) & \(d=1\) & & & & \\ \hline Starting leaf & \(x_{0100}\) & \(x_{0101}\) & \(x_{0110}\) & \(x_{0111}\) & \(x_{1010}\) & \(x_{1011}\) & \(x_{1110}\) & \(x_{1111}\) \\ Ending leaf & \(y_{010}\) & \(y_{011}\) & \(y_{101}\) & \(y_{110}\) & \(y_{110}\) & \(y_{101}\) & \(y_{011}\) & \(y_{010}\) \\ Voltage & \(e=2\) & \(j=1\) & \(q=-1\) & \(u=1\) & \(w=-2\) & \(s=-3\) & \(l=2\) & \(g=1\) \\ \hline Starting leaf & \(x_{0100}\) & \(x_{0101}\) & \(x_{0110}\) & \(x_{0111}\) & \(x_{1010}\) & \(x_{1011}\) & \(x_{1110}\) & \(x_{1111}\) \\ Ending leaf & \(z_{101}\) & \(z_{110}\) & \(z_{010}\) & \(z_{011}\) & \(z_{010}\) & \(z_{011}\) & \(z_{101}\) & \(z_{110}\) \\ Voltage & \(f=1\) & \(k=-2\) & \(r=1\) & \(v=5\) & \(\alpha=2\) & \(t=-6\) & \(p=-2\) & \(h=2\) \\ \end{tabular} \end{table} Table 3: Labelled arcs between leaves of a modified \(T_{12}\) forming a voltage graph \(H_{12}\). The \(\mathbb{Z}_{m}\) lifts \((H_{12},m)\) have girth \(12\) for \(m\geq 10\). ## 8 Open questions Here we give a short, non-comprehensive list of natural open questions related to this work. * Find voltage assignments to construct graphs of girth \(10\) and \(12\) for missing values of \(m\). More specifically: * As we said at the end of Section 6, \((H_{10};m)\) is a graph of girth \(10\) with \(20m\) vertices of degree \(3\) and only two vertices of degree \(m\), for \(m\geq 6\). In [4] the authors give a graph with the same parameters for \(m=4\), but this graph has \(4\) vertices of degree \(m\) and only \(78\) vertices of degree \(3\). Can we find a similar voltage graph to \(H_{10}\) (either a different voltage assignment for the same arcs as in \(H_{10}\), or a different assignment of arcs and voltages using the same "underlying" tree as in \(H_{10}\)) that gives us a derived graph \((H_{10}^{\prime};m)\) of girth \(10\) for all \(m\geq 4\)? Preliminary investigations suggest that new arc assignments are required to find smaller examples. * Find voltage assignments to produce \((G_{12};m)\) graphs with the construction given in Theorem 18 for any \(4\leq m\leq 8\), or show no such graphs exist. * Find different voltage assignments for the graph \(H_{12}\) that produce graphs of girth \(12\) for \(m\in\{3,4,\ldots,9\}\), or show no assignments exist. Preliminary investigations suggest that new arc assignments are required to find smaller examples.
Does there exist a different graph \(H_{12}^{\prime}\) (for example, using the same modified tree structure and arcs \(a,b,c,d\) but different arcs among the remaining leaves of \(T_{12}\)) that produces graphs of girth \(12\) for these smaller values of \(m\)? For example, can we find graphs of girth \(12\) for small \(m\) if we construct a \(16\)-cycle among the remaining leaves of \(T_{12}\), or if we interlace two \(8\)-cycles differently? * In Corollary 4, we obtain \((\{3,m\};14)\)-graphs of order greater than \(115m\) for \(m\geq 3\). Can this voltage method be used to produce graphs of even girth \(14\) or larger in a tractable way? The voltage assignments used to construct the graph families exhibited in this paper were found by educated trial-and-error. This approach becomes less tractable as the number of parameters increases. * Study the same problem with an odd degree. The last two authors are making progress on this project [5].
2310.18360
Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers
Recent applications of LLMs in Machine Reading Comprehension (MRC) systems have shown impressive results, but the use of shortcuts, mechanisms triggered by features spuriously correlated to the true label, has emerged as a potential threat to their reliability. We analyze the problem from two angles: LLMs as editors, guided to edit text to mislead LLMs; and LLMs as readers, who answer questions based on the edited text. We introduce a framework that guides an editor to add potential shortcut triggers to samples. Using GPT4 as the editor, we find it can successfully edit shortcut triggers into samples that fool LLMs. Analysing LLMs as readers, we observe that even capable LLMs can be deceived using shortcut knowledge. Strikingly, we discover that GPT4 can be deceived by its own edits (15% drop in F1). Our findings highlight inherent vulnerabilities of LLMs to shortcut manipulations. We publish ShortcutQA, a curated dataset generated by our framework for future research.
Mosh Levy, Shauli Ravfogel, Yoav Goldberg
2023-10-24T12:37:06Z
http://arxiv.org/abs/2310.18360v1
# Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers ###### Abstract Recent applications of LLMs in Machine Reading Comprehension (MRC) systems have shown impressive results, but the use of shortcuts, mechanisms triggered by features spuriously correlated to the true label, has emerged as a potential threat to their reliability. We analyze the problem from two angles: LLMs as editors, guided to edit text to mislead LLMs; and LLMs as readers, who answer questions based on the edited text. We introduce a framework that guides an editor to add potential shortcut triggers to samples. Using GPT4 as the editor, we find it can successfully edit shortcut triggers into samples that fool LLMs. Analysing LLMs as readers, we observe that even capable LLMs can be deceived using shortcut knowledge. Strikingly, we discover that GPT4 can be deceived by its own edits (15% drop in F1). Our findings highlight inherent vulnerabilities of LLMs to shortcut manipulations. We publish ShortcutQA, a curated dataset generated by our framework for future research. ## 1 Introduction We consider the task of Machine Reading Comprehension (MRC), also known as text-grounded question answering (QA) (Wang et al., 2022), where a model is given a text passage and a question, and has to answer the question based on the text, either by marking a span over the text or generating a short string. Recently, LLMs (Zhao et al., 2023) such as GPT-Instruct (GPT3.5) (Ouyang et al., 2022), GPT-Turbo and GPT4 (OpenAI, 2023) have emerged as strong models for performing the MRC task. The demonstrated text-grounded QA abilities of LLMs prompted the incorporation of LLMs within search-engine setups in which a retrieval model retrieves documents, and the LLM answers by extracting answers from these documents, on websites such as google.com and bing.com. However, previous MRC models are known to often answer by relying on _shortcuts_ (also called _shallow heuristics_) rather than on full understanding of the text (Ho et al., 2022). Does this tendency transfer also to LLMs? To examine the role of shortcuts in-depth, it is necessary to edit samples to activate these shortcuts in the models. Previous attempts at this mainly involved simple edits such as word replacements (Wang et al., 2021; Schlegel et al., 2020; Rychalska et al., 2018), or more intricate edits designed for specific cases, bound by the structure of the original text (Cao et al., 2022; Wang et al., 2020). Given that larger models exhibit greater resilience and are less inclined towards simple shortcuts compared to their smaller counterparts (Bandel et al., 2022; Wang et al., 2023), a faithful investigation of shortcut usage in LLMs necessitates the application of more complex shortcut triggers. This paper sets out to explore two principal questions: First, whether an LLM can be used to study shortcut usage in other LLMs, and second, whether a given LLM is robust to the adversarial edits done by itself. We look into the interaction of LLMs and shortcuts by using an LLM that functions as an editor, altering text to add or exclude shortcut triggers to mislead a different LLM. Our framework (see Figure 1) uses a strong model to edit samples and is guided by the output of a weaker model. We evaluate the resulting edited samples (after manual verification that their semantics remain the same) on both the model that edited them and other, weaker models. Figure 1: Our framework overview.
In our experiments we found that GPT4 is a reliable and effective editor, editing samples to mislead less proficient LLMs, like GPT3.5, resulting in a 30% drop in F1 score. Interestingly, GPT4 is misled too by some of the samples it made to mislead GPT3.5, reflected in a 15% decrease in F1 score. We release ShortcutQA - a curated dataset of samples generated by our framework. Footnote 1: We refer to the model gpt-4-0314 as GPT4 Footnote 2: We refer to the model text-davinci-003 as GPT3.5 Our findings also highlight a potential attack vector that could be exploited for malicious intents (we discuss its implications in the ethical statement). ## 2 Shortcuts We define here the term shortcuts (sometimes called shallow heuristics or Clever Hans features) as we use this term throughout the paper. Given a model \(M\) and an MRC sample composed of a text \(t\) and question \(q\), we define an intervention \(f_{j}\) that edits \(t\) to add or exclude a property \(j\); we say that \(M\) is misled by \(j\) if the following conditions hold: \[M(t,q)=A(t,q) \tag{1}\] \[A(t,q)=A(f_{j}(t),q) \tag{2}\] \[M(f_{j}(t),q)\neq A(f_{j}(t),q) \tag{3}\] Here \(A\) returns the gold label (right answer) for its input. Equation 1 stands for the basic assumption that the model answers correctly in the first place. Equation 2 expresses that the edit did not affect the right answer. Equation 3 expresses that the model changed its prediction in the context of the semantics-preserving edit. ## 3 Methodology We propose a framework that adversarially edits samples to mislead a specific model (target model). The framework achieves this by adding or excluding shortcut triggers guided by the confidence levels of the target model. In our experiments, the editor was GPT4, and the target model was GPT3.5. Our code and dataset will be available on GitHub. Footnote 3: [https://github.com/Mosh110/Guiding-LLM](https://github.com/Mosh110/Guiding-LLM) ### Defining Shortcuts We collect a set of shortcut-trigger families from prior studies on heuristics in span-prediction models. For each family, we create a prompt asking the editor to modify the text according to the trigger. Each trigger instruction aims to prompt the editor to either add a trigger of a shortcut that leads to an incorrect answer or exclude a trigger of a shortcut that leads to the correct answer. We gathered 5 shortcut families (4 from existing literature, 1 of our own) that depend on various features. Each shortcut family was translated into a directive that calls for the minimal modification of that feature within the text. We present our specific prompts in the appendix (Table 6). **Base distractor** Based on the finding in Jia and Liang (2017), MRC models are more likely to make a mistake if a distractor sentence is added to the text that answers a question with high lexical overlap to the original question. This takes advantage of the shortcuts _entity type matching_ and _lexical overlap_ (Rondeau and Hazen, 2018). To generate the base distractor, we ask the editor to generate a sentence that answers a question similar to the given question but which has one major difference. We use a few demonstrations of this task to improve performance. The prompt asks that the distractor does not include the sample's real answer string, and we also verify it programmatically. **Extended distractor** We hypothesize that making the distractor longer can mislead the model more in some cases.
We have two methods of extending the distractor: the first asks the editor to add additional text that extends the distractor and adds a coreference to an entity from it, and the second asks it to write a new sentence that elaborates on the first one. **Distractor positioning** Based on the finding in Ko et al. (2020), the model is less likely to be mistaken if the answer is positioned at the beginning of the text. To control this trigger we try positioning the distractor both at the beginning and at the end of the text. **Overlap anchor** Based on the finding in Shinoda et al. (2022), words that appear both in the question and in the text may be used by models as anchors. Models are less likely to make a mistake if the answer is close to an anchor word. To prevent this shortcut behavior, we need to edit the trigger that leads to the right answer out of the text, that is, to add distance between the anchor and the right answer. We locate the answer and the anchor word that is closest to it, and then instruct the editor to add words between them. We also instruct the editor, and verify programmatically, that the answer and the anchor are not changed w.r.t. the original text. **Lexical overlap** Based on the finding in [12], the number of words that are in the question and are near the real answer is correlated to the probability that a model will answer correctly. As in the overlap anchor, we need to edit out the trigger of this shortcut near the right answer. Here, it means to reduce the number of overlapped words near the answer. To do that, we instruct the editor to rewrite the text near the real answer without using words from the question that are not entities (so as not to lose the text's meaning). We also instruct, and verify, that the answer itself remains as-is in the text. ### Sequential Editing For each sample we perform the following sequence of editing steps: (1) Base distractor - Instruct the generation of a base distractor. (2) Extended distractor - Instruct the generation of extended versions of the base distractor. (3) Distractor positioning - Create two versions of the text for each distractor, one where it is positioned at the beginning and one at the end. Choose the most misleading. (4) Overlap anchor - Instruct to increase the distance between the gold label and the overlapped anchor. (5) Reduce lexical overlap - Instruct to reduce the lexical overlap; repeat 3 times and choose the most misleading. ### Using LLM Confidence as Guidance To enhance the effectiveness of the edit, we use the edit model to generate multiple edits in each step (excluding the deterministic step 2), and choose the one which is most misleading to the guide model (the one with the lowest \(\delta C\), where \(C\) is the guide model's confidence in the answer, and \(\delta\) is 1 if the model's answer is correct and -1 otherwise). We gauge the confidence of the LLM by calculating a weighted mean over the probability assigned to the first 3 tokens it produced, which we find to be an adequate proxy for the LLM's confidence for our purposes (a full technical explanation can be found in appendix E). To check if the LLM answered correctly, we use an inclusion match (IM) score, which measures whether the gold label's text is included in the answer from the LLM.
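A sketch of this selection loop is given below. Everything in it is illustrative rather than the paper's actual implementation: `guide_model` is a hypothetical wrapper around the guide LLM's API returning an answer string and the log-probabilities of its first tokens, and the token weights are placeholders, since the exact weighting scheme is specified only in the paper's appendix E.

```python
import math

def confidence(token_logprobs, weights=(0.5, 0.3, 0.2)):
    """Proxy for answer confidence: a weighted mean of the probabilities
    of the first three generated tokens (weights are placeholders)."""
    probs = [math.exp(lp) for lp in token_logprobs[:3]]
    return sum(w * p for w, p in zip(weights, probs)) / sum(weights[:len(probs)])

def most_misleading(candidate_texts, question, gold, guide_model):
    """Return the candidate edit with the lowest delta*C score, where
    delta = +1 if the guide model still answers correctly (-1 otherwise)
    and C is the confidence proxy above."""
    def score(text):
        answer, logprobs = guide_model(text, question)
        delta = 1 if gold.lower() in answer.lower() else -1  # IM-style check
        return delta * confidence(logprobs)
    return min(candidate_texts, key=score)
```

Under this scoring, an edit that flips the guide model to a confident wrong answer scores lowest, and among edits the model still answers correctly, the one that most erodes its confidence is preferred.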
## 4 ShortcutQA We run the procedure described in Section 3 on 300 text/question pairs from SQuAD [11] and 300 from NewsQA [12] with GPT4 as the edit model and text-davinci-003 as the guide model. We then manually verified that the edits did not change the semantics of the text w.r.t. the original answer (discarding samples that failed this verification). This left us with 247 edited SQuAD samples and 243 NewsQA samples, a total of 490 verified samples which we use in our evaluations. The analysis of ShortcutQA in Table 2 uncovers two main findings: (1) Observing the distractor type distribution, there is no clear leaning toward a specific type of distractor, implying that the effectiveness of the distractor type depends on the sample. (2) Observing the anchor-to-answer distance and the lexical overlap, we see that GPT4 was successful in the required editing task. \begin{table} \begin{tabular}{l l c} \hline \hline **Shortcut** & **Property** & **Value** \\ \hline Distractor position (\%) & At the beginning & 47.7 \\ & At the end & 52.3 \\ \hline Distractor length (\%) & Base length & 30.0 \\ & Extended & 70.0 \\ \hline Anchor to answer distance (\# tokens) & Distance added & 13.3 \\ \hline Lexical overlap (\%) & Jaccard similarity score reduced & 61.8 \\ \hline \hline \end{tabular} \end{table} Table 2: ShortcutQA analysis. Jaccard similarity was measured on the answer sentence before and after the edit; we show here the ratio between the scores. ## 5 Evaluating Models on ShortcutQA In Table 1 we report the performance of different models on our dataset, and compare it to their performance on the original natural versions of the samples in the dataset. \begin{table} \begin{tabular}{l l c c c c} \hline \hline **Model** & **Type** & **F1** & **EM** & **IM** & **IM Diff** \\ \hline GPT4 & Squad Natural & 87.4 & 70.8 & 95.6 & \(-19.9\) \\ GPT3.5 & Squad Natural & 44.0 & 32.8 & 42.9 & \(-31.0\) \\ GPT-Turbo & Squad Natural & 28.7 & 7.2 & 62.7 & \(-19.0\) \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on ShortcutQA. Results are percentages. IM Diff is the difference between the IM on the natural data and ShortcutQA. (Only the Squad-natural rows and the Squad IM differences of the original table are recoverable here; the edited and NewsQA entries were lost.) ### LLMs Are Misled by Shortcuts We see a major decrease in performance in all models on both types of data (Squad and NewsQA) on each of the metrics we measured (F1, EM, IM). Those results show that when guided by GPT3.5 answers, GPT4 can in some cases mislead not only GPT-Turbo and GPT3.5, but also itself. Those results are causal evidence that LLMs are misled by the shortcut triggers we inspected (see Section 3.1). Furthermore, from the results of GPT4 we see that it has some inner inconsistency, as it is misled by samples that were edited by it. Footnote 4: We refer to the model gpt-3.5-turbo-0301 as GPT-Turbo The F1 and EM performance of GPT-Turbo are much lower than those of the other two models, even on natural samples. This is because it was much harder to make models produce short and succinct answers, due to their conversational style. IM scores are much higher but are also affected when applying our edits, demonstrating its lack of robustness in the presence of shortcut triggers even when using a forgiving metric that takes into consideration the conversational style of the models.
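One plausible implementation of the inclusion-match metric used throughout this section follows; the paper only states that IM checks whether the gold span is included in the model's answer, so the normalization choices here are our assumption.

```python
def inclusion_match(model_answer: str, gold: str) -> bool:
    """IM: True if the gold answer span occurs inside the model's (possibly
    verbose, conversational) answer, after simple normalization."""
    normalize = lambda s: " ".join(s.lower().split())
    return normalize(gold) in normalize(model_answer)

print(inclusion_match("Sure! The answer is Paris, of course.", "Paris"))  # True
```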
### Comparison to Non-targeted Edits To confirm if the difference in performance between natural data and ShortcutQA is due to our knowledge of shortcuts, we carried out a control experiment. In this test, we edited samples but didn't use any known shortcut triggers. We made changes to the text without any special rules about shortcuts, with the only instruction being to leave the correct answer phrase unchanged. This approach mirrors our main experiment where we did use shortcut knowledge. Specifically, we (1) instructed GPT4 to write an extension of the given text (regardless of the question) and (2) instructed GPT4 to rephrase the sentence that includes the answer (while keeping the answer's substring as is). Our exact prompts are in the appendix (Table 7). The baseline experiment was performed on the Squad subset and we evaluated GPT4 on it; the results can be seen in Table 3. Those results support that the guidance of the trigger instructions and the confidence of GPT3.5 are useful to effectively edit samples to mislead models. \begin{table} \begin{tabular}{l c c c} \hline \hline **Type** & **F1** & **EM** & **IM** \\ \hline Natural & 87.4 & 70.8 & 95.9 \\ Baseline & 79.2 & 63.1 & 85.8 \\ ShortcutQA (Squad) & 69.8 & 54.2 & 75.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison to non-targeted edits results. Results are percentages. GPT4 performance on versions of the curated Squad samples included in ShortcutQA. ### Controllability In addition to evaluating the decrease in LLMs' answer accuracy, we also evaluated whether the model's incorrect answers came from the distractor. When evaluating GPT4 on the edited Squad dataset, we find that out of the 19.9% of the samples the model answers incorrectly, in **16.7%** of the samples the incorrect answer was taken from the added distractor. While not a very high number, it nonetheless still broadens the possibility to use shortcuts for malicious uses, which we elaborate on in the ethical statement section. ### Update: ShortcutQA1.1 We run the procedure described in Section 3 using the updated model GPT-4-0613 to produce an updated version of the dataset, named ShortcutQA1.1. We found that this version of GPT is more reliable for our task, resulting in far fewer samples that were harmed during the edit. Also, we found that models are more susceptible to making errors on this dataset. Surprisingly, the drop in performance of GPT-4-0613 remains similar to the drop in performance of GPT-4-0314 on the original ShortcutQA, and was even increased according to the IM metric on the NewsQA subset. This emphasizes that the phenomenon (the vulnerability of LLMs to edits they perform) is unlikely to decrease as models improve and may even increase. ## 6 Related work LLM robustness was studied also by others: Pan et al.
(2023) demonstrated the use of LLMs as a tool to generate misinformation text, both in a controlled and in an uncontrolled fashion. Li et al. (2023); Carlini et al. (2023) discuss the plausibility of modifying training data to cause models to learn shortcuts when they are trained on it. Shi et al. (2023); Greshake et al. (2023) studied how irrelevant context affects LLMs in arithmetic tasks. However, none of those studies employ known shortcuts to show their ability to adversarially fool LLMs. ## 7 Discussion Our findings highlight the ability of large language models (LLMs), specifically GPT4, to exploit known shortcuts to mislead less proficient models, illuminating a new dimension of inter-model interactions. Interestingly, we find that GPT4 is susceptible to being misled by the same adversarial manipulations it created, suggesting intrinsic vulnerabilities and pushing the boundary of our understanding of LLMs' resilience to shortcuts. Our results underline the importance of further investigations into LLMs' robustness, resilience, and potential susceptibilities to failure. We release the dataset we used for the evaluation, ShortcutQA, which we see as a valuable resource for stress-testing and learning the vulnerabilities of LLMs in the future. ### Limitations In acknowledging the limitations of our work, we first note that the use of human annotation in the dataset preparation could potentially introduce a degree of subjectivity, as the process hinged on the experts' interpretation of incorrect model edits. Furthermore, our method was only assessed on datasets built around span extraction, so the effectiveness of our approach on other types of NLP tasks remains unverified. Future work should consider broadening the scope to address these limitations.
Given the widespread usage of LLMs, this could play a key role in **large-scale disinformation campaigns**(Pan et al., 2023), or targeted attempts to mislead markets (consider a company releasing a quarterly report with some negative indications, while editing the text such that an LLM asked about it will be misled to perceive and report positive indications instead). On the one hand, our work can be seen as aiding the perpetrators of such malicious uses. On the other hand we believe that raising awareness to such possibilities and studying the vulnerabilities of models will help mitigate them in the future (Shinoda et al., 2022; Wang et al., 2021; Shinoda et al., 2022; Mikula et al., 2023), and hope that our study helps with this cause. ## Acknowledgements This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).
2310.17053
Invariant Physics-Informed Neural Networks for Ordinary Differential Equations
Physics-informed neural networks have emerged as a prominent new method for solving differential equations. While conceptually straightforward, they often suffer training difficulties that lead to relatively large discretization errors or the failure to obtain correct solutions. In this paper we introduce invariant physics-informed neural networks for ordinary differential equations that admit a finite-dimensional group of Lie point symmetries. Using the method of equivariant moving frames, a differential equation is invariantized to obtain a, generally, simpler equation in the space of differential invariants. A solution to the invariantized equation is then mapped back to a solution of the original differential equation by solving the reconstruction equations for the left moving frame. The invariantized differential equation together with the reconstruction equations are solved using a physics-informed neural network, and form what we call an invariant physics-informed neural network. We illustrate the method with several examples, all of which considerably outperform standard non-invariant physics-informed neural networks.
Shivam Arora, Alex Bihlo, Francis Valiquette
2023-10-25T23:26:51Z
http://arxiv.org/abs/2310.17053v2
# Invariant Physics-Informed Neural Networks for Ordinary Differential Equations ###### Abstract Physics-informed neural networks have emerged as a prominent new method for solving differential equations. While conceptually straightforward, they often suffer training difficulties that lead to relatively large discretization errors or the failure to obtain correct solutions. In this paper we introduce _invariant physics-informed neural networks_ for ordinary differential equations that admit a finite-dimensional group of Lie point symmetries. Using the method of equivariant moving frames, a differential equation is invariantized to obtain a, generally, simpler equation in the space of differential invariants. A solution to the invariantized equation is then mapped back to a solution of the original differential equation by solving the reconstruction equations for the left moving frame. The invariantized differential equation together with the reconstruction equations are solved using a physics-informed neural network, and form what we call an invariant physics-informed neural network. We illustrate the method with several examples, all of which considerably outperform standard non-invariant physics-informed neural networks. ## 1 Introduction Physics-informed neural networks (PINN) are an emerging method for solving differential equations using deep learning, [23, 30]. The main idea behind this method is to train a neural network as an approximate solution interpolant for a system of differential equations. This is done by minimizing a loss function that incorporates both the differential equations and any associated initial and/or boundary conditions. The method has a particular elegance as the derivatives in the differential equations can be computed using automatic differentiation rather than numerical discretization, which greatly simplifies the solution procedure, especially when solving differential equations on arbitrary surfaces, [35]. The ease of the discretization procedure in physics-informed neural networks, however, comes at the price of numerous training difficulties, and numerical solutions that are either not particularly accurate, or fail to converge at all to the true solution of the given differential equation. Since training a physics-informed neural network constitutes a non-convex optimization problem, an analysis of failure modes when physics-informed neural networks fail to train accurately is a non-trivial endeavour. This is why several modified training methodologies have been proposed, which include domain decomposition strategies, [18], modified loss functions, [38], and custom optimization, [3]. While all of these strategies, sometimes substantially, improve upon vanilla physics-informed neural networks, none of these modified approaches completely overcome all the inherent training difficulties. Here we propose a new approach for training physics-informed neural networks, which relies on using Lie point symmetries of differential equations and the method of equivariant moving frames to simplify the form of the differential equations that have to be solved. This is accomplished by first projecting the differential equation onto the space of differential invariants to produce an _invariantized differential equation_. The solution to the invariantized equation is then mapped back to the solution of the original equation by solving a system of first order differential equations for the left moving frame, called _reconstruction equations_.
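To fix ideas before introducing the invariant version, the training objective just described can be written down explicitly. Writing a scalar ordinary differential equation as \(\Delta\big(t,u^{(n)}\big)=0\) with initial condition \(u(t_{0})=u_{0}\) (this notation for \(\Delta\) is ours, chosen to be consistent with the jet-space coordinates introduced in Section 3), a standard physics-informed neural network \(u_{\theta}\) minimizes a loss of the schematic form \[\mathcal{L}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\Big|\Delta\big(t_{i},u_{\theta}^{(n)}(t_{i})\big)\Big|^{2}+\lambda\,\big|u_{\theta}(t_{0})-u_{0}\big|^{2},\] where the \(t_{i}\) are collocation points sampling the integration interval, the derivatives collected in \(u_{\theta}^{(n)}\) are obtained by automatic differentiation, and \(\lambda>0\) weights the initial-condition term; the choice of collocation points and of \(\lambda\) are implementation details rather than part of the method itself.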
The invariant physics-informed neural network architecture proposed in this paper consists of simultaneously solving the system of equations consisting of the invariantized differential equation and the reconstruction equations using a physics-informed neural network. The method proposed is entirely algorithmic, and can be implemented for any system of differential equations that is strongly invariant under the action of a group of Lie point symmetries. Since almost all equations of physical relevance admit a non-trivial group of Lie point symmetries, the proposed method is potentially a viable path for improving physics-informed neural networks for many real-world applications. The idea of projecting a differential equation into the space of invariants and then reconstructing its solution is reminiscent of the recent work [37], where the authors consider Hamiltonian systems with symmetries, although the tools used in our paper and in [37] to achieve the desired goals are very different. Moreover, in our approach we do not assume that our equations have an underlying symplectic structure. To simplify the theoretical exposition, we focus on the case of ordinary differential equations in this paper. We show using several examples that the proposed approach substantially improves upon the numerical results achievable with vanilla physics-informed neural networks. Applications to partial differential equations will be considered elsewhere. The paper is organized as follows. We first review relevant work on physics-informed neural networks and symmetry preserving numerical methods in Section 2. In Section 3 we introduce the method of equivariant moving frames and review how it can be used to solve ordinary differential equations that admit a group of Lie point symmetries. Building on Section 3 we introduce a version of invariant physics-informed neural networks in Section 4. We illustrate our method with several examples in Section 5. The examples show that our proposed invariant physics-informed neural network formulation can yield better numerical results than its non-invariant version. A short summary and discussion about potential future research avenues concludes the paper in Section 6. ## 2 Previous work Physics-informed neural networks were first proposed in [23], and later popularized through the work of Raissi et al., [30]. The main idea behind physics-informed neural networks is to train a deep neural network to directly approximate the solution to a system of differential equations. This is done by defining a loss function that incorporates the given system of equations, along with any relevant initial and/or boundary conditions. Crucially, this turns training physics-informed neural networks into a multi-task, non-convex optimization problem that can be challenging to minimize, [22]. There have been several solutions proposed to overcome the training difficulties and improve the generalization capabilities of physics-informed neural networks. These include modified loss functions, [26; 38], meta-learned optimization, [3], domain decomposition methods, [6; 18], and the use of operator-based methods, [11; 24]. The concepts of symmetries and transformation groups have also received considerable attention in the machine learning community. Notably, the equivariance of convolutional operations with respect to spatial translations has been identified as a crucial ingredient for the success of convolutional neural networks, [13]. 
The generalization of this observation for other types of layers of neural networks and other transformation groups has become a prolific subfield of deep learning since. For example, see [16] for some recent results. Here we do not consider the problem of endowing a neural network with equivariance properties but rather investigate the question of whether a better formulation of a given differential equation can help physics-informed neural networks better learn a solution. As we will be using symmetries of differential equations for this re-formulation, our approach falls within the framework of geometric numerical integration. The problem of symmetry-preserving numerical schemes, in other words the problem of designing discretization methods for differential equations that preserve the symmetries of a given differential equation, has been studied extensively over the past several decades, see [14; 34; 39] for some early work on the topic. Invariant discretization schemes have since been proposed for finite difference, finite volume, finite elements and meshless methods, [2; 4; 5; 7; 8; 12; 19; 28; 31; 32]. ## 3 Method In this section we introduce the theoretical foundations on which the invariant physics-informed neural network framework is based. In order to fix some notation, we begin by recalling certain well-known results pertaining to symmetries of differential equations, and refer the reader to [9; 10; 17; 27] for a more thorough exposition. Within this field, the use of moving frames to solve differential equations admitting symmetries is not as well-known. Therefore, the main purpose of this section is to introduce this solution procedure. In contrast to the approach proposed in [25], we avoid the introduction of computational variables. All computations are based on the differential invariants of the prolonged group action, which results in fewer differential equations. Our approach is a simplified version of the algorithm presented in [36], which deals with partial differential equations admitting infinite-dimensional symmetry Lie pseudo-groups. As mentioned in the introduction, in this paper we limit ourselves to the case of ordinary differential equations. ### Invariant differential equations Let \(M\) be a \((q+1)\)-dimensional manifold with \(q\geq 1\). Given a one-dimensional curve \(C\subset M\), we introduce the local coordinates \(z=(t,u)=(t,u^{1},\ldots,u^{q})\) on \(M\) so that the curve \(C\) is locally specified by the graph of a function \(C=\{(t,f(t))\}\). Accordingly, the \(n\)-th order jet space \(\mathrm{J}^{(n)}\) is locally parametrized by \(z^{(n)}=(t,u^{(n)})\), where \(u^{(n)}\) denotes all the derivatives \(u^{\alpha}_{j}=u^{\alpha}_{t^{j}}\) of order \(0\leq j\leq n\), with \(\alpha=1,\ldots,q\). Let \(G\) be an \(r\)-dimensional Lie group (locally) acting on \(M\): \[(T,U)=Z=g\cdot z=g\cdot(t,u),\qquad\text{where}\qquad g\in G. \tag{1}\] The group transformation (1) induces an action on curves \(C\subset M\), which prolongs to the jet space \(\mathrm{J}^{(n)}\): \[Z^{(n)}=g\cdot z^{(n)}. 
\tag{2}\] Coordinate expressions for the prolonged action (2) are obtained by applying the implicit total derivative operator \[\mathrm{D}_{T}=\frac{1}{\mathrm{D}_{t}(T)}\,\mathrm{D}_{t},\qquad\text{where}\qquad\mathrm{D}_{t}=\frac{\partial}{\partial t}+\sum_{j=0}^{\infty}\sum_{\alpha=1}^{q}\,u^{\alpha}_{j+1}\frac{\partial}{\partial u^{\alpha}_{j}}\] denotes the standard total derivative operator, to the transformed dependent variables \(U^{\alpha}\): \[U^{\alpha}_{j}=U^{\alpha}_{T^{j}}=\mathrm{D}^{j}_{T}(U^{\alpha}),\qquad\alpha=1,\ldots,q,\qquad j\geq 0. \tag{3}\] In the following we use the notation \(\Delta(z^{(n)})=\Delta(t,u^{(n)})=0\) to denote a system of differential equations, and use the index notation \(\Delta_{i}(z^{(n)})=0\), \(i=1,\ldots,l\), to label each equation in \(\Delta(z^{(n)})\). Also, a differential equation \(\Delta(z^{(n)})=0\) can either be a single equation or represent a system of differential equations. **Definition 1**.: A nondegenerate1 ordinary differential equation \(\Delta(z^{(n)})=0\) is said to be _strongly invariant_ under the prolonged action of a connected local Lie group of transformations \(G\) if and only if Footnote 1: A differential equation is nondegenerate if at every point in its solution space it is both locally solvable and of maximal rank, [27]. \[\Delta(g\cdot z^{(n)})=0\qquad\text{for all}\qquad g\in G\] near the identity element. **Remark 2**.: Strong invariance is more restrictive than the usual notion of symmetry, where invariance is only required to hold on the solution space. In the following, we require strong invariance to guarantee that our differential equation is an invariant function. Invariance is usually stated in terms of the infinitesimal generators of the group action. To this end, let \[\mathbf{v}_{\kappa}=\xi_{\kappa}(t,u)\frac{\partial}{\partial t}+\sum_{\alpha=1}^{q}\phi^{\alpha}_{\kappa}(t,u)\frac{\partial}{\partial u^{\alpha}},\qquad\kappa=1,\ldots,r, \tag{4}\] be a basis of infinitesimal generators for the group action \(G\). The prolongation of the vector fields (4), induced from the prolonged action (3), is given by \[\mathbf{v}_{\kappa}^{(n)}=\xi_{\kappa}(t,u)\frac{\partial}{\partial t}+\sum_{j=0}^{n}\sum_{\alpha=1}^{q}\phi_{\kappa}^{\alpha,j}(t,u^{(j)})\frac{\partial}{\partial u_{j}^{\alpha}},\qquad\kappa=1,\ldots,r,\] where the prolonged coefficients are computed using the standard prolongation formula \[\phi_{\kappa}^{\alpha,j}=\mathrm{D}_{t}^{j}(\phi_{\kappa}^{\alpha}-\xi_{\kappa}u_{1}^{\alpha})+\xi_{\kappa}u_{j+1}^{\alpha},\qquad\kappa=1,\ldots,r,\qquad\alpha=1,\ldots,q,\qquad 0\leq j\leq n.\] **Proposition 3**.: A nondegenerate ordinary differential equation \(\Delta(z^{(n)})=0\) is strongly invariant under the prolonged action of a connected local Lie group of transformations \(G\) if and only if \[\mathbf{v}_{\kappa}^{(n)}[\Delta_{i}(z^{(n)})]=0,\qquad\kappa=1,\ldots,r,\qquad i=1,\ldots,l,\] where \(\mathbf{v}_{1},\ldots,\mathbf{v}_{r}\) is a basis of infinitesimal generators for the group of transformations \(G\). **Remark 4**.: As one may observe, we do not include the initial conditions \[u^{(n-1)}(t_{0})=u_{0}^{(n-1)} \tag{5}\] when discussing the symmetry of the differential equation \(\Delta(z^{(n)})=\Delta(t,u^{(n)})=0\). This is customary when studying symmetries of differential equations. Of course, the initial conditions are necessary to select a particular solution and when implementing numerical simulations. 
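The infinitesimal criterion of Proposition 3 is straightforward to check symbolically. As a minimal sketch (assuming sympy; the jet coordinate names are our own), the following verifies strong invariance of the Schwarzian derivative appearing in the Schwarz equation (10), used as the running example below, whose generators all have \(\xi_{\kappa}=0\):

```python
import sympy as sp

# Jet coordinates: u and its t-derivatives u1 = u_t, u2 = u_tt, ... (our naming)
t, u, u1, u2, u3, u4 = sp.symbols('t u u1 u2 u3 u4')
jet = [u, u1, u2, u3, u4]

def Dt(expr):
    """Total derivative operator D_t, truncated at order 4 (enough for this check)."""
    out = sp.diff(expr, t)
    for lo, hi in zip(jet[:-1], jet[1:]):
        out += sp.diff(expr, lo) * hi
    return out

# Schwarzian part of equation (10); the source term F(t) is annihilated by the
# generators below, since they only involve d/du
Delta = u3/u1 - sp.Rational(3, 2) * (u2/u1)**2

# characteristics phi of v = phi d/du for the sl(2,R) generators; since xi = 0,
# the prolongation formula reduces to phi^(j) = Dt^j(phi)
for phi in (sp.Integer(1), u, u**2):
    coeffs = [phi]
    for _ in range(3):
        coeffs.append(Dt(coeffs[-1]))
    vDelta = sum(c * sp.diff(Delta, var) for c, var in zip(coeffs, [u, u1, u2, u3]))
    print(sp.simplify(vDelta))   # prints 0 for each generator: strong invariance
```

An analogous check with, for instance, \(\phi=u\) applied to \(\Delta=u_{t}-u\) returns \(u_{t}-u\) rather than \(0\), illustrating Remark 2: invariance on the solution space alone does not imply strong invariance.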
### Invariantization Given a nondegenerate differential equation \(\Delta(z^{(n)})=0\) strongly invariant under the prolonged action of an \(r\)-dimensional Lie group \(G\) acting regularly on \(\mathrm{J}^{(n)}\), we now explain how to use the method of equivariant moving frames to "project" the differential equation onto the space of differential invariants. For the theoretical foundations of the method of equivariant moving frames, we refer the reader to the foundational papers [15, 21] and the textbook [25]. **Definition 5**.: A Lie group \(G\) acting smoothly on \(\mathrm{J}^{(n)}\) is said to act _freely_ if the isotropy group \[G_{z^{(n)}}=\{g\in G\,|\,g\cdot z^{(n)}=z^{(n)}\}=\{e\}\] at the point \(z^{(n)}\) is trivial for all \(z^{(n)}\in\mathrm{J}^{(n)}\). The Lie group \(G\) is said to act _locally freely_ if \(G_{z^{(n)}}\) is a discrete subgroup of \(G\) for all \(z^{(n)}\in\mathrm{J}^{(n)}\). **Remark 6**.: More generally we can restrict Definition 5, and the subsequent considerations, to a \(G\)-invariant submanifold \(\mathcal{V}^{(n)}\subset\mathrm{J}^{(n)}\). To simplify the discussion, we assume \(\mathcal{V}^{(n)}=\mathrm{J}^{(n)}\). **Definition 7**.: A _right moving frame_ is a \(G\)-equivariant map \(\rho\colon\mathrm{J}^{(n)}\to G\) such that \[\rho(g\cdot z^{(n)})=g\cdot\rho(z^{(n)})\] for all \(g\in G\) where the prolonged action is defined. Taking the group inverse of a right moving frame yields the left moving frame \[\overline{\rho}(z^{(n)})=\rho(z^{(n)})^{-1}\] satisfying the equivariance condition \[\overline{\rho}(g\cdot z^{(n)})=g\cdot\overline{\rho}(z^{(n)}).\] **Theorem 8**.: A moving frame exists in the neighborhood of a point \(z^{(n)}\in\mathrm{J}^{(n)}\) provided the prolonged action of \(G\) on \(\mathrm{J}^{(n)}\) is (locally) free and regular. A moving frame is obtained by selecting a cross-section \(\mathcal{K}\subset\mathrm{J}^{(n)}\) to the orbits of the prolonged action. In keeping with most applications, and to simplify the exposition, we assume \(\mathcal{K}\) is a coordinate cross-section obtained by setting \(r\) coordinates of the jet \(z^{(n)}\) to constant values: \[z^{a_{\kappa}}=c^{\kappa},\qquad\kappa=1,\dots,r. \tag{6}\] Solving the normalization equations \[Z^{a_{\kappa}}=c^{\kappa},\qquad\kappa=1,\dots,r,\] for the group parameters yields a right moving frame \(\rho\). Given a moving frame, there is a systematic procedure for constructing differential invariant functions. **Definition 9**.: Let \(\rho\colon\mathrm{J}^{(n)}\to G\) be a right moving frame. The invariantization of the differential function \(F\colon\mathrm{J}^{(n)}\to\mathbb{R}\) is the differential invariant function \[\iota(F)(z^{(n)})=F(\rho(z^{(n)})\cdot z^{(n)}).\] In particular, invariantization of the coordinate jet functions \[\iota(z^{(n)})=\rho(z^{(n)})\cdot z^{(n)}\] yields differential invariants that can be used as coordinates on the cross-section \(\mathcal{K}\). The invariantizations of the coordinates used to define the cross-section in (6) are constant \[\iota(z^{a_{\kappa}})=c^{\kappa},\qquad\kappa=1,\dots,r,\] and are called _phantom invariants_. The remaining invariantized coordinates are called _normalized invariants_. In light of Theorem 5.32 in [29], assume there are \(q+1\) normalized invariants \[H,I^{1},\,\dots,I^{q}, \tag{7}\] such that locally the invariants \[I^{\alpha}=I^{\alpha}(H),\qquad\alpha=1,\dots,q,\] are independent functions of the invariant \(H\), and generate the algebra of differential invariants. 
This means that any differential invariant can be expressed in terms of (7) and their invariant derivatives with respect to \(\mathrm{D}_{H}\). In the following we let \(I^{(n)}\) denote the derivatives of \(I=(I^{1},\dots,I^{q})\) with respect to \(H\), up to order \(n\). Assuming the differential equation \(\Delta(z^{(n)})=0\) is strongly invariant and its solutions are transverse to the prolonged action, this equation, once invariantized, will yield a differential equation in the space of invariants \[\iota[\Delta(t,u^{(n)})]=\Delta_{\text{Inv}}(H,I^{(k)})=0,\qquad\text{where}\qquad k\leq n. \tag{8}\] Initial conditions for (8) are obtained by invariantizing (5) to obtain \[I^{(k-1)}(H_{0})=I_{0}^{(k-1)}. \tag{9}\] **Example 10**.: To illustrate the concepts introduced thus far, we use the Schwarz equation \[\frac{u_{ttt}}{u_{t}}-\frac{3}{2}\bigg{(}\frac{u_{tt}}{u_{t}}\bigg{)}^{2}=F(t) \tag{10}\] as our running example. This equation admits a three-dimensional Lie group of point transformations given by \[T=t,\qquad U=\frac{\alpha u+\beta}{\gamma u+\delta},\qquad\text{where}\qquad g=\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\in\text{SL}(2,\mathbb{R}), \tag{11}\] so that \(\alpha\delta-\beta\gamma=1\). A cross-section to the prolonged action \[U_{T} =\text{D}_{t}(U)=\frac{u_{t}}{(\gamma u+\delta)^{2}},\] \[U_{TT} =\text{D}_{t}(U_{T})=\frac{u_{tt}}{(\gamma u+\delta)^{2}}-\frac{2\gamma u_{t}^{2}}{(\gamma u+\delta)^{3}}, \tag{12}\] \[U_{TTT} =\text{D}_{t}(U_{TT})=\frac{u_{ttt}}{(\gamma u+\delta)^{2}}-\frac{6\gamma u_{t}u_{tt}}{(\gamma u+\delta)^{3}}+\frac{6\gamma^{2}u_{t}^{3}}{(\gamma u+\delta)^{4}},\] is given by \[\mathcal{K}=\{u=0,\,u_{t}=\sigma,\,u_{tt}=0\}\subset\mathcal{V}^{(n)}\subset\text{J}^{(n)}, \tag{13}\] where \(\sigma=\text{sign}(u_{t})\), and \(\mathcal{V}^{(n)}=\{z\in\text{J}^{(n)}\,|\,u_{t}\neq 0\}\) with \(n\geq 2\). Solving the normalization equations \[U=0,\qquad U_{T}=\sigma,\qquad U_{TT}=0, \tag{14}\] together with the unimodularity constraint \(\alpha\delta-\beta\gamma=1\), we obtain the right moving frame \[\alpha=\pm\frac{1}{\sqrt{|u_{t}|}},\qquad\beta=\mp\frac{u}{\sqrt{|u_{t}|}},\qquad\gamma=\pm\frac{u_{tt}}{2|u_{t}|^{3/2}},\qquad\delta=\pm\frac{2u_{t}^{2}-uu_{tt}}{2|u_{t}|^{3/2}}, \tag{15}\] where the sign ambiguity comes from solving the normalization \(U_{T}=\sigma\), which involves the quadratic term \((\gamma u+\delta)^{2}\). Invariantizing the third order derivative \(u_{ttt}\) produces the differential invariant \[\iota(u_{ttt})=\frac{u_{ttt}}{(\gamma u+\delta)^{2}}-\frac{6\gamma u_{t}u_{tt}}{(\gamma u+\delta)^{3}}+\frac{6\gamma^{2}u_{t}^{3}}{(\gamma u+\delta)^{4}}\bigg{|}_{(15)}=\sigma\bigg{(}\frac{u_{ttt}}{u_{t}}-\frac{3}{2}\bigg{(}\frac{u_{tt}}{u_{t}}\bigg{)}^{2}\bigg{)}. \tag{16}\] In terms of the general theory previously introduced, we have the invariants \[H=t,\qquad I=\frac{u_{ttt}}{u_{t}}-\frac{3}{2}\bigg{(}\frac{u_{tt}}{u_{t}}\bigg{)}^{2}. \tag{17}\] Since the independent variable \(t\) is an invariant, instead of using \(H\), we use \(t\) in the following computations. The invariantization of the Schwarz equation (10) then yields the algebraic equation \[I=F(t). 
\tag{18}\] Since the prolonged action is transitive on the fibers of each component \(\{(t,u,u_{t},u_{tt})\,|\,u_{t}>0\}\cup\{(t,u,u_{t},u_{tt})\,|\,u_{t}<0\}=\mathcal{V}^{(2)}\), any initial conditions \[u(t_{0})=u_{0},\qquad u_{t}(t_{0})=u_{t}^{0},\qquad u_{tt}(t_{0})=u_{tt}^{0},\] are mapped, under invariantization, to the identities \[0=0,\qquad\sigma=\sigma,\qquad 0=0.\] ### Recurrence relations Using the recurrence relations we now explain how the invariantized equation (8) can be derived symbolically, without requiring the coordinate expressions for the moving frame \(\rho\) or the invariants \((H,I)\). The key observation is that the invariantization map \(\iota\) and the exterior differential do not, in general, commute \[\iota\circ\mathrm{d}\neq\mathrm{d}\circ\iota.\] The extent to which these two operations do not commute is encapsulated in the recurrence relations. To state these equations we need to introduce the (contact) invariant one-form \[\varpi=\iota(\mathrm{d}t)=\rho^{*}(\mathrm{D}_{t}(T))\,\mathrm{d}t,\] which comes from invariantizing the horizontal one-form \(\mathrm{d}t\), see [21] for more details. Given a Lie group \(G\), let \(g\in G\) be given in a faithful matrix representation. Then the _right Maurer-Cartan form_ is given by \[\mu=\mathrm{d}g\cdot g^{-1}. \tag{19}\] The pull-back of the Maurer-Cartan form (19) by a right moving frame \(\rho\) yields the invariant matrix \[\nu=\mathrm{d}\rho\cdot\rho^{-1}=\begin{bmatrix}I_{ij}\end{bmatrix}\varpi, \tag{20}\] where the invariants \(I_{ij}\) are called Maurer-Cartan invariants. **Proposition 11**.: Let \(F\colon\mathrm{J}^{(n)}\to\mathbb{R}\) be a differential function. The recurrence relation for the invariantization map \(\iota\) is \[\mathrm{d}[\iota(F)]=\iota[\mathrm{d}F]+\sum_{\kappa=1}^{r}\iota[\mathbf{v}_{\kappa}^{(n)}(F)]\,\nu^{\kappa}, \tag{21}\] where \(\nu^{1},\ldots,\nu^{r}\) is a basis of normalized Maurer-Cartan forms extracted from (20). Substituting for \(F\) in (21) the jet coordinates (6) specifying the coordinate cross-section \(\mathcal{K}\) leads to \(r\) linear equations for the normalized Maurer-Cartan forms \(\nu^{1},\ldots,\nu^{r}\). Solving those equations and substituting the result back in (21) yields a symbolic expression for the differential of any invariantized differential function \(F\), without requiring the coordinate expressions for the moving frame \(\rho\). 
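This linear solve is mechanical and easy to automate. As a hypothetical sketch (assuming sympy, with our own symbol names), here is the computation that Example 12 below carries out by hand for the running Schwarz example:

```python
import sympy as sp

# Sketch of the linear solve behind Proposition 11 for the Schwarz example
# (done by hand in Example 12 below). We write nu^k = n_k * varpi and solve
# for the coefficients n_k; sigma = sign(u_t), so sigma**2 = 1.
sigma, Iinv = sp.symbols('sigma I')
n1, n2, n3 = sp.symbols('n1 n2 n3')
# phantom invariants: d[iota(u)] = d[iota(u_t)] = d[iota(u_tt)] = 0 give
eqs = [sigma + n1,           # 0 = iota(u_t) + n1
       n2,                   # 0 = iota(u_t)*n2, after dividing by sigma
       sigma*Iinv + 2*n3]    # 0 = iota(u_ttt) + 2*iota(u_t)**2*n3, with sigma**2 = 1
print(sp.solve(eqs, [n1, n2, n3]))   # {n1: -sigma, n2: 0, n3: -I*sigma/2}
```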
**Example 12**.: Continuing Example 10, a basis of infinitesimal generators for the group action (11) is provided by \[\mathbf{v}_{1}=\frac{\partial}{\partial u},\qquad\mathbf{v}_{2}=u\frac{\partial}{\partial u},\qquad\mathbf{v}_{3}=u^{2}\frac{\partial}{\partial u}.\] The prolongation of those vector fields, up to order 2, is given by \[\mathbf{v}_{1}^{(2)} =\frac{\partial}{\partial u},\] \[\mathbf{v}_{2}^{(2)} =u\frac{\partial}{\partial u}+u_{t}\frac{\partial}{\partial u_{t}}+u_{tt}\frac{\partial}{\partial u_{tt}},\] \[\mathbf{v}_{3}^{(2)} =u^{2}\frac{\partial}{\partial u}+2uu_{t}\frac{\partial}{\partial u_{t}}+2(u_{t}^{2}+uu_{tt})\frac{\partial}{\partial u_{tt}}.\] Applying the recurrence relation (21) to \(t\), \(u\), \(u_{t}\), and \(u_{tt}\) yields \[\begin{split}\mathrm{d}[\iota(t)]&=\varpi,\\ \mathrm{d}[\iota(u)]&=\iota(u_{t})\varpi+\nu^{1},\\ \mathrm{d}[\iota(u_{t})]&=\iota(u_{tt})\varpi+\iota(u_{t})\nu^{2}+2\iota(u)\iota(u_{t})\nu^{3},\\ \mathrm{d}[\iota(u_{tt})]&=\iota(u_{ttt})\varpi+\iota(u_{tt})\nu^{2}+2[\iota(u_{t})^{2}+\iota(u)\iota(u_{tt})]\nu^{3}.\end{split} \tag{22}\] Recalling the cross-section (13) and the invariants (16), (17), we make the substitutions \(H=\iota(t)=t\), \(\iota(u)=0\), \(\iota(u_{t})=\sigma\), \(\iota(u_{tt})=0\), \(\iota(u_{ttt})=\sigma I\) into (22) and obtain \[\mathrm{d}t=\varpi,\qquad 0=\sigma\varpi+\nu^{1},\qquad 0=\nu^{2},\qquad 0=I\sigma\varpi+2\nu^{3}.\] Solving for the normalized Maurer-Cartan forms yields \[\nu^{1}=-\sigma\,\varpi,\qquad\nu^{2}=0,\qquad\nu^{3}=-\frac{I\sigma}{2}\varpi.\] In matrix form we have that \[\nu=\begin{bmatrix}0&-\sigma\\ \frac{1}{2}\sigma I&0\end{bmatrix}\varpi=\begin{bmatrix}0&-\sigma\\ \frac{1}{2}\sigma F(t)&0\end{bmatrix}\varpi,\] where we used the algebraic relationship (18) originating from the invariance of Schwarz' equation (10). ### Reconstruction Let \(I(H)\) be a solution to the invariantized differential equation (8) with initial conditions (9). In this section we explain how to reconstruct the solution to the original equation \(\Delta(t,u^{(n)})=0\) with initial conditions (5). To do so, we introduce the reconstruction equations for the left moving frame \(\overline{\rho}=\rho^{-1}\): \[\mathrm{d}\overline{\rho}=-\overline{\rho}\cdot\mathrm{d}\rho\cdot\overline{\rho}=-\overline{\rho}\,\nu, \tag{23}\] where \(\nu\) is the normalized Maurer-Cartan form introduced in (20). As we have seen in Section 3.3, the invariantized Maurer-Cartan matrix \(\nu\) can be obtained symbolically using the recurrence relations for the phantom invariants. Since \(\nu\) is invariant, it can be expressed in terms of \(H\), the solution \(I(H)\), and its derivatives. Thus, equation (23) yields a first order system of differential equations for the group parameters, expressed in the independent variable \(H\). Integrating (23), we obtain the left moving frame that sends the invariant curve \((H,I(H))\) to the original solution \[(t(H),u(H))=\overline{\rho}(H)\cdot\iota(t,u)(H). \tag{24}\] Assuming \(t_{H}>0\), the initial conditions to the reconstruction equations (23) are given by \[\overline{\rho}(H_{0})=\overline{\rho}_{0}\qquad\text{such that}\qquad\overline{\rho}_{0}\cdot\iota(t_{0},u_{0}^{(n-1)})=(t_{0},u_{0}^{(n-1)}). \tag{25}\] If \(t_{H}<0\), one can always reparametrize the solution so that the derivative becomes positive. The solution (24) is a parametric curve with the invariant \(H\) serving as the parameter. From a numerical perspective, this is sufficient to graph the solution. 
We note, though, that by inverting \(t=t(H)\) to express the invariant \(H=H(t)\) in terms of \(t\), we can recover the solution as a function of \(t\): \[u=u(H(t)).\] **Example 13**.: The left moving frame \[\overline{\rho}=\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\quad\in\quad\text{SL}(2,\mathbb{R})\] that will send the solution of (18) to the original function \(u(t)\) is a solution to the reconstruction equations \[\begin{bmatrix}\alpha_{t}&\beta_{t}\\ \gamma_{t}&\delta_{t}\end{bmatrix}=-\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\begin{bmatrix}0&-\sigma\\ \frac{1}{2}\sigma F(t)&0\end{bmatrix}=\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\begin{bmatrix}0&\sigma\\ -\frac{1}{2}\sigma F(t)&0\end{bmatrix}, \tag{26a}\] with initial conditions \[\delta_{0}=\pm\frac{1}{\sqrt{|u_{t}^{0}|}},\qquad\beta_{0}=\pm\frac{u_{0}}{\sqrt{|u_{t}^{0}|}},\qquad\gamma_{0}=\mp\frac{u_{tt}^{0}}{2(u_{t}^{0})^{3/2}},\qquad\alpha_{0}=\pm\sqrt{|u_{t}^{0}|}\mp\frac{u_{0}u_{tt}^{0}}{2(|u_{t}^{0}|)^{3/2}}. \tag{26b}\] Then, the solution to Schwarz' equation (10) is \[u(t)=\overline{\rho}\cdot 0=\frac{\beta}{\delta}. \tag{27}\] ### Summary Let us summarize the algorithm for solving an ordinary differential equation \(\Delta(t,u^{(n)})=0\) admitting a group of Lie point transformations \(G\) using the method of moving frames. 1. Select a cross-section \(\mathcal{K}\) to the prolonged action. 2. Choose \(q+1\) invariants \(H\), \(I^{1},\ldots,I^{q}\) from \(\iota(t,u^{(n)})\), that generate the algebra of differential invariants, and assume \(I^{1}(H),\ldots,I^{q}(H)\) are functions of \(H\). 3. Invariantize the differential equation \(\Delta(t,u^{(n)})=0\) and use the recurrence relation (21) to write the result in terms of \(H\) and \(I^{(k)}\) to obtain the equation \(\Delta_{\text{Inv}}(H,I^{(k)})=0\). 4. Solve the equation \(\Delta_{\text{Inv}}(H,I^{(k)})=0\) subject to the initial conditions (9). 5. A parametric solution to the original equation \(\Delta(t,u^{(n)})=0\) is given by \(\overline{\rho}(H)\cdot\iota(t,u)(H)\), where the left moving frame \(\overline{\rho}(H)\) is a solution of the reconstruction equation (23) subject to the initial conditions (25). ## 4 Invariant physics-informed neural networks Before introducing our invariant physics-informed neural network, we recall the definition of the standard physics-informed loss function that needs to be minimized when solving ordinary differential equations. To this end, assume we want to solve the ordinary differential equation \(\Delta(t,u^{(n)})=0\) subject to the initial conditions \(u^{(n-1)}(t_{0})=u_{0}^{(n-1)}\) on the interval \([t_{0},t_{f}]\). First we introduce the collocation points \(\{t_{i}\}_{i=0}^{\ell}\) sampled randomly over the interval \([t_{0},t_{f}]\) with \(t_{0}<t_{1}<\cdots<t_{\ell}=t_{f}\). Then, a neural network of the form \(u_{\boldsymbol{\theta}}(t)=\mathcal{N}_{\boldsymbol{\theta}}(t)\), parameterized by the parameter vector \(\mathbf{\theta}\), is trained to approximate the solution of the differential equation, i.e. 
\(u_{\mathbf{\theta}}(t)\approx u(t)\), by minimizing the physics-informed loss function \[\mathcal{L}(\mathbf{\theta})=\mathcal{L}_{\Delta}(\mathbf{\theta})+\alpha\,\mathcal{L}_{\text{I.C.}}(\mathbf{\theta}) \tag{28a}\] with respect to \(\mathbf{\theta}\), where \[\mathcal{L}_{\Delta}(\mathbf{\theta})=\sum_{i=0}^{\ell}\left[\Delta(t_{i},u_{\mathbf{\theta}}^{(n)}(t_{i}))\right]^{2} \tag{28b}\] is the _differential equation loss_, \[\mathcal{L}_{\text{I.C.}}(\mathbf{\theta})=\left[u_{\mathbf{\theta}}^{(n-1)}(t_{0})-u_{0}^{(n-1)}\right]^{2} \tag{28c}\] is the _initial condition loss_, and \(\alpha\) is a hyper-parameter to re-scale the importance of both loss functions. We note that the differential equation loss is the sum of the squared residuals of the differential equation evaluated at the collocation points \(\left\{t_{i}\right\}_{i=0}^{\ell}\subset[t_{0},t_{f}]\) over which the numerical solution is sought. The initial condition loss is likewise the squared error between the true initial conditions and the initial conditions approximated by the neural network. We note in passing that the initial conditions could alternatively be enforced as a hard constraint in the neural network, [11, 23], in which case the physics-informed loss function would reduce to \(\mathcal{L}_{\Delta}(\mathbf{\theta})\) only. The physics-informed loss function (28) is minimized using gradient descent, usually using the Adam optimizer, [20], but also more elaborate optimizers can be employed, [3]. The particular elegance of the method of physics-informed neural networks lies in the fact that the derivatives \(u_{\mathbf{\theta}}^{(n)}\) of the neural network solution approximation are computed using _automatic differentiation_, [1], which is built into all modern deep learning frameworks such as JAX, TensorFlow, or PyTorch. Figure 1: Solving a differential equation using moving frames. Similar to the above standard physics-informed neural network, an invariant physics-informed neural network is a feed-forward neural network approximating the solution of the invariantized differential equation and the reconstruction equations for the left moving frame. In other words, the loss function to be minimized is defined using symmetries of the given differential equation. In light of the five step process given in Section 3.5, assume the invariantized equation \(\Delta_{\text{Inv}}(H,I^{(k)})=0\) and the reconstruction equation \(\mathrm{d}\overline{\rho}=-\overline{\rho}\nu\) have been derived. Introduce an interval of integration \([H_{0},H_{f}]\) over which the numerical solution is sought, and consider the collocation points \(\{H_{i}\}_{i=0}^{\ell}\subset[H_{0},H_{f}]\), such that \(H_{\ell}=H_{f}\). The neural network has to learn a mapping between \(H\) and the functions \[I_{\boldsymbol{\theta}}(H)\qquad\text{and}\qquad\overline{\rho}_{\boldsymbol{\theta}}(H),\] where \(I_{\boldsymbol{\theta}}(H)\) denotes the neural network approximation of the differential invariants \(I(H)\) solving (8), and \(\overline{\rho}_{\boldsymbol{\theta}}(H)\) is the approximation of the left moving frame \(\overline{\rho}(H)\) solving the reconstruction equations (23). We note that the output size of the network depends on the number of invariants \(I(H)\) and the size of the symmetry group via \(\overline{\rho}(H)\). 
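As an illustration of this output structure, the following is a minimal PyTorch sketch (our own; the class name, layer sizes, and parametrization of the frame are illustrative assumptions, not prescribed by the method) of a network mapping \(H\) to the \(q\) invariants and to \(r\) coordinates parametrizing the left moving frame:

```python
import torch
import torch.nn as nn

class InvariantPINN(nn.Module):
    """Feed-forward net mapping H -> (I_theta(H), rhobar_theta(H)).

    q : number of generating differential invariants I(H)
    r : dimension of the symmetry group (coordinates parametrizing rhobar)
    """
    def __init__(self, q: int, r: int, width: int = 40, depth: int = 5):
        super().__init__()
        layers, d_in = [], 1
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.Tanh()]
            d_in = width
        layers.append(nn.Linear(d_in, q + r))
        self.net = nn.Sequential(*layers)
        self.q = q

    def forward(self, H: torch.Tensor):
        out = self.net(H)                           # H has shape (batch, 1)
        return out[:, :self.q], out[:, self.q:]     # invariants, frame coordinates
```

For a one-parameter group, such as the logistic equation of Example 15 below, one would take q = 1 and r = 1.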
The network is trained by minimizing the invariant physics-informed loss function consisting of the invariantized differential equation loss and the reconstruction equations loss defined as the mean squared error \[\mathcal{L}_{\Delta_{\text{Inv}},\overline{\rho}}(\boldsymbol{\theta})=\sum_{i=0}^{\ell}\big{(}\big{[}\Delta_{\text{Inv}}(H_{i},I_{\boldsymbol{\theta}}^{(k)}(H_{i}))\big{]}^{2}+\big{[}\mathrm{d}\overline{\rho}_{\boldsymbol{\theta}}(H_{i})+\overline{\rho}_{\boldsymbol{\theta}}(H_{i})\,\nu(H_{i},I_{\boldsymbol{\theta}}^{(k)}(H_{i}))\big{]}^{2}\big{)}. \tag{29}\] We supplement the loss function (29) with the initial conditions (9) and (25) by considering the invariant initial conditions loss function \[\mathcal{L}_{\text{I.C.}}(\boldsymbol{\theta})=\big{[}I_{\boldsymbol{\theta}}^{(k-1)}(H_{0})-I_{0}^{(k-1)}\big{]}^{2}+\big{[}\overline{\rho}_{\boldsymbol{\theta}}(H_{0})-\overline{\rho}_{0}\big{]}^{2}.\] The final invariant physics-informed loss function is thus given by \[\mathcal{L}_{\text{Inv}}(\boldsymbol{\theta})=\mathcal{L}_{\Delta_{\text{Inv}},\overline{\rho}}(\boldsymbol{\theta})+\alpha\,\mathcal{L}_{\text{I.C.}}(\boldsymbol{\theta}),\] where \(\alpha\) is again a hyper-parameter rescaling the importance of the equation and initial condition losses. ## 5 Examples We now implement the invariant neural network architecture introduced in Section 4 for several examples. We also train a standard physics-informed neural network to compare the solutions obtained. For both models we use feed-forward neural networks minimizing the invariant loss function and the standard PINN loss function, respectively. For the sake of consistency, all networks used throughout this section have 5 layers, with 40 nodes per layer, and use the hyperbolic tangent as activation function. For most examples, the loss stabilizes at fewer than 5,000 epochs, but for uniformity we trained all models for 5,000 epochs. The numerical errors of the two neural network solutions are obtained by comparing the numerical solutions to the exact solution, if available, or to the numerical solution obtained using odeint in scipy.integrate. We also compute the mean squared error over the entire interval of integration for all examples together with the standard deviation averaged over \(5\) runs. These results are summarized in Table 1. Finally, the point-wise squared error plots for each example are provided to show how the error varies over the interval of integration. **Example 14**.: As our first example, we consider the Schwarz equation (10), with \(F(t)=2\). For the numerical simulations, we used the initial conditions \[u_{0}=u_{tt}^{0}=0,\qquad u_{t}^{0}=1. \tag{30}\] According to (18) the invariantization of Schwarz' equation yields the algebraic constraint \(I=2\). Thus, the loss function (29) will only contain the reconstruction equations (26a). Namely, \[\alpha_{t}+\beta=\beta_{t}-\alpha=\gamma_{t}+\delta=\delta_{t}-\gamma=0,\] where we used the fact that \(\sigma=1\). Substituting (30) into (26b) yields the initial conditions \[\delta_{0}=\alpha_{0}=\pm 1,\qquad\beta_{0}=\gamma_{0}=0\] for the reconstruction equations. In our numerical simulations we worked with the positive sign. Once the reconstruction equations have been solved, the solution to the Schwarz equation is given by the ratio (27). The solution is computed on the interval \(t\in[0,\pi]\). Error plots for the solutions obtained via the invariant PINN and the standard PINN implementations are given in Figure 2. 
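Before turning to the error plots, here is a minimal training sketch for this example (our own illustration; it uses a smaller network than the 5-layer, 40-node architecture above for brevity, and implements exactly the reconstruction residuals and initial conditions just described):

```python
import torch

# Invariant loss (29) for Example 14 (F(t) = 2, sigma = 1): the invariantized
# equation is algebraic, so only the reconstruction ODEs
# alpha' = -beta, beta' = alpha, gamma' = -delta, delta' = gamma are learned.
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t = torch.linspace(0, torch.pi, 200).reshape(-1, 1).requires_grad_(True)

for epoch in range(5000):                          # 5,000 epochs, as in the paper
    rho = net(t)                                   # columns: alpha, beta, gamma, delta
    drho = torch.stack([torch.autograd.grad(rho[:, i].sum(), t, create_graph=True)[0]
                        .squeeze(-1) for i in range(4)], dim=1)
    a, b, c, d = rho.unbind(dim=1)
    res = torch.stack([drho[:, 0] + b, drho[:, 1] - a,
                       drho[:, 2] + d, drho[:, 3] - c], dim=1)
    ic = net(torch.zeros(1, 1)) - torch.tensor([[1., 0., 0., 1.]])  # (26b), positive sign
    loss = res.pow(2).mean() + ic.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

u = (net(t)[:, 1] / net(t)[:, 3]).detach()         # u(t) = beta/delta, cf. (27)
```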
These errors are obtained by comparing the numerical solutions to the exact solution \(u(t)=\tan(t)\). Clearly, the invariant implementation is substantially more precise near the vertical asymptote at \(t=\pi/2\). Figure 2: Time series of the squared error for the Schwarz equation (10). **Example 15**.: As our second example, we consider the logistic equation \[u_{t}=u(1-u) \tag{31}\] occurring in population growth modeling. Equation (31) admits the one-parameter symmetry group \[T=t,\qquad U=\frac{u}{1+\epsilon\,ue^{-t}},\qquad\text{where}\qquad\epsilon\in\mathbb{R}.\] Implementing the algorithm outlined in Section 3.5, we choose the cross-section \(\mathcal{K}=\{u=1\}\). This yields the invariantized equation \[I=\iota(u_{t})=0.\] The reconstruction equation is \[\epsilon_{t}=I=0,\] subject to the initial condition \[\epsilon(t_{0})=\bigg{(}\frac{1-u_{0}}{u_{0}}\bigg{)}e^{t_{0}},\] where \(u_{0}=0.5\) and our interval of integration is \([0,\pi]\). The solution to the logistic equation is then given by \[u(t)=\frac{1}{1+\epsilon\,e^{-t}}.\] As Figure 3 illustrates, the error incurred by the invariant PINN model is smaller, by a factor of more than 100, than that of the standard PINN implementation when compared to the exact solution \(u(t)=1/(1+e^{-t})\). Figure 3: Time series of the squared error for the logistic equation (31). **Example 16**.: We now consider the driven harmonic oscillator \[u_{tt}+u=\sin(t^{a}), \tag{32}\] which appears in inductor-capacitor circuits, [33]. In the following we set \(a=0.99\), which yields bounded solutions close to the resonance occurring when \(a=1\). The differential equation (32) admits the two-dimensional symmetry group of transformations \[T=t,\qquad U=u+\alpha\sin(t)+\beta\cos(t),\qquad\text{where}\qquad\alpha,\beta\in\mathbb{R}.\] A cross-section to the prolonged action is given by \(\mathcal{K}=\{u=u_{t}=0\}\). The invariantization of (32) yields \[I=\iota(u_{tt})=\sin(t^{a}).\] The reconstruction equations are \[\alpha_{t}=\sin(t^{a})\cos(t),\qquad\beta_{t}=-\sin(t^{a})\sin(t), \tag{33}\] with initial conditions \[\alpha(t_{0})=u_{0}\sin(t_{0})+u_{t}^{0}\cos(t_{0}),\qquad\beta(t_{0})=u_{0}\cos(t_{0})-u_{t}^{0}\sin(t_{0}),\] where, in our numerical simulations, we set \(u_{0}=u_{t}^{0}=1\) and integrate over the interval \([0,10]\). Given a solution to the reconstruction equations (33), the solution to the driven harmonic oscillator (32) is \[u(t)=\alpha(t)\,\sin(t)+\beta(t)\cos(t).\] Figure 4 shows the error for the invariant PINN implementation and the standard PINN approach compared to the solution obtained using the odeint Runge-Kutta method in scipy.integrate. As in the previous two examples, the invariant version yields substantially better numerical results than the standard PINN method. Figure 4: Time series of the squared error for the driven harmonic oscillator (32). **Example 17**.: We now consider the second order ordinary differential equation \[u_{tt}=\exp{[-u_{t}]} \tag{34}\] with an exponential term. Equation (34) admits a three-dimensional symmetry group action given by \[T=e^{\epsilon}t+a,\qquad U=e^{\epsilon}u+\epsilon\,e^{\epsilon}t+b,\] where \(a,b,\epsilon\in\mathbb{R}\). In the following, we only consider the one-dimensional group \[T=e^{\epsilon}t,\qquad U=e^{\epsilon}u+\epsilon\,e^{\epsilon}t.\] We note that in this example the independent variable is not invariant as in the previous examples. A cross-section to the prolonged action is given by \(\mathcal{K}=\{u_{t}=0\}\). 
Introducing the invariants \[H=\ln{\left[\frac{1}{1-\iota(t)}\right]}=\ln{\left[\frac{1}{1-tu_{t}}\right]},\qquad I=\iota(u)=\exp[-u_{t}](u-tu_{t}),\] the invariantization of the differential equation (34) reduces to the first order linear equation \[I_{H}+I=e^{-H}-1.\] The reconstruction equation for the left moving frame is simply \[\epsilon_{H}=1.\] In terms of \(\epsilon\) and \(I\), the parametric solution to the original differential equation (34) is \[t=e^{\epsilon}(1-e^{-H}),\qquad u=e^{\epsilon}(I+\epsilon(1-e^{-H})). \tag{35}\] The solution to (34) is known and is given by \[u(t)=(t+c_{1})\ln(t+c_{1})-t+c_{2}, \tag{36}\] where \(c_{1}\), \(c_{2}\) are two integration constants. For the numerical simulations, we use the initial conditions \[I_{0}=\exp[-u_{t}^{0}](u_{0}-t_{0}u_{t}^{0}),\qquad\epsilon_{0}=u_{t}^{0},\] where \(u_{0}=u(t_{0})\), \(u_{t}^{0}=u_{t}(t_{0})\) with \(t_{0}=0\), and \(c_{1}=\exp(-5)\), \(c_{2}=0\) in (36). The interval of integration \([H_{0},H_{f}]\) is given by \[H_{0}=\ln\bigg{[}\frac{1}{1-t_{0}u_{t}^{0}}\bigg{]},\qquad H_{f}=\ln\bigg{[}\frac{1}{1-t_{f}u_{t}^{f}}\bigg{]}, \tag{37}\] where \(u_{t}^{f}=u_{t}(t_{f})\) and \(t_{f}=2\). We choose the interval of integration given by (37) so that when \(t\) is given by (35) it lies in the interval \([0,2]\). Figure 5 shows the error obtained for the invariant PINN model when compared to the exact solution (36), and similarly for the non-invariant PINN model. As in all previous examples, the invariant version drastically outperforms the standard PINN approach. Figure 5: Time series of the squared error for the exponential equation (34). **Example 18**.: As our final example, we consider a system of first order ODEs \[u_{t}=-u+(t+1)v,\qquad v_{t}=u-tv. \tag{38}\] This system admits a two-dimensional symmetry group of transformations given by \[T=t,\qquad U=\alpha u+\beta t,\qquad V=\alpha v+\beta,\] where \(\alpha>0\) and \(\beta\in\mathbb{R}\). Working with the cross-section \(\mathcal{K}=\{u=1,v=0\}\), the invariantization of (38) yields \[I=\iota(u_{t})=-1,\qquad J=\iota(v_{t})=1.\] The reconstruction equations are \[\alpha_{t}=\alpha(1+t),\qquad\beta_{t}=\alpha\] subject to the initial conditions \(\alpha_{0}=1\), \(\beta_{0}=1\), corresponding to the initial conditions \(u_{0}=v_{0}=1\) when \(t_{0}=0\). In our numerical simulations we integrated over the interval \([0,2]\). The solution to (38) is then given by \[u(t)=\alpha(t)+t\,\beta(t),\qquad v(t)=\beta(t).\] As in all previous examples, comparing the numerical solutions to the exact solution \[u(t)=\sqrt{\frac{2}{\pi}}\,c\,e^{-(t+1)^{2}/2}+c\,t\,\text{erf}\!\left(\frac{t+1}{\sqrt{2}}\right)+kt,\qquad v(t)=c\,\text{erf}\!\left(\frac{t+1}{\sqrt{2}}\right)+k,\] with \(c=\left(\sqrt{2/\pi}\,\exp(-1/2)\right)^{-1}\) and \(k=1-c\,\text{erf}(1/\sqrt{2})\), where \(\text{erf}(t)=2/\sqrt{\pi}\int_{0}^{t}e^{-x^{2}}\mathrm{d}x\) is the standard error function, we observe in Figure 6 that the invariant version of the PINN model considerably outperforms its non-invariant counterpart. Figure 6: Time series of the squared error for the system of equations (38). ## 6 Summary and conclusions In this paper we have introduced the notion of invariant physics-informed neural networks. These combine physics-informed neural networks with methods from the group analysis of differential equations to simplify the form of the differential equations that have to be solved. 
In turn, this simplifies the loss function that has to be minimized, and our numerical tests show that the solutions obtained with the invariant model outperformed their non-invariant counterparts, and typically considerably so. Table 1 summarizes the examples considered in the paper and shows that the invariant PINN outperforms the vanilla PINN on all of them. The proposed method is fully algorithmic and as such can be applied to any system of differential equations that is strongly invariant under the prolonged action of a group of Lie point symmetries. It is worth noting that the work proposed here parallels some of the work on invariant discretization schemes which, for ordinary differential equations, also routinely outperform their non-invariant counterparts. We have observed this to also be the case for physics-informed neural networks. Lastly, while we have restricted ourselves here to the case of ordinary differential equations, our method extends to partial differential equations as well. However, when considering partial differential equations, it is not sufficient to project the equations onto the space of differential invariants as done in this paper. As explained in [36], integrability conditions among the differential invariants must also be added to the invariantized differential equations. In the multivariate case, the reconstruction equations (23) will then form a system of first order partial differential equations for the left moving frame. Apart from these modifications, invariant physics-informed neural networks can also be constructed for partial differential equations, which will be investigated elsewhere. #### Acknowledgments and Disclosure of Funding This research was undertaken, in part, thanks to funding from the Canada Research Chairs program and the NSERC Discovery Grant program. The authors also acknowledge support from the Atlantic Association for Research in the Mathematical Sciences (AARMS) Collaborative Research Group on _Mathematical Foundations for Scientific Machine Learning_.
2307.05098
Fabry-Pérot interference in Josephson junctions
Conductance of metallic heterostructures can be controlled by applying a gate voltage to a region in the transport channel. For sufficiently long phase coherent channels, oscillations appear in the conductance versus chemical potential plot, which can be explained by Fabry-P\'erot interference. In this work, we study the DC Josephson effect in superconductor-normal metal-superconductor junctions. The chemical potential of the normal metal (NM) region can be tuned by an applied gate voltage. We numerically obtain the Andreev bound states formed within the superconducting gap and calculate the Josephson current by summing up the currents carried by the occupied Andreev bound states. We find that the Josephson current oscillates as a function of the chemical potential in the NM region, and these oscillations can be explained by the Fabry-P\'erot interference condition. We find that the Josephson current carried by one bound state can be higher than that carried by two or more bound states.
Sushil Kumar Sahu, Abhiram Soori
2023-07-11T08:12:43Z
http://arxiv.org/abs/2307.05098v3
# Fabry-Perot interference in Josephson junctions ###### Abstract Conductance of metallic heterostructures can be controlled by applying a gate voltage to a region in the transport channel. For sufficiently long phase coherent channels, oscillations appear in the conductance versus chemical potential plot, which can be explained by Fabry-Perot interference. In this work, we study the DC Josephson effect in superconductor-normal metal-superconductor junctions. The chemical potential of the normal metal (NM) region can be tuned by an applied gate voltage. We numerically obtain the Andreev bound states formed within the superconducting gap and calculate the Josephson current by summing up the currents carried by the occupied Andreev bound states. We find that the Josephson current oscillates as a function of the chemical potential in the NM region, and these oscillations can be explained by the Fabry-Perot interference condition. We find that the Josephson current carried by one bound state can be higher than that carried by two or more bound states. ## I Introduction Fabry-Perot interference (FPI) is a phenomenon in light scattering that happens in optical cavities, wherein for certain sizes of the cavity the transmission of monochromatic light is perfect, while the light is reflected from the cavity otherwise [1]. This phenomenon has been used to assist lasing action in lasers [2] and in gravitational wave detectors [3; 4]. The same phenomenon is exhibited by electrons in nanostructures owing to their wave nature [5]. The physics of FPI is used in the detection of fractional charges in quantum Hall devices [6]. Spin transistors [7; 8] and planar Hall effect devices [9; 10] exhibit FPI. Several proposals to enhance crossed Andreev reflection make use of FPI [11; 12; 13; 14]. Scattering across PT-symmetric non-Hermitian ladders exhibits FPI in the PT-unbroken phase, whereas FPI is absent in the PT-broken phase [15]. The DC Josephson effect is an equilibrium phenomenon in junctions between two superconductors, wherein a current flows from one superconductor to the other when a superconducting phase difference is maintained between them [16]. This current, termed the Josephson current, is carried by Cooper pairs and flows even when a normal metal or a thin insulating layer is sandwiched between the two superconductors. In the presence of a phase difference, quasiparticle bound states appear within the superconducting gap and, unlike the bound states in a normal metal which carry no current, these bound states carry the supercurrent [17]. It may be noted here that bound states can also appear in the absence of a phase difference if a sufficiently long normal metal region is inserted between the superconductors, but they do not carry any supercurrent in the absence of a phase bias. The Josephson current through a quantum point contact has been studied by Beenakker and van Houten, wherein the dependence of the Josephson current on the width of the point contact is investigated [18; 19]. In a superconductor-normal metal-superconductor junction, the transport is coherent only when the size of the normal metal is smaller than the coherence length. In this work, we study the effect of a gate tunable normal metal sandwiched between two superconductors on the Josephson current. The schematic of the setup is shown in Fig. 1. When the chemical potential of a normal metal coupled to normal metal reservoirs on either side is changed, the differential conductance of the setup exhibits oscillations which are rooted in Fabry-Perot interference. 
Oscillations are expected in the Josephson current as well when the chemical potential of the central normal metal is varied. However, an important difference between the two cases is that the current carried under an applied bias between two normal metal reservoirs is a non-equilibrium current, whereas the Josephson current is an equilibrium current. In this paper, we study the Fabry-Perot interference exhibited by the equilibrium Josephson current. Figure 1: Schematic of the setup. The superconductor (SC) on the left (right) has a phase \(\phi\) (0). Gate voltage applied to the central normal metal (NM) can change its chemical potential. ## II Details of calculation The Hamiltonian for a superconductor-normal metal-superconductor junction is given by \[H = \begin{cases}\Psi^{\dagger}(x)\Big{[}\Big{(}-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}-\mu\Big{)}\tau_{z}+\Delta(\cos\phi\tau_{x}+\ \sin\phi\tau_{y})\Big{]}\Psi(x),\ \ \ \text{for}\ \ x<0\\ \\ \Psi^{\dagger}(x)\Big{[}\Big{(}-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}-\mu_{0}\Big{)}\tau_{z}\Big{]}\Psi(x),\ \ \text{for}\ \ 0<x<L,\\ \\ \Psi^{\dagger}(x)\Big{[}\Big{(}-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}-\mu\Big{)}\tau_{z}+\Delta\tau_{x}\Big{]}\Psi(x),\ \ \text{for}\ \ x>L,\end{cases} \tag{1}\] where \(\Psi(x)=[c_{\uparrow}(x),\ c_{\downarrow}^{\dagger}(x),\ -c_{\downarrow}(x),\ c_{\uparrow}^{\dagger}(x)]^{T}\), and \(c_{\sigma}(x)\) is the annihilation operator for an electron of spin-\(\sigma\) at \(x\), and \(\tau_{x}\), \(\tau_{y}\), \(\tau_{z}\) are Pauli spin matrices that act on the particle-hole sector. Here, \(m\) is the effective mass of electrons, \(\mu\) is the chemical potential in the superconductors, \(\mu_{0}\) is the chemical potential in the normal metal region and can be tuned by an applied gate voltage, \(\Delta\) is the strength of the superconducting pair potential, and \(\phi\) is the superconducting phase difference. The wavefunction for this Hamiltonian is a four-spinor having the form \(\psi=[\psi_{e,\uparrow},\psi_{h,\downarrow},\psi_{e,\downarrow},\psi_{h,\uparrow}]^{T}\), where each of \(\psi_{p,\sigma}\) (\(p=e,h\), \(\sigma=\uparrow,\downarrow\)) is a function of \(x\), and \(\psi_{p,\sigma}\) corresponds to an electron excitation of spin \(\sigma\) for \(p=e\) and a hole excitation of spin \(\sigma\) for \(p=h\). The Hamiltonian does not have any spin dependent term. So, it has spin degenerate states. From eq. (1), it is evident that spin-up electrons mix with only spin-down holes and spin-down electrons mix with spin-up holes. So, the two-spinor eigenfunctions \(\psi^{\prime}=[\psi_{e,\uparrow},\psi_{h,\downarrow}]\) can be found, and the Josephson current determined from this can be multiplied by a factor of 2 to obtain the total Josephson current. The wavefunction \(\psi^{\prime}\) satisfies a probability current conserving boundary condition [20]. We choose the boundary condition that corresponds to a delta function impurity present at the junction between the normal metal and the superconductor [21]: \[\psi^{\prime}(x_{0}^{-})=\psi^{\prime}(x_{0}^{+}),\qquad\partial_{x}\psi^{\prime}(x_{0}^{+})-\partial_{x}\psi^{\prime}(x_{0}^{-})=q_{0}\,\psi^{\prime}(x_{0}), \tag{2}\] at \(x_{0}=0,\,L\), where \(q_{0}\) parametrizes the strength of the delta function impurity. Within the superconducting gap, the bound state wavefunction can be written as a superposition of plane waves in the NM region and of modes that decay into the two superconductors (eq. (3)). Imposing the boundary conditions (2) at \(x=0\) and \(x=L\) on this ansatz yields a homogeneous system of linear equations for the coefficients with a coefficient matrix \(M\), and the bound state energies are the energies within the gap at which det\((M)=0\). The wavefunction given by eq. (3) is numerically normalized. Now, the current carried by such a state can be found by calculating the current in the normal metal region [17]: \(J^{\prime}=(e\hbar/m)\text{Im}(\psi^{\prime\dagger}\partial_{x}\psi^{\prime})\). The current carried by both spin sectors is \(J=2(e\hbar/m)\text{Im}(\psi^{\prime\dagger}\partial_{x}\psi^{\prime})\). 
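As a small numerical illustration of this current formula (our own sketch, in units \(e=\hbar=m=1\), acting on a hypothetical discretized two-spinor on a grid):

```python
import numpy as np

# Evaluate J = 2 * (e*hbar/m) * Im(psi'^dagger d_x psi') for a discretized
# two-spinor; psi has shape (N, 2) = (grid points, spinor components).
def josephson_current(psi, dx):
    dpsi = np.gradient(psi, dx, axis=0)                 # finite-difference d_x psi
    return 2.0 * np.imag(np.sum(np.conj(psi) * dpsi, axis=1))

# sanity check: a pure electron plane wave e^{ikx} carries current 2k in these units
x = np.linspace(0.0, 10.0, 2001)
k = 1.3
psi = np.stack([np.exp(1j * k * x), np.zeros_like(x)], axis=1)
print(josephson_current(psi, x[1] - x[0])[1000])        # ~ 2*k = 2.6
```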
The bound state energies come in pairs \(\pm E_{b}\). There can be multiple pairs of energies at which det\((M)=0\). The sum of currents carried by all the negative energy bound states gives the total Josephson current. The Josephson current can also be calculated by the formula \(J=4(e/\hbar)\sum_{j}\partial E_{b,j}/\partial\phi\), where \(\pm E_{b,j}\)'s are different bound state energies. We numerically find that both the methods give exactly the same Josephson current. ## III Results and analysis We choose \(\mu=20\Delta\), \(q_{0}=2\sqrt{m\Delta}/\hbar\), \(L=10\hbar/\sqrt{m\Delta}\) and numerically calculate the Josephson current \(J\) as a function of the superconducting phase difference \(\phi\) for different choices of \(\mu_{0}\). The graph of the current phase relation is shown in Fig. 2. The current is not a sinusoidal function of \(\phi\). Interestingly, the current first increases as \(\mu_{0}\) increases and then decreases. This motivates us to look at the dependence of the Josephson current on \(\mu_{0}\) at a fixed \(\phi\). In Fig. 3, we plot the Josephson current versus \(\mu_{0}\) for the same set of parameters. \(\mu_{0}\), the chemical potential in the normal metal region, can be controlled in an experiment using an external gate voltage. We find that the Josephson current is close to zero near \(\mu_{0}=0\). This is because the band bottom of the normal metal region is close to zero, and at nonzero energies within the superconducting gap there is no plane-wave hole state in the normal metal region to match an electron plane-wave state. As \(\mu_{0}\) is increased further, the Josephson current increases in magnitude, but with oscillations. These oscillations are due to Fabry-Perot interference. If \(\mu_{0,i}\) is the position of the \(i\)-th local peak, \(k_{e,i}\simeq\sqrt{2m\mu_{0,i}}/\hbar\) satisfies the relation \([k_{e,i+1}-k_{e,i}]L=\pi\) very well for \(\mu_{0}>10\Delta\), since \(|E|<\Delta\) can be neglected in comparison to \(\mu_{0}\) in the expression \(k_{e}=\sqrt{2m(\mu_{0}+E)}/\hbar\). This condition is the Fabry-Perot interference condition. Bohr-Sommerfeld-like quantization refers to the fact that standing waves on a ring can have discrete momenta such that \(\int pdx=nh\). But, in our case, the standing waves are not on a ring and the momenta \(\pm\hbar k\) contribute to making the standing wave in the normal metal region. Hence, \(\int pdx=\int_{0}^{L}\hbar kdx+\int_{L}^{0}(-\hbar k)dx=2\hbar kL\) and the quantization condition implies \(2\hbar kL=nh\implies(k_{n+1}-k_{n})L=\pi\), which is the same as the Fabry-Perot interference condition. Another feature of Fig. 3(a) is that around \(\mu_{0}=70\Delta\), the maximum value of the magnitude of the Josephson current at the peak saturates. This is because the number of bound states changes from 2 to 1 around \(\mu_{0}=70\Delta\). The Josephson current is driven by the superconducting phase bias. We find that the supercurrent is higher in magnitude when carried by only one bound state compared to when it is carried by two bound states. Also, the cusps in Fig. 2 are due to a change in the number of bound states when the phase difference is varied. Beyond \(\mu_{0}\simeq 70\Delta\), the amplitude of oscillations increases as \(\mu_{0}\) increases. To understand this feature, we look at the transmission probability in a similar NM-NM-NM junction as a function of \(\mu_{0}\). The NM-NM-NM junction can be described by the Hamiltonian in eq. (1) by eliminating the terms proportional to \(\Delta\). 
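A minimal transfer-matrix sketch of this zero-energy transmission (our own illustration, in units \(\hbar=m=\Delta=1\), so that \(\mu=20\), \(q_{0}=2\), and \(L=10\) match the parameters above):

```python
import numpy as np

def transmission(mu, mu0, L, q0):
    """Zero-energy transmission through NM-NM-NM with delta barriers at x = 0, L."""
    k, kp = np.sqrt(2*mu), np.sqrt(2*mu0)        # lead and central wavevectors
    def step(k1, k2):
        # transfer matrix across an interface, matching psi and the derivative
        # jump d_x psi(+) - d_x psi(-) = q0 * psi of eq. (2)
        kap, q = k1/k2, q0/k2
        return 0.5 * np.array([[1 + kap - 1j*q, 1 - kap - 1j*q],
                               [1 - kap + 1j*q, 1 + kap + 1j*q]])
    P = np.diag([np.exp(1j*kp*L), np.exp(-1j*kp*L)])   # free propagation over L
    M = step(kp, k) @ P @ step(k, kp)
    return abs(1/M[1, 1])**2                     # det(M) = 1, so t = 1/M[1,1]

mu0_values = np.linspace(0.5, 100, 2000)          # mu0 in units of Delta
T = [transmission(20.0, m0, 10.0, 2.0) for m0 in mu0_values]
# peaks of T versus mu0 occur at wavevectors spaced by pi/L: the FPI condition
```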
In Fig. 3(b), transmission probability at zero energy is plotted versus \(\mu_{0}\) keeping other parameters the same. The transmission probability reaches 1 at the peaks, but the values at the local minima decrease as \(\mu_{0}\) increases. This explains why the Josephson current shows oscillations with larger amplitude as the chemical potential \(\mu_{0}\) increases. If \(L\) is increased, the oscillations become more closely spaced, as can be understood from the FPI condition. ## IV Discussion These results hold when the length of the NM region is less than the coherence length and the transport is phase coherent. At the same time, for the interference to happen, the length of the normal metal should be larger than a critical value given by \(\pi\hbar/\sqrt{2m\mu_{0}}\). In typical superconductors, \(\mu\gg\Delta\) and hence, we have chosen \(\mu=20\Delta\). The barrier strength \(q_{0}\) is assumed to be small compared to the Fermi wavevector so that the Josephson current is large. This holds for a smooth junction between the normal metal and the superconductor. Instead of the dependence of the Josephson current on the chemical potential of the normal metal region, the dependence on the size of the normal metal region can be studied to probe the FPI in Josephson junctions. Such an experimental study supported by theoretical calculations using the Gorkov formalism was performed by Gudkov et al. [22]. Our analysis uses the Bogoliubov-de Gennes formalism [17]. ## V Summary We have studied the DC Josephson effect in a superconductor-normal metal-superconductor junction. We have written down the form of the wavefunctions analytically and solved for the bound state energies and coefficients numerically. We find current-phase relations and study the dependence of the Josephson current on the chemical potential \(\mu_{0}\) in the NM region. The oscillations in the Josephson current versus \(\mu_{0}\) match the Fabry-Perot interference condition. We have studied Fabry-Perot interference in equilibrium transport, in contrast to the Fabry-Perot interference commonly studied in nonequilibrium transport. The number of bound states changes as \(\mu_{0}\) is varied, and the current carried by one bound state can be higher in magnitude compared to the current carried by two bound states. Our results can be tested experimentally with present day technology. ###### Acknowledgements. AS thanks DST-INSPIRE Faculty Award (Faculty Reg. No. : IFA17-PH190), SERB Core Research grant (CRG/2022/004311) and University of Hyderabad Institute of Eminence PDF for financial support.
2301.02479
Quantum Multiple Access Wiretap Channel: On the One-Shot Achievable Secrecy Rate Regions
In this paper, we investigate classical-quantum multiple access wiretap channels (CQ-MA-WTC) under the one-shot setting. In this regard, we analyze the CQ-MA-WTC using a simultaneous position-based decoder for reliable decoding and a newly introduced technique in order to decode securely. Also, for the sake of comparison, we analyze the CQ-MA-WTC using Sen's one-shot joint typicality lemma for reliable decoding. The simultaneous position-based decoder leads to a multiple hypothesis testing problem. Also, using convex splitting to analyze the privacy criterion in a simultaneous scenario becomes problematic. To overcome both problems, we first introduce a new channel that can be considered as a dual to the CQ-MA-WTC. This channel is called a point-to-point quantum wiretap channel with multiple messages (PP-QWTC). In the following, as a strategy to solve the problem, we also investigate and analyze quantum broadcast channels (QBCs) under the one-shot setting.
Hadi Aghaee, Bahareh Akhbari
2023-01-06T12:33:41Z
http://arxiv.org/abs/2301.02479v1
# Quantum Multiple Access Wiretap Channel: On the One-Shot Achievable Secrecy Rate Regions

###### Abstract

In this paper, we investigate classical-quantum multiple access wiretap channels (CQ-MA-WTC) under the one-shot setting. We analyze the CQ-MA-WTC using a simultaneous position-based decoder for reliable decoding and a newly introduced technique for secure decoding. For comparison, we also analyze the CQ-MA-WTC using Sen's one-shot joint typicality lemma for reliable decoding. The simultaneous position-based decoder reduces to a multiple hypothesis testing problem, and using convex splitting to analyze the privacy criterion in a simultaneous scenario becomes problematic. To overcome both problems, we first introduce a new channel that can be considered a dual to the CQ-MA-WTC, called a point-to-point quantum wiretap channel with multiple messages (PP-QWTC). As part of this strategy, we also investigate and analyze quantum broadcast channels (QBCs) under the one-shot setting.

Quantum Channel; Mutual Information; Secrecy Capacity; Multiple Access Channel

## I Introduction

The quantum multiple access channel (QMAC) was first introduced by Winter [1]. A QMAC accepts two or more messages (classical or quantum) as inputs and produces one output. As in the classical setting, decoding messages over a QMAC relies on two main techniques: successive cancellation decoding and simultaneous decoding. In [1], the author employs the successive cancellation decoding technique. A quantum broadcast channel (QBC) is a channel with one sender and two or more receivers. The sender wishes to transmit two or more messages (classical or quantum) over the channel to the receivers. The QBC was first introduced by Yard _et al._ [2]. In [2], the authors derived an inner bound for the QBC in the i.i.d. (independent and identically distributed) case, and in [3], the authors derived the same inner bound using a more straightforward method, closer in spirit to its classical analogue [4] than the method in [2]. In recent decades, with the development of quantum data processing and its applications, the need to study the security of quantum channels has increased. In this regard, the quantum wiretap channel (QWTC) was first introduced in [5] and [6]. Subsequently, secrecy constraints were extended to multi-user quantum channels such as the quantum interference channel (QIC) [7,8] and the quantum multiple access channel (QMAC) [9-13]. There are two bottlenecks in studying the security of quantum channels. The first is decoding three or more messages simultaneously (reliability), and the second is decoding two or more messages securely (confidentiality). The first bottleneck arises from the nonexistence of a general quantum joint typicality lemma. This problem has been solved in some cases, such as the min-entropy case and QMACs with commutative outputs [14]; otherwise, in the i.i.d. case, successive decoding combined with time-sharing techniques must be used. In the one-shot setting, transmitters are allowed to transmit their messages using only a single use of the channel. Sen proved a joint typicality lemma which helps to decode any number of messages simultaneously in the one-shot case [14]. Obtaining secrecy against the eavesdropper by Wyner's technique [15] of randomizing over a block becomes problematic in the quantum setting.
Wyner's technique has been shown to work for point-to-point quantum channels by Devetak [6] and is explained further in [16]. However, there is no easy generalization to multiple senders for a quantum channel; this issue is discussed in detail in [16]. In this paper, we investigate the secrecy problem of the quantum multiple access channel (QMAC) with classical inputs under the one-shot setting. We also investigate several bottlenecks connected to the decoding process for the CQ-MA-WTC. The main contribution of this paper is the analysis of these bottlenecks and the solutions we provide to overcome them. We present two techniques for the quantum multiple access wiretap channel with classical inputs (CQ-MA-WTC). The first approach is based on the method presented in [14], and the other technique is the _simultaneous position-based decoder_. From [17], we know that the simultaneous position-based decoder reduces to a multiple quantum hypothesis testing problem, which is solvable only in special cases. Also, from [18], we know that the convex split lemma cannot be used to analyze the privacy of multiple messages in simultaneous decoding. The paper is organized as follows: In Section II, some seminal definitions are presented. In Section III, the channel model and information processing task are presented. In Section IV, the results and main theorems are presented. Section V is dedicated to discussion.

## II Preliminaries

Let A (Alice), B (Bob), and C (Charlie) be three quantum systems. These quantum systems are denoted by their corresponding Hilbert spaces \(\mathcal{H}^{A}\), \(\mathcal{H}^{B}\), and \(\mathcal{H}^{C}\). The states of these quantum systems are represented by density operators \(\rho^{A}\), \(\rho^{B}\), and \(\rho^{C}\), respectively, while the shared state between Alice, Bob, and Charlie is denoted by \(\rho^{ABC}\). A density operator is a positive semidefinite operator with unit trace. Alice's, Bob's, or Charlie's state can be obtained by a partial trace over the shared state; the partial trace models the lack of access to a quantum system. Thus, Alice's density operator is \(\rho^{A}=Tr_{BC}\{\rho^{ABC}\}\). \(\left|\psi\right\rangle^{A}\) denotes a pure state of system A, with corresponding density operator \(\psi^{A}=\left|\psi\right\rangle\!\left\langle\psi\right|^{A}\). The von Neumann entropy of the state \(\rho^{A}\) is defined by \(H(A)_{\rho}=-Tr\{\rho^{A}\log\rho^{A}\}\). For an arbitrary state \(\sigma^{AB}\), the quantum conditional entropy is defined by \(H(A|B)_{\sigma}=H(A,B)_{\sigma}-H(B)_{\sigma}\). The quantum mutual information is defined by \(I(A;B)_{\sigma}=H(A)_{\sigma}+H(B)_{\sigma}-H(A,B)_{\sigma}\), and the conditional quantum mutual information is defined by:

\[I(A;B|C)_{\sigma}=H(A|C)_{\sigma}+H(B|C)_{\sigma}-H(A,B|C)_{\sigma}\]

Quantum operations are denoted by _completely positive trace-preserving_ (CPTP) maps \(\mathcal{N}^{A\to B}\), which accept input states in A and produce output states in B. The distance between two quantum states is measured by the trace distance. The trace distance between two arbitrary states \(\sigma\) and \(\rho\) is:

\[\left\|\sigma-\rho\right\|_{1}=Tr|\sigma-\rho| \tag{1}\]

where \(|\Psi|=\sqrt{\Psi^{\dagger}\Psi}\). This quantity is zero for two identical states and attains its maximum value for two perfectly distinguishable states.
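As a concrete numerical illustration of the trace distance in (1), the following minimal Python sketch (our own illustrative example, not part of the original development) evaluates \(\|\sigma-\rho\|_{1}\) for a pure qubit state and the maximally mixed state.

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """Trace distance ||rho - sigma||_1 = Tr|rho - sigma|, as in Eq. (1)."""
    delta = rho - sigma
    return float(np.real(np.trace(sqrtm(delta.conj().T @ delta))))

# |0><0| versus the maximally mixed qubit state I/2.
rho = np.diag([1.0, 0.0]).astype(complex)
sigma = np.eye(2, dtype=complex) / 2
print(trace_distance(rho, sigma))  # prints 1.0
```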
_Fidelity_ is defined as \(F(\rho,\sigma)=\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{1}^{2}\), and the _purified distance_ is a metric on \(\mathcal{D}(\mathcal{H})\) defined as \(P(\rho,\sigma)\coloneqq\sqrt{1-F(\rho,\sigma)^{2}}\). Most of the above definitions are given in [19].

**Definition 1**: (Hypothesis testing mutual information) _The hypothesis testing mutual information is denoted by \(I_{H}^{\epsilon}(X;Y)_{\rho}\coloneqq D_{H}^{\epsilon}(\rho_{XY}\|\rho_{X}\otimes\rho_{Y})\), \(\epsilon\in(0,1)\), where \(D_{H}^{\epsilon}(\cdot\|\cdot)\) is the hypothesis testing relative entropy [17]. Here \(\epsilon\) is the smoothing parameter, and \(\rho_{XY}\) is the joint classical-quantum state of input and output over the Hilbert spaces \((\mathcal{H}_{X},\mathcal{H}_{Y})\):_

\[\rho_{XY}=\sum_{x}p_{X}(x)\left|x\right\rangle\!\left\langle x\right|_{X}\otimes\rho_{Y}^{x}\]

_where \(p_{X}\) is the input distribution._

**Definition 2**: (Quantum relative entropy [20]) _Consider states \(\rho_{X},\sigma_{X}\in\mathcal{D}(\mathcal{H}_{X})\). The quantum relative entropy is defined as:_

\[D(\rho_{X}\|\sigma_{X})\coloneqq\begin{cases}Tr\{\rho_{X}[\log_{2}\rho_{X}-\log_{2}\sigma_{X}]\}&supp(\rho_{X})\subseteq supp(\sigma_{X})\\ +\infty&otherwise\end{cases}\]

_where \(supp(\sigma_{X})\) refers to the set-theoretic support of \(\sigma_{X}\), i.e., the subspace of \(\mathcal{H}\) spanned by all eigenvectors of \(\sigma_{X}\) with non-zero eigenvalues._

**Fact 1**: _The following relation holds between the quantum relative entropy and the hypothesis testing relative entropy for \(\epsilon\in(0,1)\) [21]:_

\[D_{H}^{\epsilon}(\rho_{X}\|\sigma_{X})\leq\frac{1}{1-\epsilon}[D(\rho_{X}\|\sigma_{X})+h_{b}(\epsilon)]\]

_where \(h_{b}(\epsilon)\coloneqq-\epsilon\log_{2}\epsilon-(1-\epsilon)\log_{2}(1-\epsilon)\) is the binary entropy function._

**Definition 3**: (Max mutual information [21]) _Consider a bipartite state \(\rho_{XY}\). The max mutual information is defined as follows:_

\[I_{max}(X;Y)_{\rho}\coloneqq D_{max}(\rho_{XY}\|\rho_{X}\otimes\rho_{Y})\]

_where \(\rho\) refers to the state \(\rho_{XY}\) and \(D_{max}(\cdot\|\cdot)\) is the max-relative entropy [22]: for \(\rho_{X},\sigma_{X}\in\mathcal{H}_{X}\),_

\[D_{max}(\rho_{X}\|\sigma_{X})\coloneqq\inf\{\gamma\in\mathbb{R}\colon\rho_{X}\leq 2^{\gamma}\sigma_{X}\}\]

**Definition 4**: (Quantum smooth max relative entropy [22]) _Consider states \(\rho_{X},\sigma_{X}\in\mathcal{D}(\mathcal{H}_{X})\) and \(\epsilon\in(0,1)\). The quantum smooth max relative entropy is defined as:_

\[D_{max}^{\epsilon}(\rho_{X}\|\sigma_{X})\coloneqq\inf_{\rho_{X}^{\epsilon}\in\mathcal{B}^{\epsilon}(\rho_{X})}D_{max}(\rho_{X}^{\epsilon}\|\sigma_{X})\]

_where \(\mathcal{B}^{\epsilon}(\rho_{X})\coloneqq\{\rho_{X}^{\epsilon}\in\mathcal{D}(\mathcal{H}_{X})\colon P(\rho_{X}^{\epsilon},\rho_{X})\leq\epsilon\}\) is the \(\epsilon\)-ball around \(\rho_{X}\)._
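To make Definitions 3 and 4 concrete, the following small sketch (our own illustrative example, assuming a full-rank \(\sigma\)) computes \(D_{max}\) via the largest eigenvalue of \(\sigma^{-1/2}\rho\,\sigma^{-1/2}\), which equals the smallest \(\gamma\) with \(\rho\leq 2^{\gamma}\sigma\).

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def d_max(rho, sigma):
    """Max-relative entropy D_max(rho || sigma) for full-rank sigma:
    log2 of the largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2}."""
    s = fractional_matrix_power(sigma, -0.5)
    return float(np.log2(np.linalg.eigvalsh(s @ rho @ s).max()))

rho = np.diag([1.0, 0.0])   # pure state |0><0|
sigma = np.eye(2) / 2       # maximally mixed state
print(d_max(rho, sigma))    # prints 1.0, since rho <= 2 * sigma
```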
**Definition 5**: (Quantum smooth max mutual information [21]) _Consider a classical-quantum state \(\rho_{XY}\coloneqq\sum_{x\in\mathcal{X}}p_{X}(x)\left|x\right\rangle\!\left\langle x\right|_{X}\otimes\rho_{Y}^{x}\) and a parameter \(\epsilon\in(0,1)\). The smooth max mutual information between the systems \(X\) and \(Y\) is defined as follows:_

\[I_{max}^{\epsilon}(X;Y)\coloneqq\inf_{\rho_{XY}^{\epsilon}\in\mathcal{B}^{\epsilon}(\rho_{XY})}D_{max}(\rho_{XY}^{\epsilon}\|\rho_{X}\otimes\rho_{Y})=\inf_{\rho_{XY}^{\epsilon}\in\mathcal{B}^{\epsilon}(\rho_{XY})}I_{max}(X;Y)_{\rho^{\epsilon}}\,,\]

_where \(\mathcal{B}^{\epsilon}(\rho_{XY})\coloneqq\{\rho_{XY}^{\epsilon}\in\mathcal{D}(\mathcal{H}_{X}\otimes\mathcal{H}_{Y})\colon P(\rho_{XY}^{\epsilon},\rho_{XY})\leq\epsilon\}\) is the \(\epsilon\)-ball around \(\rho_{XY}\)._

**Definition 6**: (Conditional smooth hypothesis testing mutual information [23]) _Let \(\rho_{XYZ}\coloneqq\sum_{z\in\mathcal{Z}}p_{Z}(z)\left|z\right\rangle\!\left\langle z\right|_{Z}\otimes\rho_{XY}^{z}\) be a tripartite classical-quantum state and \(\epsilon\in(0,1)\). We define_

\[I_{H}^{\epsilon}(X;Y|Z)_{\rho}\coloneqq\max_{\rho^{\prime}}\min_{z\in supp(\rho_{Z}^{\prime})}I_{H}^{\epsilon}(X;Y)_{\rho_{XY}^{z}}\,,\]

_where the maximization is over all \(\rho_{Z}^{\prime}=\sum_{z\in\mathcal{Z}}p_{Z}^{\prime}(z)\left|z\right\rangle\!\left\langle z\right|_{Z}\) satisfying \(P(\rho_{Z}^{\prime},\rho_{Z})\leq\epsilon\)._

**Fact 2**: [24] _Let \(\rho_{XYZ}\coloneqq\sum_{z\in\mathcal{Z}}p_{Z}(z)\left|z\right\rangle\!\left\langle z\right|_{Z}\otimes\rho_{XY}^{z}\) be a tripartite classical-quantum state and \(\epsilon\in(0,1)\). The following relation holds:_

\[\lim_{n\to\infty}\frac{1}{n}I_{H}^{\epsilon}(X^{\otimes n};Y^{\otimes n}|Z^{n})_{\rho^{\otimes n}}=I(X;Y|Z)_{\rho}\]

**Definition 7**: (Alternate smooth max mutual information) _Consider a bipartite state \(\rho_{XY}\) and a parameter \(\epsilon\in(0,1)\). The alternate smooth max mutual information between the systems \(X\) and \(Y\) is defined as follows:_

\[\hat{I}_{max}^{\epsilon}(Y;X)\coloneqq\inf_{\rho_{XY}^{\epsilon}\in\mathcal{B}^{\epsilon}(\rho_{XY})}D_{max}(\rho_{XY}^{\epsilon}\|\rho_{X}\otimes\rho_{Y}^{\epsilon})\]

**Fact 3**: (Relation between the two definitions of the smooth max mutual information) [25]: _Let \(\epsilon\in(0,1)\) and \(\gamma\in(0,\epsilon)\). For a bipartite state \(\rho_{XY}\), it holds that:_

\[\hat{I}_{max}^{\epsilon}(Y;X)_{\rho}\leq I_{max}^{\epsilon-\gamma}(X;Y)_{\rho}+\log\frac{3}{\gamma^{2}}\]

**Definition 8**: (Conditional smooth max mutual information [23]) _Let \(\rho_{XYZ}\coloneqq\sum_{z\in\mathcal{Z}}p_{Z}(z)\left|z\right\rangle\!\left\langle z\right|_{Z}\otimes\rho_{XY}^{z}\) be a tripartite classical-quantum state and \(\epsilon\in(0,1)\). We define_

\[I_{max}^{\epsilon}(X;Y|Z)_{\rho}\coloneqq\max_{\rho^{\prime}}\min_{z\in supp(\rho_{Z}^{\prime})}I_{max}^{\epsilon}(X;Y)_{\rho_{XY}^{z}}\,,\]

_where the maximization is over all \(\rho_{Z}^{\prime}=\sum_{z\in\mathcal{Z}}p_{Z}^{\prime}(z)\left|z\right\rangle\!\left\langle z\right|_{Z}\) satisfying \(P(\rho_{Z}^{\prime},\rho_{Z})\leq\epsilon\)._
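Since the smoothing in Definitions 4-8 is measured in purified distance, a short numeric sketch may help; this is our own illustrative example, following the fidelity convention stated above.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """F(rho, sigma) = ||sqrt(rho) sqrt(sigma)||_1^2 (convention used above)."""
    sv = np.linalg.svd(sqrtm(rho) @ sqrtm(sigma), compute_uv=False)
    return float(np.sum(sv)) ** 2  # trace norm = sum of singular values

def purified_distance(rho, sigma):
    """P(rho, sigma) = sqrt(1 - F(rho, sigma)^2)."""
    return float(np.sqrt(1.0 - fidelity(rho, sigma) ** 2))

rho = np.diag([0.9, 0.1])
sigma = np.eye(2) / 2
print(purified_distance(rho, sigma))  # prints 0.6: rho lies outside small eps-balls of sigma
```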
**Fact 4**: [24] _Let \(\rho_{XYZ}\coloneqq\sum_{z\in\mathcal{Z}}p_{Z}(z)\left|z\right\rangle\!\left\langle z\right|_{Z}\otimes\rho_{XY}^{z}\) be a tripartite classical-quantum state and \(\epsilon\in(0,1)\). The following relation holds:_

\[\lim_{n\to\infty}\frac{1}{n}I_{max}^{\epsilon}(X^{\otimes n};Y^{\otimes n}|Z^{n})_{\rho^{\otimes n}}=I(X;Y|Z)_{\rho}\]

**Definition 9**: (Quantum Rényi relative entropy of order \(\alpha\) [17]) _For a state \(\rho\in\mathcal{D}(\mathcal{H})\) and a positive semidefinite operator \(\sigma\), the quantum Rényi relative entropy of order \(\alpha\), where \(\alpha\in[0,1)\cup(1,+\infty)\), is defined as:_

\[D_{\alpha}(\rho\|\sigma)\equiv\frac{1}{\alpha-1}\log_{2}Tr\{\rho^{\alpha}\sigma^{1-\alpha}\}\]

_Also, the Rényi entropy of order \(\alpha\) can be defined as follows:_

\[H_{\alpha}(A)_{\rho}\equiv\frac{1}{1-\alpha}\log_{2}Tr\{\rho_{A}^{\alpha}\}\]

**Definition 10**: (One-shot inner bound of a classical-quantum multiple access channel) [14] _A two-user classical-quantum multiple access channel (C-QMAC) under the one-shot setting is a triple \((\mathcal{X}_{1}\times\mathcal{X}_{2},\mathcal{N}_{X_{1}X_{2}\to Y}(x_{1},x_{2})\equiv\rho_{x_{1}x_{2}}^{Y},\mathcal{H}_{Y})\), where \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) are the alphabet sets of the two classical inputs and \(Y\) is the output system. \(\rho_{x_{1}x_{2}}^{Y}\) is a quantum state, and the channel is a completely positive trace-preserving (CPTP) map \(\mathcal{N}_{X_{1}X_{2}\to Y}\). By the joint typicality lemma introduced in [14, Corollary 4], the one-shot inner bound of a C-QMAC is as follows:_

\[R_{1}\leq I_{H}^{\epsilon}(X_{1};X_{2}Y)_{\rho}-2+\log\epsilon\]
\[R_{2}\leq I_{H}^{\epsilon}(X_{2};X_{1}Y)_{\rho}-2+\log\epsilon\]
\[R_{1}+R_{2}\leq I_{H}^{\epsilon}(X_{1}X_{2};Y)_{\rho}-2+\log\epsilon\]

_with decoding error at most \(49\sqrt{\epsilon}\), where \(I_{H}^{\epsilon}(\cdot)\) is the hypothesis testing mutual information of Definition 1 with respect to the controlling state:_

\[\rho^{QX_{1}X_{2}Y}\coloneqq\sum_{q,x_{1},x_{2}}p(q)p(x_{1}|q)p(x_{2}|q)\left|qx_{1}x_{2}\right\rangle\!\left\langle qx_{1}x_{2}\right|^{QX_{1}X_{2}}\otimes\rho_{x_{1}x_{2}}^{Y} \tag{2}\]

_and \(Q\) is a time-sharing variable. Note that \(I_{H}^{\epsilon}(\cdot)\) is the difference between a Rényi entropy of order two and a conditional quantum entropy._

**Lemma 1**: [16] _Given the control state in (2) (without the time-sharing variable), \(\delta^{\prime}>0\) and \(0<\epsilon^{\prime}<\delta^{\prime}\), let \(\{x_{1},\ldots,x_{|\mathcal{X}_{1}|}\}\) and \(\{y_{1},\ldots,y_{|\mathcal{X}_{2}|}\}\) be i.i.d. samples from the distributions \(P_{X}\) and \(P_{Y}\). Then, if_

\[\log|\mathcal{X}_{1}|\geq I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X;Z)_{\rho}+\log\frac{3}{\epsilon^{\prime\,3}}-\frac{1}{4}\log\delta^{\prime}\]
\[\log|\mathcal{X}_{2}|\geq I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(Y;Z)_{\rho}+\log\frac{3}{\epsilon^{\prime\,3}}-\frac{1}{4}\log\delta^{\prime}+\mathcal{O}(1)\]

_the following holds:_

\[\mathbb{E}_{x_{1},\ldots,x_{|\mathcal{X}_{1}|}\sim P_{X}}\left\|\frac{1}{|\mathcal{X}_{1}||\mathcal{X}_{2}|}\sum_{j=1}^{|\mathcal{X}_{2}|}\sum_{i=1}^{|\mathcal{X}_{1}|}\rho_{x_{i}y_{j}}^{Z}-\rho^{Z}\right\|_{1}\leq 20\delta^{\prime\frac{1}{6}}\]

_Proof:_ see [16].
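As a quick numeric illustration of Definition 9 (our own example, using commuting diagonal states), the sketch below evaluates \(D_{\alpha}\) and checks that it approaches the relative entropy of Definition 2 as \(\alpha\to 1\).

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def renyi_relative_entropy(rho, sigma, alpha):
    """D_alpha(rho||sigma) = log2 Tr{rho^alpha sigma^(1-alpha)} / (alpha - 1)."""
    t = np.trace(fractional_matrix_power(rho, alpha)
                 @ fractional_matrix_power(sigma, 1.0 - alpha))
    return float(np.log2(np.real(t)) / (alpha - 1.0))

rho = np.diag([0.8, 0.2])
sigma = np.diag([0.5, 0.5])
for alpha in (0.5, 0.9, 0.999, 2.0):
    print(alpha, renyi_relative_entropy(rho, sigma, alpha))
# As alpha -> 1 the values approach D(rho||sigma) = 1 - h_b(0.2) ~ 0.278 bits.
```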
**Lemma 2**: (Convex split lemma) [19,20] _Let \(\rho_{XY}\) be an arbitrary state and let \(\tau_{X_{1}\ldots X_{K}Y}\) be the following state:_

\[\tau_{X_{1}\ldots X_{K}Y}=\frac{1}{K}\sum_{k=1}^{K}\rho_{X_{1}}\otimes\ldots\otimes\rho_{X_{k-1}}\otimes\rho_{X_{k}Y}\otimes\rho_{X_{k+1}}\otimes\ldots\otimes\rho_{X_{K}}\]

_Let \(\epsilon\in(0,1)\) and \(\delta\in\big(0,\sqrt{\epsilon}\big]\). If_

\[\log_{2}K=I_{max}^{\sqrt{\epsilon}-\delta}(Y;X)_{\rho}+2\log_{2}\left(\frac{1}{\delta}\right)\]

_then_

\[P\big(\tau_{X_{1}\ldots X_{K}Y},\rho_{X_{1}}\otimes\ldots\otimes\rho_{X_{K}}\otimes\bar{\rho}_{Y}\big)\leq\sqrt{\epsilon}\]

_for some state \(\bar{\rho}_{Y}\) such that \(P(\rho_{Y},\bar{\rho}_{Y})\leq\sqrt{\epsilon}-\delta\). Proof:_ see [20].

**Lemma 3**: (Hayashi-Nagaoka inequality [26]) _Suppose that \(S,T\in\mathcal{P}(\mathcal{H}_{X})\) are operators such that \(0\leq S\leq I\) and \(T\geq 0\). Then for every positive constant \(c\), the following relation holds:_

\[I-(S+T)^{-\frac{1}{2}}S(S+T)^{-\frac{1}{2}}\leq(1+c)(I-S)+(2+c+c^{-1})T \tag{3}\]

_Proof:_ see [26].

## III Channel Model

A two-user CQ-MA-WTC is a triple \((\mathcal{X}_{1}\times\mathcal{X}_{2},\mathcal{N}^{X_{1}X_{2}\to YZ}(x_{1},x_{2})\equiv\rho_{x_{1}x_{2}}^{YZ},\mathcal{H}^{Y}\otimes\mathcal{H}^{Z})\), where \(\mathcal{X}_{i},i\in\{1,2\}\), denote the input alphabet sets, and \(Y,Z\) denote the output systems (\(Y\) is the channel output at the legitimate receiver (Charlie), and \(Z\) is the channel output at the eavesdropper). \(\rho_{x_{1}x_{2}}^{YZ}\) is the quantum state of the system output. Both users want to transmit their messages as securely as possible over the CQ-MA-WTC to the receiver. The channel model is illustrated in Figure 1. Each user chooses its message \(m_{i},i\in\{1,2\}\), from its message set \(\mathcal{M}_{i}=[1:|\mathcal{M}_{i}|=2^{R_{i}}]\) (\(R_{1}\) and \(R_{2}\) are the transmission rates of the first and second messages, respectively) and sends it over the CQ-MA-WTC. The users also use two junk variables \(k_{i},i\in\{1,2\}\), from two amplification sets \(\mathcal{K}_{i}=\left[1:|\mathcal{K}_{i}|=2^{\tilde{R}_{i}}\right]\) for randomizing Eve's knowledge. We have two doubly indexed codebooks, \(x_{1}(m_{1},k_{1})\) for user 1 and \(x_{2}(m_{2},k_{2})\) for user 2.

## IV Main Results

In this section, we present the main results. Corollary 1 gives a one-shot achievable secrecy rate region for sending classical messages over a CQ-MA-WTC based on Sen's quantum joint typicality lemma [14]. Theorem 1 presents a novel approach to decoding both messages over a CQ-MA-WTC reliably and confidentially (the simultaneous position-based decoder). It should be noted that Corollary 1 and Theorem 1 use the same method to prove the security requirements. We also present a theorem that aims to overcome the bottlenecks connected to Theorem 1.
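Since the Hayashi-Nagaoka inequality of Lemma 3 drives the error analysis below, a quick numerical sanity check may be useful; the following sketch (our own illustration, not part of the original analysis) verifies the operator inequality for random operators.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(7)

def random_psd(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return a @ a.conj().T

d, c = 4, 0.5
S = random_psd(d)
S /= np.linalg.eigvalsh(S).max() + 1e-12          # enforce 0 <= S <= I
T = random_psd(d)                                  # any T >= 0
inv_half = fractional_matrix_power(S + T, -0.5)
lhs = np.eye(d) - inv_half @ S @ inv_half
rhs = (1 + c) * (np.eye(d) - S) + (2 + c + 1 / c) * T
gap = rhs - lhs
gap = (gap + gap.conj().T) / 2                     # hermitize numerical noise
print(np.linalg.eigvalsh(gap).min() >= -1e-8)      # True: the inequality holds
```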
**Corollary 1:** _(One-shot achievable rate region for the CQ-MA-WTC) Consider a two-user CQ-MA-WTC which accepts \(X_{1}\) and \(X_{2}\) as inputs and produces \(Y\) and \(Z\) as outputs, with channel density operator \(\rho_{x_{1}x_{2}}^{YZ}\). For any fixed \(\epsilon\in(0,1)\), \(\epsilon^{\prime}\in(0,\delta^{\prime})\) and \(\delta^{\prime}>0\), the rate tuple \(\left(R_{1},R_{2},49\sqrt{\epsilon}+20\delta^{\prime\frac{1}{6}}\right)\) is achievable if the following inequalities are satisfied:_

\[R_{1}\leq I_{H}^{\epsilon}(X_{1};X_{2}Y|Q)_{\rho}-I_{max}^{\eta}(X_{1};Z|Q)_{\rho}+\log\epsilon-2-\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{4}\log\delta^{\prime}\]

\[R_{2}\leq I_{H}^{\epsilon}(X_{2};X_{1}Y|Q)_{\rho}-I_{max}^{\eta}(X_{2};ZX_{1}|Q)_{\rho}+\log\epsilon-2-\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{4}\log\delta^{\prime}+\mathcal{O}(1)\]

\[R_{1}+R_{2}\leq I_{H}^{\epsilon}(X_{1}X_{2};Y|Q)_{\rho}-I_{max}^{\eta}(X_{1};Z|Q)_{\rho}-I_{max}^{\eta}(X_{2};ZX_{1}|Q)_{\rho}+\log\epsilon-2-2\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{2}\log\delta^{\prime}+\mathcal{O}(1)\]

_where \(\eta=\delta^{\prime}-\epsilon^{\prime}\) and the union is taken over input distributions \(p_{Q}(q)p_{X_{1}|Q}(x_{1}|q)p_{X_{2}|Q}(x_{2}|q)\). \(Q\) is the time-sharing random variable, and all mutual information quantities are taken with respect to the following state:_

\[\rho^{QX_{1}X_{2}YZ}\equiv\sum_{q,x_{1},x_{2}}p_{Q}(q)p_{X_{1}|Q}(x_{1}|q)p_{X_{2}|Q}(x_{2}|q)\left|q\right\rangle\!\left\langle q\right|^{Q}\otimes\left|x_{1}\right\rangle\!\left\langle x_{1}\right|^{X_{1}}\otimes\left|x_{2}\right\rangle\!\left\langle x_{2}\right|^{X_{2}}\otimes\rho_{x_{1}x_{2}}^{YZ} \tag{3}\]

_Proof_: See Appendix A. _Sketch of proof_: The proof has two steps: 1) reliable decoding based on Sen's quantum one-shot joint typicality lemma (Definition 10); 2) secure decoding based on Lemma 1.

**Theorem 1:** _(One-shot lower bound for the CQ-MA-WTC) For any fixed \(\epsilon\in(0,1)\), \(\epsilon^{\prime}\in(0,1)\) and \(\delta,\delta^{\prime}\) such that \(\delta\in(0,\epsilon)\) and \(\delta^{\prime}\in(0,\epsilon^{\prime})\), there exists a one-shot code for the channel \(\mathcal{N}_{X_{1}X_{2}\to YZ}\) if the rate tuple \(\left(R_{1},R_{2},\epsilon+2\delta+20\delta^{\prime\frac{1}{6}}\right)\) satisfies the following bounds:_

\[R_{1}\leq I_{H}^{\epsilon}(X_{1};X_{2}Y|Q)_{\rho}-I_{max}^{\eta}(X_{1};Z|Q)_{\rho}-\log_{2}\left(\frac{4\epsilon}{\delta^{2}}\right)-\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{4}\log\delta^{\prime}\]

\[R_{2}\leq I_{H}^{\epsilon}(X_{2};X_{1}Y|Q)_{\rho}-I_{max}^{\eta}(X_{2};ZX_{1}|Q)_{\rho}-\log_{2}\left(\frac{4\epsilon}{\delta^{2}}\right)-\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{4}\log\delta^{\prime}+\mathcal{O}(1)\]

\[R_{1}+R_{2}\leq I_{H}^{\epsilon}(X_{1}X_{2};Y|Q)_{\rho}-I_{max}^{\eta}(X_{1};Z|Q)_{\rho}-I_{max}^{\eta}(X_{2};ZX_{1}|Q)_{\rho}-\log_{2}\left(\frac{4\epsilon}{\delta^{2}}\right)-2\log\frac{3}{\epsilon^{\prime\,3}}+\frac{1}{2}\log\delta^{\prime}+\mathcal{O}(1)\]

_where \(\eta=\delta^{\prime}-\epsilon^{\prime}\) and the union is taken over input distributions \(p_{Q}(q)p_{X_{1}|Q}(x_{1}|q)p_{X_{2}|Q}(x_{2}|q)\). \(Q\) is the time-sharing random variable, and all mutual information quantities are taken with respect to the state (3)._

_Proof_: See Appendix B. _Sketch of proof_: The proof has two steps: 1) reliable decoding based on the _simultaneous position-based technique_; for simplicity of analysis, we merge the reliability and confidentiality criteria into a single criterion [20]; 2) secure decoding based on Lemma 1.
**Remark 1:** It should be noted that both of the above results yield the same bounds if and only if \(\delta=\epsilon\). As mentioned before, the simultaneous position-based decoder reduces to a multiple hypothesis testing problem, which is unsolvable in the general case. Also, the convex split lemma (Lemma 2) cannot be applied in simultaneous decoding, because it runs into the famous smoothing bottleneck of quantum information theory.

Figure 1: The CQ-MA-WTC model.

Now, consider the channel illustrated in Figure 2. This channel accepts two or more messages from one user. We call this channel a point-to-point quantum wiretap channel with multiple messages (PP-QWTC). Consider the PP-QWTC with classical messages. This channel is studied in [27] under a different scenario, wherein a sender wants to send classical and quantum messages simultaneously to a legitimate receiver.

_Information processing task_: Two classical messages \((m_{1},m_{2})\in\mathcal{M}_{1}\times\mathcal{M}_{2}\) are possessed by a sender (Alice) and transmitted to a receiver (Bob) in the presence of a passive wiretapper over a point-to-point quantum channel under the one-shot scenario. Both messages should be kept as secure as possible from the wiretapper. The PP-QWTC is a triple \((\mathcal{X},\mathcal{N}^{X\to YZ}(u_{1},u_{2})\equiv\rho_{x(u_{1},u_{2})}^{YZ},\mathcal{H}^{Y}\otimes\mathcal{H}^{Z})\), where \(\mathcal{X}\) denotes the input alphabet set, and \(Y,Z\) denote the output systems (\(Y\) is the channel output at the legitimate receiver (Bob), and \(Z\) is the channel output at the eavesdropper). \(\rho_{x(u_{1},u_{2})}^{YZ}\equiv\rho_{u_{1}u_{2}}^{YZ}\) is the quantum state of the system output. Alice chooses each message \(m_{i},i\in\{1,2\}\), from its message set \(\mathcal{M}_{i}=[1:|\mathcal{M}_{i}|=2^{R_{i}}]\) and sends it over the PP-QWTC. Alice also uses two junk variables \(k_{i},i\in\{1,2\}\), from two amplification sets \(\mathcal{K}_{i}=[1:|\mathcal{K}_{i}|=2^{\tilde{R}_{i}}]\) for randomizing Eve's knowledge. We have two doubly indexed codebooks, \(u_{1}(m_{1},k_{1})\) and \(u_{2}(m_{2},k_{2})\).

_Encoding_: An encoding operation by Alice is a map \(\mathcal{E}\colon M_{1}M_{2}\to\mathcal{D}(\mathcal{H}_{A})\) such that

\[\forall m_{1},m_{2}\in M_{1},M_{2}\quad\frac{1}{2}\big\|\rho_{M_{1}M_{2}Z}-\rho_{M_{1}M_{2}}\otimes\bar{\rho}_{Z}\big\|_{1}\leq\epsilon_{2} \tag{4}\]

where \(\rho_{M_{1}M_{2}Z}\) and \(\rho_{M_{1}M_{2}}\) are the appropriate marginals of the state \(\rho_{M_{1}M_{2}YZ}=\frac{1}{|\mathcal{M}_{1}||\mathcal{M}_{2}|}\sum_{m_{2}=1}^{|\mathcal{M}_{2}|}\sum_{m_{1}=1}^{|\mathcal{M}_{1}|}|m_{1}\rangle\langle m_{1}|\otimes|m_{2}\rangle\langle m_{2}|\otimes\mathcal{N}\big(\mathcal{E}(m_{1},m_{2})\big)\). Also, \(\bar{\rho}_{Z}\) can be any arbitrary state.

_Decoding_: A decoding operation by Bob is a map \(\mathcal{D}\colon\mathcal{D}(\mathcal{H}_{B})\to\bar{M}_{1}\bar{M}_{2}\) such that:

\[Pr\left(\left(\bar{M}_{1},\bar{M}_{2}\right)\neq(M_{1},M_{2})\right)\leq\epsilon_{1} \tag{5}\]

A rate pair \((R_{1},R_{2})\) is \((\epsilon_{1},\epsilon_{2})\)-achievable if, for such encoding and decoding maps \((\mathcal{E},\mathcal{D})\), the conditions stated in (4) and (5) are satisfied. As can be seen from criterion (4), the reliability and confidentiality conditions are merged into a single criterion. This idea was first used in [28] and [20].
_Theorem 2: (An inner bound on the one-shot capacity region of the PP-QWTC) For any fixed \(\epsilon_{1}\in(0,1)\), \(\epsilon_{2}\in(0,1)\) and \(\delta_{1},\delta_{2}\) such that \(\delta_{1}\in(0,\epsilon_{1})\) and \(\delta_{2}\in(0,\epsilon_{2})\), there exists a one-shot code for the channel \(\mathcal{N}_{X\to YZ}\) if the rate tuple \((R_{1},R_{2},3\epsilon_{1}+2\sqrt{\epsilon_{1}}+2\sqrt{\epsilon_{2}},2(\epsilon_{1}+\sqrt{\epsilon_{1}})+\sqrt{\epsilon_{2}})\) satisfies the following bounds:_

\[R_{1}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{1};Y|U_{2})_{\rho}-I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}}(U_{1};Z)_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}\]

\[R_{2}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{2};Y|U_{1})_{\rho}-I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}}(U_{2};Z|U_{1})_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}\]

_with respect to the state \(\rho_{U_{1}U_{2}YZ}=\sum_{u_{2}=1}^{|\mathcal{U}_{2}|}\sum_{u_{1}=1}^{|\mathcal{U}_{1}|}p(u_{1},u_{2})|u_{1}\rangle\langle u_{1}|\otimes|u_{2}\rangle\langle u_{2}|\otimes\rho_{YZ}^{u_{1}u_{2}}\). Proof_: In Appendix C.

**Remark 2:** The proof of Theorem 2 has two advantages over the proof of Theorem 1. The first is that the proof of Theorem 2 rests on solving a binary hypothesis testing problem, whereas the proof of Theorem 1 rests on solving a multiple hypothesis testing problem. The second is that the privacy proof of Theorem 1 uses Lemma 1 [16], while in the proof of Theorem 2 the convex split lemma (Lemma 2) can be used.

**Remark 3:** Comparing the results of Theorem 1 and Theorem 2, it can be seen that the proof of Theorem 2 does not yield a bound on the sum-rate \((R_{1}+R_{2})\). This is a consequence of using the successive decoding technique. This issue should not cast doubt on whether the PP-QWTC is a dual for the CQ-MA-WTC. To resolve this doubt, we turn to quantum broadcast channels.

_Quantum broadcast channels_

The quantum broadcast channel (QBC) has one sender and two or more receivers. In the basic case, the sender (Alice) wishes to transmit three separate messages: \(m_{1}\) is the personal message for the first receiver \(Y_{1}\), \(m_{2}\) is the personal message for the second receiver \(Y_{2}\), and \(m_{c}\) is the common message for both receivers. The basic QBC is illustrated in Figure 3. It should be noted that, for ease of calculation, we have removed the security constraint from this problem.

Figure 2: The PP-QWTC model.

Figure 3: The QBC model.
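To give a concrete feel for the rate differences appearing in Theorem 2, whose asymptotic analogues are differences of Holevo quantities (see the asymptotic analysis below), the following sketch is our own toy illustration with assumed qubit output states, not a channel analyzed in this paper. It evaluates \(I(U_{1};Y|U_{2})_{\rho}-I(U_{1};Z)_{\rho}\) for a cq state of the form used above.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy H(rho) in bits."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def qubit(theta):
    """Density matrix of the pure state cos(theta)|0> + sin(theta)|1>."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Assumed toy model: uniform binary U1, U2; pure qubit outputs at Bob (Y),
# depolarized copies at Eve (Z).
p = {0: 0.5, 1: 0.5}
rho_Y = {(u1, u2): qubit(0.8 * u1 + 0.4 * u2) for u1 in (0, 1) for u2 in (0, 1)}
rho_Z = {k: 0.7 * v + 0.3 * np.eye(2) / 2 for k, v in rho_Y.items()}

# I(U1;Y|U2): Holevo information of the {rho_Y^{u1,u2}} ensemble for each u2.
i_y = sum(p[u2] * (vn_entropy(sum(p[u1] * rho_Y[(u1, u2)] for u1 in (0, 1)))
                   - sum(p[u1] * vn_entropy(rho_Y[(u1, u2)]) for u1 in (0, 1)))
          for u2 in (0, 1))

# I(U1;Z): Holevo information of Eve's ensemble averaged over U2.
z_u1 = {u1: sum(p[u2] * rho_Z[(u1, u2)] for u2 in (0, 1)) for u1 in (0, 1)}
i_z = vn_entropy(sum(p[u1] * z_u1[u1] for u1 in (0, 1))) \
      - sum(p[u1] * vn_entropy(z_u1[u1]) for u1 in (0, 1))

print(f"I(U1;Y|U2) - I(U1;Z) = {i_y - i_z:.4f} bits")
```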
The following displayed states and criteria, (6)-(12), are used in the proof of Theorem 1 (Appendix B):

\[\rho_{X_{1}X_{1}^{\prime}X_{1}^{\prime\prime}}\equiv\sum_{x_{1}}p_{X_{1}}(x_{1})\left|x_{1}\right\rangle\!\left\langle x_{1}\right|_{X_{1}}\otimes\left|x_{1}\right\rangle\!\left\langle x_{1}\right|_{X_{1}^{\prime}}\otimes\left|x_{1}\right\rangle\!\left\langle x_{1}\right|_{X_{1}^{\prime\prime}} \tag{6}\]

\[\sigma_{X_{2}X_{2}^{\prime}X_{2}^{\prime\prime}}\equiv\sum_{x_{2}}p_{X_{2}}(x_{2})\left|x_{2}\right\rangle\!\left\langle x_{2}\right|_{X_{2}}\otimes\left|x_{2}\right\rangle\!\left\langle x_{2}\right|_{X_{2}^{\prime}}\otimes\left|x_{2}\right\rangle\!\left\langle x_{2}\right|_{X_{2}^{\prime\prime}} \tag{7}\]

\[\rho_{X_{1}X_{1}^{\prime\prime}YZ}\equiv\sum_{x_{1}}p_{X_{1}}(x_{1})\left|x_{1}\right\rangle\!\left\langle x_{1}\right|_{X_{1}}\otimes\rho_{YZ}^{x_{1}x_{2}}\otimes\left|x_{1}\right\rangle\!\left\langle x_{1}\right|_{X_{1}^{\prime\prime}} \tag{8}\]

\[\sigma_{X_{2}X_{2}^{\prime\prime}YZ}\equiv\sum_{x_{2}}p_{X_{2}}(x_{2})\left|x_{2}\right\rangle\!\left\langle x_{2}\right|_{X_{2}}\otimes\rho_{YZ}^{x_{1}x_{2}}\otimes\left|x_{2}\right\rangle\!\left\langle x_{2}\right|_{X_{2}^{\prime\prime}} \tag{9}\]

\[\rho_{X_{1}X_{2}YZ}\equiv\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to YZ}\left(\rho_{X_{1}X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\right)=\sum_{x_{1},x_{2}}p_{X_{1}}(x_{1})p_{X_{2}}(x_{2})\left|x_{1}\right\rangle\!\left\langle x_{1}\right|_{X_{1}}\otimes\left|x_{2}\right\rangle\!\left\langle x_{2}\right|_{X_{2}}\otimes\rho_{YZ}^{x_{1}x_{2}} \tag{10}\]

\[\frac{1}{|\mathcal{M}_{1}||\mathcal{M}_{2}|}\sum_{m_{2}=1}^{|\mathcal{M}_{2}|}\sum_{m_{1}=1}^{|\mathcal{M}_{1}|}\frac{1}{2}\left\|\mathcal{D}_{Y\to\bar{M}_{1}\bar{M}_{2}}\left(\rho^{(m_{1},k_{1})(m_{2},k_{2})}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}YZ}\right)-\left|m_{1}\right\rangle\!\left\langle m_{1}\right|_{\bar{M}_{1}}\otimes\left|m_{2}\right\rangle\!\left\langle m_{2}\right|_{\bar{M}_{2}}\otimes\hat{\rho}^{(m_{1},k_{1})(m_{2},k_{2})}\right\|_{1}\leq\epsilon+2\delta+20\delta^{\prime\frac{1}{6}} \tag{11}\]

where \(\hat{\rho}^{(m_{1},k_{1})(m_{2},k_{2})}\coloneqq\rho_{X_{1}^{\prime\prime}}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}\otimes\sigma_{X_{2}^{\prime\prime}}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}\otimes\bar{\rho}_{Z}\).

\[\mathcal{D}_{Y\to\bar{M}_{1}\bar{M}_{2}}(\rho)\coloneqq\sum_{m_{2}=1}^{|\mathcal{M}_{2}|}\sum_{m_{1}=1}^{|\mathcal{M}_{1}|}Tr\left\{\Lambda^{m_{1}m_{2}}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}\,\rho\right\}\left|m_{1}\right\rangle\!\left\langle m_{1}\right|_{\bar{M}_{1}}\otimes\left|m_{2}\right\rangle\!\left\langle m_{2}\right|_{\bar{M}_{2}} \tag{12}\]

The problem of the QBC has been widely studied in the i.i.d. case in [2,3] and in the one-shot case in [29].
In the following, we derive a one-shot inner bound for the QBC with classical messages. Suppose that Alice has no personal message for the second receiver \(Y_{2}\) (\(m_{2}=\emptyset\Rightarrow R_{2}=0\)). The QBC under the one-shot setting is a triple \((\mathcal{X},\mathcal{N}^{X\to Y_{1}Y_{2}}\equiv\rho_{x}^{Y_{1}Y_{2}},\mathcal{H}^{Y_{1}}\otimes\mathcal{H}^{Y_{2}})\), where \(\mathcal{X}\) denotes the input alphabet set and \(Y_{1},Y_{2}\) denote the output systems. \(\rho_{x}^{Y_{1}Y_{2}}\) is the quantum state of the system output.

**Theorem 3**: _(One-shot inner bound for the QBC) Let \(U\) be an auxiliary random variable and \(p=p_{X|U}(x|u)p_{U}(u)\) be the code probability function. Every rate pair \((R_{1},R_{c})\) such that:_

\[R_{1}\leq I_{H}^{\epsilon}(X;Y_{1}|U)_{\rho}-2+\log\epsilon\]
\[R_{c}\leq I_{H}^{\epsilon}(U;Y_{2})_{\rho}-2+\log\epsilon\]
\[R_{1}+R_{c}\leq I_{H}^{\epsilon}(X;Y_{1})_{\rho}-2+\log\epsilon\]

_is achievable, and all information quantities are taken with respect to the following state:_

\[\rho_{UXY_{1}Y_{2}}=\sum_{u,x}p_{U}(u)p_{X|U}(x|u)\left|u\right\rangle\!\left\langle u\right|_{U}\otimes\left|x\right\rangle\!\left\langle x\right|_{X}\otimes\rho_{x}^{Y_{1}Y_{2}} \tag{13}\]

_Proof:_ In Appendix D.

Now, consider the extended version of the above theorem:

**Corollary 2**: _(One-shot inner bound for the QBC with two personal messages for the first receiver) Let \(U\) be an auxiliary random variable and \(p=p_{U}(u)p_{X_{1}|U}(x_{1}|u)p_{X_{2}|UX_{1}}(x_{2}|ux_{1})\) be the code probability function. The one-shot achievable rate region consists of all rate tuples \((R_{1},R_{c},R_{2})\) for sending \((m_{1},m_{2},m_{c})\) such that:_

\[R_{1}\leq I_{H}^{\epsilon}(X_{1};Y_{1}|U)_{\rho}-2+\log\epsilon\]
\[R_{2}\leq I_{H}^{\epsilon}(X_{2};Y_{1}|UX_{1})_{\rho}-2+\log\epsilon\]
\[R_{c}\leq I_{H}^{\epsilon}(U;Y_{2})_{\rho}-2+\log\epsilon\]
\[R_{1}+R_{2}\leq I_{H}^{\epsilon}(X_{1}X_{2};Y_{1}|U)_{\rho}-2+\log\epsilon\]
\[R_{1}+R_{c}\leq I_{H}^{\epsilon}(X_{1};Y_{1})_{\rho}-2+\log\epsilon\]
\[R_{2}+R_{c}\leq I_{H}^{\epsilon}(X_{2};Y_{1}|X_{1})_{\rho}-2+\log\epsilon\]

_is achievable, and all information quantities are taken with respect to the following state:_

\[\rho_{UX_{1}X_{2}Y_{1}Y_{2}}=\sum_{u,x_{1},x_{2}}p_{U}(u)p_{X_{1}|U}(x_{1}|u)p_{X_{2}|UX_{1}}(x_{2}|ux_{1})\left|u\right\rangle\!\left\langle u\right|_{U}\otimes\left|x_{1}\right\rangle\!\left\langle x_{1}\right|_{X_{1}}\otimes\left|x_{2}\right\rangle\!\left\langle x_{2}\right|_{X_{2}}\otimes\rho_{x_{2}}^{Y_{1}Y_{2}} \tag{14}\]

The following displayed operators define the simultaneous position-based decoder used in the proof of Theorem 1 (Appendix B):

\[\Lambda^{m_{1},k_{1},m_{2},k_{2}}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}\coloneqq\left(\sum_{m_{1}^{\prime},k_{1}^{\prime}}\sum_{m_{2}^{\prime},k_{2}^{\prime}}\Gamma^{m_{1}^{\prime},k_{1}^{\prime},m_{2}^{\prime},k_{2}^{\prime}}\right)^{-\frac{1}{2}}\Gamma^{m_{1},k_{1},m_{2},k_{2}}\left(\sum_{m_{1}^{\prime},k_{1}^{\prime}}\sum_{m_{2}^{\prime},k_{2}^{\prime}}\Gamma^{m_{1}^{\prime},k_{1}^{\prime},m_{2}^{\prime},k_{2}^{\prime}}\right)^{-\frac{1}{2}}\]
\[\Gamma^{m_{1},k_{1},m_{2},k_{2}}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}\coloneqq I_{X_{1}X_{2}}^{(1,1),(1,1)}\otimes\ldots\otimes T_{X_{1}X_{2}Y}^{(m_{1},k_{1}),(m_{2},k_{2})}\otimes\ldots\otimes I_{X_{1}X_{2}}^{(|\mathcal{M}_{1}|,|\mathcal{K}_{1}|),(|\mathcal{M}_{2}|,|\mathcal{K}_{2}|)} \tag{15}\]

that is, the binary test operator \(T_{X_{1}X_{2}Y}\) acts on the \((m_{1},k_{1})\)-th copy of \(X_{1}\), the \((m_{2},k_{2})\)-th copy of \(X_{2}\), and \(Y\), while the identity acts on all remaining copies.

\[Tr\left\{\left(I-\Gamma^{m_{1},k_{1},m_{2},k_{2}}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}\right)\rho^{m_{1},m_{2},k_{1},k_{2}}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}\right\}=Tr\left\{\left(I-T_{X_{1}X_{2}Y}\right)\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\right)\right\} \tag{16}\]

\[\begin{split}Pr&\left(\left(\bar{M}_{1},\bar{M}_{2}\right)\neq(M_{1},M_{2})\right)\\ &\leq(1+c)\,Tr\left\{\left(I-T_{X_{1}X_{2}Y}\right)\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\right)\right\}\\ &\quad+(2+c+c^{-1})(|\mathcal{M}_{1}||\mathcal{K}_{1}|-1)\,Tr\left\{T_{X_{1}X_{2}Y}\,\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}}\otimes\rho_{X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\right)\right\}\\ &\quad+(2+c+c^{-1})(|\mathcal{M}_{2}||\mathcal{K}_{2}|-1)\,Tr\left\{T_{X_{1}X_{2}Y}\,\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}X_{1}^{\prime}}\otimes\sigma_{X_{2}}\otimes\sigma_{X_{2}^{\prime}}\right)\right\}\\ &\quad+(2+c+c^{-1})(|\mathcal{M}_{1}||\mathcal{K}_{1}|-1)(|\mathcal{M}_{2}||\mathcal{K}_{2}|-1)\,Tr\left\{T_{X_{1}X_{2}Y}\,\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}}\otimes\rho_{X_{1}^{\prime}}\otimes\sigma_{X_{2}}\otimes\sigma_{X_{2}^{\prime}}\right)\right\}\end{split} \tag{17}\]
### Asymptotic analysis

In this subsection, we evaluate our secrecy rate region in the asymptotic i.i.d. case (the asymptotic limit of many uses of a memoryless channel). Consider the PP-QWTC (\(x(u_{1},u_{2})\to\rho_{x}^{YZ}\)). The capacity region of the channel can be expressed as follows:

\[\mathcal{C}_{\infty}(\mathcal{N})\coloneqq\lim_{\epsilon_{1},\epsilon_{2}\to 0}\lim_{n\to\infty}\frac{1}{n}\mathcal{C}^{\epsilon_{1},\epsilon_{2}}(\mathcal{N}^{\otimes n}) \tag{20}\]

where \(\mathcal{C}^{\epsilon_{1},\epsilon_{2}}(\mathcal{N}^{\otimes n})\equiv\max_{p(u_{1},u_{2})}\mathcal{R}^{\epsilon_{1},\epsilon_{2}}(\mathcal{N}^{\otimes n})\). Let \(\mathcal{R}(\mathcal{N})\) be the set of rate pairs \((R_{1},R_{2})\) satisfying:

\[\mathcal{R}(\mathcal{N})=\begin{cases}R_{1}\leq I(U_{1};Y|U_{2})_{\rho}-I(U_{1};Z)_{\rho}\\ R_{2}\leq I(U_{2};Y|U_{1})_{\rho}-I(U_{2};Z|U_{1})_{\rho}\end{cases} \tag{21}\]

Then the capacity region \(\mathcal{C}_{\infty}(\mathcal{N})\) is the union over \(n\) uses of the channel \(\mathcal{N}\):

\[\mathcal{C}_{\infty}(\mathcal{N})\coloneqq\max_{p(u_{1},u_{2})}\frac{1}{n}\bigcup_{n=1}^{\infty}\mathcal{R}(\mathcal{N}^{\otimes n}) \tag{22}\]

Our aim is to prove the expression above. Consider both single rates. Applying Fact 3 (and its conditional version), we have:

\[R_{1}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{1};Y|U_{2})_{\rho}-I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}-\gamma}(U_{1};Z)_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}-\log\frac{3}{\gamma^{2}}\]

\[R_{2}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{2};Y|U_{1})_{\rho}-I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}-\gamma}(U_{2};Z|U_{1})_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}-\log\frac{3}{\gamma^{2}}\]

To prove achievability, consider the one-shot lower bounds presented in Theorem 2 and apply the quantum AEP [30] to the conditional smooth hypothesis testing and max mutual information quantities. From Theorem 2, for \(r\) uses of the channel \(\mathcal{N}\), the following lower bound on \(\mathcal{C}^{\epsilon_{1},\epsilon_{2}}(\mathcal{N}^{\otimes r})\) can be obtained:

\[\bigcup_{n=1}^{r}\mathcal{R}(\mathcal{N}^{\otimes n})\subseteq\mathcal{C}^{\epsilon_{1},\epsilon_{2}}(\mathcal{N}^{\otimes r})\]

where \(\mathcal{R}(\mathcal{N}^{\otimes n})\) is the set of all rate pairs \((R_{1}^{r},R_{2}^{r})\) satisfying:

\[R_{1}^{r}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{1}^{n};Y^{\otimes n}|U_{2}^{n})_{\rho}-I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}-\gamma}(U_{1}^{n};Z^{\otimes n})_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}-\log\frac{3}{\gamma^{2}} \tag{23}\]

\[R_{2}^{r}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{2}^{n};Y^{\otimes n}|U_{1}^{n})_{\rho}-I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}-\gamma}(U_{2}^{n};Z^{\otimes n}|U_{1}^{n})_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}-\log\frac{3}{\gamma^{2}} \tag{24}\]

We can assume that the sequences of random variables are generated in an i.i.d. fashion according to their distributions, since the region above is a lower bound on the capacity region. This empowers us to make use of the quantum AEP as described below.
From Fact 2, we have:

\[\lim_{\epsilon_{1}\to 0}\lim_{r\to\infty}\frac{1}{r}I_{H}^{\epsilon_{1}-\delta_{1}}(U_{1}^{r};Y^{\otimes r}|U_{2}^{r})_{\rho^{\otimes r}}=I(U_{1};Y|U_{2})_{\rho} \tag{25}\]

\[\lim_{\epsilon_{1}\to 0}\lim_{r\to\infty}\frac{1}{r}I_{H}^{\epsilon_{1}-\delta_{1}}(U_{2}^{r};Y^{\otimes r}|U_{1}^{r})_{\rho^{\otimes r}}=I(U_{2};Y|U_{1})_{\rho} \tag{26}\]

Also, using Fact 4, we have the following:

\[\lim_{\epsilon_{2}\to 0}\lim_{r\to\infty}\frac{1}{r}I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}-\gamma}(U_{1}^{r};Z^{\otimes r})_{\rho^{\otimes r}}=I(U_{1};Z)_{\rho} \tag{27}\]

\[\lim_{\epsilon_{2}\to 0}\lim_{r\to\infty}\frac{1}{r}I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}-\gamma}(U_{2}^{r};Z^{\otimes r}|U_{1}^{r})_{\rho^{\otimes r}}=I(U_{2};Z|U_{1})_{\rho} \tag{28}\]

Putting (25), (26), (27), and (28) into (23) and (24) gives (21):

\[\mathcal{R}(\mathcal{N}^{\otimes n})\subseteq\lim_{\epsilon_{1},\epsilon_{2}\to 0}\lim_{r\to\infty}\frac{1}{r}\mathcal{C}^{\epsilon_{1},\epsilon_{2}}(\mathcal{N}^{\otimes r})\]

Combining the argument above with (20) and (22) completes the proof.

## V Discussion

In this paper, we studied the problem of secure communication over a CQ-MA-WTC using three techniques: 1) Sen's joint typicality lemma; 2) simultaneous position-based decoding; and 3) successive position-based decoding. The first and second decoding techniques use a newly introduced smoothing technique [16] to analyze the privacy, while the third technique uses convex splitting [19]. We observed that the simultaneous position-based decoder reduces to a multiple hypothesis testing problem, which is unsolvable in the general case. We introduced a new channel (the PP-QWTC) which can be considered a dual for the CQ-MA-WTC; this channel can also be derived from the quantum broadcast channel. The results show that the achievable rate region of the PP-QWTC nearly matches that of the CQ-MA-WTC.

### _Appendix A: (Proof of Corollary 1)_

As mentioned, the proof has two steps: reliable decoding and secure decoding. To these ends, consider two junk variables \(k_{i},i\in\{1,2\}\), one for each message \(m_{i},i\in\{1,2\}\). These junk variables are used to build two doubly indexed codebooks, \(\{x_{1}(m_{1},k_{1})\}_{m_{1}\in\mathcal{M}_{1},k_{1}\in\mathcal{K}_{1}}\) and \(\{x_{2}(m_{2},k_{2})\}_{m_{2}\in\mathcal{M}_{2},k_{2}\in\mathcal{K}_{2}}\). Charlie should be able to detect the message pair \((m_{1},m_{2})\) and the junk variables \(k_{1}\) and \(k_{2}\) with high probability. Using Definition 10 (Sen's inner bound for the QMAC), we have the following relation:

\[\mathcal{R}_{CQ-MA-WTC}=\mathcal{R}_{Sen}-\mathcal{R}_{leaked}\]

with decoding error at most \(49\sqrt{\epsilon}\) and privacy leakage at most \(20\delta^{\prime\frac{1}{6}}\) (Lemma 1). Here, \(\mathcal{R}_{Sen}\) refers to Sen's inner bound for the QMAC (Definition 10), and \(\mathcal{R}_{leaked}\) refers to the information leaked from the senders to Eve. From Lemma 1, we have the following:

\[R_{1-leaked}\leq I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{1};Z)_{\rho}+\log\frac{3}{\epsilon^{\prime\,3}}-\frac{1}{4}\log\delta^{\prime}\]

\[R_{2-leaked}\leq I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{2};ZX_{1})_{\rho}+\log\frac{3}{\epsilon^{\prime\,3}}-\frac{1}{4}\log\delta^{\prime}+\mathcal{O}(1)\]

This completes the proof.

### _Appendix B: (Proof of Theorem 1)_

Both messages are uniformly distributed on their sets. The receiver has to be able to decode both messages with negligible error probability.
Before communication begins, Alice (A) and Bob (B) share randomness with Charlie (C) and the wiretapper (Z). Let \(\rho_{X_{1}X_{1}^{\prime}X_{1}^{\prime\prime}}\) in (6) and \(\sigma_{X_{2}X_{2}^{\prime}X_{2}^{\prime\prime}}\) in (7) be the shared randomness between (A, C, Z) and between (B, C, Z), respectively. Alice holds the \(X_{1}^{\prime}\) system, Bob holds the \(X_{2}^{\prime}\) system, Charlie holds the \((X_{1},X_{2})\) systems, and the wiretapper holds the \((X_{1}^{\prime\prime},X_{2}^{\prime\prime})\) systems. Let \(\rho_{X_{1}X_{1}^{\prime\prime}YZ}\) in (8) and \(\sigma_{X_{2}X_{2}^{\prime\prime}YZ}\) in (9) denote the states resulting from sending \(X_{1}^{\prime}\) and \(X_{2}^{\prime}\) over the channel, respectively. The overall controlling state of the channel is then as stated in (10).

_Sketch of the coding scheme_: For each message \(m_{i},i\in\{1,2\}\), there exists a local key \(k_{i}\in[1:|\mathcal{K}_{i}|],i\in\{1,2\}\), serving as uniform randomness for randomizing Eve's knowledge about the sent messages. These local keys are not accessible to Charlie or Eve. Before the communication begins, assume that Alice, Charlie, and Eve share \(|\mathcal{M}_{1}||\mathcal{K}_{1}|\) copies of the state in (6), and Bob, Charlie, and Eve share \(|\mathcal{M}_{2}||\mathcal{K}_{2}|\) copies of the state in (7):

\[\rho_{X_{1}^{|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{1}^{\prime|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{1}^{\prime\prime|\mathcal{M}_{1}||\mathcal{K}_{1}|}}=\rho_{X_{1}X_{1}^{\prime}X_{1}^{\prime\prime}}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}\]

\[\sigma_{X_{2}^{|\mathcal{M}_{2}||\mathcal{K}_{2}|}X_{2}^{\prime|\mathcal{M}_{2}||\mathcal{K}_{2}|}X_{2}^{\prime\prime|\mathcal{M}_{2}||\mathcal{K}_{2}|}}=\sigma_{X_{2}X_{2}^{\prime}X_{2}^{\prime\prime}}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}\]

To send the message pair \((m_{1},m_{2})\), Alice and Bob pick \(k_{1}\in[1:|\mathcal{K}_{1}|]\) and \(k_{2}\in[1:|\mathcal{K}_{2}|]\), respectively, uniformly at random. They send the \((m_{1},k_{1})\)-th system \(X_{1}^{\prime}\) and the \((m_{2},k_{2})\)-th system \(X_{2}^{\prime}\) through the channel \(\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to YZ}\). There exists a simultaneous decoder for communication over the CQ-MA-WTC with the upper bound on the average error probability stated in (11). As can be seen from (11), the security criterion is merged into the reliability criterion [20]. The simultaneous position-based decoder can be constructed as stated in (12), where

\[\Lambda^{m_{1}m_{2}}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}=\sum_{k_{2}=1}^{|\mathcal{K}_{2}|}\sum_{k_{1}=1}^{|\mathcal{K}_{1}|}\Lambda^{m_{1}m_{2}k_{1}k_{2}}_{X_{1}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}\]

Now, we consider the error term. Charlie constructs her position-based decoder to decode \(m_{1}\), \(m_{2}\), \(k_{1}\), and \(k_{2}\).
Let \(\Lambda^{m_{1},k_{1},m_{2},k_{2}}\) denote the POVM element constructed from the operators \(\Gamma\) of (15). For the correct indices, the test acts on the actual transmitted copies, so that

\[Tr\left\{\Gamma^{m_{1},k_{1},m_{2},k_{2}}\rho^{m_{1},m_{2},k_{1},k_{2}}\right\}=Tr\left\{T_{X_{1}X_{2}Y}\,\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\right)\right\} \tag{29}\]

while for wrong indices the corresponding copies are decoupled from the channel output:

\[Tr\left\{\Gamma^{m_{1}^{\prime},k_{1}^{\prime},m_{2},k_{2}}\rho^{m_{1},m_{2},k_{1},k_{2}}\right\}=Tr\left\{T_{X_{1}X_{2}Y}\,\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}}\otimes\rho_{X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\right)\right\},\quad(m_{1}^{\prime},k_{1}^{\prime})\neq(m_{1},k_{1}) \tag{30}\]

\[Tr\left\{\Gamma^{m_{1}^{\prime},k_{1}^{\prime},m_{2}^{\prime},k_{2}^{\prime}}\rho^{m_{1},m_{2},k_{1},k_{2}}\right\}=Tr\left\{T_{X_{1}X_{2}Y}\,\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\left(\rho_{X_{1}}\otimes\rho_{X_{1}^{\prime}}\otimes\sigma_{X_{2}}\otimes\sigma_{X_{2}^{\prime}}\right)\right\},\quad(m_{i}^{\prime},k_{i}^{\prime})\neq(m_{i},k_{i}) \tag{31}\]

Applying the Hayashi-Nagaoka inequality (Lemma 3) to this POVM yields the chain of equalities and inequalities stated in (17), where we used (29)-(31).

_Multiple quantum hypothesis testing_: As mentioned before, the existence of a simultaneous decoder for a general QMAC (more than two users) remains an open problem in the i.i.d. case. In [17], the authors presented a helpful discussion of multiple quantum hypothesis testing and its relation to QMACs. In summary, the problem of multiple hypothesis testing is an open problem too. There are two possible hypothesis testing schemes: symmetric and asymmetric.
_Chernoff distance_ from symmetric hypothesis testing gives a lower bound on the randomness-assisted error exponent [31]. In contrast, applying results from asymmetric hypothesis testing leads to a lower bound on the one-shot randomness-assisted capacity (for the QMAC without a secrecy constraint) and, in turn, on the second-order coding rate for randomness-assisted communication. In other words, from [7], we know that there exists a general simultaneous decoder for more than two messages in the case of commuting outputs, and from [17], we know that the multiple hypothesis testing problem can be solved if the composite alternative hypothesis forms a commuting set of operators. This means that, for a test operator \(T\) and a finite set of positive semidefinite operators \(\theta\equiv\{\theta_{i}\colon 1\leq i\leq r\}\) for which \(supp(\rho)\subseteq supp(\theta_{i})\) and \(\min_{i}D(\rho\|\theta_{i})>0\), there are two hypotheses, and we have:

\[Tr\{(I-T)\rho\}\leq\epsilon \tag{32}\]

\[-\log_{2}Tr\{T\theta_{i}\}\geq\left[\min_{i}D(\rho\|\theta_{i})\right]-\delta \tag{33}\]

where \(\delta\) is a positive constant. The last inequality holds when the set \(\theta\) forms a commuting set of operators. More information can be found in [17]. With these explanations, we use asymmetric hypothesis testing for our problem. Note that we want to decode two messages simultaneously. Consider the upper bound on the error probability in (17). We rewrite it as follows:

\[Pr\left(\left(\bar{M}_{1},\bar{M}_{2}\right)\neq(M_{1},M_{2})\right)\leq(1+c)Tr\{(I-T)\mu\}+(2+c+c^{-1})Tr\{T(\theta_{1}+\theta_{2}+\theta_{3})\}\]

where

\[\mu=\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\big(\rho_{X_{1}X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\big)\]
\[\theta_{1}=\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\big(\rho_{X_{1}}\otimes\rho_{X_{1}^{\prime}}\otimes\sigma_{X_{2}X_{2}^{\prime}}\big)\]
\[\theta_{2}=\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\big(\rho_{X_{1}X_{1}^{\prime}}\otimes\sigma_{X_{2}}\otimes\sigma_{X_{2}^{\prime}}\big)\]
\[\theta_{3}=\mathcal{N}_{X_{1}^{\prime}X_{2}^{\prime}\to Y}\big(\rho_{X_{1}}\otimes\rho_{X_{1}^{\prime}}\otimes\sigma_{X_{2}}\otimes\sigma_{X_{2}^{\prime}}\big)\]

This is asymmetric hypothesis testing, which minimizes all other error probabilities subject to a constraint on the error probability \(Tr\{(I-T)\mu\}\). Note that we treat all three hypotheses \((\theta_{1}+\theta_{2}+\theta_{3})\) as a single composite alternative hypothesis. For a sequence of test operators as stated in (32) and (33), the above multiple hypothesis testing problem can be solved as:

\[\begin{split}Pr&\left(\left(\bar{M}_{1},\bar{M}_{2}\right)\neq(M_{1},M_{2})\right)\\ &\leq(1+c)Tr\{(I-T)\mu\}+(2+c+c^{-1})Tr\{T(\theta_{1}+\theta_{2}+\theta_{3})\}\\ &=(1+c)\epsilon+(2+c+c^{-1})\left\{|\mathcal{K}_{1}|2^{R_{1}-D_{H}^{\epsilon}(\mu\|\theta_{1})}+|\mathcal{K}_{2}|2^{R_{2}-D_{H}^{\epsilon}(\mu\|\theta_{2})}+|\mathcal{K}_{1}||\mathcal{K}_{2}|2^{R_{1}+R_{2}-D_{H}^{\epsilon}(\mu\|\theta_{3})}\right\}\\ &=(1+c)\epsilon+(2+c+c^{-1})\left\{|\mathcal{K}_{1}|2^{R_{1}-I_{H}^{\epsilon}(X_{1};X_{2}Y)}+|\mathcal{K}_{2}|2^{R_{2}-I_{H}^{\epsilon}(X_{2};X_{1}Y)}+|\mathcal{K}_{1}||\mathcal{K}_{2}|2^{R_{1}+R_{2}-I_{H}^{\epsilon}(X_{1}X_{2};Y)}\right\}\end{split}\]

Let \(|\mathcal{K}_{1}|=2^{\tilde{R}_{1}}\) and \(|\mathcal{K}_{2}|=2^{\tilde{R}_{2}}\).
Then, setting the above bound equal to \(\epsilon+2\delta\), a straightforward simplification gives:

\[R_{1}+\tilde{R}_{1}=I_{H}^{\epsilon}(X_{1};X_{2}Y)+\log_{2}\left(\frac{\epsilon+2\delta-(1+c)\epsilon}{2+c+c^{-1}}\right)\]
\[R_{2}+\tilde{R}_{2}=I_{H}^{\epsilon}(X_{2};X_{1}Y)+\log_{2}\left(\frac{\epsilon+2\delta-(1+c)\epsilon}{2+c+c^{-1}}\right)\]
\[R_{1}+\tilde{R}_{1}+R_{2}+\tilde{R}_{2}=I_{H}^{\epsilon}(X_{1}X_{2};Y)+\log_{2}\left(\frac{\epsilon+2\delta-(1+c)\epsilon}{2+c+c^{-1}}\right)\]

Choosing \(c=\frac{\delta}{\epsilon}\) (so that the numerator equals \(\delta\)) and using \(2+c+c^{-1}=(1+c)(1+c^{-1})\leq\frac{4\epsilon}{\delta}\) for \(\delta\in(0,\epsilon)\), we obtain:

\[R_{1}+\tilde{R}_{1}=I_{H}^{\epsilon}(X_{1};X_{2}Y)-\log_{2}\left(\frac{4\epsilon}{\delta^{2}}\right) \tag{34}\]
\[R_{2}+\tilde{R}_{2}=I_{H}^{\epsilon}(X_{2};X_{1}Y)-\log_{2}\left(\frac{4\epsilon}{\delta^{2}}\right) \tag{35}\]
\[R_{1}+\tilde{R}_{1}+R_{2}+\tilde{R}_{2}=I_{H}^{\epsilon}(X_{1}X_{2};Y)-\log_{2}\left(\frac{4\epsilon}{\delta^{2}}\right) \tag{36}\]

and for such a \(c\), we have:

\[Pr\left(\left(\bar{M}_{1},\bar{M}_{2}\right)\neq(M_{1},M_{2})\right)\leq\epsilon+2\delta \tag{37}\]

Now, we turn our attention to the secrecy criterion. Using Lemma 1, it suffices to choose:

\[\tilde{R}_{1}=I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{1};Z)_{\rho}+\log\frac{3}{\epsilon^{\prime\,3}}-\frac{1}{4}\log\delta^{\prime} \tag{38}\]
\[\tilde{R}_{2}=I_{max}^{\delta^{\prime}-\epsilon^{\prime}}(X_{2};ZX_{1})_{\rho}+\log\frac{3}{\epsilon^{\prime\,3}}-\frac{1}{4}\log\delta^{\prime}+\mathcal{O}(1) \tag{39}\]

Substituting (38) and (39) into (34)-(36) completes the proof.

### _Appendix C: (Proof of Theorem 2)_

The proof uses two successive position-based decoders. The first decoder decodes the first message \(m_{1}\), and the second decoder decodes the second message \(m_{2}\) given a correctly decoded \(m_{1}\); if the first decoder fails, the second decoder fails too. This decoding order is denoted \(m_{1}\to m_{2}\). The construction of the first position-based decoder is the same as that presented in [20]. To decode \(m_{2}\), Bob applies his second position-based decoder conditioned on \(U_{1}\), which works for all \(u_{1}\in\mathcal{U}_{1}\). It should be noted that the state fed to the second decoder differs from the main state of the channel. Alice, Bob, and Eve are allowed to pre-share some quantum states as randomness. Also, Alice has access to two sources of uniform junk randomness \(k_{i},i\in\{1,2\}\).
The pre-shared randomness is as follows: \[\rho_{X_{1}X_{1}^{\prime}}^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}\otimes\sigma_{X_{2}X_{2}^{\prime}}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|} \tag{40}\] \[\coloneqq\left[\sum_{x_{1}}p(x_{1})|x_{1}\rangle\langle x_{1}|_{X_{1}}\otimes|x_{1}\rangle\langle x_{1}|_{X_{1}^{\prime}}\right]^{\otimes|\mathcal{M}_{1}||\mathcal{K}_{1}|}\otimes\left[\sum_{x_{2}}p(x_{2})|x_{2}\rangle\langle x_{2}|_{X_{2}}\otimes|x_{2}\rangle\langle x_{2}|_{X_{2}^{\prime}}\right]^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}\] The arguments connected to the decoding process for \(m_{1}\) are listed as follows:

* The probability of error for decoding \(m_{1}\): \[p_{e_{1}}=p\big{\{}\hat{M}_{1}\neq M_{1}\big{\}}\coloneqq\frac{1}{|\mathcal{M}_{1}|}\sum_{m_{1}=1}^{|\mathcal{M}_{1}|}\frac{1}{2}\left\|\mathcal{D}^{(1)}\left(\rho^{(m_{1},k_{1})(m_{2},k_{2})}\right)-|m_{1}\rangle\langle m_{1}|_{\hat{M}_{1}}\otimes\beta\right\|_{1}\leq\varepsilon_{1}+\sqrt{\varepsilon_{2}} \tag{41}\] where \(\beta\) denotes the post-measurement state on the remaining registers and \(\mathcal{D}^{(1)}\) is the decoding map for \(m_{1}\), which records the measurement outcome in a classical register \(\hat{M}_{1}\) and keeps the (normalized) post-measurement state: \[\mathcal{D}^{(1)}\left(\rho^{(m_{1},k_{1})(m_{2},k_{2})}\right)\coloneqq\sum_{k_{1}=1}^{|\mathcal{K}_{1}|}\sum_{m_{1}=1}^{|\mathcal{M}_{1}|}Tr\left\{\Lambda^{m_{1},k_{1}}\rho^{(m_{1},k_{1})(m_{2},k_{2})}\right\}|m_{1}\rangle\langle m_{1}|_{\hat{M}_{1}}\otimes\frac{\sqrt{\Lambda^{m_{1},k_{1}}}\,\rho^{(m_{1},k_{1})(m_{2},k_{2})}\,\sqrt{\Lambda^{m_{1},k_{1}}}}{Tr\left\{\Lambda^{m_{1},k_{1}}\rho^{(m_{1},k_{1})(m_{2},k_{2})}\right\}}\]
* \(\Lambda^{m_{1},k_{1}}\) is a pretty good measurement (POVM) for \(m_{1}\in[1:|\mathcal{M}_{1}|]\) and \(k_{1}\in[1:|\mathcal{K}_{1}|]\): \[\Lambda^{m_{1},k_{1}}\coloneqq\left(\sum_{k_{1}^{\prime}=1}^{|\mathcal{K}_{1}|}\sum_{m_{1}^{\prime}=1}^{|\mathcal{M}_{1}|}\Gamma^{m_{1}^{\prime},k_{1}^{\prime}}\right)^{-\nicefrac{{1}}{{2}}}\Gamma^{m_{1},k_{1}}\left(\sum_{k_{1}^{\prime}=1}^{|\mathcal{K}_{1}|}\sum_{m_{1}^{\prime}=1}^{|\mathcal{M}_{1}|}\Gamma^{m_{1}^{\prime},k_{1}^{\prime}}\right)^{-\nicefrac{{1}}{{2}}}\] where \(\Gamma^{m_{1},k_{1}}\) is the element of the first POVM that places a binary test operator on the \((m_{1},k_{1})\)-th copy and the identity on all other copies: \[\Gamma^{m_{1},k_{1}}\coloneqq I_{X_{1}}^{(1,1)}\otimes\cdots\otimes\tau_{X_{1}Y}^{(m_{1},k_{1})}\otimes\cdots\otimes I_{X_{1}}^{(|\mathcal{M}_{1}|,|\mathcal{K}_{1}|)}\] and \(\tau_{X_{1}Y}^{(m_{1},k_{1})}\) is a test operator in order to discriminate between the two hypotheses \(\rho_{X_{1}Y}\) and \(\rho_{X_{1}}\otimes\rho_{Y}\). Also, when decoding \(m_{1}\), it does not matter for the second position-based decoder which copy among the \(|\mathcal{M}_{2}||\mathcal{K}_{2}|\) copies is selected by Alice.
* We face a hypothesis testing problem: the null hypothesis is \(\rho_{X_{1}Y}\) and the alternative hypothesis is \(\rho_{X_{1}}\otimes\rho_{Y}\).
Therefore the probabilities of success in guessing the null and alternative hypotheses are \(Tr\big{\{}\tau_{X_{1}Y}\rho_{X_{1}Y}\big{\}}\) and \(Tr\big{\{}\big{(}I_{X_{1}Y}-\tau_{X_{1}Y}\big{)}\big{(}\rho_{X_{1}}\otimes\rho_{Y}\big{)}\big{\}}\), respectively. The rest of the decoding process for \(m_{1}\) is analogous to [20]. Therefore, we have: \[R_{1}\leq I_{H}^{\varepsilon_{1}-\delta_{1}}(U_{1};Y)_{\rho}-I_{max}^{\sqrt{\varepsilon_{2}}-\delta_{2}}(U_{1};Z)_{\rho}-\log\frac{4\varepsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}} \tag{42}\] Now, we turn our attention to decoding the second message. As mentioned before, the channel state changes after the first measurement. There is a detailed discussion in [32]. Let \(\sigma^{(m_{1},k_{1})(m_{2},k_{2})}\) denote the disturbed state after applying the first measurement (POVM): \[\sigma^{(m_{1},k_{1})(m_{2},k_{2})}\coloneqq\sum_{x_{1}}p_{X_{1}}(x_{1})|x_{1}\rangle\langle x_{1}|_{X_{1}}\otimes|x_{1}\rangle\langle x_{1}|_{X_{1}^{\prime}}\otimes\sigma_{X_{2}^{\otimes|\mathcal{M}_{2}||\mathcal{K}_{2}|}Y}^{x_{1},(m_{2},k_{2})}\] Also, Bob's second POVM, conditioned on \(x_{1}\), is as follows: \[\Lambda^{m_{2},k_{2}|x_{1}}\coloneqq\left(\sum_{k_{2}^{\prime}=1}^{|\mathcal{K}_{2}|}\sum_{m_{2}^{\prime}=1}^{|\mathcal{M}_{2}|}\Gamma^{m_{2}^{\prime},k_{2}^{\prime}|x_{1}}\right)^{-\nicefrac{{1}}{{2}}}\Gamma^{m_{2},k_{2}|x_{1}}\left(\sum_{k_{2}^{\prime}=1}^{|\mathcal{K}_{2}|}\sum_{m_{2}^{\prime}=1}^{|\mathcal{M}_{2}|}\Gamma^{m_{2}^{\prime},k_{2}^{\prime}|x_{1}}\right)^{-\nicefrac{{1}}{{2}}}\] where \(\Gamma^{m_{2},k_{2}|x_{1}}\) places the binary test operator \(\theta_{X_{2}Y}^{x_{1}}\) on the \((m_{2},k_{2})\)-th copy and the identity on all other copies. Here \(\theta_{X_{2}Y}^{x_{1}}\) is a binary test operator to discriminate between the two hypotheses \(\sigma_{X_{2}Y}^{x_{1}}\) and \(\sigma_{X_{2}}^{x_{1}}\otimes\sigma_{Y}^{x_{1}}\) with an error of \(\epsilon_{1}-\delta_{1}\); i.e., \[Tr\{\theta_{X_{2}Y}^{x_{1}}\sigma_{X_{2}Y}^{x_{1}}\}\geq 1-(\epsilon_{1}-\delta_{1}),\qquad\epsilon_{1}\in(0,1),\qquad\delta_{1}\in(0,\epsilon_{1})\] In other words, Bob has to be able to discriminate between the following states: \[\sum_{x_{1}}p_{X_{1}}(x_{1})|x_{1}\rangle\langle x_{1}|_{X_{1}}\otimes\sigma_{X_{2}Y}^{x_{1}}\] \[\sum_{x_{1}}p_{X_{1}}(x_{1})|x_{1}\rangle\langle x_{1}|_{X_{1}}\otimes\sigma_{X_{2}}^{x_{1}}\otimes\sigma_{Y}^{x_{1}}\] Similar to the arguments in [20] and [27], we have the following rate: \[R_{2}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{2};Y|U_{1})_{\rho}-\tilde{I}_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}}(U_{2};Z|U_{1})_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}} \tag{43}\] The probability of error for \(m_{2}\) is as follows: \[p_{e_{2}}=p\{\tilde{M}_{2}\neq M_{2}\}\coloneqq\frac{1}{|\mathcal{M}_{2}|}\sum_{m_{2}=1}^{|\mathcal{M}_{2}|}\frac{1}{2}\left\|\mathcal{D}^{(2)}\left(\sigma^{(m_{1},k_{1})(m_{2},k_{2})}\right)-|m_{2}\rangle\langle m_{2}|_{\hat{M}_{2}}\otimes\beta^{\prime}\right\|_{1}\leq 2(\epsilon_{1}+\sqrt{\epsilon_{2}})+\sqrt{\epsilon_{1}^{\prime}} \tag{44}\] where \(\mathcal{D}^{(2)}\) is the decoding map built from the second POVM in the same way as \(\mathcal{D}^{(1)}\), and \(\beta^{\prime}\) is the corresponding post-measurement state. Also, the error probability exponents stated in (41) and (44) are proved in [20, 27]. This process can be repeated for the other decoding order. In other words, we can first decode \(m_{2}\), and then decode \(m_{1}\) (\(m_{2}\to m_{1}\)).
Then, taking the intersection of the regions resulting from both orders, we get: \[R_{1}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{1};Y|U_{2})_{\rho}-I_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}}(U_{1};Z)_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}\] \[R_{2}\leq I_{H}^{\epsilon_{1}-\delta_{1}}(U_{2};Y|U_{1})_{\rho}-\tilde{I}_{max}^{\sqrt{\epsilon_{2}}-\delta_{2}}(U_{2};Z|U_{1})_{\rho}-\log\frac{4\epsilon_{1}}{\delta_{1}^{2}}-2\log\frac{1}{\delta_{2}}\] This completes the proof. **Appendix D**: _(Proof of Theorem 3)_ The proof uses superposition coding. Assume that the first receiver \(Y_{1}\) has a better reception signal than the second receiver \(Y_{2}\). In this setting, Alice is able to encode a further message superimposed on top of the common message, and successive decoding can be used. _Codebook generation_: Randomly and independently generate \(2^{R_{c}}\) sequences \(u(m_{c})\) according to the distribution \(p_{U}(u)\). For each sequence \(u(m_{c})\), randomly and conditionally independently generate \(2^{R_{1}}\) sequences \(x(m_{1},m_{c})\) according to the distribution \(p_{X|U}\big{(}x|u(m_{c})\big{)}\). The state at \(Y_{1}\) can be calculated by tracing out \(Y_{2}\) from (13): \[\rho_{UXY_{1}}=\sum_{u,x}p_{U}(u)p_{X|U}(x|u)\;|u\rangle\langle u|_{U}\otimes\left|x\right\rangle\langle x|_{X}\otimes\rho_{x}^{Y_{1}}\] Similarly to the proof of Theorem 2, we construct the POVM for the first receiver as: \[\Lambda_{m_{1},m_{c}}\coloneqq\left(\sum_{m_{c}^{\prime}=1}^{|\mathcal{M}_{c}|}\sum_{m_{1}^{\prime}=1}^{|\mathcal{M}_{1}|}\Gamma_{m_{1}^{\prime},m_{c}^{\prime}}\right)^{-\nicefrac{{1}}{{2}}}\Gamma_{m_{1},m_{c}}\left(\sum_{m_{c}^{\prime}=1}^{|\mathcal{M}_{c}|}\sum_{m_{1}^{\prime}=1}^{|\mathcal{M}_{1}|}\Gamma_{m_{1}^{\prime},m_{c}^{\prime}}\right)^{-\nicefrac{{1}}{{2}}}\] Also, the POVM for the second receiver can be constructed as follows: \[\Lambda_{m_{c}}\coloneqq\left(\sum_{m_{c}^{\prime}=1}^{|\mathcal{M}_{c}|}\lambda_{m_{c}^{\prime}}\right)^{-\nicefrac{{1}}{{2}}}\lambda_{m_{c}}\left(\sum_{m_{c}^{\prime}=1}^{|\mathcal{M}_{c}|}\lambda_{m_{c}^{\prime}}\right)^{-\nicefrac{{1}}{{2}}}\] Consider the probability of error for \(m_{1}\): \[p_{e_{1}}=p\big{\{}\big{(}\tilde{M}_{1},\tilde{M}_{c}\big{)}\neq(M_{1},M_{c})\big{\}}\coloneqq\frac{1}{|\mathcal{M}_{1}||\mathcal{M}_{c}|}\sum_{m_{c}}\sum_{m_{1}}Tr\;\Big{\{}\big{(}I-\Lambda_{m_{1},m_{c}}\big{)}\rho_{x(m_{1},m_{c})}^{Y_{1}}\Big{\}}\] and for \(m_{c}\): \[p_{e_{2}}=p\big{\{}\tilde{M}_{c}\neq M_{c}\big{\}}\coloneqq\frac{1}{|\mathcal{M}_{c}|}\sum_{m_{c}}Tr\;\Big{\{}\big{(}I-\Lambda_{m_{c}}\big{)}\rho_{x(m_{c})}^{Y_{2}}\Big{\}}\] By a straightforward calculation, analogous to [3] for the i.i.d. case and to [29] (which calculates the one-shot Marton inner bound for the QBC), the above error probabilities can be bounded as follows: \[p_{e_{1}}+p_{e_{2}}\leq 2^{R_{1}-I_{H}^{\epsilon}(X;Y_{1}|U)_{\rho}-2+\log\epsilon}+2^{R_{c}-I_{H}^{\epsilon}(U;Y_{2})_{\rho}-2+\log\epsilon}+2^{R_{c}-I_{H}^{\epsilon}(U;Y_{1})_{\rho}-2+\log\epsilon}+\mathcal{O}(\epsilon)\] This completes the proof.
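For a concrete feel for the quantities driving these proofs, note that the hypothesis-testing divergence \(D_{H}^{\epsilon}\) (and hence the hypothesis-testing mutual information \(I_{H}^{\epsilon}\)) reduces to a small linear program whenever all states commute, i.e. in the classical case, which is exactly the regime of the simultaneous-decoding results of [7] and [17] quoted above. The following is a minimal illustrative sketch, not part of the paper, assuming `numpy` and `scipy` are available; the helper name `d_hypo` and the example distribution are ours.

```python
import numpy as np
from scipy.optimize import linprog

def d_hypo(p, q, eps):
    """D_H^eps(p||q) for commuting (classical) states: the LP
    minimize q.t  subject to  p.t >= 1 - eps,  0 <= t <= 1,
    returned as -log2 of the optimal value."""
    res = linprog(c=q, A_ub=-p.reshape(1, -1), b_ub=[-(1.0 - eps)],
                  bounds=[(0, 1)] * len(p))
    return -np.log2(res.fun)

# I_H^eps(X;Y) = D_H^eps(p_XY || p_X x p_Y) for a joint distribution p_XY.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_joint = p_xy.flatten()
p_prod = np.outer(p_xy.sum(axis=1), p_xy.sum(axis=0)).flatten()
print(d_hypo(p_joint, p_prod, eps=0.05))
```

For genuinely quantum states the same quantity is a semi-definite rather than a linear program; the LP above is the special case where all operators are diagonal in a common basis.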
2307.06591
Cusped Borel Anosov representations with positivity
We show that if a cusped Borel Anosov representation from a lattice $\Gamma \subset \mathsf{PGL}_2(\mathbb{R})$ to $\mathsf{PGL}_d(\mathbb{R})$ contains a unipotent element with a single Jordan block in its image, then it is necessarily a (cusped) Hitchin representation. We also show that the amalgamation of a Hitchin representation with a cusped Borel Anosov representation that is not Hitchin is never cusped Borel Anosov.
Tengren Zhang, Gye-Seon Lee
2023-07-13T07:26:30Z
http://arxiv.org/abs/2307.06591v1
# Cusped Borel Anosov representations with positivity ###### Abstract. We show that if a cusped Borel Anosov representation from a lattice \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) to \(\mathsf{PGL}_{d}(\mathbb{R})\) contains a unipotent element with a single Jordan block in its image, then it is necessarily a (cusped) Hitchin representation. We also show that the amalgamation of a Hitchin representation with a cusped Borel Anosov representation that is not Hitchin is never cusped Borel Anosov. Key words and phrases:Anosov representations, Hitchin representations, Positivity, Fuchsian groups 2020 Mathematics Subject Classification: 22E40, 20H10, 57M60 G.-S. Lee was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. 2020R1C1C1A01013667). T. Zhang was supported by the NUS-MOE grant A-8000458-00-00. relative to the cusp subgroups in \(\Gamma\), then cusped Anosov representations are a special case of all the above notions. Canary, Zhang and Zimmer [22] also defined the notion of transverse representations, which extends the notion of cusped Anosov representations to allow for \(\Gamma\) to be any non-elementary, discrete subgroup of \(\mathsf{PGL}_{2}(\mathbb{R})\) (and more generally, any projectively visible group), see Remark 1.2. In this article, we will focus exclusively on transverse representations of non-elementary, discrete subgroups of \(\mathsf{PGL}_{2}(\mathbb{R})\), which we now define. For any non-elementary, discrete subgroup \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\), let \(\Lambda(\Gamma)\) denote its limit set, i.e. \(\Lambda(\Gamma)\) is the set of accumulation points in \(\partial\,\mathbb{H}^{2}\) of some/any \(\Gamma\)-orbit in \(\mathbb{H}^{2}\). Note that \(\Lambda(\Gamma)\) is an infinite, \(\Gamma\)-invariant, compact subset of \(\partial\,\mathbb{H}^{2}\). For any subset \(\theta\subset\Delta\), let \(\mathcal{F}_{\theta}(\mathbb{R}^{d})\) denote the corresponding partial flag manifold, i.e. if \(\theta=\{k_{1},\ldots,k_{s}\}\) with \(k_{1}<\cdots<k_{s}\), then \[\mathcal{F}_{\theta}(\mathbb{R}^{d}):=\{F=(F^{k_{1}},\ldots,F^{k_{s}})\mid F^ {k_{i}}\in\operatorname{Gr}_{k_{i}}(\mathbb{R}^{d})\text{ and }F^{k_{i}}\subset F^{k_{i+1}}\text{ for all }i\}.\] In the case when \(\theta=\Delta\), we will simply denote \(\mathcal{F}(\mathbb{R}^{d}):=\mathcal{F}_{\Delta}(\mathbb{R}^{d})\). **Definition 1.1**.: Let \(\theta\subset\Delta\) be symmetric, i.e. \(k\in\theta\) if and only if \(d-k\in\theta\), and let \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) be a non-elementary, discrete subgroup. A representation \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is \(P_{\theta}\)_-transverse_ if there is a continuous map \(\xi=(\xi^{k})_{k\in\theta}:\Lambda(\Gamma)\to\mathcal{F}_{\theta}(\mathbb{R}^ {d})\) that satisfies all of the following properties: * \(\xi\) is \(\rho\)_-equivariant_, i.e. \(\xi(\gamma\cdot x)=\rho(\gamma)\cdot\xi(x)\) for all \(\gamma\in\Gamma\) and \(x\in\Lambda(\Gamma)\). * \(\xi\) is _transverse_, i.e. \(\xi^{k}(x)+\xi^{d-k}(y)=\mathbb{R}^{d}\) for all distinct points \(x,y\in\Lambda(\Gamma)\) and all \(k\in\theta\). * \(\xi\) is _strongly dynamics preserving_, i.e. 
if \(\{\gamma_{n}\}\) is a sequence in \(\Gamma\) such that \(\gamma_{n}\cdot b_{0}\to x\) and \(\gamma_{n}^{-1}\cdot b_{0}\to y\) for some/any \(b_{0}\in\mathbb{H}^{2}\) and some \(x,y\in\Lambda(\Gamma)\), then \(\rho(\gamma_{n})\cdot F\to\xi(x)\) for all \(F\in\mathcal{F}_{\theta}(\mathbb{R}^{d})\) that is transverse to \(\xi(y)\). In the above definition, the strongly dynamics preserving property of \(\xi\) ensures that it is unique to \(\rho\). We thus refer to \(\xi\) as the _limit map_ of \(\rho\). **Remark 1.2**.: Canary, Zhang and Zimmer [22, Theorems 4.1 and 6.1] proved that if \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is geometrically finite, then for any symmetric \(\theta\subset\Delta\), a representation \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is \(P_{\theta}\)-transverse if and only if it is cusped \(P_{\theta}\)-Anosov. In the case when \(\theta=\Delta\), \(P_{\Delta}\)-transverse representations and cusped \(P_{\Delta}\)-Anosov representations are also called _Borel transverse representations_ and _cusped Borel Anosov representations_ respectively. When \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is a convex cocompact free subgroup, (cusped) Borel Anosov representations from \(\Gamma\) to \(\mathsf{PGL}_{d}(\mathbb{R})\) can be constructed via a ping pong type argument. However, when \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is a lattice, there are currently only two known families of cusped Borel Anosov representations: the Hitchin representations and the Barbot examples, see Section 2.2 and Appendix B respectively. The search for more examples of cusped Borel Anosov representations can be formulated as the following question: **Question 1.3**.: When \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is a lattice, are there cusped Borel Anosov representations that are neither Hitchin representations nor the Barbot examples? The two main results of this paper are rigidity results about Borel transverse representations of non-elementary discrete subgroups of \(\mathsf{PGL}_{2}(\mathbb{R})\) whose limit set is all of \(\partial\,\mathbb{H}^{2}\). When specialized to lattices in \(\mathsf{PGL}_{2}(\mathbb{R})\), they can be interpreted as providing supporting evidence to a negative answer to the above question. If \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is a non-elementary, discrete subgroup and \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is a Hitchin representation, then it follows from the work of Canary, Zhang and Zimmer [22] that \(\rho\) sends every (non-identity) parabolic element in \(\Gamma\) to a unipotent element with a single Jordan block, see Theorem 2.4 and Remark 2.5. Our first theorem resolves Question 1.3 under the additional assumption that the image of \(\rho\) contains a unipotent element with a single Jordan block. **Theorem 1.4**.: _Suppose that \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is a discrete subgroup with \(\Lambda(\Gamma)=\partial\,\mathbb{H}^{2}\). If \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is a Borel transverse representation whose image contains a unipotent element with a single Jordan block, then \(\rho\) is a Hitchin representation._ **Remark 1.5**.: If \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is a Barbot example, then \(d\) is necessarily odd, and \(\rho\) sends every parabolic element in \(\Gamma\) to a unipotent element in \(\mathsf{PGL}_{d}(\mathbb{R})\) with two Jordan blocks, one of size \(j\) and the other of size \(d-j\) for some \(j\in\{1,\ldots,\frac{d-1}{2}\}\), see Appendix B. 
As such, the hypothesis of Theorem 1.4 rules out the need to consider the Barbot examples. One might attempt to construct new examples of cusped Borel Anosov representations on a lattice \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) via the following "amalgamation" procedure. 1. Realize \(\Gamma\) as a free product of two non-elementary, geometrically finite subgroups \(\Gamma_{1}\) and \(\Gamma_{2}\), amalgamated over a cyclic subgroup \(\langle\gamma\rangle\). 2. Specify a Barbot example \(\rho_{1}:\Gamma_{1}\to\mathsf{PGL}_{d}(\mathbb{R})\) and a Hitchin representation \(\rho_{2}:\Gamma_{2}\to\mathsf{PGL}_{d}(\mathbb{R})\) so that \(\rho_{1}(\langle\gamma\rangle)\) is conjugate to \(\rho_{2}(\langle\gamma\rangle)\). 3. Find a cusped Borel Anosov representation \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) so that \(\rho|_{\Gamma_{1}}=\rho_{1}\) and \(\rho|_{\Gamma_{2}}=\rho_{2}\). There are situations (see for example [14, 15]) where this amalgamation procedure allows one to construct new classes of \(P_{\theta}\)-Anosov representations from existing ones. However, our next theorem implies that the amalgamation process described above will never yield a Borel transverse representation. **Theorem 1.6**.: _Suppose that \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is a discrete subgroup with \(\Lambda(\Gamma)=\partial\,\mathbb{H}^{2}\), and let \(\Gamma^{\prime}\subset\Gamma\) be a non-elementary subgroup. If \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is a Borel transverse representation such that \(\rho|_{\Gamma^{\prime}}:\Gamma^{\prime}\to\mathsf{PGL}_{d}(\mathbb{R})\) is Hitchin, then \(\rho\) is Hitchin._ By Remark 1.2 above, Theorems 1.4 and 1.6 hold for cusped Borel Anosov representations as well; one simply imposes the additional condition that \(\Gamma\) is geometrically finite. A key tool used in the proofs of Theorem 1.4 and Theorem 1.6 (and also in the definition of Hitchin representations) is Fock and Goncharov's notion of positivity for \(n\)-tuples in \(\mathcal{F}(\mathbb{R}^{d})\) for any integer \(n\geqslant 3\), see Section 2.1. With this, one can then define the notion of a positive map from a subset \(\Lambda\subset\mathbb{S}^{1}\) (with \(\#\Lambda\geqslant 3\)) to \(\mathcal{F}(\mathbb{R}^{d})\): we say that a map \(\xi:\Lambda\to\mathcal{F}(\mathbb{R}^{d})\) is _positive_ if for any integer \(n\geqslant 3\), the tuple \((\xi(a_{1}),\ldots,\xi(a_{n}))\) is positive for all \(a_{1}<\cdots<a_{n}<a_{1}\) in \(\Lambda\) (according to the clockwise cyclic order on \(\mathbb{S}^{1}\)). The proofs of both Theorem 1.4 and Theorem 1.6 rely on the following result about continuous, positive maps, which is a special case of more general results of Guichard, Labourie and Wienhard [11, Lemma 3.5 and Proposition 3.15] in the setting of \(\Theta\)-positive maps. **Proposition 1.7**.: _Let \(\xi:\mathbb{S}^{1}\to\mathcal{F}(\mathbb{R}^{d})\) be a continuous, transverse map. If there is a pairwise distinct triple of points \(x,y,z\in\mathbb{S}^{1}\) such that \((\xi(x),\xi(y),\xi(z))\) is positive, then \(\xi\) is a positive map._ In Section 2, we will recall Fock and Goncharov's notion of positivity of \(k\)-tuples of complete flags and the definition of Hitchin representations. Then, in Section 3, we provide an elementary and self-contained proof of Proposition 1.7. Finally, we use Proposition 1.7 to prove Theorems 1.4 and 1.6 in Section 4. 
In the appendices, we give an elementary proof of a well-known fact about positive triples of flags that was used to prove Proposition 1.7, and also describe the Barbot examples mentioned above. **Acknowledgements**.: We are thankful for helpful conversations with Fanny Kassel and Jaejeong Lee. ## 2. Positive tuples and positive maps ### Fock-Goncharov positivity We say that an upper triangular, unipotent matrix is _totally positive_ if its non-trivial minors (i.e. those that are not forced to be \(0\) by virtue of the matrix being upper triangular) are positive. Then given an (ordered) basis \(\mathcal{B}=(e_{1},\ldots,e_{d})\) of \(\mathbb{R}^{d}\), we say that a unipotent element in \(\mathsf{PGL}_{d}(\mathbb{R})\) is _totally positive with respect to \(\mathcal{B}\)_ if it is represented in the basis \(\mathcal{B}\) by an upper triangular, unipotent, totally positive matrix. Let \[U_{>0}(\mathcal{B})\subset\mathsf{PGL}_{d}(\mathbb{R})\] denote the set of unipotent elements that are totally positive with respect to \(\mathcal{B}\), and let \[U_{\geqslant 0}(\mathcal{B})\subset\mathsf{PGL}_{d}(\mathbb{R})\] denote the closure of \(U_{>0}(\mathcal{B})\). Note that the elements in \(U_{\geqslant 0}(\mathcal{B})\) are exactly the ones where all the non-trivial minors are non-negative. Using well-known formulas for how minors behave under products, it is straightforward to verify that both \(U_{>0}(\mathcal{B})\) and \(U_{\geqslant 0}(\mathcal{B})\) are sub-semigroups of \(\mathsf{PGL}_{d}(\mathbb{R})\). Recall that if \(F,G\in\mathcal{F}(\mathbb{R}^{d})\), then \(F\) and \(G\) are _transverse_ if \(F^{k}+G^{d-k}=\mathbb{R}^{d}\) for all \(k\in\{1,\ldots,d-1\}\). When \(n\geqslant 3\), we say that an \(n\)-tuple of complete flags \((F_{1},\ldots,F_{n})\) in \(\mathcal{F}(\mathbb{R}^{d})\) is _positive_ if \(F_{1}\) and \(F_{n}\) are transverse, and there is a basis \(\mathcal{B}=(e_{1},\ldots,e_{d})\) of \(\mathbb{R}^{d}\) and elements \(u_{2},\ldots,u_{n-1}\in U_{>0}(\mathcal{B})\) such that \(e_{i}\in F_{1}^{i}\cap F_{n}^{d-i+1}\) for all \(i\in\{1,\ldots,d\}\), and \(F_{j}=(u_{n-1}\cdots u_{j})\cdot F_{n}\) for all \(j\in\{2,\ldots,n-1\}\). The fact that \(U_{>0}(\mathcal{B})\) is a semigroup implies that if \((F_{1},\ldots,F_{n})\) is positive, then so is \((F_{1},F_{i_{1}},\ldots,F_{i_{\ell}},F_{n})\) for all integers \(i_{1},\ldots,i_{\ell}\) such that \(1<i_{1}<\cdots<i_{\ell}<n\). Recall from the introduction that given a subset \(\Lambda\) of \(\mathbb{S}^{1}\), a map \(\xi:\Lambda\to\mathcal{F}(\mathbb{R}^{d})\) is _positive_ provided that if \(n\geqslant 3\) and \((x_{1},\ldots,x_{n})\) is a cyclically ordered subset of pairwise distinct points in \(\Lambda\), then \((\xi(x_{1}),\ldots,\xi(x_{n}))\) is a positive \(n\)-tuple of flags. The following proposition summarizes the basic properties of positive tuples of flags. It follows easily from a well-known parameterization result of Fock and Goncharov [10, Theorem 9.1(a)] (see Kim-Tan-Zhang [26, Observation 3.20]). **Proposition 2.1**.: _Let \(F_{1},\ldots,F_{n}\) be flags in \(\mathcal{F}(\mathbb{R}^{d})\)._ 1. 
_If_ \(n\geqslant 3\)_, then the following are equivalent:_
* \((F_{1},F_{2},\ldots,F_{n})\) _is positive,_
* \((F_{n},\ldots,F_{2},F_{1})\) _is positive,_
* \((F_{2},\ldots,F_{n},F_{1})\) _is positive,_
* \(g\cdot(F_{1},F_{2},\ldots,F_{n})\) _is positive for some/all_ \(g\in\mathsf{PGL}_{d}(\mathbb{R})\)_._
_In particular, if_ \((F_{1},\ldots,F_{n})\) _is positive, then_ \((F_{i_{1}},\ldots,F_{i_{\ell}})\) _is positive for all_ \(1\leqslant i_{1}<i_{2}<\cdots<i_{\ell}\leqslant n\)_, and so_ \(F_{i}\) _and_ \(F_{j}\) _are transverse for all distinct pairs_ \(i,j\in\{1,\ldots,n\}\)_._ 2. _If_ \(n\geqslant 4\)_, then_ \((F_{1},\ldots,F_{n})\) _is positive if and only if_ \((F_{1},\ldots,F_{n-1})\) _is positive and_ \((F_{1},F_{i},F_{n-1},F_{n})\) _is positive for some/all_ \(i=2,\ldots,n-2\)_. In particular,_ \((F_{1},\ldots,F_{n})\) _is positive if and only if_ \((F_{i_{1}},F_{i_{2}},F_{i_{3}},F_{i_{4}})\) _is positive for all_ \(1\leqslant i_{1}<i_{2}<i_{3}<i_{4}\leqslant n\)_._ Let \(\mathcal{P}\) denote the set of positive triples of flags in \(\mathcal{F}(\mathbb{R}^{d})\), and let \(\mathcal{T}\) denote the set of pairwise transverse triples of flags in \(\mathcal{F}(\mathbb{R}^{d})\). The following theorem is also a well-known property of positive triples of flags, which has been generalized to the setting of triples of \(\Theta\)-positive flags by Guichard, Labourie and Wienhard [11, Proposition 2.5(1)]. We provide an elementary proof in Appendix A. **Theorem 2.2**.: _Let \(F\), \(G\) and \(H\) be complete flags in \(\mathcal{F}(\mathbb{R}^{d})\) such that both \(G\) and \(H\) are transverse to \(F\). Let \(u\in\mathsf{PGL}_{d}(\mathbb{R})\) be the unipotent element that fixes \(F\) and sends \(H\) to \(G\), and let \(\mathcal{B}=(e_{1},\ldots,e_{d})\) be any basis of \(\mathbb{R}^{d}\) such that \(e_{k}\in F^{k}\cap H^{d-k+1}\) for all \(k\in\{1,\ldots,d\}\). If \(u\in U_{\geqslant 0}(\mathcal{B})-U_{>0}(\mathcal{B})\), then \(G\) and \(H\) are not transverse. In particular, \(\mathcal{P}\) is a union of connected components of \(\mathcal{T}\)._ ### Hitchin representations Suppose for now that \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is a surface group (i.e. \(\Gamma\) is cocompact and torsion-free). Then the discrete and faithful representations from \(\Gamma\) to \(\mathsf{PGL}_{2}(\mathbb{R})\) form a single connected component of \(\mathrm{Hom}(\Gamma,\mathsf{PGL}_{2}(\mathbb{R}))/\mathsf{PGL}_{2}(\mathbb{R})\), known as the Teichmuller component. Hitchin [10] noticed that for all \(d\geqslant 2\), there is a distinguished connected component of \(\mathrm{Hom}(\Gamma,\mathsf{PGL}_{d}(\mathbb{R}))/\mathsf{PGL}_{d}(\mathbb{R})\) that is analogous to the Teichmuller component. Today, this connected component is commonly known as the _Hitchin component_, and the _Hitchin representations_ are the ones whose conjugacy class lies in the Hitchin component. Fock and Goncharov [11] characterized the Hitchin representations as the representations \(\rho\) for which there exists a \(\rho\)-equivariant positive map \(\xi:\Lambda(\Gamma)\to\mathcal{F}(\mathbb{R}^{d})\), and Labourie [12] showed that every Hitchin representation is (cusped) Borel Anosov. Motivated by Fock and Goncharov's characterization of Hitchin representations, Canary, Zhang and Zimmer [13] extended the notion of Hitchin representations to the case when \(\Gamma\) is a discrete subgroup of \(\mathsf{PGL}_{2}(\mathbb{R})\).
**Definition 2.3**.: Let \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) be a non-elementary, discrete subgroup. A representation \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is _Hitchin_ if there is a continuous, \(\rho\)-equivariant, positive map \(\xi:\Lambda(\Gamma)\to\mathcal{F}(\mathbb{R}^{d})\). Labourie's result can also be generalized to this case using the proof of [13, Theorem 1.4]. **Theorem 2.4**.: _Every Hitchin representation \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is Borel transverse, and the continuous, \(\rho\)-equivariant, positive map is the limit map of \(\rho\) (and hence is unique). Furthermore, \(\rho\) sends parabolic elements in \(\Gamma\) to unipotent elements in \(\mathsf{PGL}_{d}(\mathbb{R})\) with a single Jordan block._ **Remark 2.5**.: Even though [13, Theorem 1.4] is stated only in the case when \(\Gamma\subset\mathsf{PGL}_{2}(\mathbb{R})\) is geometrically finite, the proof does not use the geometric finiteness of \(\Gamma\). ## 3. Proof of Proposition 1.7 To prove Proposition 1.7, we will use the following lemma, which is already well-known to experts (see for example [11, Proposition 3.15]). We give an elementary proof of the lemma for the reader's convenience. We remark that the lemma is false without the continuity assumption on \(\xi\). **Lemma 3.1**.: _If \(\xi:\mathbb{S}^{1}\to\mathcal{F}(\mathbb{R}^{d})\) is a continuous map such that \((\xi(a),\xi(b),\xi(c))\) is positive for every pairwise distinct triple \(a,b,c\in\mathbb{S}^{1}\), then \(\xi\) is a positive map._ Proof.: By Proposition 2.1(2), it suffices to show that \((\xi(x),\xi(y),\xi(z),\xi(w))\) is positive for all quadruples \(x,y,z,w\in\mathbb{S}^{1}\) such that \(x<y<z<w<x\) along \(\mathbb{S}^{1}\). Pick any such quadruple \(x,y,z,w\in\mathbb{S}^{1}\), and let \(I\subset\mathbb{S}^{1}\) denote the closed subinterval that contains \(z\) with endpoints \(y\) and \(w\). By Proposition 2.1(1), the map \(\xi\) is transverse. Thus, for all \(t\in I\), we may define the map \[u:I\to\mathsf{PGL}_{d}(\mathbb{R})\] by setting \(u(t)\in\mathsf{PGL}_{d}(\mathbb{R})\) to be the unipotent element that fixes \(\xi(x)\) and sends \(\xi(w)\) to \(\xi(t)\). The continuity of \(\xi\) then implies that the map \(u\) is continuous. Since \((\xi(x),\xi(y),\xi(w))\) is positive, there is a basis \(\mathcal{B}=(e_{1},\ldots,e_{d})\) such that \(e_{k}\in\xi(x)^{k}\cap\xi(w)^{d-k+1}\) and \(u(y)\in U_{>0}(\mathcal{B})\). First, we prove that \(u(z)\in U_{>0}(\mathcal{B})\) as well. If this were not the case, then the continuity of \(u\) implies that there is some \(t_{0}\in(y,z]\subset I\) such that \(u(t_{0})\in U_{\geqslant 0}(\mathcal{B})-U_{>0}(\mathcal{B})\). By Theorem 2.2, \(\xi(t_{0})\) and \(\xi(w)\) are not transverse, thus contradicting the fact that \(\xi\) is a transverse map. Next, we show that \(u(z)^{-1}u(y)\in U_{>0}(\mathcal{B})\) as well. To do so, let \[v:[z,w]\to\mathsf{PGL}_{d}(\mathbb{R})\] be the continuous map defined by \(v(t):=u(t)^{-1}u(y)\). Observe that \(v(w)=u(y)\in U_{>0}(\mathcal{B})\). Thus, if \(u(z)^{-1}u(y)=v(z)\notin U_{>0}(\mathcal{B})\), then there is some \(t_{0}\in[z,w)\) such that \(v(t_{0})\in U_{\geqslant 0}(\mathcal{B})-U_{>0}(\mathcal{B})\). By Theorem 2.2, the pair of flags \(\xi(w)\) and \(v(t_{0})\cdot\xi(w)\) are not transverse, which means that \(\xi(t_{0})=u(t_{0})\cdot\xi(w)\) and \(\xi(y)=u(t_{0})v(t_{0})\cdot\xi(w)\) are not transverse. This contradicts the fact that \(\xi\) is a transverse map. 
Since we have proven that both \(u(z)\) and \(u(z)^{-1}u(y)\) lie in \(U_{>0}(\mathcal{B})\), the quadruple of flags \[\big{(}\xi(x),\xi(y),\xi(z),\xi(w)\big{)}=\big{(}\xi(x),u(z)u(z)^{-1}u(y)\cdot\xi(w),u(z)\cdot\xi(w),\xi(w)\big{)}\] is positive, so the lemma follows. Proof of Proposition 1.7.: By Lemma 3.1, it suffices to show that \((\xi(a),\xi(b),\xi(c))\) is positive for any pairwise distinct triple \(a,b,c\in\mathbb{S}^{1}\). By Proposition 2.1(1), we may assume that \(a<b<c\) and \(x<y<z\) by switching the roles of \(a\) and \(c\) and the roles of \(x\) and \(z\) if necessary. Then there are continuous maps \[f_{1},f_{2},f_{3}:[0,1]\to\mathbb{S}^{1}\] such that \((f_{1}(0),f_{2}(0),f_{3}(0))=(x,y,z)\), \((f_{1}(1),f_{2}(1),f_{3}(1))=(a,b,c)\), and \((f_{1}(t),f_{2}(t),f_{3}(t))\) is a triple of pairwise distinct points for all \(t\). Recall that \(\mathcal{P}\) denotes the set of positive triples of flags in \(\mathcal{F}(\mathbb{R}^{d})\), and \(\mathcal{T}\) denotes the set of pairwise transverse triples of flags in \(\mathcal{F}(\mathbb{R}^{d})\). Since \(\xi\) is continuous and transverse, this implies that the map \[F:[0,1]\to\mathcal{T}\] given by \(F(t)=\big{(}\xi(f_{1}(t)),\xi(f_{2}(t)),\xi(f_{3}(t))\big{)}\) is well-defined and continuous. Since \(F(0)\in\mathcal{P}\) by hypothesis, Theorem 2.2 implies that \(F(1)\in\mathcal{P}\). ## 4. Proof of Theorems 1.4 and 1.6 Using Proposition 1.7, we will now prove Theorems 1.4 and 1.6. Proof of Theorem 1.4.: The \(d\)_-th upper triangular Pascal matrix_ \(Q_{d}\) is the \(d\times d\) upper triangular matrix whose \((i,j)\)-th entry (with \(i\leqslant j\)) is the integer \(\binom{j-1}{i-1}\). To prove the theorem, we will first recall some basic properties of \(Q_{d}\). **Lemma 4.1**.: \(Q_{d}\) _is totally positive, unipotent, and has a single Jordan block._ Proof.: The claim that \(Q_{d}\) is unipotent is obvious, and the claim that \(Q_{d}\) has a single Jordan block is a straightforward calculation: one simply verifies that \(Q_{d}\) has a unique eigenvector up to scaling. To prove that \(Q_{d}\) is totally positive, observe that the natural \(\mathsf{GL}_{2}(\mathbb{R})\) action on the symmetric tensor \(\mathrm{Sym}^{d-1}(\mathbb{R}^{2})\) given by \[g(v_{1}\odot\cdots\odot v_{d-1}):=g(v_{1})\odot\cdots\odot g(v_{d-1})\] induces a representation \[\iota_{d}:\mathsf{GL}_{2}(\mathbb{R})\to\mathsf{GL}(\mathrm{Sym}^{d-1}(\mathbb{R}^{2}))\cong\mathsf{GL}_{d}(\mathbb{R}).\] Here, the identification \(\mathsf{GL}(\mathrm{Sym}^{d-1}(\mathbb{R}^{2}))\cong\mathsf{GL}_{d}(\mathbb{R})\) is induced by the linear identification \[\mathrm{Sym}^{d-1}(\mathbb{R}^{2})\cong\mathbb{R}^{d}\] given by identifying the standard basis \((e_{1},\ldots,e_{d})\) of \(\mathbb{R}^{d}\) with the basis \((e_{1}^{d-1},e_{1}^{d-2}e_{2},\ldots,e_{1}e_{2}^{d-2},e_{2}^{d-1})\) of \(\operatorname{Sym}^{d-1}(\mathbb{R}^{2})\) induced by the standard basis \((e_{1},e_{2})\) of \(\mathbb{R}^{2}\). Note that the representation \(\iota_{d}\) descends to a representation, also denoted \[\iota_{d}:\mathsf{PGL}_{2}(\mathbb{R})\to\mathsf{PGL}_{d}(\mathbb{R}).\] If we take \(\mathcal{B}\) to be the standard basis of \(\mathbb{R}^{d}\), then by [13, Proposition 5.7], \[\iota_{d}(U_{>0}(e_{1},e_{2}))\subset U_{>0}(\mathcal{B}).\] It is also straightforward to verify that \([Q_{d}]=\iota_{d}\left(\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\right)\) and that \(\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\) clearly lies in \(U_{>0}(e_{1},e_{2})\).
Thus, \([Q_{d}]\in U_{>0}(\mathcal{B})\), so \(Q_{d}\) is totally positive. The proof of this theorem relies on the following lemma, which demonstrates the inherent positive nature of a unipotent element in \(\mathsf{PGL}_{d}(\mathbb{R})\) with a single Jordan block. **Lemma 4.2**.: _Let \(u\in\mathsf{PGL}_{d}(\mathbb{R})\) be a unipotent element with a single Jordan block, and let \(F\) be the fixed flag of \(u\). Then for any flag \(G\) that is transverse to \(F\) and for any sufficiently large \(t\), the triple \((F,u^{t}\cdot G,G)\) is positive._ Proof.: By Lemma 4.1, \(Q_{d}\) is a unipotent upper triangular matrix with a single Jordan block, so we may choose a basis \(\mathcal{B}=(f_{1},\ldots,f_{d})\) of \(\mathbb{R}^{d}\) such that \(u\) is represented in \(\mathcal{B}\) by \(Q_{d}\). Then \(u^{t}\) is represented in \(\mathcal{B}\) by the matrix \(Q_{d}^{t}\), which is upper triangular, and whose \((i,j)\)-th entry (with \(i\leqslant j\)) is \(\binom{j-1}{i-1}t^{j-i}\). Furthermore, for all \(k\in\{1,\ldots,d-1\}\), the subspace \(F^{k}\subset\mathbb{R}^{d}\) is spanned by \(\{f_{1},\ldots,f_{k}\}\). Let \(H\in\mathcal{F}(\mathbb{R}^{d})\) be the flag such that for all \(k\in\{1,\ldots,d-1\}\), the subspace \(H^{k}\subset\mathbb{R}^{d}\) is spanned by \(\{f_{d-k+1},\ldots,f_{d}\}\). Since \(G\) is transverse to \(F\), there is some unipotent \(v\in\mathsf{PGL}_{d}(\mathbb{R})\) that fixes \(F\) and sends \(H\) to \(G\). It is now sufficient to verify that \(v^{-1}u^{t}v\in U_{>0}(\mathcal{B})\) for sufficiently large \(t\). Indeed, if this were the case, then the observation that \[v^{-1}\cdot(F,u^{t}\cdot G,G)=(F,v^{-1}u^{t}v\cdot H,H)\] implies that \((F,u^{t}\cdot G,G)\) is positive for sufficiently large \(t\). In fact, we will show that if \(v^{\prime}\) and \(v\) are two unipotent elements that fix \(F\), then \(v^{\prime}u^{t}v\in U_{>0}(\mathcal{B})\) for sufficiently large \(t\). Observe that since \(v^{\prime}\) and \(v\) are represented in the basis \(\mathcal{B}\) by upper triangular matrices whose diagonal entries are all \(1\), the product \(v^{\prime}u^{t}v\) is also represented in the basis \(\mathcal{B}\) by an upper triangular matrix \(M_{t}\) whose diagonal entries are all \(1\). Furthermore, for each \(i<j\), the \((i,j)\)-th entry of \(M_{t}\) is a polynomial in the variable \(t\) whose leading term is \(\binom{j-1}{i-1}t^{j-i}\), which is the \((i,j)\)-th entry of \(Q_{d}^{t}\). By Lemma 4.1, \(Q_{d}^{t}\) is totally positive, so the leading term of any minor of \(M_{t}\) is the corresponding minor of \(Q_{d}^{t}\). Hence, for sufficiently large \(t\), we have \(v^{\prime}u^{t}v\in U_{>0}(\mathcal{B})\). Let \(\gamma\in\Gamma\) be the element such that \(\rho(\gamma)\) is unipotent with a single Jordan block. Since \(\rho\) is Borel transverse, the strongly dynamics preserving property of its limit map \(\xi:\Lambda(\Gamma)\to\mathcal{F}(\mathbb{R}^{d})\) ensures that \(\gamma\) is parabolic. Let \(x\in\Lambda(\Gamma)\) be the unique fixed point of \(\gamma\) and let \(y\in\Lambda(\Gamma)-\{x\}\). Then \(\xi(x)\) is the fixed flag of \(\rho(\gamma)\). By Lemma 4.2 and the \(\rho\)-equivariance and transversality of \(\xi\), the triple of flags \(\big{(}\xi(x),\xi(\gamma^{n}y),\xi(y)\big{)}\) is positive for sufficiently large \(n\). Proposition 1.7 then implies that \(\xi\) is a positive map, so \(\rho\) is a Hitchin representation. 
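The properties of \(Q_{d}\) collected in Lemma 4.1 are easy to sanity-check numerically for small \(d\). The following sketch (ours, assuming `numpy`; it is of course no substitute for the proof) verifies that \(Q_{d}-I\) is nilpotent of index exactly \(d\), i.e. that \(Q_{d}\) is unipotent with a single Jordan block, and that every minor not forced to vanish by upper-triangularity is strictly positive.

```python
import itertools, math
import numpy as np

d = 5
# Upper triangular Pascal matrix: entry (i, j) is C(j, i) in 0-indexing
# (C(j-1, i-1) in the 1-indexing of the text); zero below the diagonal.
Q = np.array([[math.comb(j, i) for j in range(d)] for i in range(d)], dtype=float)

# Single Jordan block: N = Q - I satisfies N^(d-1) != 0 but N^d = 0.
N = Q - np.eye(d)
print(np.linalg.matrix_power(N, d - 1).any())           # True
print(not np.linalg.matrix_power(N, d).round(8).any())  # True

# Total positivity: every non-trivial minor det(Q[I, J]), i.e. one with
# i_p <= j_p for all p (the minors not forced to vanish), is positive.
ok = all(
    np.linalg.det(Q[np.ix_(I, J)]) > 0
    for k in range(1, d + 1)
    for I in itertools.combinations(range(d), k)
    for J in itertools.combinations(range(d), k)
    if all(i <= j for i, j in zip(I, J))
)
print(ok)  # True
```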
Proof of Theorem 1.6.: Let \(\Lambda(\Gamma^{\prime})\subset\Lambda(\Gamma)\) be the limit set of \(\Gamma^{\prime}\), and let \(x,y,z\) be pairwise distinct points in \(\Lambda(\Gamma^{\prime})\) (this exists because \(\Gamma^{\prime}\) is non-elementary). Since \(\rho\) is Borel transverse with limit map \(\xi:\Lambda(\Gamma)\to\mathcal{F}(\mathbb{R}^{d})\), note that \(\rho|_{\Gamma^{\prime}}\) is also Borel transverse with limit map \(\xi|_{\Lambda(\Gamma^{\prime})}:\Lambda(\Gamma^{\prime})\to\mathcal{F}( \mathbb{R}^{d})\). Since \(\rho|_{\Gamma^{\prime}}\) is Hitchin, the map \(\xi|_{\Lambda(\Gamma^{\prime})}\) is a positive map. Therefore, \(\big{(}\xi(x),\xi(y),\xi(z)\big{)}\) is a positive triple, so Proposition 1.7 implies that \(\xi\) is a positive map. As such, \(\rho\) is a Hitchin representation. ## Appendix A Proof of Theorem 2.2 In this proof, we fix the basis \(\mathcal{B}\), and hence may view every \(u\in U_{\geqslant 0}(\mathcal{B})\) as a unipotent upper triangular \(d\times d\) matrix. Given (strictly) increasing tuples \[I=(i_{1},\ldots,i_{k})\quad\text{and}\quad J=(j_{1},\ldots,j_{\ell})\] of integers (weakly) between \(1\) and \(d\), we denote by \(u_{I,J}\) the submatrix of \(u\) corresponding to the \(I\) rows and \(J\) columns. We say that \(I\) is _consecutive_ if \(i_{p}=i_{1}+p-1\) for each \(p\in\{1,\ldots,k\}\). If \(k>1\), we also denote \(I^{\prime}:=(i_{1},\ldots,i_{k-1})\) and \(I^{\prime\prime}:=(i_{2},\ldots,i_{k})\). **Lemma A.1**.: _Let \(u\in U_{\geqslant 0}(\mathcal{B})\) and let \(k\in\{1,\ldots,d\}\). Suppose that all the non-trivial \(\ell\times\ell\)-minors of \(u\) are positive for all \(\ell<k\). If all the non-trivial \(k\times k\) minors \(\det(u_{I,J})\) of \(u\) with consecutive \(I\) and consecutive \(J\) are positive, then all the non-trivial \(k\times k\) minors of \(u\) are positive._ Proof.: Notice that it suffices to prove the following pair of claims (assuming that all the non-trivial \(\ell\times\ell\)-minors of \(u\) are positive for all \(\ell<k\)): 1. Fix \(I\) of length \(k\). If all the non-trivial \(k\times k\) minors of \(u\) of the form \(\det(u_{I,J})\) with consecutive \(J\) are positive, then all the non-trivial \(k\times k\) minors of \(u\) of the form \(\det(u_{I,J})\) are positive. 2. Fix \(J\) of length \(k\). If all the non-trivial \(k\times k\) minors of \(u\) of the form \(\det(u_{I,J})\) with consecutive \(I\) are positive, then all the non-trivial \(k\times k\) minors of \(u\) of the form \(\det(u_{I,J})\) are positive. Indeed, if all the non-trivial \(k\times k\) minors \(\det(u_{I,J})\) of \(u\) with consecutive \(I\) and consecutive \(J\) are positive, then we may apply Claim (1) to deduce that all the non-trivial \(k\times k\) minors \(\det(u_{I,J})\) of \(u\) with consecutive \(I\) are positive. Applying Claim (2) now gives the desired conclusion. We only prove Claim (1); the proof of Claim (2) is the same, except that the roles of \(I\) and \(J\) are switched. When \(k=1\), Claim (1) is obvious because every tuple of length \(1\) is consecutive. We may thus assume that \(k\in\{2,\ldots,d\}\). Denote \(J=(j_{1},\ldots,j_{k})\), and notice that \[m:=j_{k}-j_{1}+1\in\{k,\ldots,d\}.\] We will proceed by induction on \(m\). In the base case when \(m=k\), \(J\) is consecutive, so \(\det(u_{I,J})\) is positive by assumption. For the inductive step, fix \(m\in\{k+1,\ldots,d\}\). Since \(k<m\), \(J\) is not consecutive, so there exist \(q\in\{1,\ldots,k-1\}\) and an integer \(n\) such that \(j_{q}<n<j_{q+1}\). 
Suppose for the purpose of contradiction that \(\det(u_{I,J})=0\). Then we may write \[c_{1}u_{I,j_{1}}+\cdots+c_{k}u_{I,j_{k}}=\vec{0} \tag{1}\] for some \(c_{1},\ldots,c_{k}\in\mathbb{R}\) that are not all zero. Thus, \[0 = \det(\vec{0},u_{I,j_{2}},\ldots,u_{I,j_{q}},u_{I,n},u_{I,j_{q+1}},\ldots,u_{I,j_{k-1}})\] \[= \det(c_{1}u_{I,j_{1}}+\cdots+c_{k}u_{I,j_{k}},u_{I,j_{2}},\ldots, u_{I,j_{q}},u_{I,n},u_{I,j_{q+1}},\ldots,u_{I,j_{k-1}})\] \[= c_{1}\det(u_{I,(j_{1},j_{2},\ldots,j_{q},n,j_{q+1},\ldots,j_{k-1})})+(-1)^{k-1}c_{k}\det(u_{I,(j_{2},\ldots,j_{q},n,j_{q+1},\ldots,j_{k-1},j_{k})}). \tag{2}\] Since \(\det(u_{I,J})\) is a non-trivial \(k\times k\) minor of \(u\), i.e. \(i_{p}\leqslant j_{p}\) for all \(p\in\{1,\ldots,k\}\), both \(\det(u_{I^{\prime},J^{\prime}})\) and \(\det(u_{I^{\prime},J^{\prime\prime}})\) are non-trivial \((k-1)\times(k-1)\) minors of \(u\), so they are both positive by assumption. So, (1) implies that \(c_{1}\neq 0\neq c_{k}\). At the same time, notice that \(j_{k}-j_{2}+1<m\). Since \(\det(u_{I,J})\) is a non-trivial \(k\times k\)-minor of \(u\), the same is true for \(\det(u_{I,(j_{2},\ldots,j_{q},n,j_{q+1},\ldots,j_{k})})\), so it is positive by the inductive hypothesis. It now follows from (2) that \[(-1)^{k}\frac{c_{k}}{c_{1}}=\frac{\det(u_{I,(j_{1},\ldots,j_{q},n,j_{q+1},\ldots,j_{k-1})})}{\det(u_{I,(j_{2},\ldots,j_{q},n,j_{q+1},\ldots,j_{k})})}\geqslant 0.\] On the other hand, we also have \[0 = \det(\vec{0},u_{I^{\prime},j_{2}},\ldots,u_{I^{\prime},j_{k-1}})\] \[= \det(c_{1}u_{I^{\prime},j_{1}}+\cdots+c_{k}u_{I^{\prime},j_{k}},u_{I^{\prime},j_{2}},\ldots,u_{I^{\prime},j_{k-1}})\] \[= c_{1}\det(u_{I^{\prime},J^{\prime}})+(-1)^{k-2}c_{k}\det(u_{I^{\prime},J^{\prime\prime}}),\] so \[(-1)^{k-1}\frac{c_{k}}{c_{1}}=\frac{\det(u_{I^{\prime},J^{\prime}})}{\det(u_{I^{\prime},J^{\prime\prime}})}>0\] because \(c_{1}\neq 0\neq c_{k}\) and both \(\det(u_{I^{\prime},J^{\prime}})\) and \(\det(u_{I^{\prime},J^{\prime\prime}})\) are positive. We thus arrive at a contradiction, so \(\det(u_{I,J})\neq 0\). Since \(u\in U_{\geqslant 0}(\mathcal{B})\), it follows that \(\det(u_{I,J})>0\). This completes the inductive step. **Lemma A.2**.: _Let \(u\in U_{\geqslant 0}(\mathcal{B})\). Suppose that there exists \(k\in\{1,\ldots,d\}\) such that_ * _all the non-trivial_ \(\ell\times\ell\)_-minors of_ \(u\) _are positive for all_ \(\ell<k\)_;_ * _there is a non-trivial_ \(k\times k\) _minor_ \(\det(u_{I,J})\) _of_ \(u\) _such that_ \(I\) _and_ \(J\) _are consecutive and_ \(\det(u_{I,J})=0\)_._ _Then \(\det(u_{I_{0},J_{0}})=0\), where \(I_{0}=(1,\ldots,k)\) and \(J_{0}=(d-k+1,\ldots,d)\)._ Proof.: Notice that it suffices to prove that \(\det(u_{I,J_{0}})=0\) and that \(\det(u_{I_{0},J})=0\). We will only prove the former; the proof of the latter is the same. Let \(I=(i_{1},\ldots,i_{k})\) and \(J=(j_{1},\ldots,j_{k})\). By assumption, \(\det\left(u_{I^{\prime},J^{\prime}}\right)>0\), so \(u_{I^{\prime},j_{1}},\ldots,u_{I^{\prime},j_{k-1}}\) and hence \(u_{I,j_{1}},\ldots,u_{I,j_{k-1}}\) is a linearly independent collection of vectors. Since \(\det\left(u_{I,J}\right)=0\), it follows that \(u_{I,j_{k}}\) is a linear combination of \(u_{I,j_{1}},\ldots,u_{I,j_{k-1}}\). Fix \(n\in\{j_{k}+1,\ldots,d\}\). Since \(\det(u_{I,J})\) is a non-trivial minor and \(\det(u_{I,J})=0\), we have \(i_{k}<j_{k}\).
Then \[0 \leqslant \det\left(u_{(i_{1},\ldots,i_{k},j_{k}),(j_{1},\ldots,j_{k},n)}\right)\] \[= \det\left(\begin{smallmatrix}u_{I,J^{\prime}}&u_{I,j_{k}}&u_{I,n }\\ \vec{0}&1&u_{j_{k},n}\end{smallmatrix}\right)\] \[= u_{j_{k},n}\det(u_{I,J})-\det\left(u_{I,(j_{1},\ldots,j_{k-1},n)}\right)\] \[= -\det\left(u_{I,(j_{1},\ldots,j_{k-1},n)}\right)\] where the first inequality holds because \(u\in U_{\geqslant 0}(\mathcal{B})\). At the same time, \(\det\left(u_{I,(j_{1},\ldots,j_{k-1},n)}\right)\geqslant 0\) because \(u\in U_{\geqslant 0}(\mathcal{B})\), so \(\det\left(u_{I,(j_{1},\ldots,j_{k-1},n)}\right)=0\). It follows that \(u_{I,n}\) is a linear combination of the linearly independent collection of vectors \(u_{I,j_{1}},\ldots,u_{I,j_{k-1}}\). Since \(J\) is consecutive, we have proven that the \(k\) vectors \(u_{I,d-k+1},u_{I,d-k+2},\ldots,u_{I,d}\) are all linear combinations of \(u_{I,j_{1}},\ldots,u_{I,j_{k-1}}\). In particular, their span has dimension \(k-1\), so \(\det(u_{I,J_{0}})=0\). Proof of Theorem 2.2.: Since \(u\in U_{\geqslant 0}(\mathcal{B})-U_{>0}(\mathcal{B})\), there is some \(k\in\{1,\ldots,d-1\}\) such that some non-trivial \(k\times k\) minor \(\det(u_{I,J})\) of \(u\) is zero, while all the non-trivial \(\ell\times\ell\)-minors of \(u\) are positive for all \(\ell<k\). By Lemma A.1, we may assume that both \(I\) and \(J\) are consecutive. Then Lemma A.2 implies \(\det(u_{I_{0},J_{0}})=0\) with \(I_{0}=(1,\ldots,k)\) and \(J_{0}=(d-k+1,\ldots,d)\). Therefore, the span of the vectors \(u_{I_{0},d-k+1},\ldots,u_{I_{0},d}\) has dimension at most \(k-1\), so \[G^{k}+H^{d-k} = u\cdot\operatorname{Span}(e_{d},\ldots,e_{d-k+1})+\operatorname{ Span}(e_{d},\ldots,e_{k+1})\] \[= \operatorname{Span}\left(u\cdot e_{d},\ldots,u\cdot e_{d-k+1},e_{ d},\ldots,e_{k+1}\right)\] \[= \operatorname{Span}\left(\left(\begin{smallmatrix}u_{I_{0},d}\\ \hd\end{smallmatrix}\right),\ldots,\left(\begin{smallmatrix}u_{I_{0},d-k+1} \\ \hd\end{smallmatrix}\right),e_{d},\ldots,e_{k+1}\right)\] \[\neq \mathbb{R}^{d}.\] This implies that \(G\) and \(H\) are not transverse. ## Appendix B The Barbot examples Fix a lattice \(\Gamma\subset\mathsf{SL}_{2}(\mathbb{R})\) and some odd integer \(d>2\). In this appendix, we define the Barbot examples, which are representations \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) that are Borel transverse (or equivalently, cusped Borel Anosov), but not Hitchin. These are a straightforward generalization of examples (due to Barbot [1]) of Borel Anosov representations of a surface group into \(\mathsf{PGL}_{3}(\mathbb{R})\) that are not Hitchin. To define the Barbot examples, we need some preliminary results. First, let \((e_{1},\ldots,e_{d})\) be the standard basis of \(\mathbb{R}^{d}\), and equip \(\mathbb{R}^{d}\) with the standard inner product. For any \(g\in\mathsf{PGL}_{d}(\mathbb{R})\), let \[\sigma_{1}(g)\geqslant\ldots\geqslant\sigma_{d}(g)>0\] denote the singular values of (any unit-determinant, linear representative of) \(g\), and let \[A_{g}:=\operatorname{diag}(\log\sigma_{1}(g),\ldots,\log\sigma_{d}(g)).\] By the singular value decomposition theorem, we may write every \(g\in\mathsf{PGL}_{d}(\mathbb{R})\) as the product \[g=m\exp(A_{g})\ell\] for some \(m,\ell\in\mathsf{PO}(d)\) (which are not necessarily unique). For every \(g\in\mathsf{PGL}_{d}(\mathbb{R})\), choose \(m_{g},\ell_{g}\in\mathsf{PO}(d)\) such that \(g=m_{g}\exp(A_{g})\ell_{g}\). 
Let \(F_{0}\in\mathcal{F}(\mathbb{R}^{d})\) be the flag such that \[F_{0}^{k}=\operatorname{Span}(e_{1},\ldots,e_{k})\] for all \(k\in\{1,\ldots,d-1\}\), and define \[U(g):=m_{g}\cdot F_{0}.\] One can verify that if \(\sigma_{k}(g)>\sigma_{k+1}(g)\) for all \(k\in\{1,\ldots,d-1\}\), then \(U(g)\) does not depend on the choice of \(m_{g}\) and \(\ell_{g}\), and hence is canonical to \(g\). The following proposition is a standard linear algebra fact, see [1, Appendix A] for a proof. **Proposition B.1**.: _Let \(\{g_{n}\}\) be a sequence in \(\mathsf{PGL}_{d}(\mathbb{R})\) and \(F_{+},F_{-}\in\mathcal{F}(\mathbb{R}^{d})\). The following are equivalent:_ 1. \(U(g_{n})\to F_{+}\)_,_ \(U(g_{n}^{-1})\to F_{-}\)_, and_ \(\frac{\sigma_{k}(g_{n})}{\sigma_{k+1}(g_{n})}\to\infty\) _for all_ \(k\in\{1,\ldots,d-1\}\)_._ 2. \(g_{n}(F)\to F_{+}\) _for all_ \(F\) _transverse to_ \(F_{-}\)_, and_ \(g_{n}^{-1}(F)\to F_{-}\) _for all_ \(F\) _transverse to_ \(F_{+}\)_._ Next, recall that \(g\in\mathsf{PGL}_{d}(\mathbb{R})\) is _weakly unipotent_ if its multiplicative Jordan-Chevalley decomposition has elliptic semisimple part and non-trivial unipotent part. We say that a representation \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is _type preserving_ if it sends parabolic elements in \(\Gamma\) to weakly unipotent elements in \(\mathsf{PGL}_{d}(\mathbb{R})\). If \(\Gamma\subset\mathsf{SL}_{2}(\mathbb{R})\) is geometrically finite, then given a type preserving representation \(\sigma:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\), one can define \[\operatorname{Hom}_{\mathrm{tp}}(\sigma)\subset\operatorname{Hom}(\Gamma, \mathsf{PGL}_{d}(\mathbb{R}))\] to be the set of representations \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) such that \(\rho(\alpha)\) is conjugate to \(\sigma(\alpha)\) for all parabolic \(\alpha\in\Gamma\). The following are results of Canary, Zhang and Zimmer [13, Theorem 4.1(2) and Theorem 8.1] **Theorem B.2** (Canary-Zhang-Zimmer).: _Suppose that \(\Gamma\subset\mathsf{SL}_{2}(\mathbb{R})\) is geometrically finite. If \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is \(P_{\theta}\)-transverse for some symmetric \(\theta\subset\Delta\), then:_ 1. \(\rho\) _is type-preserving._ 2. _The set of_ \(P_{\theta}\)_-transverse representations in_ \(\operatorname{Hom}_{\mathrm{tp}}(\rho)\) _is open._ Finally, let \(k\geqslant 1\) be an integer. Recall from the proof of Lemma 4.1 the representation \[\iota_{k}:\mathsf{GL}_{2}(\mathbb{R})\to\mathsf{GL}(\operatorname{Sym}^{k-1}( \mathbb{R}^{2}))\cong\mathsf{GL}_{k}(\mathbb{R}).\] One can verify that \(\iota_{k}\) restricts to a representation \[\iota_{k}:\mathsf{SL}_{2}(\mathbb{R})\to\mathsf{SL}_{k}(\mathbb{R}).\] Now, given any \(j\in\{1,\ldots,\frac{d-1}{2}\}\), let \[\tau_{d,j}:=\iota_{d-j}\oplus\iota_{j}:\mathsf{SL}_{2}(\mathbb{R})\to\mathsf{ SL}_{d-j}(\mathbb{R})\oplus\mathsf{SL}_{j}(\mathbb{R})\subset\mathsf{SL}_{d}( \mathbb{R}).\] Let \((e_{1},\ldots,e_{d})\) be the standard basis of \(\mathbb{R}^{d}\), let \[(f_{1},\ldots,f_{d-j}):=(e_{1},e_{2},\ldots,e_{d-j})\quad\text{and}\quad(f_{1 }^{\prime},\ldots,f_{j}^{\prime}):=(e_{d-j+1},\ldots,e_{d}),\] and let \(k:=\frac{d-2j+1}{2}\). Then let \(B^{\prime}\subset\mathsf{SL}_{d}(\mathbb{R})\) be the upper triangular group with respect to the basis \[\mathcal{B}:=(f_{1},f_{2},\ldots,f_{k},f_{1}^{\prime},f_{k+1},f_{2}^{\prime}, f_{k+2},\ldots,f_{j}^{\prime},f_{k+j},f_{k+j+1},f_{k+j+2},\ldots,f_{d-j})\] of \(\mathbb{R}^{d}\). 
Observe that \(\tau_{d,j}^{-1}(B^{\prime})\) is the upper triangular subgroup of \(\mathsf{SL}_{2}(\mathbb{R})\) with respect to the standard basis \((e_{1},e_{2})\) of \(\mathbb{R}^{2}\), so we may define the \(\tau_{d,j}\)-equivariant embedding \[\xi_{d,j}:\mathbb{RP}^{1}\cong\mathsf{SL}_{2}(\mathbb{R})/\tau_{d,j}^{-1}(B^{ \prime})\to\mathsf{SL}_{d}(\mathbb{R})/B^{\prime}\cong\mathcal{F}(\mathbb{R}^ {d}).\] Let \(F_{+}\) and \(F_{-}\) be the flags in \(\mathcal{F}(\mathbb{R}^{d})\) with the defining property that for all \(k\in\{1,\ldots,d-1\}\), \(F_{+}^{k}\) is spanned by the first \(k\) vectors of the basis \(\mathcal{B}\) and \(F_{-}^{k}\) is spanned by the last \(k\) vectors of \(\mathcal{B}\). Observe that \(\xi_{d,j}([e_{1}])=F_{+}\) and \(\xi_{d,j}([e_{2}])=F_{-}\). **Proposition B.3**.: _For every \(j\in\{1,\ldots,\frac{d-1}{2}\}\), the following hold:_ 1. _The map_ \(\xi_{d,j}\) _is transverse._ 2. _If_ \(\{g_{n}\}\) _is a sequence in_ \(\mathsf{SL}_{2}(\mathbb{R})\) _and_ \(x,y\in\mathbb{RP}^{1}\) _such that_ \(g_{n}\cdot b_{0}\to x\) _and_ \(g_{n}^{-1}\cdot b_{0}\to y\) _for some/all_ \(b_{0}\in\mathbb{H}^{2}\)_, then_ \(\tau_{d,j}(g_{n})\cdot F\to\xi_{d,j}(x)\) _for all_ \(F\) _transverse to_ \(\xi_{d,j}(y)\)_, and_ \(\tau_{d,j}(g_{n}^{-1})\cdot F\to\xi_{d,j}(y)\) _for all_ \(F\) _transverse to_ \(\xi_{d,j}(x)\)_._ _In particular, if \(\Gamma\subset\mathsf{SL}_{2}(\mathbb{R})\) is a non-elementary, discrete subgroup and \(\pi:\mathsf{SL}_{d}(\mathbb{R})\to\mathsf{PSL}_{d}(\mathbb{R})\subset\mathsf{ PGL}_{d}(\mathbb{R})\) is the obvious quotient map, then_ \[\rho:=\pi\circ\tau_{d,j}|_{\Gamma}:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\] _is Borel transverse with limit map \(\xi_{d,j}|_{\Lambda(\Gamma)}\)._ Proof.: To simplify notation, we will denote \(\xi:=\xi_{d,j}\) and \(\tau:=\tau_{d,j}\). (1) Pick any pair of distinct points \(a,b\in\mathbb{RP}^{1}\). Then there is some \(g\in\mathsf{SL}_{2}(\mathbb{R})\) such that \((g\cdot a,g\cdot b)=([e_{1}],[e_{2}])\). By the \(\tau\)-equivariance of \(\xi\), it follows that \[(\xi(a),\xi(b))=\left(\tau(g^{-1})\cdot\xi([e_{1}]),\tau(g^{-1})\cdot\xi([e_{2} ])\right),\] so it suffices to verify that \(\xi([e_{1}])\) and \(\xi([e_{2}])\) are transverse. This holds because \(\xi([e_{1}])=F_{+}\) and \(\xi([e_{2}])=F_{-}\). (2) Note that \(g_{n}\cdot z\to x\) for all \(z\in\mathbb{RP}^{1}-\{y\}\) and \(g_{n}^{-1}\cdot z\to y\) for all \(z\in\mathbb{RP}^{1}-\{x\}\). Proposition B.1 then implies that \[m_{n}\cdot[e_{1}]=U(g_{n})\to x,\quad\ell_{n}^{-1}\cdot[e_{2}]=U(g_{n}^{-1}) \to y\quad\text{ and }\quad\frac{\sigma_{1}(g_{n})}{\sigma_{2}(g_{n})}\to\infty,\] where \(g_{n}=m_{n}\exp(A_{g_{n}})\ell_{n}\) is a singular value decomposition of \(g_{n}\). In particular, any subsequential limit \(m\) of \(\{m_{n}\}\) and \(\ell\) of \(\{\ell_{n}\}\) satisfy \[m\cdot[e_{1}]=x\quad\text{and}\quad\ell^{-1}\cdot[e_{2}]=y.\] Note that \[\tau(g_{n})=\tau(m_{n})\tau(\exp(A_{g_{n}}))\tau(\ell_{n})\] is a singular value decomposition of \(\tau(g_{n})\). It then follows that \[U(\tau(g_{n}))=\tau(m_{n})\cdot F_{+}\to\tau(m)\cdot F_{+}=\tau(m)\cdot\xi([e_ {1}])=\xi(x),\] where \(m\) is some/any subsequential limit of \(\{m_{n}\}\). 
Similarly, \[U(\tau(g_{n})^{-1})\to\xi(y).\] This also implies that \[\tau(\exp(A_{g_{n}}))=\exp(A_{\tau(g_{n})})\quad\text{and}\quad\frac{\sigma_{i}(\tau(g_{n}))}{\sigma_{i+1}(\tau(g_{n}))}\to\infty,\] because \[\frac{\sigma_{i}(\tau(g_{n}))}{\sigma_{i+1}(\tau(g_{n}))}=\left\{\begin{array}{rl}\frac{\sigma_{1}(g_{n})}{\sigma_{2}(g_{n})}&\text{ if }1\leqslant i\leqslant k-1\text{ or }d-k+1\leqslant i\leqslant d-1,\\ \sqrt{\frac{\sigma_{1}(g_{n})}{\sigma_{2}(g_{n})}}&\text{ if }k\leqslant i\leqslant d-k.\end{array}\right.\] Thus, by Proposition B.1, \(\tau(g_{n})\cdot F\to\xi(x)\) for all \(F\) transverse to \(\xi(y)\) and \(\tau(g_{n})^{-1}\cdot F\to\xi(y)\) for all \(F\) transverse to \(\xi(x)\). Therefore, \(\rho\) is Borel transverse with limit map \(\xi|_{\Lambda(\Gamma)}\). Indeed, \(\xi\) is continuous and \(\tau\)-equivariant, \(\xi\) is transverse by (1), and \(\xi\) is strongly dynamics preserving by (2). We may now define the Barbot examples. Given \(j\in\{1,\dots,\frac{d-1}{2}\}\), a representation \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) is a _\((\Gamma,d,j)\)-Barbot example_ if there is a continuous path \(f:[0,1]\to\operatorname{Hom}_{\text{tp}}(\pi\circ\tau_{d,j}|_{\Gamma})\) such that \(f(0)=\rho\), \(f(1)=\pi\circ\tau_{d,j}|_{\Gamma}\), and \(f(t)\) is Borel transverse for all \(t\in[0,1]\). By Theorem B.2 and Proposition B.3, the \((\Gamma,d,j)\)-Barbot examples form a connected, non-empty, open set in \(\operatorname{Hom}_{\text{tp}}(\pi\circ\tau_{d,j}|_{\Gamma})\). **Remark B.4**.: We may define the \((\Gamma,d,j)\)-Barbot examples for discrete subgroups \(\Gamma\subset\mathsf{SL}_{2}^{\pm}(\mathbb{R})\) as well: these are representations \(\rho:\Gamma\to\mathsf{PGL}_{d}(\mathbb{R})\) whose restriction to \(\Gamma\cap\mathsf{SL}_{2}(\mathbb{R})\) is a \((\Gamma,d,j)\)-Barbot example as described above. Since \(\Gamma\cap\mathsf{SL}_{2}(\mathbb{R})\subset\Gamma\) is a finite-index subgroup, these representations are also Borel transverse.
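The contraction dynamics in Proposition B.1 is easy to visualize numerically at the level of the first component \(U(g)^{1}\) of the flag: as the singular value gap grows, the image of a generic line aligns with the first left-singular vector. A small illustration, assuming `numpy` (the helper `top_line` is ours, not from the paper):

```python
import numpy as np

def top_line(g):
    # U(g)^1: the first left-singular vector of g, well defined whenever
    # sigma_1(g) > sigma_2(g).
    m, sigma, _ = np.linalg.svd(g)
    return m[:, 0], sigma

rng = np.random.default_rng(1)
v = rng.standard_normal(3)                 # a generic line [v] in RP^2
for t in [2.0, 10.0, 100.0]:
    g = np.diag([t, 1.0, 1.0 / t])         # sigma_1 / sigma_2 = t
    u1, sigma = top_line(g)
    w = g @ v
    w /= np.linalg.norm(w)
    print(t, abs(w @ u1))                  # tends to 1 as t grows
```

This alignment of \(g_{n}\cdot[v]\) with \(U(g_{n})^{1}\) is exactly the strongly dynamics preserving behaviour required of the limit maps of transverse representations.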
2303.14462
Lecture notes on the harmonic approximation to quadratic optimal transport
These lecture notes present the quantitative harmonic approximation result for quadratic optimal transport and general measures obtained by Goldman and Otto. The aim is to give a clear presentation of the proof of the main theorem with more motivations, less PDE machinery, and a number of simplifications.
Lukas Koch, Felix Otto
2023-03-25T13:08:29Z
http://arxiv.org/abs/2303.14462v1
# Lecture Notes on the Harmonic Approximation to Quadratic Optimal Transport ###### Abstract. These lecture notes present the quantitative harmonic approximation result for quadratic optimal transport and general measures obtained in [5]. The aim is to give a clear presentation of the proof of [5, Theorem 4.1] with more motivations, less PDE machinery, and a number of simplifications. ## 1. A brief introduction These notes grew out of a couple of lecture series given by the second author at 2022 summer schools on the topic of optimal transportation, its regularity theory, and its application to the matching of random point clouds, see for instance [https://kantorovich.org/event/2022-optimal-transport-summer-school/schedule/](https://kantorovich.org/event/2022-optimal-transport-summer-school/schedule/). They presented results in [6], [5], and [7]. The traditional approach to regularity [2] and partial regularity [4, 3] for optimal transportation relies on the seminal regularity theory [1] for the corresponding Euler-Lagrange equation, the Monge-Ampere equation, based on the comparison principle. The variational approach introduced in [6] avoids these arguments and was first used to re-derive the partial regularity result of [4], and then in [8] to re-derive [3] for more general cost functions of quadratic behavior. The added value of the variational approach lies in its robustness, in particular in its ability to deal with general measures: There is no need to have a Lebesgue density bounded away from zero and infinity in order to allow for explicit barrier functions as in the approach based on comparison principle. This robustness for instance allows to give a mesoscopic characterization of the optimal transport between what is allowed to be an atomic measure and the uniform distribution [5, Corollary 1.1], and allows to analyze the matching between independent copies of the Poisson point process [7]. We refer to www.mis.mpg.de/services/media/imprs-ringvorlesung-2022.html for a gentle introduction into this aspect of matching. In analogy to de Giorgi's strategy for \(\epsilon\)-regularity of minimal surfaces, the core of the variational regularity theory is a harmonic approximation result [5, Theorem 1.4], which we will focus on in these notes. Hence we do not discuss the literature further, but refer to [8] for a careful review of the literature, and the connection to minimal surface theory. Compared to [5, Section 3], which we mainly rely on, these notes come with more motivations, less PDE machinery, and a couple of simplifications. They allow for an independent reading. ### Standing assumptions and language Throughout the entire text, we will consider two non-negative (finite) measures \(\lambda\) and \(\mu\) on \(\mathbb{R}^{d}\) with \(\lambda(\mathbb{R}^{d})=\mu(\mathbb{R}^{d})\), which one should think of as two different spatial distributions of the same amount of mass. It is convenient to assume that \(\lambda\) and \(\mu\) have bounded support. A non-negative measure \(\pi\) on the product space \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) is called admissible if its marginals are given by \(\lambda\) and \(\mu\), which spelled out means \[\int\zeta(x)\pi(dxdy)=\int\zeta d\lambda\quad\text{and}\quad\int\zeta(y)\pi( dxdy)=\int\zeta d\mu \tag{1}\] for all continuous and compactly supported functions ("test functions") \(\zeta\) on \(\mathbb{R}^{d}\). 
One should think of \(\pi\) as one possible way of transporting the mass as distributed according to \(\lambda\) into the shape as described by \(\mu\) (a "transport/transference plan"). An admissible \(\pi\) is called optimal if it minimizes \[\int|x-y|^{2}d\pi\stackrel{{\text{short}}}{{=}}\text{for}\, \int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|x-y|^{2}\pi(dxdy),\] which one interprets as a total transportation cost, since it integrates the cost of transporting a unit mass from \(x\) to \(y\), which here is given by the _square_ of the Euclidean distance. This is Kantorovich' relaxation of Monge's problem, applied to the quadratic cost function. The infimum (which actually is attained) \[W^{2}(\lambda,\mu)=\inf\{\,\int|x-y|^{2}d\pi\,|\,\pi\text{ is admissible for}\,\lambda,\mu\,\} \tag{2}\] defines a distance function \(W\), which we call Wasserstein distance. ## 2. Connection of optimal transportation and the Neumann problem for the Poisson equation In this section, we motivate the connection between optimal transportation (OT) and the Neumann boundary value problem for the Poisson equation. ### Trajectories For the above connection, it is convenient to adopt a dynamical view upon OT, identifying a pair \((x,y)\) of (matched) points with the (straight) trajectory \[[0,1]\ni t\mapsto X(t):=ty+(1-t)x. \tag{3}\] Given an optimal transfer plan \(\pi\) for \(\lambda,\mu\), we ask the question on how to choose a function \(\phi\) in such a way that its gradient \(\nabla\phi\) captures the velocity of the trajectories, meaning \[\dot{X}(t)\approx\nabla\phi(X(t))\quad\text{for }(x,y)\in\text{supp}\pi. \tag{4}\] As we shall see, the answer relates to the Poisson equation \(-\triangle\phi=\mu-\lambda\). We are interested in connecting to a boundary value problem for the Poisson equation on some domain, say a ball \(B_{R}\) of some radius \(R\) (to be optimized later) and center w. l. o. g. given by the origin. We are thus led to restrict ourselves1 to the set of trajectories that spend some time in the closure \(\bar{B}_{R}\): Footnote 1: we will proceed to a further restriction in (12) \[\Omega:=\{\,(x,y)\,|\,\exists t\in[0,1]\;X(t)\in\bar{B}_{R}\,\}. \tag{5}\] To every \((x,y)\in\Omega\), we associate the entering and exiting times \(0\leq\sigma\leq\tau\leq 1\) of the corresponding trajectory \[\sigma :=\min\{t\in[0,1]\,|\,X(t)\in\bar{B}_{R}\},\] \[\tau :=\max\{t\in[0,1]\,|\,X(t)\in\bar{B}_{R}\}, \tag{6}\] see also Fig. 1. (Note that some trajectories may both enter and exit.) Given a transfer plan \(\pi\), we keep track of _where_ the trajectories enter and exit \(B_{R}\), which is captured by two (non-negative) measures \(f\) and \(g\) concentrated on \(\partial B_{R}\), defined through \[\int\zeta df =\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}\zeta(X(\sigma) )d\pi, \tag{8}\] \[\int\zeta dg =\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}\zeta(X(\tau))d\pi \tag{7}\] Figure 1. Entering and exiting times of trajectories for all test functions functions \(\zeta\). Note that the set of trajectories \(\Omega\cap\{X(\sigma)\in\partial B_{R}\}\) implicitly defines a Borel measurable subset of \(\mathbb{R}^{d}\times\mathbb{R}^{d}\), namely the pre-image under the mapping (3), which is continuous from \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) into \(C^{0}([0,1])\). Hence the integration against \(\pi\) in (7) is legitimate. 
**Lemma 1**.: _We have for any admissible \(\pi\) and any continuously differentiable function \(\phi\) on \(\bar{B}_{R}\)_ \[\int_{\Omega}\int_{\sigma}^{\tau}|\dot{X}(t)-\nabla\phi(X(t))|^{2 }dtd\pi\] \[=\int_{\Omega}\int_{\sigma}^{\tau}|\dot{X}(t)|^{2}dtd\pi+\int_{ \Omega}\int_{\sigma}^{\tau}|\nabla\phi(X(t))|^{2}dtd\pi \tag{9}\] \[-2\int_{B_{R}}\phi d(\mu-\lambda)-2\int_{\partial B_{R}}\phi d(g- f).\] _For later purpose, we record_ \[\lambda(B_{R})+f(\partial B_{R})=\mu(B_{R})+g(\partial B_{R}). \tag{10}\] Proof of Lemma 1. For identity (9) we note that for the mixed term we have by the chain rule \(\dot{X}(t)\cdot\nabla\phi(X(t))=\frac{d}{dt}[\phi(X(t))]\) and thus by the fundamental theorem of calculus \(\int_{\sigma}^{\tau}\dot{X}(t)\cdot\nabla\phi(X(t))dt=\phi(X(\tau))\)\(-\phi(X(\sigma))\). In view of definition (6) we either have \(X(\sigma)\in\partial B_{R}\) or \(X(\sigma)\in B_{R}\). By definition (6) of \(\sigma\) the latter implies \(\sigma=0\) and thus \(X(\sigma)=x\), so that the constraint \((x,y)\in\Omega\) may be dropped. Hence \(\int_{\Omega}\phi(X(\sigma))d\pi=\int_{\Omega\cap\{X(\sigma)\in\partial B_{R} \}}\phi(X(\sigma))d\pi+\int_{\{x\in B_{R}\}}\phi(x)d\pi\). By definition (7), the first integral is \(\int\phi df\). By admissibility (1) of \(\pi\), the second integral is \(\int_{B_{R}}\phi d\lambda\). Likewise, one obtains \(\int_{\Omega}\phi(X(\tau))d\pi\)\(=\int\phi dg+\int_{B_{R}}\phi d\mu\). Specifying to \(\phi=1\), and thus \(\nabla\phi=0\) so that the mixed term vanishes, we learn (10) from the above two identities. ### Perturbative regime We will focus on a "perturbative regime", which comes in form of two local smallness conditions. Any smallness condition has to be formulated in a non-dimensionalized way, which we implement by expressing this local smallness condition on a ball of non-dimensionalized radius, it will be convenient to take 5 as this radius. The first smallness condition involves the data (thus the letter \(D\)), that is, the two measures \(\lambda\) and \(\mu\). We monitor how close these measures are to the Lebesgue measure on \(B_{5}\). It is natural to quantify this in terms of the Wasserstein distance, see (2). Since the mass \(\lambda(B_{5})\) in general is not equal to the Lebesgue volume \(|B_{5}|\), we have to split this into two: We monitor how Wasserstein-close the restriction \(\lambda_{\!\!\!\perp}B_{5}\) is to the uniform measure \(\kappa_{\lambda}dx_{\!\!\!\perp}B_{5}\), where \(\kappa_{\lambda}:=\frac{\lambda(B_{5})}{|B_{5}|}\), and we monitor how close this density \(\kappa_{\lambda}\) is to unity. It is convenient to do both on the squared level: \[D: =W^{2}(\lambda_{\!\!\!\perp}B_{5},\kappa_{\lambda}dx_{\!\!\!\perp}B_ {5})+(\kappa_{\lambda}-1)^{2}\] \[+\text{same expression with }\lambda\rightsquigarrow\mu. \tag{11}\] In view of the localization (11), it is convenient to further restrict the set of trajectories, imposing that they start or end in \(B_{4}\), thereby replacing (5) by \[\Omega=\{(x,y)\in(B_{4}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{4})| \;\exists t\in[0,1]\;X(t)\in\bar{B}_{R}\}. \tag{12}\] The second smallness condition involves the solution itself, i. e. \(\pi\). It monitors the length of trajectories that start or end in \(B_{5}\). It does so in a square-averaged sense, like the total cost function itself. In fact, it is a localization of the cost functional (or energy, thus the letter \(E\)): \[E:=\int_{(B_{5}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{5})}|x-y|^{2 }d\pi. 
\tag{13}\] We expect (and shall rigorously argue in Subsection 3.4) that in the perturbative regime \(E+D\ll 1\)2 and for a suitable \(R\in[2,3]\) we have for the second r. h. s. term in (9) Footnote 2: Here and in the following we use the notation \(\ll 1\) to mean that there is \(\varepsilon>0\) such that the statement holds if \((E+D)\leq\varepsilon\). \[\int_{\Omega}\int_{\sigma}^{\tau}|\nabla\phi(X(t))|^{2}dtd\pi\approx\int_{B_{ R}}|\nabla\phi|^{2}. \tag{14}\] Indeed, for \(E\ll 1\), trajectories are short so that \[\int_{\Omega}\int_{\sigma}^{\tau}|\nabla\phi(X(t))|^{2}dtd\pi\approx\int_{\{x \in B_{R}\}}|\nabla\phi(x)|^{2}d\pi=\int_{B_{R}}|\nabla\phi|^{2}d\lambda,\] where the last identity follows from admissibility (1). Furthermore, for \(D\ll 1\), \(\lambda\) is close to Lebesgue so that \[\int_{B_{R}}|\nabla\phi|^{2}d\lambda\approx\int_{B_{R}}|\nabla\phi|^{2}.\] ### Connection to the Neumann problem for the Poisson equation Hence in order to achieve (4), in view of (9) and (14), we are led to minimize \[\int_{B_{R}}|\nabla\phi|^{2}-2\int_{B_{R}}\phi d(\mu-\lambda)-2\int_{\partial B _{R}}\phi d(g-f) \tag{15}\] in \(\phi\). A minimizer \(\phi\) of (15), if it exists as a continuously differentiable function on \(\bar{B}_{R}\), would be characterized by the Euler-Lagrange equation \[\int_{B_{R}}\nabla\zeta\cdot\nabla\phi-\int_{B_{R}}\zeta d(\mu-\lambda)-\int_{ \partial B_{R}}\zeta d(g-f)=0 \tag{16}\] for all continuously differentiable test functions \(\zeta\) on \(B_{R}\). If \(\phi\) even exists as a twice continuously differentiable function on \(\bar{B}_{R}\), we could appeal to the calculus identity \(\nabla\zeta\cdot\nabla\phi=\nabla\cdot(\zeta\nabla\phi)-\zeta\triangle\phi\) and the divergence theorem in form of \(\int_{B_{R}}\nabla\cdot(\zeta\nabla\phi)=\int_{\partial B_{R}}\zeta\nu\cdot \nabla\phi\), where \(\nu(x)=\frac{x}{R}\) denotes the outer normal to \(\partial B_{R}\) in a point \(x\), to obtain the integration by parts formula \[\int_{B_{R}}\nabla\zeta\cdot\nabla\phi=\int_{B_{R}}\zeta(-\triangle\phi)+\int_{ \partial B_{R}}\zeta\nu\cdot\nabla\phi. \tag{17}\] Hence (16) can be reformulated and regrouped as \[\int_{B_{R}}\zeta(-\triangle\phi-d(\mu-\lambda))+\int_{\partial B_{R}}\zeta(\nu \cdot\nabla\phi-d(g-f))=0. \tag{18}\] Considering first all test functions \(\zeta\)'s that vanish on \(\partial B_{R}\), we learn from (18) that \(-\triangle\phi=\mu-\lambda\) distributionally in \(B_{R}\). Since \(\mu-\lambda\) is a bounded measure, the first term in (18) thus vanishes also for test functions that do not vanish on \(\partial B_{R}\). Hence the second term in (18) vanishes individually, which means \(\nu\cdot\nabla\phi=g-f\) distributionally on \(\partial B_{R}\). Hence we end up with what is called the Poisson equation with Neumann boundary conditions \[-\triangle\phi=\mu-\lambda\;\text{in}\;B_{R},\quad\nu\cdot\nabla\phi=g-f\; \text{on}\;\partial B_{R}. \tag{19}\] This is a classical elliptic boundary value problem, which for sufficiently regular \(\mu-\lambda\) and \(g-f\) has a unique twice differentiable solution, provided (10) holds, and \[\int_{B_{R}}\phi=0 \tag{20}\] is imposed. This motivates the connection between optimal transportation and the (short) Neumann-Poisson problem. However, for rough (like sum of Diracs) measures \(\lambda,\mu\), and thus also rough measures \(f,g\), the solution \(\phi\) of (19), even if it exists for this linear problem, will be rough, too. In particular, (14) may not be true; even worse, both the l. h. s. and the r. h. s. 
might be infinite. Hence we shall approximate both \(\mu-\lambda\) and \(g-f\) by smooth functions (in fact, we shall approximate \(\mu-\lambda\) by a constant function). The best way to organize the output of Lemma 1 is given by **Corollary 1**.: _We have for any admissible \(\pi\) and any twice continuously differentiable function \(\phi\) on \(\bar{B}_{R}\)_ \[\int_{\Omega}\int_{\sigma}^{\tau}|\dot{X}(t)-\nabla\phi(X(t))|^{2} dtd\pi\] \[\leq\int_{\Omega}|x-y|^{2}d\pi-\int_{B_{R}}|\nabla\phi|^{2}\] \[+2\int_{B_{R}}\phi(-\triangle\phi-d(\mu-\lambda))+2\int_{\partial B _{R}}\phi(\nu\cdot\nabla\phi-d(g-f)) \tag{21}\] \[+\int_{\Omega}\int_{\sigma}^{\tau}|\nabla\phi(X(t))|^{2}dtd\pi- \int_{B_{R}}|\nabla\phi|^{2}.\] As we argued, see (14), we expect the term in last line (21) to be of higher order. The integrals in the second r. h. s. line can be made small by approximately solving (19) - and there will be a trade-off between making the last line and the second line small. However, the main open task is to argue, based on the optimality of \(\pi\), that the difference in the first r. h. s. line is small for an approximate solution of (19). Hence we turn to this task before dealing with the second and third line in Subsection 3.4. Proof of Corollary 1. The upgrade of identity (9) to inequality (21) relies on \[\int_{\Omega}\int_{\sigma}^{\tau}|\dot{X}(t)|^{2}dtd\pi \leq\int_{\Omega}|x-y|^{2}d\pi, \tag{23}\] \[\int_{B_{R}}|\nabla\phi|^{2} =\int_{B_{R}}\phi(-\triangle\phi)+\int_{\partial B_{R}}\phi\nu \cdot\nabla\phi, \tag{22}\] after adding and subtracting \(2\int_{B_{R}}|\nabla\phi|^{2}\). Inequality (22) follows from \(\int_{\sigma}^{\tau}|\dot{X}(t)|^{2}dt\leq\int_{0}^{1}|\dot{X}(t)|^{2}dt=|x-y| ^{2}\). Identity (23) follows from (17) for \(\zeta=\phi\). ### Localizing optimality As mentioned after Corollary 1, the main open task is to estimate the first r. h. s. line of (21). For this, we will (for the first time) use that \(\pi\) is optimal. In order to connect to the Neumann-Poisson problem on \(B_{R}\), we need to leverage optimality in a localized way. Of course, it will in general not be true that the cost of \(\pi\) localized to \((B_{R}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{R})\) is estimated by the transportation cost between the localized measures \(\lambda_{\vdash}B_{R}\) and \(\mu_{\vdash}B_{R}\). However, this is almost true if one adds the distribution of the entering points \(f\), see (7), and exiting points \(g\), see (8), respectively: **Lemma 2**.: _For \(\pi\) optimal we have_ \[\big{(}\int_{\Omega}|x-y|^{2}d\pi\big{)}^{\frac{1}{2}} \leq W(\lambda_{\!\!\sqcup}B_{R}+f,\mu_{\!\!\sqcup}B_{R}+g)\] \[+\big{(}2\int_{\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{R }\}}|x-y|^{2}d\pi\big{)}^{\frac{1}{2}}. \tag{24}\] Lemma 2 controls the transportation cost coming from those trajectories that spend some time in \(\bar{B}_{R}\), which amounts to the l. h. s. of (24) according to definition (12), by an OT problem localized to \(\bar{B}_{R}\) as described by the first r. h. s. term. It does so up to the transportation cost coming from those (fewer) trajectories that cross (or touch) the boundary \(\partial B_{R}\), see the second r. h. s. term. We shall argue in Lemma 5, cf. (66), that this last term (without the square root) is \(o(E)\) for a good choice of \(R\). As its form suggests, (24) has the structure of a triangle inequality. 
In fact, its proof has similarities with the proof of the triangle inequality for \(W\), using a disintegration (or conditioning) argument, c. f. [10, Section 5.1]. Proof of Lemma 2. We now introduce the distribution of \(x=X(0)\) under \(\pi\) conditioned on the event that the trajectory \(X\) enters at \(z\in\partial B_{R}\). In less probabilistic and more measure-theoretic terms ("disintegration"), we introduce the (weakly continuous) family of probability measures \(\{\lambda_{z}\}_{z\in\partial B_{R}}\) such that \[\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}\zeta(x,X(\sigma))\pi(dxdy)= \int_{\partial B_{R}}\int\zeta(x,z)\lambda_{z}(dx)f(dz), \tag{25}\] which is possible by (7). Here, \(\zeta\) is an arbitrary test function on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\). Likewise, we introduce the probability distribution \(\{\mu_{w}\}_{w\in\partial B_{R}}\) of the end points of trajectories that exit in \(w\): \[\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}\zeta(X(\tau),y)\pi(dxdy)=\int_{ \partial B_{R}}\int\zeta(w,y)\mu_{w}(dy)g(dw). \tag{26}\] Let \(\bar{\pi}\) denote an optimal plan for \(W(\lambda_{\!\!\sqcup}B_{R}+f,\mu_{\!\!\sqcup}B_{R}+g)\). Equipped with these objects, we now define a competitor \(\tilde{\pi}\) for \(\pi\) that mixes \(\pi\) with \(\bar{\pi}\), in the sense that it takes the trajectories from \(\pi\) that stay outside of \(\bar{B}_{R}\), the trajectories from \(\bar{\pi}\) that stay inside (the open) \(B_{R}\), and concatenates trajectories \(X\) from \(\pi\) that enter or exit \(\bar{B}_{R}\) with trajectories of \(\bar{\pi}\) that start or end in \(\partial B_{R}\): \[\int\zeta(x,y)\tilde{\pi}(dxdy) =\int_{\Omega^{c}}\zeta(x,y)\pi(dxdy)\] \[+\int_{B_{R}\times B_{R}}\zeta(x,y)\bar{\pi}(dxdy)\] \[+\int_{\partial B_{R}\times B_{R}}\int\zeta(x,y)\lambda_{z}(dx) \bar{\pi}(dzdy)\] \[+\int_{B_{R}\times\partial B_{R}}\int\zeta(x,y)\mu_{w}(dy)\bar{ \pi}(dxdw)\] \[+\int_{\partial B_{R}\times\partial B_{R}}\int\int\zeta(x,y)\mu_{ w}(dy)\lambda_{z}(dx)\bar{\pi}(dzdw) \tag{27}\] \[=:(1)+(2)+(3)+(4)+(5),\] see Fig. 2. It is straightforward to see that \(\tilde{\pi}\) has marginals \(\lambda\) and \(\mu\); by symmetry, it is sufficient to check the first condition in (1) by using (27) for a function \(\zeta=\zeta(x)\): Since \(\mu_{w}\) is a probability measure to the effect of \(\int\zeta(x)\mu_{w}(dy)=\zeta(x)\), the second and the fourth r. h. s. term of (27) combine to \(\int_{B_{R}\times\mathbb{R}^{d}}\zeta(x)\bar{\pi}(dxdy)\) because \(\bar{\pi}\), like \(\mu_{\ll}B_{R}+g\), is supported in \(\bar{B}_{R}\) (in \(dy\)). Likewise, the third and the fifth term combine to \(\int_{\partial B_{R}\times\mathbb{R}^{d}}\int\zeta(x)\ \lambda_{z}(dx)\ \bar{\pi}(dzdy)\). By admissibility of \(\bar{\pi}\), the combination of the second and fourth term gives \(\int_{B_{R}}\zeta(x)\ \mu(dx)\), which as in the proof of Lemma 1 (by admissibility of \(\pi\)) can be seen to be \(\int_{\Omega\cap\{X(\sigma)\in B_{R}\}}\zeta(x)\pi(dxdy)\). Since \(\int\zeta(x)\lambda_{z}(dx)\) does not depend on \(y\), for the same reason, the combination of the third and fifth term renders \(\int_{\partial B_{R}}\zeta(z)f(dz)\), which by definition (7) is equal to \(\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}\zeta(x)\) Figure 2. The terms in (27) \(\pi(dxdy)\). Hence these four terms combine to \(\int_{\Omega}\zeta(x)\pi(dxdy)\). Therefore, the r. h. s. of (27) collapses as desired to \(\int\zeta(x)\pi(dxdy)\), which coincides with \(\int\zeta(x)\lambda(dx)\) by admissibility of \(\pi\). 
By optimality of \(\pi\), we have \(\int|x-y|^{2}d\pi\leq\int|x-y|^{2}d\tilde{\pi}\); rewriting this as \(\int_{\Omega}|x-y|^{2}d\pi+\int_{\Omega^{c}}|x-y|^{2}d\pi\leq\int|x-y|^{2}d \tilde{\pi}\), and using (27) for \(\zeta(x,y)=|x-y|^{2}\), we gather \[\big{(}\int_{\Omega}|x-y|^{2}d\pi\big{)}^{\frac{1}{2}}\leq\|(f_{2},f_{3},f_{4},f_{5})\|, \tag{28}\] where the four functions \(f_{2},\cdots,f_{5}\geq 0\) are given by \[f_{2}(x,y):=|x-y|,\quad f_{5}^{2}(z,w):=\int\int|x-y|^{2}\mu_{w} (dy)\lambda_{z}(dx),\] \[f_{3}^{2}(z,y):=\int|x-y|^{2}\lambda_{z}(dx),\quad f_{4}^{2}(x,w ):=\int|x-y|^{2}\mu_{w}(dy),\] and the vector-valued \(L^{2}\)-type norm is defined through \[\|(f_{2},f_{3},f_{4},f_{5})\|^{2}\] \[=\int_{B_{R}\times B_{R}}f_{2}^{2}(x,y)\bar{\pi}(dxdy)+\int_{ \partial B_{R}\times B_{R}}f_{3}^{2}(z,y)\bar{\pi}(dzdy) \tag{29}\] \[+\int_{B_{R}\times\partial B_{R}}f_{4}^{2}(x,w)\bar{\pi}(dxdw)+ \int_{\partial B_{R}\times\partial B_{R}}f_{5}^{2}(z,w)\bar{\pi}(dzdw).\] By the triangle inequality w. r. t. \(L^{2}(\lambda_{z})\) and \(L^{2}(\mu_{w})\), and using that \(\lambda_{z}\), \(\mu_{w}\) are probability measures, we obtain \[f_{3}\leq|z-y|+\tilde{f}_{3},\quad f_{4}\leq|x-w|+\tilde{f}_{4},\quad f_{5} \leq|z-w|+\sqrt{2}\tilde{f}_{5}, \tag{30}\] where the three functions \(\tilde{f}_{3},\tilde{f}_{4},\tilde{f}_{5}\geq 0\) are defined by \[\tilde{f}_{3}^{2}(z,y):=\int|x-z|^{2}\lambda_{z}(dx),\quad\tilde{ f}_{4}^{2}(x,w):=\int|w-y|^{2}\mu_{w}(dy), \tag{32}\] \[\tilde{f}_{5}^{2}(z,w):=\tilde{f}_{3}^{2}(z,y)+\tilde{f}_{4}^{2} (x,w). \tag{31}\] The factor of \(\sqrt{2}\) in (30) arises because of \(\tilde{f}_{3}+\tilde{f}_{4}\leq\sqrt{2}\tilde{f}_{5}\). From (30) we obtain by the triangle inequality for \(\|\cdot\|\) (33) \[\big{(}\int_{\Omega}|x-y|^{2}d\pi\big{)}^{\frac{1}{2}}\overset{( \ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq: is equal to \(\int_{\partial B_{R}}\int|x-z|^{2}\lambda_{z}(dx)f(dz)\,+\int_{\partial B_{R}} \int|w-y|^{2}\mu_{y}(dy)g(dw)\). By definitions (25) and (26), this coincides with \(\int_{\Omega\cap\{X(\sigma)\in\partial B_{R}\}}|x-X(\sigma)|^{2}d\pi\,+\int_{ \Omega\cap\{X(\tau)\in\partial B_{R}\}}|X(\tau)-y|^{2}d\pi\). Since we have \(|x-X(\sigma)|^{2}\)\(+|X(\tau)-y|^{2}\leq|x-y|^{2}\), this sum is \(\leq\int_{\Omega\cap\{(X(\sigma)\in\partial B_{R}\}\cup\{X(\tau)\in\partial B_{ R}\})}|x-y|^{2}d\pi\). Note that this set of integration coincides with \(\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{R}\}\), as desired. ### Constructing a competitor based on the Neumann-Poisson problem As mentioned after Corollary 1, the remaining task is to estimate the first r. h. s. line of (21). For this, we will use Lemma 2 and construct a competitor for \(W(\lambda_{\!\!\!\sqcup}B_{R}+f,\mu_{\!\!\!\sqcup}B_{R}+g)\) based on \(\phi\), the solution of the Neumann-Poisson problem (19), where we momentarily think of the measures \(\lambda,\mu\) as having continuous densities with respect to the Lebesgue measure. **Lemma 3**.: (34) \[W^{2}(\lambda_{\!\!\!\sqcup}B_{R}+f,\mu_{\!\!\!\sqcup}B_{R}+g)\leq\frac{1}{ \min\{\min_{\bar{B}_{R}}\lambda,\min_{\bar{B}_{R}}\mu\}}\int_{B_{R}}|\nabla\phi |^{2}.\] Lemma 3 makes a second dilemma apparent: The intention was to use it in conjunction with Lemma 2 to obtain an estimate on the first r. h. s. line in (21). 
This however would require that we have \(\min_{\bar{B}_{R}}\lambda,\min_{\bar{B}_{R}}\mu\,\hbox to 0.0pt{\lower 3.0pt \hbox{$\sim$}}\raise 1.0pt\hbox{$>$}\,1\), so a (one-sided) closeness of \(\mu\) and \(\lambda\) to the Lebesgue measure in a strong topology, as opposed to the closeness in a weak topology as expressed by (11). Hence this provides another reason for approximating \(\lambda\) and \(\mu\) by more regular versions. Proof Lemma 3. The proof is short if one uses the Benamou-Brenier formulation in its distributional version, as we shall do. We recommend [10, Section 6.1] to the reader regarding more details on the Benamou-Brenier formulation. For every \(t\in[0,1]\) we introduce the (singular non-negative) measure \[\rho_{t}:=t(\mu_{\!\!\!\sqcup}B_{R}+g)+(1-t)(\lambda_{\!\!\!\sqcup}B_{R}+f) \tag{35}\] and the (\(t\)-independent) vector-valued measure \[j_{t}:=\nabla\phi dx_{\!\!\!\sqcup}B_{R}. \tag{36}\] We note that (19) in its distributional form of (16) can be re-expressed as \[\frac{d}{dt}\int\zeta d\rho_{t}=\int\nabla\zeta\cdot dj_{t} \tag{37}\] for all test functions \(\zeta\). In the jargon of the Benamou-Brenier formulation, which is inspired from continuum mechanics, \(\rho_{t}\) is a (mass) density, \(j_{t}\) is a flux, and (37) is the distributional version of the continuity equation \(\partial_{t}\rho_{t}+\nabla\cdot j_{t}=0\) expressing conservation of mass. Following Benamou-Brenier one takes the Radon-Nikodym derivative \(\frac{dj_{t}}{d\rho_{t}}\) of the (vectorial) measure \(j_{t}\) w. r. t. \(\rho_{t}\) (it plays the role of an Eulerian velocity field), and considers the expression that corresponds to the total kinetic energy: \[\frac{1}{2}\int\bigg{|}\frac{dj_{t}}{d\rho_{t}}\bigg{|}^{2}d\rho_{t}:=\sup \bigg{\{}\int\xi\cdot dj_{t}-\int\frac{1}{2}|\xi|^{2}d\rho_{t}\bigg{\}}\in[0, \infty], \tag{38}\] where the supremum is taken over all continuous vector fields \(\xi\) with compact support. Benamou-Brenier (see [10, Section 5.4]) gives \[W^{2}(\rho_{0},\rho_{1})\leq\int_{0}^{1}\int\bigg{|}\frac{dj_{t}}{d\rho_{t}} \bigg{|}^{2}d\rho_{t}dt. \tag{39}\] Since in our case, \(j_{t}\) is supported in (the open) \(B_{R}\), see (36), in the r. h. s. of (38) we may restrict ourselves to \(\xi\) supported in \(B_{R}\). For these \(\xi\)'s, definition (35) yields \(\int\xi\cdot dj_{t}-\int\frac{1}{2}|\xi|^{2}d\rho_{t}=\int_{B_{R}}\big{(}\xi \cdot\nabla\phi\)\(-\frac{1}{2}|\xi|^{2}(t\mu+(1-t)\lambda)\big{)}\). By Young's inequality in form of \(\xi\cdot\nabla\phi\leq\frac{1}{2}(t\mu+(1-t)\lambda)|\xi|^{2}+\frac{1}{2(t\mu+ (1-t)\lambda)}|\nabla\phi|^{2}\) we thus obtain for the r. h. s. of (39) \[\int\bigg{|}\frac{dj_{t}}{d\rho_{t}}\bigg{|}^{2}d\rho_{t}\leq\int_{B_{R}}\frac {|\nabla\phi|^{2}}{t\mu+(1-t)\lambda}\leq\frac{1}{\min\{\min_{\bar{B}_{R}} \lambda,\min_{\bar{B}_{R}}\mu\}}\int_{B_{R}}|\nabla\phi|^{2}. \tag{40}\] Since by definition (35), the l. h. s. of (39) coincides with the l. h. s. of (34), we are done. ## 3. Harmonic approximation The purpose of this section is to establish that the displacement in an optimal plan \(\pi\) can locally be approximated by a harmonic gradient \(\nabla\phi\) (by which we mean that for each Cartesian direction \(i=1,\cdots,d\), the component \(\partial_{i}\phi\) is harmonic, as a consequence of \(-\triangle\phi=const\)). This holds provided we are in the perturbative regime, see Subsection 2.2, where \(E\) and \(D\) are defined. 
More precisely, given any fraction \(0<\theta\ll 1\), there exists a threshold \(\epsilon>0\) for \(E+D\) so that below that threshold, the l. h. s. of (40) is only a fraction \(\theta\) of \(E\), plus a possibly large multiple of \(D\). **Proposition 1**.: _For every \(\theta>0\), there exist \(\epsilon(d,\theta)>0\) and \(C(d,\theta)<\infty\) such that the following holds. Let \(\pi\) be optimal for \(\lambda,\mu\); provided \(E+D\leq\epsilon\), there exists a harmonic \(\nabla\phi\) on \(B_{1}\) such that_ \[\int_{(B_{1}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{1})}| (y-x)-\nabla\phi(x)|^{2}d\pi\leq\theta E+CD, \tag{42}\] \[\int_{B_{1}}|\nabla\phi|^{2}\leq C(E+D). \tag{41}\] (The proof actually reveals an explicit dependence of \(\epsilon\) and \(C\) on \(\theta\).) We will obtain \(\nabla\phi\) by solving the Neumann-Poisson problem \[-\triangle\phi=\frac{\mu(B_{R})}{|B_{R}|}-\frac{\lambda(B_{R})}{|B_{R}|}\; \text{in}\;B_{R}\quad\text{and}\quad\nu\cdot\nabla\phi=\bar{g}-\bar{f}\;\text{ on}\;\partial B_{R}, \tag{42}\] where \(\bar{f},\bar{g}\) are suitable regular approximations of \(f,g\), which are constructed in the nonlinear approximation Lemma 4, which also guides the choice of \(R\in[2,3]\). (In fact, in Subsection 3.4, we will replace \(\bar{g}-\bar{f}\) by its mollification.) We note that \(\bar{f},\bar{g}\in L^{2}(\partial B_{R})\) provides sufficient regularity: Indeed, according to (23) and (42) we have \(\int_{B_{R}}|\nabla\phi|^{2}=\int_{\partial B_{R}}\phi(\bar{g}-\bar{f})\), recalling the normalization \(\int_{B_{R}}\phi=0\). Applying Cauchy-Schwarz and then the Poincare-trace estimate \(\int_{\partial B_{R}}\phi^{2}\leq C_{P}\int_{B_{R}}|\nabla\phi|^{2}\), we obtain \[\int_{B_{R}}|\nabla\phi|^{2}\leq C_{P}\int_{\partial B_{R}}(\bar{g}-\bar{f})^ {2}. \tag{43}\] In particular, (41) is a consequence of (51), for a suitable choice of \(R\in[2,3]\). By an application of Lemma 3 to the setting of (42), we have \[W^{2}(\frac{\lambda(B_{R})}{|B_{R}|}dx_{\vdash}B_{R}+\bar{f}, \frac{\mu(B_{R})}{|B_{R}|}dx_{\vdash}B_{R}+\bar{g}) \tag{44}\] \[\leq\frac{1}{\min\{\frac{\lambda(B_{R})}{|B_{R}|},\frac{\mu(B_{R} )}{|B_{R}|}\}}\int_{B_{R}}|\nabla\phi|^{2}.\] Working with (42) instead of (19) creates the additional task of estimating the first r. h. s. term (24) of Lemma 2 by the l. h. s. of (44), which is conveniently done with help of the triangle inequality: \[W(\lambda_{\vdash}B_{R}+f, \mu_{\vdash}B_{R}+g)\leq W(\frac{\lambda(B_{R})}{|B_{R}|}dx_{ \vdash}B_{R}+\bar{f},\frac{\mu(B_{R})}{|B_{R}|}dx_{\vdash}B_{R}+\bar{g})\] \[+W(\lambda_{\vdash}B_{R},\frac{\lambda(B_{R})}{|B_{R}|}dx_{ \vdash}B_{R})+\text{same term with }\lambda\rightsquigarrow\mu \tag{45}\] \[+W(f,\bar{f})+\text{same term with }f\rightsquigarrow g.\] We now return to the first r. h. s. line in Corollary 1; in view of the elementary \[\int_{\Omega}|x-y|^{2}d\pi-\int_{B_{R}}|\nabla\phi|^{2} \tag{46}\] \[\leq 2\big{(}\int_{\Omega}|x-y|^{2}d\pi\big{)}^{\frac{1}{2}} \Big{(}\big{(}\int_{\Omega}|x-y|^{2}d\pi\big{)}^{\frac{1}{2}}-\big{(}\int_{B_{ R}}|\nabla\phi|^{2}\big{)}^{\frac{1}{2}}\Big{)},\] and noting that by definitions (12) and (13), the first r. h. s. factor is estimated by \(E\); by Young's inequality, it suffices to estimate the second factor. 
Combining (24), (44) and (45) we see that it is \(\leq\) \[\big{(}\frac{1}{(\min\{\frac{\lambda(B_{R})}{|B_{R}|},\frac{\mu(B_{R})}{|B_{R}|} \})^{\frac{1}{2}}}-1\big{)}\big{(}\int_{B_{R}}|\nabla\phi|^{2}\big{)}^{\frac{1} {2}} \tag{47}\] plus \[\big{(}2\int_{\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{R }\}}|x-y|^{2}d\pi\big{)}^{\frac{1}{2}}\] \[+W(\lambda_{\mathbb{L}}B_{R},\frac{\lambda(B_{R})}{|B_{R}|}dx \llcorner B_{R})+\text{same term with }\lambda\rightsquigarrow\mu \tag{48}\] \[+W(f,\bar{f})+\text{same term with }f\rightsquigarrow g.\] We expect (and will show for a good radius \(R\in[2,3]\)) that in the regime \(D=o(1)\), the prefactor on the r. h. s. of (47) is \(O(\sqrt{D})\), and that the second line in (48) is \(O(\sqrt{D})\), as consistent with (40). These two technicalities are stated in Lemma 6. The main task is thus to control the last line in (48). ### Approximating the boundary data The main remaining task is to identify a good radius \(R\in[2,3]\) and to construct \(\bar{f}\) and \(\bar{g}\). Again, there is a trade-off/conflict of interest: * On the one hand, the Neumann boundary data \(\bar{g}-\bar{f}\) have to be sufficiently regular so that the solution \(\phi\) of (42) is. In particular, we need (41) (with \(B_{1}\) replaced by the larger \(B_{R}\)) to obtain that the error (47) is \(o(E+D)\). Via (43), this is ensured by (51) in the upcoming Lemma 4. In fact, it even yields uniform integrability of \(|\nabla\phi|^{2}\) on \(B_{R}\), which is crucial to show that also the last line in (21) is \(o(E+D)\). * On the other hand, \((\bar{f},\bar{g})\) has to be sufficiently close to \((f,g)\). In particular, in view of the last term in (48) we need \(W^{2}(f,\bar{f})\)\(+W^{2}(g,\bar{g})=o(E)+O(D)\). This is ensured by (50) in the upcoming Lemma 4. Here, as for (24), we will eventually need to appeal to \(\int_{1}^{2}\int_{\{\exists t\in[0,1]\;X(t)\in\partial B_{R}\}}|x-y|^{2}d\pi \,dR=o(E+D)\), see Subsection 3.2. In the upcoming approximation lemma we restrict to \(g\) for brevity. **Lemma 4**.: _We suppose that_ \[(X\in\Omega\implies y\in B_{5})\quad\text{for}\;(x,y)\in\mathrm{supp}\pi. \tag{49}\] _Then for every \(R\in[2,3]\) there exists a non-negative function \(\bar{g}_{R}\) on \(\partial B_{R}\) such that_ \[W^{2}(g_{R},\bar{g}_{R}) \leq 8\big{(}\int_{\Omega\cap\{\exists t\in[0,1]\;X(t)\in \partial B_{R}\}}|x-y|^{2}d\pi+D\big{)}, \tag{51}\] \[\int_{2}^{3}\int_{\partial B_{R}}\bar{g}_{R}^{2}\,dR \leq 5^{d-1}\kappa_{\mu}(3E+D). \tag{50}\] Note that we put an index \(R\) on \(g\) because the definition (8) obviously depends on \(R\). Proof of Lemma 4. We fix an \(R\in[2,3]\) and start with the construction of \(\bar{g}_{R}\), momentarily returning to our short-hand notation \(\bar{g}\). Let \(\bar{\pi}\) be optimal for \(W^{2}(\mu_{\!\!\!\sqcup}B_{5},\kappa_{\mu}dz_{\!\!\!\sqcup}B_{5})\); note that \(\bar{\pi}\) is supported on \(B_{5}\times B_{5}\). We extend it (trivially) by the identity to \(\mathbb{R}^{d}\times\mathbb{R}^{d}\); the extension (which we still call) \(\bar{\pi}\) is admissible for \(W^{2}(\mu,\kappa_{\mu}dz_{\!\!\!\sqcup}B_{5}+\mu_{\!\!\!\sqcup}B_{5}^{c})\). We retain (52) \[\int|y-z|^{2}d\bar{\pi}=W^{2}(\mu_{\!\!\!\sqcup}B_{5},\kappa_{\mu}dz_{\!\!\! 
\sqcup}B_{5})\stackrel{{\eqref{eq:ww-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-zz-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-zz-z-zz-z-z-z-z-z-z-zz-z-z-z-z-z-z-z-z-zz-z-z-z-zz-z-zz-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-zz-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-z-zz- Hence \(g^{\prime}\) admits a Lebesgue density, which we still denote by \(g^{\prime}\) and that satisfies \[g^{\prime}\leq\kappa_{\mu}. \tag{57}\] Finally, we radially project \(g^{\prime}\) onto \(\partial B_{R}\): \[\int\zeta d\bar{g}=\int\zeta(R\frac{z}{|z|})g^{\prime}(dz), \tag{58}\] see Fig. 3. This concludes the construction of \(\bar{g}\), we now turn to its estimate. We start with (50) and note that an admissible plan for \(W^{2}(g,\bar{g})\) is given by \[\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}\zeta(X(\tau),R\frac{z}{|z|})d \tilde{\pi}.\] Indeed, on the one hand, for \(\zeta\) only depending on the first variable, \(\tilde{\pi}\) may be replaced by \(\pi\) according to the first item in (54) so that we obtain \(\int\zeta dg\) by its definition (8). On the other hand, for \(\zeta\) only depending on the second variable, we obtain \(\int\zeta d\bar{g}\) by combining (55) and (58). Hence we have \[W^{2}(g,\bar{g})\leq\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}|X(\tau)-R \frac{z}{|z|}|^{2}d\tilde{\pi}. \tag{59}\] Since \(X(\tau)\in\partial B_{R}\), an elementary geometric argument on the radial projection yields for the integrand \(|X(\tau)-R\frac{z}{|z|}|\leq 2|X(\tau)-z|\), so that \(|X(\tau)-R\frac{z}{|z|}|^{2}\leq 8(|x-y|^{2}+|y-z|^{2})\). 
This allows us to appeal to the compatibility (54) and to (52): \[W^{2}(g,\bar{g})\leq 8\big{(}\int_{\Omega\cap\{\exists t\in[0,1]\ X(t)\in \partial B_{R}\}}|x-y|^{2}d\pi+D\big{)}.\] In preparation for establishing (51), we first provide an estimate of the measure \(g^{\prime}\) defined in (55), which shows that it is concentrated near \(\partial B_{R}\), see (60). By definition (55) we have \[\int||z|-R|dg^{\prime}=\int_{\Omega\cap\{X(\tau)\in\partial B_{R}\}}||z|-R| \tilde{\pi}(dxdydz).\] Since \(|X(\tau)|=R\), we may write \(||z|-R|=||z|-|X(\tau)||\leq|x-y|\)\(+|y-z|\). Since \(X(\tau)\in\partial B_{R}\) implies \(\min_{[0,1]}|X|\leq R\leq\max_{[0,1]}|X|\), and \(X(1)\in B_{5}\) by assumption (49), we thus obtain \[\int||z|-R|dg^{\prime}\] \[\leq\int_{\{y\in B_{5}\}\cap\{\min_{[0,1]}|X|\leq R\leq\max_{[0,1 ]}|X|\}}(|x-y|+|y-z|)\tilde{\pi}(dxdydz).\] Making the index \(R\) appear and integrating in \(R\), this gives \[\int_{2}^{3}\int||z|-R|dg^{\prime}_{R}\,dR\] \[\leq\int_{\{y\in B_{5}\}}(\max_{[0,1]}|X|-\min_{[0,1]}|X|)(|x-y|+| y-z|)\tilde{\pi}(dxdydz).\] Using \(\max_{[0,1]}|X|-\min_{[0,1]}|X|\leq|x-y|\) and then Young's inequality in form of \(|x-y|(|x-y|+|y-z|)\leq\frac{3}{2}|x-y|^{2}+\frac{1}{2}|y-z|^{2}\) we thus obtain from (54) \[\int_{2}^{3}\int||z|-R|dg^{\prime}_{R}\,dR\] \[=\frac{3}{2}\int_{\{y\in B_{5}\}}|x-y|^{2}\pi(dxdy)+\frac{1}{2} \int|y-z|^{2}\bar{\pi}(dydz).\] By definition (13) and by (52) this turns into \[\int_{2}^{3}\int||z|-R|dg^{\prime}_{R}\,dR\leq\frac{3}{2}E+\frac{1}{2}D. \tag{60}\] In order to pass from (60) to (51) we need \[\int_{\partial B_{R}}\frac{1}{2}\bar{g}^{2}\leq 5^{d-1}\kappa_{\mu}\int||z|-R|dg^ {\prime}. \tag{61}\] Here comes the argument for (61): By (57), it follows once we show for some density \(g^{\prime}\) with (56) \[\int_{\partial B_{R}}\frac{1}{2}\bar{g}^{2}\leq 5^{d-1}(\operatorname{ess} \sup g^{\prime})\int||z|-R|g^{\prime}dz. \tag{62}\] Introducing polar coordinates \(z=r\hat{z}\) with \(r\in(0,\infty)\) and \(\hat{z}\in\partial B_{1}\), which are natural to re-express (58), (62) reduces to the single-variable statement \[\frac{1}{2}\big{(}\int g^{\prime}r^{d-1}dr\big{)}^{2}\leq 5^{d-1}(\operatorname{ ess\sup}\kappa)\int|r-R|g^{\prime}r^{d-1}dr.\] It is convenient to rephrase this in terms of \(\tilde{g}=g^{\prime}r^{d-1}\); since because of (56) we have \(\operatorname{ess\sup}\tilde{g}\leq 5^{d-1}\operatorname{ess\sup}g^{\prime}\), it suffices to show for an arbitrary function \(\tilde{g}\geq 0\) of \(r\in(-\infty,\infty)\) that \[\frac{1}{2}\big{(}\int\tilde{g}dr\big{)}^{2}\leq(\operatorname{ess\sup}\tilde{ g})\int|r-R|\tilde{g}dr. \tag{63}\] The argument for (63) is elementary: By translation in \(r\), we may assume \(R=0\); by homogeneity in \(\tilde{g}\), we may assume \(\operatorname{ess\sup}\tilde{g}=1\), that is, \(\tilde{g}\in[0,1]\). We now change perspective and seek to minimize the r. h. s. \(\int|r|\tilde{g}dr\) under constraining the l. h. s. through prescribing \(m=\int\tilde{g}dr\). By linearity of \(\int|r|\tilde{g}dr\) in \(\tilde{g}\), this functional assumes its minimum on extremal points w. r. t. the constraints \(\tilde{g}\in[0,1]\) and \(\int\tilde{g}dr=m\). Those are characteristic functions of sets of Lebesgue measure \(m\). Clearly, the set \(I\) with \(|I|=m\) that minimizes \(\int_{I}|r|dr\) is given by \(I=[-\frac{m}{2},\frac{m}{2}]\); the minimum is \(\frac{m^{2}}{2}\), as desired. ### Crossing trajectories In view of the second r. h. s. term in (24) of Lemma 2 and the first r. h. s. 
term in (50) of Lemma 4, we need to argue that for a suitable radius \(R\), the trajectories crossing \(\partial B_{R}\) do not contribute much to \(E\). This is the only part of the argument where we directly rely on the optimality criterion for \(\pi\), namely the cyclical monotonicity of its support, c. f. [10, Section 1.6.2]. We just need it in form of plain monotonicity: \[(x-x^{\prime})\cdot(y-y^{\prime})\geq 0\quad\text{for all }(x,y),(x^{\prime},y^{ \prime})\in\operatorname{supp}\pi. \tag{64}\] This in fact implies that all trajectories are short in our regime of \(E+D\ll 1\): **Lemma 5**.: _We have_ \[|x-y|=o(1)\quad\text{for }(x,y)\in\big{(}(B_{4}\times\mathbb{R}^{d} )\cup(\mathbb{R}^{d}\times B_{4})\big{)}\cap\operatorname{supp}\pi, \tag{66}\] \[\int_{2}^{3}\int_{\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B _{R}\}}|x-y|^{2}d\pi dR=o(E+D),\] (67) \[\int_{2}^{3}\pi\big{(}\Omega\cap\{\exists t\in[0,1]\;X(t)\in \partial B_{R}\}\big{)}dR=o(1). \tag{65}\] As a consequence of (66) we may indeed chose \(R\in[2,3]\) so that the terms in (24) and (50) are \(o(E)+O(D)\). As a consequence of (65), also (49) is satisfied. Proof of Lemma 5. We start by deriving (66) and (67) from (65). As in the proof of (51) we note that \(X(t)\in\partial B_{R}\) implies \(R\leq\max_{[0,1]}|X|\) and thus \[\int_{2}^{3}\int_{\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{R }\}}|x-y|^{2}\pi(dxdy)dR\] \[\leq\int_{(B_{3}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{ 3})}(\max_{[0,1]}|X|-\min_{[0,1]}|X|)|x-y|^{2}\pi(dxdy).\] Since \(\max_{[0,1]}|X|-\min_{[0,1]}|X|\leq|x-y|\), (66) now follows from (65) by the definition (13). Likewise, we have \[\int_{2}^{3}\pi(\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{ R}\})dR\] \[=o(1)\pi((B_{3}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_ {3})).\] By (1) we obtain that \(\pi((B_{3}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{3}))\leq\lambda(B _{5})+\mu(B_{5})\), so that we may appeal to (11) to the effect of \(\lambda(B_{5})\leq|B_{5}|(1+\sqrt{D})\lesssim 1\). Here and in the following we use \(\lesssim\) to mean up to constants that only depend on \(d\). We now turn to proving (65); a more explicit but pedestrian argument can be found in [5, Lemma 2.9]. Since \(E\ll 1\), we expect there to be many short trajectories. We exploit monotonicity in order to upgrade this to a statement about all trajectories. By definition (12) and due to \((x\leftrightarrow y)\)-symmetry, we may assume that \(x\in B_{4}\). We will use (64) in form of \[(x-y)\cdot(x-x^{\prime})\leq\frac{3}{2}|x-x^{\prime}|^{2}+\frac{1}{2}|x^{ \prime}-y^{\prime}|^{2}. \tag{68}\] For \(\zeta\geq 0\) supported in \(B_{5}\) we integrate (68) against \(\zeta(x^{\prime})\pi(dx^{\prime}dy^{\prime})\). Using the admissibility of \(\pi\) and the definition (13) of \(E\), we find \[(\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{\text{\ as \(D\ll 1\), uniformly in the center but at fixed radius (as the subscript in \(o_{r}(1)\) is to indicate). 
Specifying \(\hat{\zeta}\) to have unit integral and vanishing first moment, this yields \[\int(x-x^{\prime})\zeta(x^{\prime})\lambda(dx^{\prime}) =r^{d}(x-x_{0}^{\prime})+o_{r}(1),\] \[\int|x-x^{\prime}|^{2}\zeta(x^{\prime})\lambda(dx^{\prime}) \lesssim r^{d}(|x-x_{0}^{\prime}|^{2}+r^{2})+o_{r}(1).\] This prompts the choice of the center \(x_{0}^{\prime}=x-r\frac{x-y}{|x-y|}\), which is admissible since in view of \(x\in B_{4}\) we still have that \(\zeta\) is supported in \(B_{5}\), to the effect of \[(x-y)\cdot\int(x-x^{\prime})\zeta(x^{\prime})\lambda(dx^{\prime} ) =r^{d+1}|x-y|+o_{r}(|x-y|),\] \[\int|x-x^{\prime}|^{2}\zeta(x^{\prime})\lambda(dx^{\prime}) \lesssim r^{d+2}+o_{r}(1).\] Inserting this into (69) yields \[|x-y|\lesssim r+\frac{E}{r^{d+1}}+o_{r}(|x-y|)+o_{r}(1).\] We now may conclude: We first choose \(r\) so that the first r. h. s. term is small; we then choose \(E\) so that the second term is small, and we finally choose \(D\) so small such that the third term may be absorbed into the l. h. s. and such that the last term is small. ### Restricting the data term \(D\) While the data term \(D\) is defined w. r. t. to \(B_{5}\), we rather need it w. r. t. \(B_{R}\) for our suitably chosen \(R\in[2,3]\). This is most prominent in the middle line of (48) and the prefactor of (47), but also related to the middle and last line on the r. h. s. of (21), as was discussed in Subsection 2.2. Annoyingly, this restriction property for \(D\) does not come for free and and requires arguments similar to the ones used in the proof of Lemma 4. By symmetry, it is enough to consider \(\lambda\). **Lemma 6**.: _With \(\kappa_{R}=\frac{\lambda(B_{R})}{|B_{R}|}\) we have_ \[\int_{2}^{3}W^{2}(\lambda_{\ll}B_{R},\kappa_{R}dx_{\ll}B_{R})+(\kappa_{R}-1)^ {2}dR=O(D). \tag{70}\] For the proof of Lemma 6, it is convenient to have the following two extensions of Lemma 3 on the relationship between OT and the Poisson-Neumann problem; the first is a rough generalization, the second provides the reverse relationship in a restricted setting: **Corollary 2**.: (71) \[W^{2}(\lambda_{\ll}B_{R}+f,\mu_{\ll}B_{R}+g)\leq\frac{4}{\min_{B_{R}}\mu}\int_ {B_{R}}|\nabla\phi|^{2}.\] **Lemma 7**.: _Provided \(f,g\equiv 0\) in (19),_ \[W^{2}(\lambda_{\ll}B_{R},\mu_{\ll}B_{R})\geq\frac{1}{\max\{\max_{\bar{B}_{R}} \lambda,\max_{\bar{B}_{R}}\mu\}}\int_{B_{R}}|\nabla\phi|^{2}. \tag{72}\] Proof of Corollary 2. We first argue that for arbitrary \(\lambda,\mu\) and \(0\leq M<\infty\), \[W(\lambda,\mu)\leq\frac{1}{\sqrt{1+M}-\sqrt{M}}W(\lambda+M\mu,(1+M)\mu). \tag{73}\] Indeed, by scaling we have \(W(\lambda,\mu)=\frac{1}{\sqrt{1+M}}W((1+M)\lambda,(1+M)\mu)\). By the triangle inequality, \(W((1+M)\lambda,(1+M)\mu)\leq W((1+M)\lambda,\,\lambda+M\mu)\)\(+W(\lambda+M\mu,(1+M)\mu)\), which we combine with the obvious \(W((1+M)\lambda,\lambda+M\mu)=W(M\lambda,M\mu)\). Once more by scaling, \(W(M\lambda,M\mu)\)\(=\sqrt{M}W(\lambda,\mu)\), so that we may absorb. We now argue that with help of Lemma 3 we obtain \[W^{2}(\lambda_{\ll}B_{R}+f,\mu_{\ll}B_{R}+g)\leq\frac{1}{M(\sqrt{1+M}-\sqrt{M })^{2}}\frac{1}{\min_{\bar{B}_{R}}\mu}\int_{B_{R}}|\nabla\phi|^{2},\] which yields (71) under \(M\uparrow\infty\). Indeed, we first apply (73) with \((\lambda,\mu)\) replaced by \((\lambda_{\ll}B_{R}+f,\mu_{\ll}B_{R}+g)\). 
We then use (34) to estimate \(W^{2}(\lambda_{\ll}B_{R}+f+M(\mu_{\ll}B_{R}+g),\,(1+M)(\mu_{\ll}B_{R}+g))\), noting that when taking the difference of \((1+M)\mu\) and \(\lambda+M\mu\) and of \((1+M)g\) and \(f+Mg\), the \(M\)-dependent terms drop out and thus does not affect the definition (19) of \(\phi\). It remains to observe that the minimum of the Lebesgue densities of \((1+M)\mu\) and \(\lambda+M\mu\) on \(\bar{B}_{R}\) is bounded below by \(M\min_{\bar{B}_{R}}\mu\). Proof of Lemma 7. We recall the Benamou-Brenier formulation from Lemma 3 and note that for the optimal \((\rho_{t},j_{t})\) \[W^{2}(\lambda_{\ll}B_{R},\mu_{\ll}B_{R})\geq\int_{0}^{1}\int\left|\frac{dj_{t }}{d\rho_{t}}\right|^{2}d\rho_{t}dt. \tag{74}\] Setting \(M:=\max\{\max_{\bar{B}_{R}}\lambda,\max_{\bar{B}_{R}}\mu\}<\infty\), so that \(\lambda_{\ll}B_{R}\), \(\mu_{\ll}B_{R}\)\(\leq Mdx_{\ll}B_{R}\), we have by McCann's displacement convexity (in conjunction with the convexity of \(B_{R}\)), c.f. [10, Section 7.3], \[\rho_{t}\leq Mdx_{\ll}B_{R} \tag{75}\] for all \(t\in[0,1]\). In view of definition (38), we thus obtain for any test vector field \(\xi\) \[\int\xi\cdot dj_{t}\leq\frac{1}{2}\int|\frac{dj_{t}}{d\rho_{t}}|^{2}d\rho_{t}+ \frac{M}{2}\int_{B_{R}}|\xi|^{2}.\] After integration in \(t\in[0,1]\), replacing \(\xi\) by a \(a\xi\) and optimizing in the constant \(a>0\), this yields \(\int\xi\cdot dj_{t}\leq(\int|\frac{dj_{t}}{d\rho_{t}}|^{2}d\rho_{t}\ M\int_{B_ {R}}|\xi|^{2})^{\frac{1}{2}}\). In conjunction with (74) this implies \[\int_{0}^{1}j_{t}dt\ll dx_{\llcorner}B_{R} \tag{76}\] \[\text{and}\quad\int_{B_{R}}|\int_{0}^{1}j_{t}dt|^{2}\leq MW^{2}( \lambda_{\llcorner}B_{R},\mu_{\llcorner}B_{R}),\] where we identify the measure \(\int_{0}^{1}j_{t}dt\) with its Lebesgue density. From the admissibility condition (37) we obtain by integration in \(t\in[0,1]\) and using \(\rho_{t=0}=\lambda_{\llcorner}B_{R}\), \(\rho_{t=1}=\mu_{\llcorner}B_{R}\) \[-\nabla\cdot\int_{0}^{1}j_{t}dt=\mu_{\llcorner}B_{R}-\lambda_{\llcorner}B_{R} \quad\text{distributionally on $\mathbb{R}^{d}$}.\] Thus by (19) (with \(f,g\equiv 0\)), we have that \(\int_{0}^{1}j_{t}dt-\nabla\phi\) (with \(\nabla\phi\) trivially extended beyond \(\bar{B}_{R}\)) is distributionally divergence-free in \(\mathbb{R}^{d}\). Extending \(\phi\) in a (compactly supported) \(C^{1}\)-manner outside of \(\bar{B}_{R}\) and testing with this extension, because \(\int_{0}^{1}j_{t}dt\) is supported in \(B_{R}\), cf. (76), we obtain \(\int_{B_{R}}\nabla\phi\cdot(\int_{0}^{1}j_{t}dt-\nabla\phi)=0\). By Cauchy-Schwarz this yields \[\int_{B_{R}}|\nabla\phi|^{2}\leq\int_{B_{R}}|\int_{0}^{1}j_{t}dt|^{2}.\] Combining this with (76) we obtain (72). Proof of Lemma 6. Inside this proof, we denote by \(\pi\) the optimal plan in \(W^{2}(\lambda_{\llcorner}B_{5},\kappa_{\lambda}dx_{\llcorner}B_{5})\). Following the proof of Lemma 4, we monitor where entering and exiting trajectories end up: \[\int\zeta df^{\prime} =\int_{\Omega\cap\{X(0)\not\in B_{R}\}\cap\{X(1)\in B_{R}\}}\zeta (X(1))d\pi, \tag{77}\] \[\int\zeta dg^{\prime} =\int_{\Omega\cap\{X(0)\in B_{R}\}\cap\{X(1)\not\in B_{R}\}}\zeta (X(1))d\pi.\] Clearly, the non-negative measures \(f^{\prime},g^{\prime}\) are supported in \(B_{R}\) and \(\bar{B}_{5}-B_{R}\), respectively, and satisfy \(f^{\prime},g^{\prime}\leq\kappa_{\lambda}\). We introduce the corresponding mass densities w. r. t. 
\(B_{R}\) \[\kappa_{f}:=\frac{f^{\prime}(\mathbb{R}^{d})}{|B_{R}|}\leq\kappa_{\lambda}, \quad\kappa_{g}:=\frac{g^{\prime}(\mathbb{R}^{d})}{|B_{R}|}. \tag{78}\] We start with the triangle inequality \[W(\lambda_{\llcorner}B_{R},(\kappa_{\lambda}-\kappa_{f}+\kappa_ {g})dx_{\llcorner}B_{R})\] \[\leq W(\lambda_{\llcorner}B_{R},\kappa_{\lambda}dx_{\llcorner}B_{ R}-f^{\prime}+g^{\prime})\] \[+W(\kappa_{\lambda}dx_{\llcorner}B_{R}-f^{\prime}+g^{\prime},( \kappa_{\lambda}-\kappa_{f})dx_{\llcorner}B_{R}+g^{\prime}) \tag{79}\] \[+W((\kappa_{\lambda}-\kappa_{f})dx_{\llcorner}B_{R}+g^{\prime}, (\kappa_{\lambda}-\kappa_{f}+\kappa_{g})dx_{\llcorner}B_{R}).\] Restricting the optimal \(\pi\) to trajectories that start in \(B_{R}\), we obtain an admissible plan for the first r. h. s. term of (79): \[W^{2}(\lambda_{\!\perp}B_{R},\kappa_{\lambda}dx_{\!\perp}B_{R}-f^{\prime}+g^{ \prime})\leq W^{2}(\lambda_{\!\perp}B_{5},\kappa_{\lambda}dx_{\!\perp}B_{5}) \leq D.\] We now turn to the second r. h. s. term of (79), and let \(\phi^{\prime}\) denote the solution of \[-\triangle\phi^{\prime}=f^{\prime}-\kappa_{f}\ \ \mbox{in}\ B_{R},\quad\nu \cdot\nabla\phi^{\prime}=0\ \ \mbox{on}\ \partial B_{R}.\] On the one hand, we obtain from Corollary 2 \[W(\kappa_{\lambda}dx_{\!\perp}B_{R}-f^{\prime}+g^{\prime},(\kappa_{\lambda}- \kappa_{f})dx_{\!\perp}B_{R}+g^{\prime})\leq\frac{4}{\kappa_{\lambda}-\kappa_ {f}}\int_{B_{R}}|\nabla\phi^{\prime}|^{2}.\] On the other hand, we obtain from Lemma 7 and \(f^{\prime}\leq\kappa_{\lambda}\) \[\frac{1}{2\kappa_{\lambda}}\int_{B_{R}}|\nabla\phi^{\prime}|^{2}\leq W(\kappa _{\lambda}dx_{\!\perp}B_{R},(\kappa_{\lambda}-\kappa_{f})dx_{\!\perp}B_{R}+f^ {\prime}).\] The combination of these two inequalities yields \[W(\kappa_{\lambda}dx_{\!\perp}B_{R}-f^{\prime}+g^{\prime},(\kappa _{\lambda}-\kappa_{f})dx_{\!\perp}B_{R}+g^{\prime}) \tag{80}\] \[\leq\frac{8\kappa_{\lambda}}{\kappa_{\lambda}-\kappa_{f}}W(\kappa _{\lambda}dx_{\!\perp}B_{R},(\kappa_{\lambda}-\kappa_{f})dx_{\!\perp}B_{R}+f ^{\prime}).\] In view of (80), we may treat the second r. h. s. term in (79) alongside of the last term, and focus on the latter. Like in Lemma 4, we introduce the projection \(\bar{g}\) of \(g^{\prime}\) onto \(\partial B_{R}\), that is \[\int\zeta d\bar{g}=\int\zeta(\frac{Rx}{|x|})g^{\prime}(dx). \tag{81}\] We start with the triangle inequality \[W((\kappa_{\lambda}-\kappa_{f})dx_{\!\perp}B_{R}+g^{\prime},( \kappa_{\lambda}-\kappa_{f}+\kappa_{g})dx_{\!\perp}B_{R}) \tag{82}\] \[\leq W((\kappa_{\lambda}-\kappa_{f})dx_{\!\perp}B_{R}+\bar{g},( \kappa_{\lambda}-\kappa_{f}+\kappa_{g})dx_{\!\perp}B_{R})+W(\bar{g},g^{ \prime}).\] According to Lemma 3, we have for the first term \[W((\kappa_{\lambda}-\kappa_{f})dx_{\!\perp}B_{R}+\bar{g},\!(\kappa_{\lambda}- \kappa_{f}+\kappa_{g})dx_{\!\perp}B_{R})\leq\frac{1}{\kappa_{\lambda}-\kappa _{f}}\int_{B_{R}}|\nabla\bar{\phi}|^{2}, \tag{83}\] where \(\bar{\phi}\) is defined through \[-\triangle\bar{\phi}=\kappa_{g}\ \ \mbox{in}\ B_{R},\quad\nu\cdot\nabla\bar{\phi}= \bar{g}\ \ \mbox{on}\ \partial B_{R}.\] We recall (61), which we may apply since \(g^{\prime}\leq\kappa_{\lambda}\) is supported in \(\bar{B}_{5}-B_{R}\), and which takes the form of \[\int_{\partial B_{R}}\bar{g}^{2}\leq 2\cdot 5^{d-1}\kappa_{\lambda}\int||x|-R| dg^{\prime}. \tag{84}\] Combining this with (43), where \(\phi\) is replaced by \(\bar{\phi}\), we obtain \[\int_{B_{R}}|\nabla\bar{\phi}|^{2}\leq 2\cdot 5^{d-1}C_{P}\kappa_{\lambda}\int||x|-R |dg^{\prime}. \tag{85}\] Before continuing with the r. h. s. 
of (85), we turn to the last term in (82). In view of (81), \(\int\zeta(R\frac{x}{|x|},x)g^{\prime}(dx)\) defines an admissible plan, so that \(W^{2}(\bar{g},g^{\prime})\leq\int|R\frac{x}{|x|}-x|^{2}g^{\prime}(dx)\). Noting that \(|R\frac{x}{|x|}-x|=||x|-R|\) and recalling that \(g^{\prime}\) is supported in \(\bar{B}_{5}-B_{R}\), we obtain \[W^{2}(\bar{g},g^{\prime})\leq 5\int||x|-R|dg^{\prime}.\] In view of this and (85), we are lead to estimate \(\int||x|-R|dg^{\prime}\). A simplification of the argument leading to (60) gives \[\int_{2}^{3}\int||x|-R|dg^{\prime}dR\leq D. \tag{86}\] Since in view of (79) we have \(\kappa_{R}=\kappa_{\lambda}-\kappa_{f}+\kappa_{g}\), in order to obtain (70), it remains to show \(\kappa_{f}^{2}+\kappa_{g}^{2}\lesssim D\). Because of our assumption \(D\ll 1\), this also deals with the pre-factors in (80) and (83). By symmetry, we may restrict to \(\kappa_{g}\). By definitions (77), (78), and (81) we have \(\kappa_{g}=\frac{1}{|B_{R}|}\bar{g}(\mathbb{R}^{d})\), and thus by Cauchy-Schwarz \(\kappa_{g}^{2}\leq\frac{|\partial B_{R}|}{|B_{R}|^{2}}\int_{\partial B_{R}}\bar {g}^{2}\). Hence it remains to appeal to (84) and (86). ### A final approximation and proof of Proposition 1 Note that the two terms in (21) involving \(\nabla\phi(X(t))\) make only sense for general \(\pi\) provided \(\nabla\phi\in C^{0}(\bar{B}_{R})\), which however is not ensured by \(\bar{g}-\bar{f}\in L^{2}(\partial B_{R})\) in (42). Hence, a final - however more conventional - approximation argument is unavoidable: We approximate \(\bar{g}-\bar{f}\) by its mollification \((\bar{g}-\bar{f})_{r}\) on a scale \(r>0\), and denote by \(\phi^{r}\) the corresponding solution of (42). With this replacement, (21) assumes the form \[\int_{\Omega}\int_{\sigma}^{\tau}|\dot{X}(t)-\nabla\phi^{r}(X(t) )|^{2}dtd\pi \tag{88}\] \[\leq\int_{\Omega}|x-y|^{2}d\pi-\int_{B_{R}}|\nabla\phi^{r}|^{2}\] (89) \[+2\int_{B_{R}}\phi^{r}\big{(}\frac{\mu(B_{R})}{|B_{R}|}-\frac{ \lambda(B_{R})}{|B_{R}|}-d(\mu-\lambda)\big{)}\] (90) \[+2\int_{\partial B_{R}}\phi^{r}\big{(}(\bar{g}-\bar{f})_{r}-d(g- f)\big{)}\] (91) \[+\int_{\Omega}\int_{\sigma}^{\tau}|\nabla\phi^{r}(X(t))|^{2}dtd \pi-\int_{B_{R}}|\nabla\phi^{r}|^{2}. \tag{87}\] Before controlling the difference in line (91), for which the mollification was made, we address its effect on the lines (88) (where it is cumbersome), (89) (where it is beneficial), and (90) (where it is both). We now fix the radius \(R\) such as to benefit from the Lemmas 4, 5, and 6. More precisely, taking the sum of the estimates (50) divided by \(o(E)+O(D)\), (51) divided by \(E+D\), (66) by \(o(E+D)\), (67) by \(o(1)\), and (70) by \(D\),3 Footnote 3: To pick out just the two terms of (50) and (67), what we mean is the following: According to these to statements, for any given \(\delta>0\) we have for sufficiently small \(E+D\) that \[\int_{2}^{3}dR\frac{1}{\delta E+D}W^{2}(f_{R},\bar{f}_{R})+\frac{1}{\delta} \pi(\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{R}\})\lesssim 1.\] This means that there exists an \(R\in[2,3]\) such that the integrand is \(\leq 1\), which translates into \[W^{2}(f_{R},\bar{f}_{R})\lesssim\delta E+D\quad\text{and}\quad\pi(\Omega\cap\{ \exists t\in[0,1]\;X(t)\in\partial B_{R}\})\lesssim\delta,\] which amounts to (92) and (95). 
we learn that there exists an \(R\in[2,3]\) and \(\bar{g},\bar{f}\in L^{2}(\partial B_{R})\) with (92) \[W^{2}(f,\bar{f})+W^{2}(g,\bar{g}) =o(E)+O(D),\] (93) \[\int_{\partial B_{R}}\bar{f}^{2}+\bar{g}^{2} =O(E+D),\] (94) \[\int_{\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{R}\}}|x-y |^{2}d\pi =o(E+D),\] (95) \[\pi(\Omega\cap\{\exists t\in[0,1]\;X(t)\in\partial B_{R}\}) =o(1),\] (96) \[W^{2}(\lambda_{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Combining this with (43) yields \[\int_{B_{R}}|\nabla\phi|^{2}-\int_{B_{R}}|\nabla\phi^{r}|^{2}\leq 2\int_{B_{R}} \nabla(\phi-\phi^{r})\cdot\nabla\phi\lesssim r^{\frac{1}{2}}\int_{\partial B_{R }}(\bar{g}-\bar{f})^{2}.\] Hence in view of (93) we have \[\int_{B_{R}}|\nabla\phi|^{2}-\int_{B_{R}}|\nabla\phi^{r}|^{2}\leq r^{\frac{1}{ 2}}O(E+D). \tag{100}\] Equipped with (100) and the more obvious \[\int_{B_{R}}|\nabla\phi|^{2}=O(E+D), \tag{101}\] we may now conclude the estimate of line (88). We refer back to Subsection 3, namely (46) followed by Young's inequality. Next to an \(o(E)\) coming from the first factor in Young's inequality, this gives rise to the square of the term in (47) and the three terms in (48). According to (96) & (97) and (101), the former contribution is \(O(D(E+D))\). According to (94), once more (96) & (97), and (92), the latter three contributions in (48) are \(o(E+D)\), \(O(D)\), and \(o(E+D)\), respectively. Hence in combination with (100) we obtain \[\int_{\Omega}|x-y|^{2}d\pi-\int_{B_{R}}|\nabla\phi|^{2}\] \[\leq o(E)+O(D(E+D))+o(E+D)+O(D)+r^{\frac{1}{2}}O(E+D),\] which is \(o(E)+O(D)\) provided \(r\) goes to zero as \(E\) does. We now turn to the terms in the two lines (89) and (90). Combining (43) with (99), we obtain by (93) \[\int_{B_{R}}|\nabla\phi^{r}|^{2}\lesssim\int_{\partial B_{R}}(\bar{g}-\bar{f} )^{2}. \tag{102}\] Hence in conjunction with (98) we learn that \[\text{in line (\ref{eq:101}) we may replace $(\bar{g}-\bar{f})_{r}$ by $\bar{g}-\bar{f}$}, \tag{103}\] once more at the expense of an error \(r^{\frac{1}{2}}O(E+D)\). Now comes the beneficial effect of mollification: By standard regularity theory for the Neumann-Poisson problem, \(\sup_{\bar{B}_{R}}|\nabla\phi^{r}|\) is estimated by a sufficiently high norm of \((\bar{g}-\bar{f})_{r}\), which due to the mollification is controlled by \((\int_{\partial B_{R}}(\bar{g}-\bar{f})^{2})^{\frac{1}{2}}\). A closer inspection shows that in line with scaling, this estimate assumes the form \[\sup_{\bar{B}_{R}}|\nabla\phi^{r}|^{2}\lesssim\frac{1}{r^{d-1}}\int_{\partial B _{R}}(\bar{g}-\bar{f})^{2}\stackrel{{\eqref{eq:102}}}{{=}}\frac{ 1}{r^{d-1}}O(E+D). 
\tag{104}\] Thus we may appeal to the following easy consequence of the definition of \(W^{2}\) (and Cauchy-Schwarz) \[\big{|}\int_{\partial B_{R}}\phi^{r}(\bar{g}-dg)\big{|}\leq\sup_{\bar{B}_{R}}| \nabla\phi^{r}|\Big{(}(\int_{\partial B_{R}}\bar{g}+g(\partial B_{R}))W^{2}( \bar{g},g)\Big{)}^{\frac{1}{2}}, \tag{105}\] and a corresponding estimate for \(f\). Hence we learn from both (92) and (93) that the contribution from (103) is \(r^{-\frac{d-1}{2}}(o(E)+O(D))\). In conclusion, we obtain that the term in line (90) satisfies \[\int_{\partial B_{R}}\phi^{r}((\bar{g}-\bar{f})_{r}-d(g-f))=\frac{1}{r^{\frac{ d-1}{2}}}(o(E)+O(D))+r^{\frac{1}{2}}O(E+D),\] which still is \(o(E)+O(D)\) provided \(r\) only slowly goes to zero as \(E+D\) does. Appealing once more to an estimate similar to (105), but this time in conjunction with (96) and (97), we learn that the term in line (89) is \[\int_{B_{R}}\phi^{r}\big{(}\frac{\mu(B_{R})}{|B_{R}|}-\frac{\lambda(B_{R})}{|B _{R}|}-d(\mu-\lambda)\big{)}=\frac{1}{r^{\frac{d-1}{2}}}O(D),\] and thus is well-behaved, too. It remains to address the term in line (91); we claim that \[\int_{\Omega}\int_{\sigma}^{\tau}|\nabla\phi^{r}(X(t))|^{2}dtd\pi -\int_{B_{R}}|\nabla\phi^{r}|^{2} \tag{106}\] \[\leq\frac{1}{r^{d-1}}o(E+D)+\frac{1}{r^{d}}(E+D)^{\frac{3}{2}}.\] Indeed, setting \(\zeta=|\nabla\phi^{r}|^{2}\), and appealing to (1) and (6), we split the l. h. s. of (106) into the three differences \[\int_{0}^{1}\int_{\Omega} \big{(}I(X(t)\in\bar{B}_{R})\zeta(X(t))-I(X(0)\in B_{R})\zeta(X(0 ))\big{)}d\pi dt\] \[+\left(\int_{B_{R}}\zeta d\lambda-\kappa_{R}\int_{B_{R}}\zeta \right)+(\kappa_{R}-1)\int_{B_{R}}\zeta. \tag{107}\] We turn to the first difference in (107) and note \[I(X(t)\in\bar{B}_{R})\zeta(X(t))-I(X(0)\in B_{R})\zeta(X(0))\] \[\leq I\big{(}\exists s\in[0,1]\;X(s)\in\partial B_{R},X(t)\in \bar{B}_{R}\big{)}\zeta(X(t))\] \[+I\big{(}\forall s\in[0,1]\;X(s)\in B_{R}\big{)}\big{(}\zeta(X(t ))-\zeta(X(0))\big{)},\] so that it is \(\leq\) \[\sup_{B_{R}}\zeta\pi\big{(}\Omega\cap\{\exists s\in[0,1]\;X(s)\in\partial B_{ R}\}\big{)}+\sup_{B_{R}}|\nabla\zeta|\int_{\Omega}|x-y|d\pi. \tag{108}\] According to (95), the second factor of the first term is \(o(1)\). The second factor of the second term is \(\leq(\pi(\Omega)\int_{\Omega}|x-y|^{2}d\pi)^{\frac{1}{2}}\), and thus \(O(E^{\frac{1}{2}})\) by definitions (12) and (13). Hence it remains to control the two first factors in (108). Recalling the definition of \(\zeta\) and (104), we have \(\sup_{\bar{B}_{R}}\zeta=r^{-(d-1)}O(E+D)\), so that the first term in (108) is \(r^{-(d-1)}o(E+D)\). By the same argument that led to (104), we have \[\sup_{\bar{B}_{R}}|\nabla^{2}\phi^{r}|^{2}\lesssim\frac{1}{r^{d+1}}\int_{ \partial B_{R}}(\bar{g}-\bar{f})^{2}\stackrel{{\eqref{eq:104}}}{{= }}\frac{1}{r^{d}}O(E+D), \tag{109}\] which in combination with (104) yields \[\sup_{\bar{B}_{R}}|\nabla\zeta|\lesssim\frac{1}{r^{d}}\int_{\partial B_{R}}( \bar{g}-\bar{f})^{2}=\frac{1}{r^{d}}O(E+D), \tag{110}\] so that the second term in (108) is \(r^{-d}O(E^{\frac{1}{2}}(E+D))\). We now address the second difference in (107). In analogy to (105), we have that it is \(\leq\) \[\sup_{B_{R}}|\nabla\zeta|\big{(}\lambda(B_{R})+\kappa_{R}|B_{R}|)W^{2}(\lambda _{\vdash}B_{R},\kappa_{R}dx_{\vdash}B_{R})\big{)}^{\frac{1}{2}}.\] Due to (70) in Lemma 4 and to (110) this is \(r^{-d}O((E+D)D^{\frac{1}{2}})\). Finally, the third term in (107) is \(\leq(\sup\zeta)|\kappa_{R}-1|\) and thus \(r^{-(d-1)}O((E+D)D^{\frac{1}{2}})\). 
The final task left is to estimate the l. h. s. of (40) by the l. h. s. (87). We start by approximating the argument of \(\nabla\phi^{r}\): \[\int_{\Omega}\int_{\sigma}^{\tau}|\nabla\phi^{r}(X(t))-I(x\in B_{R})\nabla \phi^{r}(x)|^{2}dtd\pi.\] Using \[I(t\in[\sigma,\tau])\big{|}\nabla\phi^{r}(X(t))-I(x\in B_{R}) \nabla\phi^{r}(x)\big{)}\big{|}^{2}\] \[\stackrel{{\eqref{eq:104}}}{{=}}\big{|}I(X(t)\in \bar{B}_{R})\big{(}\nabla\phi^{r}(X(t))-I(X(0)\in B_{R})\nabla\phi^{r}(X(0)) \big{)}\big{|}^{2}\] \[\leq I\big{(}\exists s\in[0,1]\;X(s)\in\partial B_{R},X(t)\in \bar{B}_{R}\big{)}|\nabla\phi^{r}(X(t))|^{2}\] \[+I\big{(}\forall s\in[0,1]\;X(s)\in B_{R}\big{)}\big{|}\nabla\phi ^{r}(X(t))-\nabla\phi^{r}(X(0))\big{|}^{2},\] we obtain \[\int_{\Omega}\int_{\sigma}^{\tau} |\nabla\phi^{r}(X(t))-I(x\in B_{R})\nabla\phi^{r}(x)|^{2}dtd\pi\] \[\leq\sup_{\bar{B}_{R}}|\nabla\phi^{r}|^{2}\pi\big{(}\Omega\cap\{ \exists s\in[0,1]\;X(s)\in\partial B_{R}\}\big{)}\] \[+\sup_{\bar{B}_{R}}|\nabla^{2}\phi^{r}|^{2}\int_{\Omega}|x-y|^{2}d\pi.\] By definition (12) and (13), as well as (94), (104) and (109) this yields \[\int_{\Omega}\int_{\sigma}^{\tau} |\nabla\phi^{r}(X(t))-I(x\in B_{R})\nabla\phi^{r}(x)|^{2}dtd\pi \tag{111}\] \[\leq\frac{1}{r^{d-1}}o(E+D)+\frac{1}{r^{d}}O((E+D)E).\] By the triangle inequality we obtain from (111) \[\int_{\Omega}(\tau-\sigma)|(y-x)-I(x\in B_{R})\nabla\phi^{r}(x)|^{2}d\pi\] \[\leq 2\int_{\Omega}\int_{\sigma}^{\tau}|\dot{X}(t)-\nabla\phi^{r}( X(t))|^{2}dtd\pi\] \[+\frac{1}{r^{d-1}}o(E+D)+\frac{1}{r^{d}}O((E+D)E).\] By definition (12) of \(\Omega\) and \(R\geq 1\), we clearly have \[\big{(}(B_{1}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{1})\big{)}\cap \operatorname{supp}\pi\subset\Omega.\] Because of \(R\geq 2\) and in our regime of short trajectories, cf. (65), we have \[\tau-\sigma=1,\;x\in B_{R}\quad\text{for }(x,y)\in\big{(}(B_{1}\times\mathbb{R}^ {d})\cup(\mathbb{R}^{d}\times B_{1})\big{)}\cap\operatorname{supp}\pi.\] This implies \[\int_{(B_{1}\times\mathbb{R}^{d})\cup(\mathbb{R}^{d}\times B_{1}) }|(y-x)-\nabla\phi^{r}(x)|^{2}d\pi\] \[\leq\int_{\Omega}(\tau-\sigma)|(y-x)-I(x\in B_{R})\nabla\phi^{r} (x)|^{2}d\pi.\] The combination of the two last inequalities connects the l. h. s. of (40) (with \(\phi\) replaced by \(\phi^{r}\)) with the l. h. s. (87). This concludes the proof of Proposition 1.
2307.07857
A Multi-Heuristic Search-based Motion Planning for Automated Parking
In unstructured environments like parking lots or construction sites, due to the large search-space and kinodynamic constraints of the vehicle, it is challenging to achieve real-time planning. Several state-of-the-art planners utilize heuristic search-based algorithms. However, they heavily rely on the quality of the single heuristic function, used to guide the search. Therefore, they are not capable to achieve reasonable computational performance, resulting in unnecessary delays in the response of the vehicle. In this work, we are adopting a Multi-Heuristic Search approach, that enables the use of multiple heuristic functions and their individual advantages to capture different complexities of a given search space. Based on our knowledge, this approach was not used previously for this problem. For this purpose, multiple admissible and non-admissible heuristic functions are defined, the original Multi-Heuristic A* Search was extended for bidirectional use and dealing with hybrid continuous-discrete search space, and a mechanism for adapting scale of motion primitives is introduced. To demonstrate the advantage, the Multi-Heuristic A* algorithm is benchmarked against a very popular heuristic search-based algorithm, Hybrid A*. The Multi-Heuristic A* algorithm outperformed baseline in both terms, computation efficiency and motion plan (path) quality.
Bhargav Adabala, Zlatan Ajanović
2023-07-15T17:33:06Z
http://arxiv.org/abs/2307.07857v1
# A Multi-Heuristic Search-based Motion Planning ###### Abstract In unstructured environments like parking lots or construction sites, due to the large search-space and kinodynamic constraints of the vehicle, it is challenging to achieve real-time planning. Several state-of-the-art planners utilize heuristic search-based algorithms. However, they heavily rely on the quality of the single heuristic function, used to guide the search. Therefore, they are not capable to achieve reasonable computational performance, resulting in unnecessary delays in the response of the vehicle. In this work, we are adopting a Multi-Heuristic Search approach, that enables the use of multiple heuristic functions and their individual advantages to capture different complexities of a given search space. Based on our knowledge, this approach was not used previously for this problem. For this purpose, multiple admissible and non-admissible heuristic functions are defined, the original Multi-Heuristic A* Search was extended for bidirectional use and dealing with hybrid continuous-discrete search space, and a mechanism for adapting scale of motion primitives is introduced. To demonstrate the advantage, the Multi-Heuristic A* algorithm is benchmarked against a very popular heuristic search-based algorithm, Hybrid A*. The Multi-Heuristic A* algorithm outperformed baseline in both terms, computation efficiency and motion plan (path) quality. Motion Planning, Automated Driving, Multi-Heuristic Search, A* Search ## I Introduction Robot motion planning problems can be elegantly formulated as path planning in higher-dimensional configuration space [1]. However, finding a solution is computationally challenging due to a large continuous search space and kinodynamic constraints. The sampling-based approaches for motion planning have been extensively studied in robotics [2]. Instead of explicitly constructing the collision-free configuration space, which is time-consuming to compute, these algorithms probe the free space and search with a sampling strategy. The algorithms stop when a path connecting the initial and final poses is found. According to the sampling type, the sampling-based path-finding algorithms can be classified into two categories: random-sampling-based algorithms and orderly-sampling-based algorithms. The most popular random-sampling-based algorithm is the Rapidly-exploring Random Tree (RRT) [3]. RRT can be considered as a special case of Monte Carlo Tree Search (MCTS) [4]. The most notable orderly sampling-based algorithm is A* [5], including many of its extensions. It was initially developed to plan a path for the Shakey robot and it was further generalized and used for many different domains since then. The orderly sampling-based algorithms tend to be more efficient than random sampling-based planners, especially when the dimensions of the state spaces are fewer than six [6]. Random sampling-based planners inherently come with the disadvantages of being highly non-deterministic and converging towards solutions that are far from the optimum and suffer from bug trap problems [7]. The variants of RRT such as RRT* also provide comparable solutions, but they are computationally expensive and the efficiency depends on the size of the search space [8]. Furthermore, if no collision-free path to the goal exists, orderly sampling-based algorithms can report this failure much more quickly than random sampling-based ones. 
However, original A* deals with discrete state-space and requires discretization of the configuration space. Continuous state-space and kinodynamic constraints can be satisfied by constructing a lattice in the form of a regular grid [9]. Another approach is to use motion primitives [10]. To avoid the problem of rounding states generated using motion primitives to the grid, Hybrid-State A* [11] might be used. Sampling-based motion planning approaches were extensively used for autonomous vehicle path planning in unstructured environments during the 2007 DARPA Urban Challenge, both A*-based [12, 11] and RRT-based [13]. Several extensions for these approaches have been introduced in recent Fig. 1: Motion Planning for Autonomous Parking. years, namely A*-based [14, 15, 16], RRT*-based [17] and optimization-based [18]. An overview of recent developments and open challenges is presented in [19]. Besides motion planning for wheeled vehicles in unstructured environments (e.g. autonomous parking), different variants of search-based planning were recently used for planning footsteps for humanoid robots [20], robot manipulation [21], underwater vehicles [22], the aggressive flight of UAVs [23], as well as for special use-cases in automated driving such as energy-efficient driving [24], driving in complex scenarios in structured urban environments [25], unstructured and partially observable environments [15] and performance driving including drifting maneuvers [26][27]. This paper presents a motion planning approach for wheeled vehicles in unstructured environments based on Multi-Heuristic Search [28]. _Based on our knowledge, this is the first application of a Multi-Heuristic Search for motion planning in automated parking scenarios._ This is achieved by using a combination of geometric and orderly sampling-based approaches in order to achieve maximum coverage of the complexities within the search space. For this purpose, two heuristic functions are defined to get an accurate estimate of the cost-to-go and prune the unnecessary search nodes for faster path computation. The geometric approach is used to solve the simplified problem (without obstacles) by modeling the physical constraints of the vehicle and is used as a second heuristic function in a Multi-Heuristic A* algorithm, the first being the path length to the destination while considering obstacles but neglecting some physical constraints like turning radius. With this approach, both non-holonomic and holonomic constraints are combined to provide an optimal solution to the motion planning problem. Two approaches for the solution have been developed, one using Forward Search and the other using Bi-directional Search. Additionally, adaptive motion primitive arc length was developed to avoid the search getting stuck in depression regions indefinitely. An earlier version of this work is presented in the ICAPS 2020 PlanRob workshop [29]. ## II Autonomous Parking as Motion Planning Problem The autonomous parking problem tackled in this work represents a fully observable problem where intelligent infrastructure provides a connected vehicle with information about the structure and area of the parking lot. The driver enters the parking and drives the vehicle into a designated drop-off area. The autonomous parking algorithm overtakes a control and guides and maneuvers the vehicle into the assigned parking slot automatically based on the information from the infrastructure. 
This concept was demonstrated by Bosch and Daimler as Automated Valet Parking (AVP) [30]. The planning problem this work aims to solve can be stated as follows: _"Find a solution in real-time that autonomously navigates a non-holonomic vehicle without any collisions, from a given start position to a desired goal position within the parking layout based on the input of a two-dimensional obstacle map, or report the non-existence of such a solution."_ To fully define the problem, the environment (obstacles), the vehicle model, and Key Performance Indicators (KPIs) must be defined. ### _Environment_ Figure 2, shows an example of the parking layout structure which is considered for the motion planning problem of autonomous parking. There might be two most prominent parking orientations, perpendicular or parallel. However, the algorithm should be general to work on different parking arrangements. Each parking slot is enumerated with a parking slot ID. The number of parking slots and dimensions of the open space are configurable. The red cross symbols in the figure highlight the goal position for each parking slot within the layout. The entry point to the parking lot is fixed at \((x_{s},y_{s},\theta_{s})=(0,10,0)\). The parking position for the respective parking slots ID is read from a pre-computed map. The computation of goal position within each parking slot is based on the vehicle dimensions as the control reference would be different for the individual vehicle due to differences in length and width. ### _Vehicle Model_ Due to vehicle geometry and physics, there are constraints on the vehicle motion that restrict the allowable velocities. The first-order constraints, that consider the first derivative of the position (velocity), are often called _kinematic_ constraints. Including the dynamics of a vehicle results in second-order differential constraints, which allows the modeling of acceleration. The planning with such models is called _kinodynamic_ planning. As the focus of this work is on low-velocity parking maneuvers, higher-order constraints are not included, only kinematic constraints are considered. A simple yet useful model of a car is the _single track model_, also known as the _bicycle model_, shown in Figure (a)a. Fig. 2: Parking with perpendicular parking slots It does not consider dynamics, but it is useful to model lower-velocity driving. Consider the case where a car with wheelbase \(L\) moves forward with velocity \(v_{x}\) with a steering angle \(\alpha\) and assuming no wheel slip, then the car will move along a circle with radius \(R\). The kinematic constraints can then be derived by trigonometry. Let **x** = \((x,y,\theta)\) denote the configuration state of the car-like robot, where \(x\) and \(y\) denote the position and \(\theta\) the heading of the car. Vehicle motion is constrained to \(\dot{x}/\dot{y}=\tan(\theta)\), which together with the constraint \(R=L/\tan(\alpha)\) gives the following first-order differential constraints: \[\dot{x}=v_{x}\cos(\theta) \tag{1}\] \[\dot{y}=v_{x}\sin(\theta) \tag{2}\] \[\dot{\theta}=\frac{v_{x}}{L}\tan(\alpha) \tag{3}\] Setting the maximum steering angle \(|\alpha|\leq\alpha_{\max}\) results in a minimum turning radius \(R_{\min}\). The model clearly represents the non-holonomic behavior as it is impossible to move sideways without violating the no-slip condition. 
Restricting the allowed velocities and steering angles to the finite set \(\mathcal{U}_{v_{x}}=\{0,1\}\) and \(\mathcal{U}_{\alpha}=\{-\alpha_{\max},0,\alpha_{\max}\}\) results in the Dubin's car that can only stop and move forward at unit speed [31] and setting it to \(\mathcal{U}_{v_{x}}=\{-1,0,1\}\) results in the Reed-Shepp car that can also reverse at a unit speed [32]. Even though these models are very simplified they have efficient analytic solutions for optimal paths between any two states that can be useful for designing heuristic functions for search-based motion planners. A time-discretized single-track model is easily obtained using Euler forward or higher-order methods. By integrating the differential equations forward in time, a simulated path or trajectory resulting from a given input can be obtained and used to construct motion segments within a planning framework. ### _Collision Detection_ To avoid collisions with obstacles in the environment, vehicle geometry should be considered. The geometry of a car, approximate rectangular shape, can be reasonably approximated with overlapping circular disks. This simplifies the collision detection significantly as it is sufficient to check if the obstacles fall within the boundaries of the disk, which is represented by the radii of the disks. The problem of covering rectangles with equal-sized disks in an optimal way has been studied in the literature [33]. As stated in [34], a rectangle of length \(l\) and width \(w\) can be covered by \(n\) circles of radius \(r\) calculated as: \[r=\sqrt{\frac{l^{2}}{n^{2}}+\frac{w^{2}}{4}} \tag{4}\] placed at a distance of \(d\) calculated as: \[d=2\sqrt{r^{2}-\frac{w^{2}}{4}} \tag{5}\] In practical applications, the above approximation may lead to under-utilization of available collision-free space, especially in the case of environments where tight or narrow maneuvering is required, e.g. parking lots. Besides, the stated approach assumes the control reference to be at the center of the rectangular shape which is not true in the case of car-like robots which are mostly either front-wheel or rear-wheel driven. In this work, an approach proposed in [34] is adapted to fit a practical application of a car-like robot. The bounding disks can be arranged as shown in Figure 2(b). By this method, the bounding disks are arranged compactly to fit the geometry of the vehicle allowing better utilization of the free space even for tight maneuvers in narrow spaces. Moreover, all the calculations are based on the standard dimensions available from any production car design, and parametrizing the same makes the approach generic for any vehicle under consideration. ### _KPI Definition for Benchmark_ To compare the performance and quality of solutions generated by motion planning algorithms, the following Key Performance Indicators (KPIs) have been used. **Performance Parameters** _Number of Expanded States_: For a given configuration space, the number of expanded states reflects the guidance power of the heuristic functions in pruning the unwanted branches of the search. The lesser the number of expanded states, the better the heuristic. _Execution Time_: The execution time depends on the implementation of the vehicle model, the definition of motion primitives, and the algorithm itself. It is a measure of the time that the algorithm needs to return a solution using the defined attribute functions. 
_Number of Iterations_: The iteration counter quantifies how quickly the algorithm converges to either finding a solution or reporting that there exists no solution. **Solution Path Quality Parameters** _Path Length_: The path length is computed as the accumulated sum of Euclidean distance between two points on the final trajectory. It quantifies the efficiency of the generated solution as shorter path lengths are preferred. Fig. 3: Vehicle models. _Reverse Path Length_: The reverse path length indicates the quality of the algorithm to foresee a wrong branch. The longer reverse path length indicates that the vehicle had to move a lot in a backward direction in order to correct its path or in some cases the algorithm prefers to move the vehicle more in a backward direction rather than forward. In any case, longer reverse path lengths are not preferred. _Direction Changes_: Each direction change during driving indicates a stop-and-go situation, which will be annoying for a human driver. Even though it is an autonomous vehicle the quality solution shall be close to an experienced human driver, i.e., avoid multiple direction changes. ## III Motion Planning Approach The motion planning approach presented in this paper is based on Multi-Heuristic A* search, extended with a concept from the Hybrid A* Algorithm, employed in a bi-directional search fashion. The kinodynamic feasibility of the solution is provided by motion primitives based on the vehicle model. Several admissible heuristic functions enable efficient optimal planning. ### _Hybrid A* Algorithm_ The Hybrid A* algorithm was developed as a practical path-planning algorithm that can generate smooth paths for an autonomous vehicle operating in an unstructured environment and used in the DARPA Urban Challenge by the Stanford University team [11]. The hybrid A* algorithm is based on the A* algorithm, with the key difference being, that state transitions occur in continuous rather than in a discrete space. By considering the non-holonomic constraints of the robotic vehicle, the algorithm generates feasible transitions which can be executed by the actuator module. The three-dimensional state space \(\mathcal{X}\) (represented by \(x,y\) position and \(\theta\) heading angle of the vehicle) is associated with a discrete grid of reasonable resolution such that each continuous state is associated with some grid cell to enable the use of discrete search algorithm. Continuous states are rounded to the grid for association in order to prune the branches, by keeping only the best trajectory coming to the grid cell. The expansion still uses the actual continuous value that is not rounded to the grid. Similar to the original A*, if the current state being expanded is not the goal state, new successors are generated for all possible actions \(u\in\mathcal{U}(\textbf{x})\). The _cost-to-come_ is only computed for successor states that are not in the Closed list. If the state is not in the Open list it is directly pushed to the Open list. If the state is already in the Open list, and the _cost-to-come_ is smaller than the cost for a state with the same index that is in the Open list then the pointer to the parent, the _cost-to-come_ and the _cost-to-go_ are updated. After that, the key is decreased using the newly computed cost. 
``` 1functionSHHA*(\(\textbf{x}_{1}\), \(\mathcal{X}_{G}\), \(\mathcal{O}\),\(h_{i}\)); 2returnCollision free trajectory from \(x_{1}\) to \(x\in X_{G}\) 3functionCombinePath(\(x_{\text{start}},\ldots,x_{\text{start}}^{\prime}\))(\(x_{\text{start}}^{\prime\prime},\ldots,x_{\text{G}}\))); 4 /* Use an analytic function to connect paths */ 5returnCombined trajectory from \(x_{\text{start}}\) to \(x_{\text{goal}}\) 6 7begin 8\end{lstlisting} /* Forward Search */ 9\(X_{G}\leftarrow\{x\mid\|x-x_{G}\|\leq d_{\text{goal}}\}\); 10\((x_{\text{start}},\ldots,x_{\text{start}}^{\prime})\leftarrow\textsc{SHHA*}(x_{ \text{start}},X_{G},\mathcal{O},h_{i})\); 11\((x_{\text{start}},\ldots,x_{\text{start}}^{\prime})\leftarrow\textsc{SHHA*}(x_{ \text{start}},X_{G},\mathcal{O},h_{i})\); 12\((x_{\text{start}}^{\prime})\leftarrow\textsc{Session Start and Goal Positions}\) */ 13\(x_{\text{start}}^{\prime}\gets x_{\text{goal}}\); 14\(X_{G}\leftarrow\{x\mid\|x-x_{G}\|\leq d_{\text{goal}}\}\); 15\((x_{\text{G}},\ldots,x_{\text{start}}^{\prime})\leftarrow\textsc{SHHA*}(x_{ \text{start}}^{\prime},X_{G}^{\prime},\mathcal{O},h_{i})\); 16\(/*\) Combine Paths */ 17\(path\leftarrow\textsc{CombinePath}((x_{\text{start}},\ldots,x_{\text{start}}^{ \prime}),(x_{\text{start}}^{\prime\prime},\ldots,x_{\text{G}}))\); 18returnpath ``` **Algorithm 1**Bi-Directional Multi-Heuristic Search ### _Multi Heuristic A* Algorithm_ The performance of the A* algorithm depends on the quality of the heuristic function used to guide the search. It is hard to design a single heuristic function that captures all the complexities of the problem. Furthermore, it is hard to ensure that heuristics are admissible (provide lower bounds on the cost-to-go) and consistent, which is necessary for an A*-like search to provide guarantees on completeness and bounds on sub-optimality. In [35] authors introduced an approach of alternation between different heuristic functions for satisficing (i.e. non-optimal) planning. Multi-Heuristic A* (MHA*) [28] overcomes the dependency on a single heuristic function in optimal planning too. MHA* can use multiple inadmissible heuristic functions in addition to a single consistent heuristic simultaneously to search in a way that preserves guarantees on completeness and bounds on sub-optimality. This enables us to effectively combine the guiding powers of different heuristic functions and simplifies dramatically the process of designing heuristic functions by a user because these functions no longer need to be admissible or consistent [28]. MHA* has two variants: Independent Multi-Heuristic A* (IMHA*) which uses independent cost-to-come and cost-to-go values for each search, and Shared Multi-Heuristic A* (SMHA*) which uses different cost-to-go values but a single cost-to-come value for all the searches. With this shared approach, SMHA* can guarantee the sub-optimality bounds with at most two expansions per state. In addition, SMHA* is potentially more powerful than IMHA* in avoiding depression regions as it can use a combination of partial paths found by different searches to reach the goal [28]. In SMHA* approach the optimal path for a given state is shared among all the searches so that if a better path to a state is discovered by any of the searches, the information is updated in all the priority queues. This allows the algorithm to expand each state at most twice, which significantly improves the computational time. 
### _Our Planning Algorithm_ The presented planning algorithm is based on MHA* Search with features of Hybrid A* search and adaptive motion primitives. It uses MHA* in a forward and backward manner as shown in Algorithm 1. For this purpose, multiple admissible and non-admissible heuristic func tions are defined. The framework has three main functions SharedMultiHeuristicA*, GeneratePath, and CombinePath. SharedMultiHeuristicA* searches the configuration space according to the algorithm defined in [28]. It is expanded with hybrid A* features and uses motion primitives. It is used in both forward and backward steps. GeneratePath executes the search using SharedMultiHeuristicA* while continuously checking if the EucledianDistance from the current state to the goal state is greater than a configurable parameter \(d_{\text{fw}}\). If the search has reached the closest state defined by \(d_{\text{fw}}\) then the function returns the path from the start position to the closest point to the goal. Due to the discretization of the continuous space, in CombinePath an analytic function is required to combine the two paths generated by GeneratePath function. In this work, we used Reeds-Shepp curves for this purpose. #### Iii-B1 Bi-Directional Search The presented algorithm is using the Bi-directional Search, by searching for the path in two steps. In the first step, the search expands in the forward direction (towards the goal) from the start state to reach the state close to the goal position. In the second step, the search proceeds backward from the goal position toward the closest point reached by the forward search. The solution paths generated by the forward search step and backward search step are then joined by the analytical expansion using Reeds-Sheep curves. #### Iii-B2 Motion Primitives The motion primitives refer to the motion sequence that is triggered by an action request and corresponds to a basic move that is possible by the vehicle, sampled from a continuous control space. In this work, motion primitives are generated by applying one of the six control actions defined by combinations of \(\mathcal{U}_{v_{x}}=\{-1,1\}\) and \(\mathcal{U}_{\alpha}=\{-\alpha_{\max},0,\alpha_{\max}\}\). These represent maximum steering left while driving in the forward direction, no steering while driving in the forward direction, maximum steering right while driving in the forward direction, maximum steering left while driving in the backward direction, no steering while driving in the backward direction, maximum steering right while driving in the backward direction. Finer resolution is also possible, however that increases the branching factor and computational complexity. As shown in Figure 4, each of these control actions is applied for a certain amount of time, resulting in an arc of a circle with a lower bound turning radius \(R_{\min}\). This will ensure that the resulting paths are always drivable, as the actual vehicle model is used to expand the state, even though they might result in excessive steering actions. An _adaptive sizing of motion primitives_ is applied, wherein the arc length used for the execution of motion primitives is adapted dynamically to adjust to the environment. A shorter arc length is used near obstacles and a longer arc length in free space. This approach improves maneuverability in tight spaces. 
Using a shorter length in all cases promises higher levels of resolution completeness, as the likelihood to reach each state is increasing but reduces the computational efficiency. #### Iii-B3 Heuristics A heuristic function \(h\) is used to estimate the cost needed to travel from some state \(x\) to the goal state \(x_{g}\) (cost-to-go). As it is shown in [5], if the heuristic function is underestimating the optimal cost-to-go, A* search provides the optimal solution. For the shortest path search, the usual heuristic function is the Euclidean distance. In general, SMHA* algorithm supports \(n\) number of heuristics with \(n>1\). In this work, to restrict the complexity and to be comparable with Hybrid A* which is used as a reference for benchmarking, two heuristic functions have been used. The two heuristics capture different aspects of the problem as explained in the sections below. **Non-Holonomic without Obstacles** This heuristic function takes into account the non-holonomic constraints of the vehicle while neglecting the influence of the environment (obstacles). The most suitable candidate functions are either Dubins or Reeds-Shepp curves. These curves are the paths of minimal length with an upper bound curvature for the forward and combined forward and backward driving car respectively. We choose the Reeds-Shepp curves since in parking maneuvers it is important that the car can move in both forward and backward directions. These curves are computationally inexpensive to compute as they are based on an analytic solution. As shown in Figure 4(a), this heuristic takes into account the current heading as well as the turning radius, which ensures that the vehicle approaches the goal with the appropriate heading. This is especially important when the car gets closer to the goal. Given that Reeds-Shepp curves are minimal, this heuristic is clearly admissible. **Holonomic with Obstacles** This heuristic function neglects the characteristics of the vehicle and only accounts for obstacles. The estimate is based on the shortest distance between the goal state and the state currently being expanded. This Fig. 4: Motion Primitives. distance is determined using the standard Dijkstra search in two dimensions (\(x\) and \(y\) position). As the search is 2D and assumes the object under control is holonomic, the path is not smooth. The search is performed backward, it uses the initial state of the SMHA* as the goal state, and the goal state of the SMHA* search as the start state to generate the heuristic cost. The closed list of the Dijkstra search stores all the shortest distances to the goal and guides the vehicle away from dead ends and around obstacles. Since this heuristic function does not depend on any runtime sensor information, it can be fully pre-computed offline and used as a lookup table or simply translated and rotated to match the current goal instead of initiating a new search while SMHA* progresses. ## IV Simulation results In order to benchmark the performance of the solution developed using SMHA*, we chose the Hybrid A* as a reference. As discussed earlier, Hybrid A* gives a comparable reference as it is also based on the orderly sampling approach and also uses two heuristics to guide the search. The key difference is that in the Hybrid A* approach, the maximum of both of the results of the heuristics is considered to update the priority queue while in SMHA* the heuristics are iteratively computed and both can update the priority queue. 
The use cases chosen for the simulation depict common situations encountered in a parking lot such as _Entering Parking Lot_ and _Exiting Parking Lot_. The simulations are performed by executing the Hybrid A* and SMHA* algorithm back-to-back to compare the KPIs of the generated solution. In the rest of the section, an elaborate analysis of the simulation results and solution paths that are generated for the use case _Entering Parking Lot_ using Bi-directional Search for parallel parking configuration is presented. Figure 5(a) depicts the selected start and goal position (Parking Slot ID: 27) on the parking layout. First, the configuration space is explored using the 2D Dijkstra search to generate the cost-to-go map as shown in Figure 5(b). The cost-to-go map shows in a color scale distance from the goal pose considering obstacles but neglecting non-holonomic constraints. The results of this step are stored in a look-up and represents the _Holonomic with Obstacles_ heuristic explained earlier. Figure 6(a) depicts the state expansion pattern of the Hybrid A* algorithm. The algorithm has searched the area around the start position with a bias towards the goal position even though the solution path lies in another direction. The heuristic strongly guides the search towards the shortest path as far as possible within the obstacle-free area. As the search explores, it expands all states with lower costs that can lead to the shortest path until it reaches a point where the heuristic cost of expanding the points which do not lead to the shortest path has a lower cost to reach the goal. As a result of this poor pruning of the unwanted branches, the planner expands several states around the start position before it realizes the optimal direction of the path which leads to poor timing performance. The final path generated as seen in Figure 6(b) has many orientation changes in the Forward Search step and maneuvering step into the parking slot is determined by the backward search. The solution path is smooth, but the direction of orientation is reversed for the most part of the path which is not optimal. Figure 7(a) depicts the state expansion pattern of the SMHA* algorithm. Similar to Hybrid A*, the algorithm has searched the area around the start position with a bias towards the goal position even though the solution path lies in another direction. But, the multi-heuristic approach, quickly balances the bias towards finding the shortest path to finding a feasible path considering the obstacles. In addition, due to the mutually informed independent search by respective heuristics, the states Fig. 5: Paths based on relaxed models of heuristic functions. Fig. 8: Entering Parking Lot with Bi-Directional - SMHA*. Fig. 6: Parking Lot with Start Position - Cyan and Goal Position - Green (left) and cost-to-go map generated by 2D Dijkstra search (right). Fig. 7: Entering Parking Lot with Bi-Directional - Hybrid A*. that are expanded by one heuristic function are not expanded by other heuristic functions. As a result, the algorithm could prune the unwanted branches and realize the optimal direction of the path much faster compared to Hybrid A*. As seen in Figure (b)b, the path is smooth and the direction of orientation is in the forward direction for most parts of the path, which is preferred. As seen from KPI values tabulated in Table I, the Hybrid A* algorithm expands significantly more states and uses more time to generate the solution path compared to SMHA* approach. 
Even though the heuristics used are the same, the mutually informed independent search of SMHA* prunes the unwanted branches much more significantly allowing faster convergence towards the solution and improved execution time. To give a comprehensive performance comparison of both the algorithms for the use case _Entering Parking Lot_ for parallel parking lot layout using Bi-directional Search, a full simulation run through all the parking slots is executed. In this simulation mode, each parking slot ID is selected as a goal position sequentially and the back-to-back run of the Hybrid A* and SMHA* algorithm is performed. As observed from Figure 10, SMHA* outperforms the Hybrid A* algorithm in terms of both performance and solution path quality parameters. With the multi-heuristic approach, the average execution time to generate the solution path is reduced by **81%**, which is a significant improvement demonstrating the potential of multi heuristic approach to solve the given planning problem. ## V Conclusion The work focused on providing a Multi-Heuristic search-based approach to solve the motion planning problem for autonomous parking. To benchmark the results obtained, a state-of-the-art planning algorithm Hybrid A* was chosen as a reference. As the environment for the given use case involves only low-speed maneuvering, a Single Track Bicycle Model which reflects the kinematics of the vehicle was used as a motion model. The model emulates the non-holonomic nature of the vehicle in all stages of the algorithm, motion primitives (node expansion), and heuristic estimates. Thus, the paths generated are always driveable. A collision check algorithm was implemented based on the Multi-Disk Decomposition of the bounded volume. The algorithm was parameterized based on the vehicle geometry making it a generic solution to fit with any vehicle type and independent of the environment. The path planning problem was solved using the Shared Multi-Heuristic A* approach in which two heuristic functions were implemented with a round-robin scheduling. The respective heuristic searches share the current path obtained to a state. The heuristics were defined to capture the non-holonomic and holonomic constraints of the vehicle. Two solution approaches were developed: Forward Search and _Bi-Directional Search_. The SMHA* algorithm solved the motion planning problem elegantly and outperformed Hybrid A* with respect to the response time and path quality of the generated solution path. The KPI comparison clearly indicates that SMHA* is an ideal candidate for motion planning in slow-speed driving in unstructured environments applications like autonomous valet parking. \begin{table} \begin{tabular}{|c|c|c|} \hline **KPI** & **Hybrid A* & **SMHA* \\ \hline Number of Expanded States & 2457 & 73 \\ \hline Execution Time (s) & 47.8 & 11.51 \\ \hline Path Length (m) & 90.3 & 90.58 \\ \hline Reverse Path Length (m) & 7.29 & 3.29 \\ \hline Direction Changes & 4 & 4 \\ \hline Number of Iterations & 23633 & 73 \\ \hline \end{tabular} \end{table} TABLE I: Entering parking lot with Bi-Directional Search - KPI Comparison. Fig. 10: Entering Parking Lot with Bi-Directional Search - Mean Performance Improvement. Fig. 9: Entering Parking Lot with Bi-Directional Search - KPI Comparison (Expanded States (ES), Execution Time (ET), Forward Path Length (FPL), Reverse Path Length (RPL), Direction Changes (DC), Iteration Count (IC)). 
## VI Acknowledgments The project leading to this study has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 675999, ITEAM project. VIRTUAL VEHICLE Research Center is funded within the COMET - Competence Centers for Excellent Technologies - programme by the Austrian Federal Ministry for Transport, Innovation and Technology (BMVIT), the Federal Ministry of Science, Research and Economy (BMWFW), the Austrian Research Promotion Agency (FFG), the province of Styria and the Styrian Business Promotion Agency (SFG). The COMET programme is administrated by FFG.
2305.00799
How to address monotonicity for model risk management?
In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although there have been numerous studies on individual monotonicity, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. As a means of achieving monotonicity while maintaining transparency, we propose the monotonic groves of neural additive models. As a result of empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair.
Dangxing Chen, Weicheng Ye
2023-04-28T04:21:02Z
http://arxiv.org/abs/2305.00799v2
# How to address monotonicity for model risk management? ###### Abstract In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although there have been numerous studies on individual monotonicity, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. As a means of achieving monotonicity while maintaining transparency, we propose the monotonic groves of neural additive models. As a result of empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair. Machine Learning, ICML ## 1 Introduction There has been growing public concern over the misuse of artificial intelligence models in the absence of regulations, despite the success of artificial intelligence (AI) and machine learning (ML) in many fields (Radford et al., 2019; He et al., 2016; Chen and Guestrin, 2016). The European Commission (EC) has proposed the Artificial Intelligence Act (AIA) (EU2, 2021), which represents a significant first step toward filling the regulatory void. Regulations regarding artificial intelligence should consider transparency, accountability, and fairness (Carlo et al., 2021; OCC, 2021). Many efforts have been made to develop transparent ML models (Agarwal et al., 2021; Yang et al., 2021; Tsang et al., 2020; Hastie, 2017; Caruana et al., 2015; Lou et al., 2013). A transparent model facilitates the explanation of how it makes decisions, therefore allowing us to easily verify conceptual soundness and fairness. Nevertheless, conceptual soundness and fairness are not necessarily guaranteed for ML models, even if they are transparent. Our focus in this paper is on monotonicity, one of the most important indicators. In recent years, monotonic machine learning models have received extensive research attention (Yanagisawa et al., 2022; Liu et al., 2020; Milani Fard et al., 2016; You et al., 2017). These studies have led to a more reasonable and fair approach to ML. The majority of papers, however, focus on individual monotonicity, that is, on the fact that a model is monotonic with a particular feature. It was only recently pointed out that individual monotonicity is insufficient to summarize all relevant information (Chen and Ye, 2022; Gupta et al., 2020). It is also important to consider pairwise monotonicity, a monotonicity that considers monotonicity between different features. Furthermore, most of these models are not necessarily transparent. In this paper, pairwise monotonicity is explored in more detail, particularly in the context of transparent machine learning models. We divide pairwise monotonicity into two types: the pairwise monotonicity introduced in (Chen and Ye, 2022) is classified as weak pairwise monotonicity, and monotonic dominance discussed in (Gupta et al., 2020) is classified as strong pairwise monotonicity. Time and severity are the two most common causes of pairwise monotonicity. In terms of time, recent information should often be considered more important than older information. For example, in credit scoring, if there is one past due, the credit score should be lower if the past due occurred recently. 
It is important to take into account such pairwise monotonicity in order to give people the opportunity to improve. Fairness implies that all individuals should have the opportunity to succeed based on their individual merits, regardless of their past behavior. In terms of severity, some events are intrinsically more severe than others due to the nature of justice. A lefomy, for example, is more serious than a misdemeanor in criminal justice. It is important to maintain pairwise monotonicity as justice is an important component of fairness and a good society should have a system of reward and punishment that is fair. Furthermore, weak and strong pairwise monotonicity are distinguished based on whether two features can only be compared at the same magnitude. Strong pairwise monotonicity occurs when two features can be compared at any level. Justice usually dictates the making of such comparisons. Pairwise monotonicity is analyzed and its impact on statistical interactions is discussed. The traditional way to check additive separability should incorporate monotonicity constraints and features with strong pairwise monotonicity and diminishing marginal effects should not be separated, even if data indicate otherwise. A new class of monotonic neural additive models (MGNAMs) is presented to incorporate three types of monotonicity into transparent neural networks. We demonstrate empirically that pairwise monotonicities frequently occur in a wide range of fields, including finance, criminology, and healthcare. Overall, MGNAMs provide a transparent, accountable, and fair framework. ## 2 Monotonicity For problem setup, assume we have \(\mathcal{D}\times\mathcal{Y}\), where \(\mathcal{D}\) is the dataset with \(n\) samples and \(m\) features and \(\mathcal{Y}\) is the corresponding numerical values in regression and labels in classification. We assume the data-generating process \[y=f(\mathbf{x})+\epsilon \tag{1}\] for regression problems and \[y|\mathbf{x}=\text{Bernoulli}(f(\mathbf{x})) \tag{2}\] for binary classification problems. For simplicity, we assume \(\mathbf{x}\in\mathbb{R}^{m}\). Then ML methods are applied to approximate \(f\). ### Individual monotonicity Throughout the paper, without loss of generality, we focus on the monotonic increasing functions. Suppose \(\boldsymbol{\alpha}\) is the list of all individual monotonic features and \(\boldsymbol{\alpha}\) its complement, then the input \(\mathbf{x}\) can be partitioned into \(\mathbf{x}=(\mathbf{x}_{\boldsymbol{\alpha}},\mathbf{x}_{\boldsymbol{- \alpha}})\). Then we have the following definition. **Definition 2.1**.: We say \(f\) is **individually monotonic** with respect to \(\mathbf{x}_{\boldsymbol{\alpha}}\) if \[f(\mathbf{x}_{\boldsymbol{\alpha}},\mathbf{x}_{\boldsymbol{- \alpha}})\leq f(\mathbf{x}_{\boldsymbol{\alpha}}^{\prime},\mathbf{x}_{\boldsymbol {-\alpha}}),\] \[\mathbf{x}_{\boldsymbol{\alpha}}\leq\mathbf{x}_{\boldsymbol{ \alpha}}^{\prime},\forall\mathbf{x}_{\boldsymbol{\alpha}},\mathbf{x}_{ \boldsymbol{\alpha}}^{\prime},\mathbf{x}_{\boldsymbol{-\alpha}}, \tag{3}\] where \(\mathbf{x}_{\boldsymbol{\alpha}}\leq\mathbf{x}_{\boldsymbol{\alpha}}^{\prime}\) denotes the inequality for all entries, i.e., \(x_{\alpha_{i}}\leq x_{\alpha_{1}}^{\prime},\forall i\). Here is an example of individual monotonicity. _Example 2.2_.: In credit scoring, the probability of default should increase as the number of past due increases. 
For a differentiable function \(f\), individual monotonicity with respect to \(\mathbf{x}_{\boldsymbol{\alpha}}\) can be verified if \[\min_{\mathbf{x},i}\frac{\partial f(\mathbf{x})}{\partial x_{\alpha_{i}}}\geq 0. \tag{4}\] ### Pairwise monotonicity There are some features that are intrinsically more important than others in practice. Analog to (3), we partition \(\mathbf{x}=(x_{\beta},x_{\gamma},\mathbf{x}_{\boldsymbol{-}})\). Without loss of generality, we assume \(x_{\beta}\) is more important than \(x_{\gamma}\). As a result of multiple features encountering pairwise monotonicity, we record them in two lists \(\mathbf{u}\) and \(\mathbf{v}\) such that \(u_{i}\) is more important than \(v_{i}\). Lastly, we require all features with pairwise monotonicity also satisfy individual monotonicity. #### 2.2.1 Weak pairwise monotonicity We classify the pairwise monotonicity introduced in (Chen & Ye, 2022) as the weak pairwise monotonicity. The definition is given as follows. **Definition 2.3**.: We say \(f\) is **weakly monotonic** with respect to \(x_{\beta}\) over \(x_{\gamma}\) if \[f(x_{\beta},x_{\gamma}+c,\mathbf{x}_{\boldsymbol{-}})\leq f(x_{ \beta}+c,x_{\gamma},\mathbf{x}_{\boldsymbol{-}}),\] \[\forall x_{\beta},x_{\gamma}\text{ s.t. }x_{\beta}=x_{\gamma}, \forall\mathbf{x}_{\boldsymbol{-}},\forall c\in\mathbb{R}^{+}. \tag{5}\] We give an example of weak pairwise monotonicity below. _Example 2.4_.: Functions should be weakly monotonic with respect to features containing current information over features containing past information. Following Example 2.2, let \(x_{\beta}\) and \(x_{\gamma}\) count the number of past dues within two years and two years ago, then the probability of default is weakly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\). Such monotonicity is considered weak due to the condition of \(x_{\beta}=x_{\gamma}\). Using this condition ensures that the effects of features on the function are compared at the same magnitude, and can therefore be viewed as a more general definition. Suppose \(f\) is differentiable and is weakly monotonic with respect to \(u_{i}\) over \(v_{i}\) for all i in lists \(\mathbf{u}\) and \(\mathbf{v}\), then the weak pairwise monotonicity can be verified as \[\min_{\widetilde{\mathbf{x}},i}\left(\frac{\partial f}{\partial x_{u_{i}}}( \widetilde{\mathbf{x}})-\frac{\partial f}{\partial x_{v_{i}}}(\widetilde{ \mathbf{x}})\right)\geq 0. \tag{6}\] where \(\widetilde{x}_{u_{i}}=\widetilde{x}_{v_{i}}\) in \(\widetilde{\mathbf{x}}\). #### 2.2.2 Strong pairwise monotonicity In addition to the weak pairwise monotonicity, there exists a stronger condition of pairwise monotonicity. We classify the monotonic dominance introduced in (Gupta et al., 2020) as the strong pairwise monotonicity. **Definition 2.5**.: We say \(f\) is **strongly monotonic** with respect to \(x_{\beta}\) over \(x_{\gamma}\) if \[f(x_{\beta},x_{\gamma}+c,\mathbf{x}_{\boldsymbol{-}})\leq f(x_{ \beta}+c,x_{\gamma},\mathbf{x}_{\boldsymbol{-}}),\] \[\forall x_{\beta},x_{\gamma},\mathbf{x}_{\boldsymbol{-}},\forall c \in\mathbb{R}^{+}. \tag{7}\] The difference between strong/weak monotonicity is whether the condition \(x_{\beta}=x_{\gamma}\) is needed. Strong monotonicity implies the impacts of increments of some features are more important than others at any point. Note that the features in Example 2.4 are only weakly pairwise monotonic, not strongly pairwise monotonic. Adding more past does to the credit score will have a different impact based on the number of past dues. 
We provide an example of strong pairwise monotonicity below.

_Example 2.6_.: In criminology, an additional felony is always considered more serious than an additional misdemeanor. Therefore, the probability of recidivism should be strongly monotonic with respect to felonies over misdemeanors.

Clearly, we have the following lemma.

**Lemma 2.7**.: _If \(f\) is strongly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\), then \(f\) is also weakly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\)._

For a differentiable function \(f\), suppose \(f\) is strongly monotonic with respect to \(u_{i}\) over \(v_{i}\) for all \(i\) in the lists \(\mathbf{u}\) and \(\mathbf{v}\); then strong pairwise monotonicity can be verified as

\[\min_{\mathbf{x},i}\left(\frac{\partial f}{\partial x_{u_{i}}}(\mathbf{x})-\frac{\partial f}{\partial x_{v_{i}}}(\mathbf{x})\right)\geq 0. \tag{8}\]

Strong pairwise monotonicity is transitive, as stated in the following lemma; the proof is provided in Appendix A.1.

**Lemma 2.8**.: _If \(f\) is strongly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\) and \(x_{\gamma}\) over \(x_{\delta}\), then \(f\) is strongly monotonic with respect to \(x_{\beta}\) over \(x_{\delta}\)._

## 3 Statistical Interactions

The study of transparent machine learning models has become increasingly popular in order to improve explanation and compliance with regulatory requirements. As a general rule, to maintain transparency, models should avoid interactions between features when such interactions do not exist. One popular class of transparent models is generalized additive models (GAMs) (Hastie, 2017) of the form

\[f(\mathbf{x})=\alpha+\sum_{p=1}^{m}f_{p}(x_{p}). \tag{9}\]

GAMs are transparent in that statistical interactions are not included. (Agarwal et al., 2021; Caruana et al., 2015) have shown that combining GAMs with ML models achieves high accuracy on many datasets. In this section, we discuss whether the three types of monotonicity can be incorporated into GAMs.

### Individual and weak pairwise monotonicity for GAMs

In GAMs, individual and weak pairwise monotonicity can be easily enforced. Assume that \(f\) follows the GAM form (9) and is differentiable. If \(f\) is individually monotonic with respect to \(x_{\alpha}\), then we need

\[f^{\prime}_{\alpha}(x)\geq 0,\;\forall x\in\mathbb{R}. \tag{10}\]

Similarly, if \(f\) is weakly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\), then weak pairwise monotonicity requires that

\[f^{\prime}_{\beta}(x)\geq f^{\prime}_{\gamma}(x),\;\forall x\in\mathbb{R}. \tag{11}\]

Constraints such as these can be easily implemented (Chen & Ye, 2022). Furthermore, without statistical interactions, weak pairwise monotonicity is also transitive, as illustrated in the following lemma, with proof in Appendix A.1.

**Lemma 3.1**.: _If \(f\) follows the GAM (9) and is weakly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\) and \(x_{\gamma}\) over \(x_{\delta}\), then \(f\) is weakly monotonic with respect to \(x_{\beta}\) over \(x_{\delta}\)._

### Additive separability

Statistical interactions can be determined by checking additive separability. For simplicity, suppose there are two groups: \(\mathbf{x}\) can be split into two components \(\mathbf{x}_{U}\) and \(\mathbf{x}_{V}\), with \(U\cup V=D\) and \(U\cap V=\emptyset\), where \(D=\{1,\ldots,m\}\). Extending this to multiple groups is straightforward.
**Definition 3.2**.: We say a function \(f\) with \(D\) is strictly additively separable for \(U\) and \(V\) if

\[f(\mathbf{x})=g(\mathbf{x}_{U})+h(\mathbf{x}_{V}) \tag{12}\]

for some functions \(g\) and \(h\), \(U\cup V=D\), and \(U\cap V=\emptyset\).

Recently, statistical interactions have been studied extensively in the literature (Sorokina et al., 2008; Tsang et al., 2018, 2020). Roughly speaking, we wish to know whether there are interactions between groups \(U\) and \(V\). As the name implies, the conclusion is often drawn according to whether such interactions are statistically significant. There are many different rules to check statistical significance; as a simple example, we might consider a threshold \(\epsilon\) and check whether the accuracy deteriorates if no interactions are assumed.

**Verify additive separability:**

\[|\text{Acc}(f(\mathbf{x}))-\text{Acc}(g(\mathbf{x}_{U})+h(\mathbf{x}_{V}))|<\epsilon. \tag{13}\]

If the criterion is satisfied, it seems reasonable to conclude that there are no interactions between \(U\) and \(V\). When it comes to GAMs, if a GAM achieves accuracy similar to the black-box ML model, we may conclude that no interaction is necessary.

### Additive separability in the presence of monotonicity

In contexts where monotonicity is required, we should add monotonicity to the requirement of additive separability. That motivates us to modify the rule in Equation (13).

**Verify additive separability with monotonicity:**

\[|\text{Acc}(f(\mathbf{x}))-\text{Acc}(g(\mathbf{x}_{U})+h(\mathbf{x}_{V}))|<\epsilon, \tag{14}\]

where \(f\) and \(g+h\) satisfy the required monotonicity.

For statistical interactions with monotonicity, the monotonicity constraints on \(g+h\) are essential, since we may not have sufficient data for statistically significant results. Even so, neglecting such a statistical interaction may have catastrophic consequences. To illustrate the idea, consider the following example from credit scoring.

_Example 3.3_.: Suppose \(\mathbf{x}=(x_{\beta},x_{\gamma})\), where \(x_{\beta}\) counts the number of past dues of more than 60 days and \(x_{\gamma}\) counts the number of past dues between 30 and 59 days. Assume \(f\) calculates the probability of default. Clearly, \(f\) should be strongly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\). For simplicity, consider the values of \(f\) in the region where \(x_{\beta}+x_{\gamma}\leq 2\). Suppose the true function \(f\) and an additive approximation \(\widetilde{f}=f_{1}+f_{2}\) are given in Table 1. If there are no data for \(\mathbf{x}=(1,1)\), then \(\widetilde{f}\) exactly fits \(f\) on all training data. According to the criterion (13), \(x_{\beta}\) and \(x_{\gamma}\) can be well separated. However, \(\widetilde{f}\) does not have strong pairwise monotonicity, and \(\widetilde{f}(1,1)>\widetilde{f}(2,0)\) causes algorithmic unfairness. Furthermore, such rules could encourage people with \(\mathbf{x}=(1,1)\) to wait an additional month before paying back, changing to \(\mathbf{x}=(2,0)\), in order to obtain a lower predicted probability of default and therefore a higher credit score. Even worse, ML models might not recognize this from data in the long run, as people would intentionally avoid the state \(\mathbf{x}=(1,1)\). Data do not reveal such a problem, so it must be considered in advance.
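The violation in Example 3.3 can be reproduced in a few lines. The sketch below hardcodes the additive components implied by the training data in Table 1 (\(f_{1}\) from the \(x_{\gamma}=0\) column, \(f_{2}\) from the \(x_{\beta}=0\) row) and checks the strong pairwise monotonicity requirement.

```python
# Additive components fitted exactly to the observed cells of Table 1;
# there is no training observation at (x_beta, x_gamma) = (1, 1).
f1 = {0: 0.0, 1: 0.3, 2: 0.4}   # contribution of x_beta (past dues > 60 days)
f2 = {0: 0.0, 1: 0.2, 2: 0.3}   # contribution of x_gamma (past dues 30-59 days)

def f_tilde(x_beta, x_gamma):
    return f1[x_beta] + f2[x_gamma]

# Strong pairwise monotonicity requires f_tilde(2, 0) >= f_tilde(1, 1):
# shifting one count from the milder to the more severe feature must not
# lower the predicted risk. The additive fit violates this:
assert f_tilde(1, 1) > f_tilde(2, 0)   # 0.5 > 0.4, a violation
```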
### In the presence of strong pairwise monotonicity

We argue that there exists a common situation in which features with strong pairwise monotonicity cannot be separated, except in the trivial case. Consider the following proposition, whose proof is in Appendix A.1.

**Proposition 3.4**.: _Suppose \(f\) takes the GAM form (9), \(f\) is differentiable, individually monotonic with respect to \(x_{\beta}\) and \(x_{\gamma}\), and strongly monotonic with respect to \(x_{\beta}\) over \(x_{\gamma}\). If there exists \(x^{*}\) such that \(f^{\prime}_{\beta}(x^{*})=0\), then \(f_{\gamma}(x)=0,\forall x\)._

According to the proposition, under such additive forms, \(f_{\gamma}=0\) must hold, which can be inconsistent with reality. Sadly, such phenomena are common in practice, and one of the most common causes is diminishing marginal effects. We provide the definition below.

**Definition 3.5**.: Suppose \(\mathbf{x}=(x_{\alpha},\mathbf{x}_{\neg\alpha})\). We say a differentiable function \(f\) has a **diminishing marginal effect (DME)** with respect to \(x_{\alpha}\) if the following hold:

1. \(\frac{\partial}{\partial x_{\alpha}}f(x_{\alpha},\mathbf{x}_{\neg\alpha})>0\);
2. \(\frac{\partial^{2}}{\partial x_{\alpha}^{2}}f(x_{\alpha},\mathbf{x}_{\neg\alpha})<0\);
3. \(\lim_{x_{\alpha}\rightarrow\infty}\frac{\partial}{\partial x_{\alpha}}f(x_{\alpha},\mathbf{x}_{\neg\alpha})=0\).

As a matter of fact, DMEs are quite common in practice. For example, the Cobb-Douglas utility function, \(u(x,y)=x^{a}y^{1-a}\) with \(0<a<1\), is commonly used to illustrate diminishing marginal utility in economics. Proposition 3.4 suggests that DMEs may prevent us from separating features with strong pairwise monotonicity. Features with strong pairwise monotonicity that exhibit DME patterns must be assumed to be non-separable from the start. Therefore, GAMs are insufficient to incorporate strong pairwise monotonicity in this case.

### Implications on binary features

There is an exception to the previous analysis: when features are binary, DMEs do not apply. In this case, we have the following lemma, whose proof is left to Appendix A.1.

**Lemma 3.6**.: _For binary features, weak pairwise monotonicity coincides with strong pairwise monotonicity._

In this case, features can still be additively separable in the linear form. For simplicity, consider linear regression of the following form:

\[f(\mathbf{x})=\alpha+\sum_{p=1}^{m}\beta_{p}x_{p}.\]

Suppose \(f\) is pairwise monotonic with respect to \(x_{\gamma}\) over \(x_{\delta}\); then we require \(\beta_{\gamma}>\beta_{\delta}\), and additive separability can be achieved.

\begin{table} \begin{tabular}{c c c c} \hline 2 & 0.4 & & \\ \hline 1 & 0.3 & **0.35** & \\ \hline 0 & 0 & 0.2 & 0.3 \\ \hline \(x_{\beta}/x_{\gamma}\) & 0 & 1 & 2 \\ \hline \end{tabular} \end{table} Table 1: Comparison between \(f\) with strong pairwise monotonic features and an additive approximation \(\widetilde{f}=f_{1}+f_{2}\). \(\widetilde{f}\) violates strong pairwise monotonicity at \(\mathbf{x}=(1,1)\).

## 4 Monotonic groves of neural additive models

There has been an increasing demand for transparent models recently. In this direction, neural additive models (NAMs) (Agarwal et al., 2021) and their monotonic version (Chen and Ye, 2022) provide the most transparent neural networks by avoiding statistical interactions, and they have been very successful. NAMs assume that each \(f_{p}\) in Equation (9) is parametrized by neural networks (NNs).
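For concreteness, a NAM of the form (9) can be sketched in PyTorch as follows; the subnet width and depth here are illustrative choices, not the configuration used by the cited works.

```python
import torch
import torch.nn as nn

class NAM(nn.Module):
    """f(x) = alpha + sum_p f_p(x_p), each shape function f_p a small MLP."""
    def __init__(self, num_features, hidden=32):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1))
        self.subnets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(1, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))
            for _ in range(num_features))

    def forward(self, x):                       # x: (batch, num_features)
        parts = [net(x[:, p:p + 1]) for p, net in enumerate(self.subnets)]
        return self.alpha + torch.cat(parts, dim=1).sum(dim=1, keepdim=True)
```

Because each \(f_{p}\) sees only its own feature, the model is transparent by construction: plotting each subnet gives the full explanation of a prediction.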
Despite their success, NAMs cannot handle strong pairwise monotonicity, as discussed above. We aim to develop a new model that maintains transparency to the greatest extent possible, in the manner of NAMs, while also incorporating strong pairwise monotonicity. Thus, we consider a more general form, namely the groves of neural additive models (GNAMs), similar to (Sorokina et al., 2008):

\[f(\mathbf{x})=\alpha+\sum_{p:p\in P}f_{p}(x_{p})+\sum_{q:q\in Q}f_{q}(\mathbf{x}_{q}), \tag{15}\]

where \(f_{p}\) and \(f_{q}\) are parametrized by NNs. There exist five types of features:

* Nonmonotonic features
* Features with only individual monotonicity
* Features with only weak pairwise monotonicity
* Features with only strong pairwise monotonicity
* Features with both strong and weak pairwise monotonicity

The first three types of features are trained by 1-dimensional functions \(f_{p}\), just like monotonic NAMs (MNAMs) (Chen and Ye, 2022). Different from MNAMs, \(\mathbf{x}_{q}\) can be higher-dimensional. For the last two types, when strong pairwise monotonicity is involved, the features with pairwise monotonicity should be grouped together in \(q\). Note that we group features with both strong and weak pairwise monotonicity to avoid unfair comparisons. Detailed explanations can be found in Appendix A.2.

Regularized algorithms are used to enforce monotonicity. In the GNAM architecture, motivated by conditions (4), (6), and (8), we consider the optimization problem

\[\min_{\mathbf{\Theta}}\ell(\mathbf{\Theta})+\lambda_{1}h_{1}(\mathbf{\Theta})+\lambda_{2}h_{2}(\mathbf{\Theta})+\lambda_{3}h_{3}(\mathbf{\Theta}), \tag{16}\]

where \(\ell(\mathbf{\Theta})\) is the mean-squared error for regression and the negative log-likelihood for classification, and

* Individual monotonicity: suppose \(\mathbf{\alpha}\) is the list of individually monotonic features; then \[h_{1}(\mathbf{\Theta})=\sum_{\alpha\in\mathbf{\alpha}}\int_{\mathbb{R}^{m}}\max\left(0,-\frac{\partial f(\mathbf{x};\mathbf{\Theta})}{\partial x_{\alpha}}\right)^{2}\,d\mathbf{x}.\]
* Weak pairwise monotonicity: suppose \(\mathbf{u}\) and \(\mathbf{v}\) are weak pairwise monotonic lists such that \(f\) is weakly monotonic with respect to \(u_{i}\) over \(v_{i}\); then \[h_{2}(\mathbf{\Theta})=\sum_{i=1}^{|\mathbf{u}|}\int_{\mathbb{R}^{m-1}}\max\left(0,\Delta f(\widetilde{\mathbf{x}}_{i},\mathbf{\Theta})\right)^{2}\,d\widetilde{\mathbf{x}}_{i},\] where \[\Delta f(\widetilde{\mathbf{x}}_{i},\mathbf{\Theta})=-\frac{\partial f(\widetilde{\mathbf{x}}_{i};\mathbf{\Theta})}{\partial x_{u_{i}}}+\frac{\partial f(\widetilde{\mathbf{x}}_{i};\mathbf{\Theta})}{\partial x_{v_{i}}}\] and \(x_{u_{i}}=x_{v_{i}}\) in \(\widetilde{\mathbf{x}}_{i}\).
* Strong pairwise monotonicity: suppose \(\mathbf{y}\) and \(\mathbf{z}\) are strong pairwise monotonic lists such that \(f\) is strongly monotonic with respect to \(y_{i}\) over \(z_{i}\); then \[h_{3}(\mathbf{\Theta})=\sum_{i=1}^{|\mathbf{y}|}\int_{\mathbb{R}^{m}}\max\left(0,\Delta f_{i}(\mathbf{x},\mathbf{\Theta})\right)^{2}\,d\mathbf{x},\] where \[\Delta f_{i}(\mathbf{x},\mathbf{\Theta})=-\frac{\partial f(\mathbf{x};\mathbf{\Theta})}{\partial x_{y_{i}}}+\frac{\partial f(\mathbf{x};\mathbf{\Theta})}{\partial x_{z_{i}}}.\]

In the GNAM architecture (15), computational dimensions can be reduced; for example, when calculating partial derivatives for features in the group \(q\), it is sufficient to evaluate \(\partial f_{q}\) instead of \(\partial f\).
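A minimal sketch of the penalized objective (16) and the two-step procedure of Algorithm 1 is given below, anticipating the discretization described next: the integrals are replaced by averages over a batch of probe points. The helper names, the single monotonic pair of each kind, and the multiplier schedule are illustrative simplifications, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def grad_wrt_input(model, x):
    x = x.clone().requires_grad_(True)
    # create_graph=True keeps the penalty differentiable for backprop.
    return torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]

def h1(model, probes, alpha_idx):
    # Squared hinge on negative partials of the monotonic features, cf. (4).
    g = grad_wrt_input(model, probes)
    return F.relu(-g[:, alpha_idx]).pow(2).mean()

def h2(model, probes, u, v):
    # Weak pairwise penalty: evaluated on probes with x_u tied to x_v, cf. (6).
    tied = probes.clone()
    tied[:, v] = tied[:, u]
    g = grad_wrt_input(model, tied)
    return F.relu(g[:, v] - g[:, u]).pow(2).mean()

def h3(model, probes, y, z):
    # Strong pairwise penalty: compared at every probe point, no tying, cf. (8).
    g = grad_wrt_input(model, probes)
    return F.relu(g[:, z] - g[:, y]).pow(2).mean()

def train_mgnam(model, loss_fn, loader, probes, alpha_idx, u, v, y, z,
                grow=10.0, epochs=10, rounds=20):
    lam = [0.0, 0.0, 0.0]                      # Algorithm 1 initialization
    for _ in range(rounds):
        opt = torch.optim.Adam(model.parameters())
        for _ in range(epochs):
            for xb, yb in loader:
                obj = (loss_fn(model(xb), yb)
                       + lam[0] * h1(model, probes, alpha_idx)
                       + lam[1] * h2(model, probes, u, v)
                       + lam[2] * h3(model, probes, y, z))
                opt.zero_grad(); obj.backward(); opt.step()
        pens = [h1(model, probes, alpha_idx).item(),
                h2(model, probes, u, v).item(),
                h3(model, probes, y, z).item()]
        if max(pens) <= 0.0:                   # all penalty terms vanished
            return model
        lam = [l + grow if p > 0 else l for l, p in zip(lam, pens)]
    return model
```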
In practice, we replace the integrals with equispaced discrete approximations. In the optimization procedure, we also replace \(\max(0,\cdot)\) with \(\max(\epsilon,\cdot)\). We gradually increase \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) until the penalty terms vanish. The two-step procedure is summarized in Algorithm 1. We refer to the GNAM that satisfies all required monotonicity constraints (3), (5), and (7) as the monotonic groves of neural additive model (MGNAM).

```
Initialization: \(\lambda_{1}=\lambda_{2}=\lambda_{3}=0\), the architecture of the GNAM (\(P\) and \(Q\))
Train a GNAM by (16)
while \(\min(h_{1},h_{2},h_{3})>0\) do
    Increase \(\lambda_{i}\) if \(h_{i}>0\)
    Retrain the GNAM by (16)
end while
```
**Algorithm 1** Monotonic groves of neural additive model

## 5 Empirical examples

This section evaluates the performance of models on a variety of datasets from different fields, including finance, criminology, and health care. We compare fully-connected neural networks (FCNNs), neural additive models (NAMs), monotonic neural additive models (MNAMs), and monotonic groves of neural additive models (MGNAMs). For MNAMs, strong pairwise monotonicity is replaced by weak pairwise monotonicity. We use FCNNs to check the accuracy of black-box ML models and NAMs/MNAMs for visualizations. We do not consider other models here, as a general comparison of accuracy is not our focus; conceptual soundness and fairness are. More details of the datasets, models, and experiment setup are provided in Appendix A.3.

### Finance - Credit scoring

In credit scoring, statistical models are used to assess an individual's creditworthiness. A popular dataset is the Kaggle credit score dataset 1. Three delinquency features are included in this dataset: 30-59 days, 60-89 days, and 90+ days. To demonstrate the strong pairwise monotonicity in this dataset, we focus on these three features. Without loss of generality, we denote them as \(x_{1}\), \(x_{2}\), and \(x_{3}\). An additional past due of more than 90 days should be taken more seriously than one of 60-89 days, which in turn should be taken more seriously than one of 30-59 days. We therefore impose strong pairwise monotonicity in this order. If such strong pairwise monotonicity is violated, customers with longer past dues could receive a higher credit score, causing algorithmic unfairness. In addition, customers with shorter past dues may wish to delay their payments in order to increase their credit score.

Footnote 1: [https://www.kaggle.com/c/GiveMeSomeCredit/overview](https://www.kaggle.com/c/GiveMeSomeCredit/overview)

A summary of the model performance is provided in Table 4. There is no significant difference in accuracy between the different methods, indicating that transparent neural networks are sufficient for this dataset. Next, we evaluate conceptual soundness and fairness. For simplicity, we focus on the number of past dues in each period that are less than or equal to two, that is, \(0\leq x_{1},x_{2},x_{3}\leq 2\). We begin by examining the result of the NAM, since it is straightforward to visualize. A comparison of the associated functions is provided in Figure 1. Pairwise monotonicity is clearly violated when there is more than one past due: for example, the feature with 30-59 days past due becomes more important than the feature with 60-89 days past due. Then, we evaluate the MNAM with the function values in Table 5.
Both individual monotonicity and weak pairwise monotonicity are satisfied. But when statistical interactions are involved for large \(\mathbf{x}\), the strong pairwise monotonicity is violated. As an example, consider an applicant who has three past dues, with \(x_{3}=1\). If \((x_{1},x_{2},x_{3})=(0,2,1)\), then it should be punished more severely than \((x_{1},x_{2},x_{3})=(1,1,1)\); however, according to the MNAM, \(f_{1}(0)+f_{2}(2)+f_{3}(1)=3.4\), which is less than \(f_{1}(1)+f_{2}(1)+f_{3}(1)=3.9\). Therefore, based on the MNAM, a person with (0,1,1) who did not pay for one month and received (1,1,1) should wait and pay one payment in an additional month to achieve (0,2,1) for a higher credit score (lower probability of default). Clearly, fairness has been violated in this situation.

We then examine the result of the MGNAM. We are interested in knowing whether the delinquency features can be separated additively. We plot the marginal probability of default as a function of \(x_{1}\)-\(x_{3}\) in Figure 2. The presence of DMEs is evident. By Proposition 3.4, we cannot separate these three features additively and therefore group them together. The values of \(f_{q}(x_{1},x_{2},x_{3})\) calculated by the MGNAM are shown in Table 6. The table gives model users confidence by verifying that all monotonicity is achieved. It should be emphasized that without satisfying monotonicity, even the most accurate ML model will not be accepted. Furthermore, the transparent nature of the MGNAM makes it easier to verify conceptual soundness and fairness, which are difficult to achieve with black-box machine learning models.

### Criminal justice - COMPAS

The COMPAS scoring system was developed to predict recidivism risk and has been scrutinized for its racial bias (Angwin et al., 2016; Dressel and Farid, 2018; Tan et al., 2018). In 2016, ProPublica published recidivism data for defendants in Broward County, Florida (Pro, 2016). We focus on the simplified cleaned dataset provided in (Dressel and Farid, 2018). Race and gender unfairness have been extensively studied in the past (Foulds et al., 2020; Kearns et al., 2019, 2018; Hardt et al., 2016). Our focus is on the potential unfairness associated with types of offenses. Specifically, a felony is considered more serious than a misdemeanor. Without loss of generality, assume \(x_{1}\) counts the number of past misdemeanors and \(x_{2}\) counts the number of past felonies. Accordingly, we require that the probability of recidivism be strongly monotonic with respect to \(x_{2}\) over \(x_{1}\). Criminals may consider turning a misdemeanor into a felony in the future if this strong pairwise monotonicity is violated.

Model performance is summarized in Table 7. The performance of all methods is similar. For this dataset, algorithmic fairness is therefore more important than accuracy. Next, we evaluate conceptual soundness and fairness. For simplicity's sake, we restrict ourselves to a maximum of three charges per type. Regarding the architecture of the MGNAM, the diminishing marginal effect is clearly observed for the felony in Figure 3; therefore, we should group the felony and misdemeanor together, based on Proposition 3.4. Since there are only two features in the group, function values are calculated and compared in Table 8. For small values of \(x_{1}\) and \(x_{2}\), the functions behave reasonably in the NAM. For larger values, it immediately violates pairwise monotonicity.
The individual monotonicity of \(x_{2}\) is violated when the value of \(x_{1}\) is fixed. Furthermore, the function contribution is only 0.37 when there are three past felonies (\(x_{2}=3\)), whereas the function value is 0.65 when there is one felony and one misdemeanor (\(x_{1}=x_{2}=1\)). Compared to the first case, the value is almost doubled, which is a serious violation.

Then, we evaluate the MNAM. Both individual monotonicity and weak pairwise monotonicity are satisfied. But when statistical interactions are involved for large \(\mathbf{x}\), the strong pairwise monotonicity is violated. Consider the example of \((x_{1},x_{2})=(0,2)\), which should be punished more severely than \((1,1)\). However, according to the MNAM, the value of the function at \((0,2)\) is \(0.37\), which is less than the value at \((1,1)\), \(0.50\). Consequently, a person who commits one felony and one misdemeanor will be punished more severely than a person who commits two felonies. There is a serious violation of the principle of fairness in this situation. Additionally, if someone with one felony commits another crime, he or she may choose a felony rather than a misdemeanor, leading to difficulties in society. In our model, this issue is avoided, since the value of the function at \((0,2)\) is \(0.54\), which is larger than \(0.53\) at \((1,1)\). There are many other similar examples of violations. In the absence of such strong pairwise monotonicity, the algorithm should not be used.

\begin{table} \begin{tabular}{c c c c} \hline \hline \(x_{3}=0\) & & & \\ \hline \(x_{1}/x_{2}\) & \(0\) & \(1\) & \(2\) \\ \hline \(0\) & \(0\) & \(1.7\) & \(2.3\) \\ \hline \(1\) & \(1.7\) & \(2.3\) & \(2.8\) \\ \hline \(2\) & \(2.3\) & \(2.8\) & \(3.2\) \\ \hline \(x_{3}=1\) & & & \\ \hline \(x_{1}/x_{2}\) & \(0\) & \(1\) & \(2\) \\ \hline \(0\) & \(2.2\) & \(2.7\) & \(3.2\) \\ \hline \(1\) & \(2.7\) & \(3.1\) & \(3.5\) \\ \hline \(2\) & \(3.1\) & \(3.5\) & \(3.7\) \\ \hline \(x_{3}=2\) & & & \\ \hline \(x_{1}/x_{2}\) & \(0\) & \(1\) & \(2\) \\ \hline \(0\) & \(3.1\) & \(3.4\) & \(3.7\) \\ \hline \(1\) & \(3.4\) & \(3.6\) & \(3.8\) \\ \hline \(2\) & \(3.6\) & \(3.8\) & \(3.9\) \\ \hline \hline \end{tabular} \end{table} Table 6: Function values for \(x_{1},x_{2},x_{3}\) by the MGNAM in the GMSC dataset. Monotonicity is preserved.

\begin{table} \begin{tabular}{c c c} \hline \hline Model/Metrics & Classification error & AUC \\ \hline FCNN & \(33.8\%\) & \(71.8\%\) \\ \hline NAM & \(34.1\%\) & \(71.8\%\) \\ \hline MNAM & \(33.5\%\) & \(71.7\%\) \\ \hline MGNAM & \(34.3\%\) & \(71.9\%\) \\ \hline \hline \end{tabular} \end{table} Table 7: Model performance of the COMPAS dataset. All ML models perform similarly.

Figure 2: Marginal probability of defaults with respect to \(x_{1}\)-\(x_{3}\) in the GMSC dataset. Diminishing marginal effects are observed.

### Healthcare - heart failure clinical records

This dataset (Ahmad et al., 2017; Chicco & Jurman, 2020) contains the medical records of 299 patients who had heart failure, collected during their follow-up period, where each patient profile has 13 clinical features. This study aims to predict the survival of patients suffering from heart failure. Conceptual soundness is a very important aspect of health datasets. With a limited dataset, machine learning models easily overfit, which can be mitigated by imposing constraints. If one needs to determine the priority of patients, fairness is also a very important factor.
For this dataset, we focus on four features: smoking, anemia, high blood pressure, and diabetes. Without loss of generality, we denote them as \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\). Anemia, high blood pressure, and diabetes are considered to be more serious health risks than smoking. Thus, \(f\) should be pairwise monotonic with respect to \(x_{2}\)-\(x_{4}\) over \(x_{1}\). Since they are all binary features, strong monotonicity is the same as weak monotonicity by Lemma 3.6.

A summary of the results is provided in Table 9. Since the NAM performs similarly to the FCNN and the features associated with pairwise monotonicity are only binary, we do not consider interactions, and the MGNAM coincides with the MNAM. The MGNAM also has a similar level of accuracy. Next, we evaluate conceptual soundness and fairness. For high blood pressure and diabetes in the NAM, both individual and pairwise monotonicity are violated, as shown in Figure 4 and Figure 5. According to the NAM, high blood pressure and diabetes are actually beneficial for survival; furthermore, smoking is more dangerous than both of them. This problem is avoided by the MGNAM.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{MGNAM} & & & \\ \hline \(x_{1}\)\(x_{2}\) & \(0\) & \(1\) & \(2\) & 3 \\ \hline 0 & 0 & 0.35 & 0.54 & 0.56 \\ \hline 1 & 0.21 & 0.53 & 0.56 & 0.56 \\ \hline 2 & 0.49 & 0.55 & 0.56 & 0.56 \\ \hline 3 & 0.55 & 0.56 & 0.56 & 0.56 \\ \hline \multicolumn{2}{c}{NAM} & & & \\ \hline \(x_{1}\)\(x_{2}\) & \(0\) & \(1\) & \(2\) & 3 \\ \hline 0 & 0 & 0.41 & 0.40 & **0.37** \\ \hline 1 & 0.24 & **0.65** & 0.65 & 0.62 \\ \hline 2 & 0.32 & 0.72 & **0.72** & **0.69** \\ \hline 3 & 0.33 & 0.74 & 0.73 & 0.70 \\ \hline \multicolumn{2}{c}{MNAM} & & & \\ \hline \(x_{1}\)\(x_{2}\) & \(0\) & \(1\) & \(2\) & 3 \\ \hline 0 & 0 & 0.33 & **0.37** & 0.37 \\ \hline 1 & 0.17 & **0.50** & 0.54 & 0.54 \\ \hline 2 & 0.19 & 0.53 & 0.57 & 0.57 \\ \hline 3 & 0.20 & 0.53 & 0.57 & 0.57 \\ \hline \end{tabular} \end{table} Table 8: Function values for \(x_{1},x_{2}\) by the MNAM and MGNAM of the COMPAS dataset. There are multiple violations of monotonicity for the NAM, for example, between \((2,2)\) and \((2,3)\), and between \((0,3)\) and \((1,1)\). Violations are also observed for the MNAM, for example, between \((0,2)\) and \((1,1)\). The MGNAM preserves monotonicity.

\begin{table} \begin{tabular}{c c c} \hline \hline Model/Metrics & Classification error & AUC \\ \hline FCNN & \(23.0\%\) & \(89.8\%\) \\ \hline NAM & \(18.9\%\) & \(89.8\%\) \\ \hline MGNAM & \(17.6\%\) & \(90.6\%\) \\ \hline \end{tabular} \end{table} Table 9: Model performance of the heart dataset. All ML models perform similarly.

Figure 3: Marginal probability of recidivism with respect to the number of felonies in the COMPAS dataset. The diminishing marginal effect is observed.

## 6 Related work

**Monotonic Models**: Most previous work (Yanagisawa et al., 2022; Liu et al., 2020; Milani Fard et al., 2016; You et al., 2017) focuses on individual monotonicity. Weak pairwise monotonicity is considered in (Chen & Ye, 2022), and strong pairwise monotonicity is considered in (Gupta et al., 2020).

**Transparent Models**: There is an enormous literature on designing transparent machine learning models. (Agarwal et al., 2021; Chen & Ye, 2022; Yang et al., 2021) start with transparent generalized additive models. Another direction specifies neural network models based on statistical interactions (Janizek et al., 2021; Tsang et al., 2018, 2020; Tsang et al., 2017).
However, these approaches have not yet considered all three types of monotonicity.

## 7 Conclusion

In this paper, we analyze three types of monotonicity and propose monotonic groves of neural additive models (MGNAMs) for transparency and monotonicity. There are many avenues for future work. First, the regularized algorithm relies on discretized integrals in the penalty functions to enforce monotonicity. It is possible to achieve high accuracy with continuous features by using a large number of points; however, certification is not yet available for the three types of monotonicity. Second, these integrals are appropriate in many applications where the dimensions of the pairwise monotonic features are small. Nevertheless, some contexts may involve a large collection of pairwise monotonic features. In the future, we plan to investigate fast algorithms for implementing pairwise monotonicity. Third, in the spirit of neural additive models, we keep MGNAM architectures as simple as possible to preserve the transparency of the models. There is, however, a possibility that some datasets will exhibit other interactions. The detection of statistical interactions in the presence of the three types of monotonicity will be studied in the future.
2310.13235
Auxiliary Features-Guided Super Resolution for Monte Carlo Rendering
This paper investigates super resolution to reduce the number of pixels to render and thus speed up Monte Carlo rendering algorithms. While great progress has been made to super resolution technologies, it is essentially an ill-posed problem and cannot recover high-frequency details in renderings. To address this problem, we exploit high-resolution auxiliary features to guide super resolution of low-resolution renderings. These high-resolution auxiliary features can be quickly rendered by a rendering engine and at the same time provide valuable high-frequency details to assist super resolution. To this end, we develop a cross-modality Transformer network that consists of an auxiliary feature branch and a low-resolution rendering branch. These two branches are designed to fuse high-resolution auxiliary features with the corresponding low-resolution rendering. Furthermore, we design residual densely-connected Swin Transformer groups to learn to extract representative features to enable high-quality super-resolution. Our experiments show that our auxiliary features-guided super-resolution method outperforms both super-resolution methods and Monte Carlo denoising methods in producing high-quality renderings.
Qiqi Hou, Feng Liu
2023-10-20T02:45:13Z
http://arxiv.org/abs/2310.13235v1
# Auxiliary Features-Guided Super Resolution for Monte Carlo Rendering

###### Abstract

This paper investigates super resolution to reduce the number of pixels to render and thus speed up Monte Carlo rendering algorithms. While great progress has been made to super resolution technologies, it is essentially an ill-posed problem and cannot recover high-frequency details in renderings. To address this problem, we exploit high-resolution auxiliary features to guide super resolution of low-resolution renderings. These high-resolution auxiliary features can be quickly rendered by a rendering engine and at the same time provide valuable high-frequency details to assist super resolution. To this end, we develop a cross-modality Transformer network that consists of an auxiliary feature branch and a low-resolution rendering branch. These two branches are designed to fuse high-resolution auxiliary features with the corresponding low-resolution rendering. Furthermore, we design residual densely-connected Swin Transformer groups to learn to extract representative features to enable high-quality super-resolution. Our experiments show that our auxiliary features-guided super-resolution method outperforms both super-resolution methods and Monte Carlo denoising methods in producing high-quality renderings.

Super resolution, Fast-to-compute auxiliary features, Transformer, Monte Carlo rendering

CCS Concepts: Computing methodologies → Ray tracing

## 1 Introduction

Monte Carlo rendering algorithms are now widely used to generate photorealistic computer graphics images for applications such as visual effects, video games, and computer animations. These algorithms generate a pixel's color by integrating over all the light paths arriving at a single point [1]. To render a high-quality image, a large number of rays need to be cast for each pixel, which makes Monte Carlo rendering a slow process.

A great amount of effort has been devoted to speeding up Monte Carlo rendering. The core idea is to reduce the number of rays for each pixel. For instance, numerous denoising algorithms are now available to reconstruct a high-quality image from a rendering produced at a low sampling rate. Such Monte Carlo denoising algorithms often use auxiliary features generated by a rendering algorithm to help denoise the noisy rendering result. The recent deep neural network-based denoising algorithms can now generate very high-quality images at a fairly low sampling rate [3, 17, 18, 19].

Monte Carlo rendering can also be sped up by reducing the number of pixels to render. For example, pixels from frames that have already been rendered can be warped to generate frames between existing frames to increase the frame rate [2] or to generate future frames to reduce latency [16]. Another approach is to only render one pixel for a block of neighboring pixels to further reduce the total number of pixels to render. This can be implemented by first rendering a low-resolution image and then applying super resolution to increase its resolution [20, 21]. As super resolution is a fundamentally ill-posed problem, it alone often cannot recover high-frequency details from only the low-resolution rendering. To address this problem, Hou _et al._ render a high-resolution rendering with a low sampling rate and use that together with the high-resolution auxiliary features to help super resolve the low-resolution rendering rendered at a high sampling rate.
While this method produces a high-quality result, it needs to render the high-resolution image at a low sampling rate, which still takes a considerable amount of time [21]. _Can we only use the fast-to-obtain high-resolution auxiliary features, without the high-resolution-low-sample rendering, to effectively assist super resolution of the corresponding low-resolution rendering?_ If so, we can further speed up Monte Carlo rendering. We are encouraged by the recent work on neural frame synthesis showing that fast-to-obtain auxiliary features of the target frames can greatly help interpolate or extrapolate the target frames [2, 16]. On the other hand, Hou _et al._ showed that using a wide range of auxiliary features and the high-resolution-low-sample rendering helps super resolution more than only using a subset of auxiliary features within their own deep neural network-based super resolution framework [21]. Therefore, if we only use a small number of fast-to-compute auxiliary features, we need a better super resolution method.

This paper presents a Cross-modality Residual Densely-connected Swin Transformer (XRDS) for super resolution of a Monte Carlo rendering guided by its auxiliary features. For the sake of speed, we only use two auxiliary features: albedo and normal. To effectively use these features, we design a super resolution network based on Swin Transformer, which has recently been shown to be powerful for a wide variety of computer vision tasks. Our Transformer network has two branches, one for the low-resolution rendering and the other for the auxiliary features. These two branches are designed to perform cross-modality fusion to effectively use auxiliary features to assist super resolution of the low-resolution rendering. While the auxiliary feature branch consists of convolutional blocks, the branch for the low-resolution rendering consists of a sequence of residual densely-connected Swin Transformer blocks to extract effective features. The features from the two branches are combined using a cross-modality fusion module and are finally used to generate the high-resolution high-quality rendering.

This paper contributes to Monte Carlo rendering as follows. First, we present the first super resolution approach to Monte Carlo rendering that only uses fast-to-compute high-resolution auxiliary features to enable high-quality upsampling of a low-resolution rendering. Second, we design a dedicated cross-modality Swin Transformer-based super resolution network that can learn to effectively combine high-resolution auxiliary features with the corresponding low-resolution rendering to generate the final high-resolution high-quality image. Third, our experiments show that our method outperforms super-resolution and denoising methods in producing high-quality renderings.

## 2 Related Work

This section briefly discusses work relevant to our paper, including Monte Carlo denoising, super resolution, and vision Transformers.

**Monte Carlo Denoising.** Monte Carlo rendering algorithms need numerous samples per pixel to generate a high-quality rendering [15, 18]. With insufficient samples, the rendering results suffer from noise. To address this problem, many Monte Carlo denoising methods have been developed to reconstruct high-quality renderings from only a small number of samples.
Traditional methods reconstruct renderings in a similar way to general image denoising methods, by designing specific denoising kernels based on image variance or geometric features, or by directly regressing the final result. More recent learning-based methods couple adaptive sampling with denoising: a network first estimates a per-pixel sampling map and then denoises the image generated using the sampling map to produce high-quality results. Zheng _et al._ proposed an ensemble denoising technique that learns to combine multiple denoisers together [14]. Yu _et al._ designed a transformer-based neural network for Monte Carlo denoising [13]. Their network consists of a multi-scale feature extractor and a self-attention module and achieved promising results. Unlike these denoising methods, our method explores an orthogonal approach that speeds up Monte Carlo rendering by reducing the number of pixels to render via super-resolution.

**Super resolution.** Super resolution is a classic problem in computer vision. It aims to reconstruct a high-resolution image from the low-resolution input.
Recently, the state of the art of super resolution research has been advanced significantly due to the use of deep neural networks.

## 3 Method
Different from the previous work [11], which leverages a wide range of auxiliary features, our method _only_ employs auxiliary features that can be computed very fast [1]: albedo and normal. On the one hand, although our method does not leverage the shading layers, albedo and normal provide a lot of high-frequency information, e.g., the texture of the material, which is essential for super-resolution. As we will show, they help improve the super-resolution results. On the other hand, albedo and normal can be computed quickly [1]. This not only reduces the rendering time but also enables us to render these high-resolution layers at a relatively higher sampling rate, which typically yields fewer artifacts, such as aliasing.

We design a cross-modality transformer network to effectively fuse two categories of visual input, namely the low-resolution rendering and its corresponding high-resolution auxiliary features, to recover visual details. Figure 2 shows the architecture of our network. It contains two parallel branches, one for the low-resolution rendering and the other for the corresponding high-resolution auxiliary features.

**Auxiliary feature branch.** The auxiliary feature branch takes auxiliary features as inputs, which provide essential high-frequency visual details. As discussed above, we select albedo and normal, which are relatively fast to acquire. Since this branch processes high-resolution input, we design a shallow architecture for the sake of memory and speed. As shown in Figure 2, we employ a convolutional layer and \(N=3\) residual blocks (RB) [10] in a sequence to get the features \(\{H_{i}\}_{i=0}^{N-1}\):

\[\begin{array}{rcl}H_{0}&=&f_{conv}^{A}(A),\\ H_{i}&=&f_{RB}^{i}(H_{i-1}),\qquad i=1,\cdots,N-1,\end{array} \tag{1}\]

where \(f_{conv}(\cdot)\) indicates the convolution operation and \(f_{RB}(\cdot)\) indicates the operation of a residual block. In our experiments, we set the number of channels to 32 for the auxiliary feature branch. We then obtain the downsampled features \(\{D_{i}\}_{i=0}^{N-1}\) with a group of deshuffle layers [11], which downscale the features while keeping the high-frequency information:

\[D_{i}=f_{DSF}^{i+1}(H_{i}),\quad i=0,\cdots,N-1, \tag{2}\]

where \(f_{DSF}(\cdot)\) indicates the deshuffle layer.

**Low resolution rendering branch.** Following recent works on image super resolution [1, 10, 11, 12], we first adopt a \(3\times 3\) convolutional layer with 64 channels to get the shallow feature from the low-resolution rendering \(I_{LR}\):

\[F_{0}=f_{conv}^{LR}(I_{LR}). \tag{3}\]

We feed the resulting feature \(F_{0}\) to a sequence of cross-modality residual densely-connected Swin Transformer groups (XDG):

\[F_{i}=f_{XDG}^{i}(F_{i-1},D_{i-1}),\quad i=1,\cdots,N, \tag{4}\]

where \(f_{XDG}(\cdot)\) indicates the XDG module and \(N\) indicates the number of XDGs. We choose \(N=3\) in our experiments. XDG is designed to fuse the auxiliary features \(D_{i-1}\) and the low-resolution rendering features \(F_{i-1}\). It consists of a cross-modality module (XM) and a sequence of residual densely-connected Swin Transformer blocks (RDST). Specifically, XM fuses the local information from the low-resolution rendering and the high-frequency information from the auxiliary features, while the RDST sequence learns more dedicated representations for super resolution from them.
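A minimal PyTorch sketch of the auxiliary feature branch (equations (1)-(2)) is given below. The residual block design follows common practice, the paper indexes a group of deshuffle layers while we use a single pixel-unshuffle scale for simplicity, and the layer configuration is our assumption rather than the authors' released code.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class AuxiliaryBranch(nn.Module):
    """Computes features H_i from the stacked albedo + normal maps
    (6 input channels) and downscales each with a deshuffle
    (pixel-unshuffle) layer to obtain D_i, cf. eqs. (1)-(2)."""
    def __init__(self, n=3, ch=32, scale=4):
        super().__init__()
        self.head = nn.Conv2d(6, ch, 3, padding=1)           # H_0, eq. (1)
        self.blocks = nn.ModuleList(ResidualBlock(ch) for _ in range(n - 1))
        self.deshuffle = nn.PixelUnshuffle(scale)            # lossless downscale

    def forward(self, aux):                                  # aux: (b, 6, s*H, s*W)
        h = self.head(aux)
        feats = [self.deshuffle(h)]                          # D_0
        for block in self.blocks:
            h = block(h)
            feats.append(self.deshuffle(h))                  # D_1, D_2, ...
        return feats                                         # each (b, ch*scale^2, H, W)
```

Unlike bilinear downscaling, pixel-unshuffle rearranges high-resolution pixels into channels, so no high-frequency detail is discarded before fusion with the low-resolution branch.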
**Cross-modality module (XM).** Inspired by the success of Swin Transformer [1, 10] and the Transformer decoder [1], we design XM based on Swin Transformer, which can efficiently model long-range dependencies. Figure 3 shows the architecture of XM. It takes features \(F\) from the low-resolution rendering branch and features \(D\) from the auxiliary feature branch as input and outputs the fused feature \(X\). It consists of Layer Norm layers (LN), a Window-based Multi-head Self-Attention layer (W-MSA), a Window-based Multi-head Cross-Attention layer (W-MCA), and a Multi-Layer Perceptron (MLP). The key idea behind XM is to combine the features \(F\) from the low-resolution rendering branch with the features \(D\) from the high-resolution auxiliary branch using cross-attention, creating a more comprehensive representation for super resolution. The process starts by extracting intermediate features \(F_{mid}\) from \(F\), which serve as the "query" \(Q\). From \(D\), which holds high-resolution information, the "key" \(K\) and "value" \(V\) are extracted. Then, the cross-attention is calculated following [23] and combined with \(F_{mid}\) to generate \(F_{cross}\). Finally, an MLP layer is used to integrate the features from the low-resolution branch and the cross-attention.

Figure 3: The cross-modality module. It takes feature \(F\) from the low-resolution rendering branch and \(D\) from the auxiliary feature branch, and outputs the fused feature \(X\).

**Residual Densely-connected Swin Transformer block (RDST).** As shown in Figure 2, we feed the fused feature \(X\) from XM to a sequence of \(B=5\) residual densely-connected Swin Transformer blocks (RDST):

\[F_{i-1}^{b}=f_{RDST}(F_{i-1}^{b-1}), \tag{5}\]

where \(f_{RDST}\) indicates the RDST block. We also use a short skip connection to combine the shallow feature \(X_{i-1}\) with the deep feature \(F_{i-1}^{B}\):

\[F_{i}=F_{i-1}^{B}+X_{i-1}. \tag{6}\]

We design RDST by combining the ideas of the Residual Densely-connected Network (RDN) [21] and Swin Transformer [10]. We are specifically inspired by SwinIR [10], which explores Swin Transformers for image restoration tasks. It replaces traditional convolutional layers with Swin layers in residual blocks, allowing the network to learn more descriptive features and delivering impressive results. Taking inspiration from RDN [21], we introduce RDST, where the convolution layers in densely-connected blocks are replaced with Swin layers. As shown in Figure 4, RDST consists of a sequence of densely-connected Swin Transformer blocks and a local feature fusion block. For the densely-connected Swin Transformer blocks, we shift the windows. We also use a local skip connection to fuse the features from the shallow layer.

Figure 4: The residual densely-connected Swin Transformer block (RDST). Red lines indicate the window partitions.

**Upscale.** We adopt the pixel shuffle layer [1] to upscale the dense feature \(F_{DF}\) to a high-resolution feature. We also use a \(3\times 3\) convolutional layer with 3 channels to predict the final high-resolution image \(I_{SR}\):

\[I_{SR}=f_{conv}(f_{UP}(F_{DF})), \tag{7}\]

where \(f_{UP}\) indicates the operation of the pixel shuffle layer.

**Training details.** We adopt the robust loss to handle prediction with high dynamic range images [10]:

\[\ell_{r}=\frac{1}{M}\sum_{p\in I_{HR}}\frac{|I_{HR}^{p}-I_{SR}^{p}|}{\beta+|I_{HR}^{p}-I_{SR}^{p}|}, \tag{8}\]

where \(I_{HR}\) indicates the ground-truth image, \(M\) indicates the number of pixels, and \(\beta\) indicates the robust factor, which is set to 0.1.
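The robust loss (8) saturates for large residuals, which keeps very bright HDR pixels (fireflies) from dominating training. A minimal sketch:

```python
import torch

def robust_loss(pred, target, beta=0.1):
    """Robust loss of eq. (8): mean over pixels of
    |I_HR - I_SR| / (beta + |I_HR - I_SR|)."""
    diff = (target - pred).abs()
    return (diff / (beta + diff)).mean()
```

With \(\beta=0.1\), a residual of 0.1 contributes 0.5 to the per-pixel loss and the contribution approaches 1 as the residual grows, bounding the influence of any single pixel.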
We implement our network in PyTorch. We train our super resolution network on examples of size \(256\times 256\). We select Adam [1] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) as the optimizer. The learning rate is set to 0.0001. We train the network for 400 epochs with a mini-batch size of 16 for our 4\(\times\) super resolution models, and we fine-tune our other models using the 4\(\times\) pretrained weights. It takes about one week to train a single model using 4 Nvidia A40 GPUs. We adopt the BCR dataset [10] as the training dataset. The BCR dataset contains 2449 images from 1463 scenes rendered by Blender Cycles. Following MSSPL [10], we use 2126 images from 1283 scenes for training, 193 images from 76 scenes for validation, and 130 images from 104 scenes for testing.

## 4 Experiments

We evaluate our network by quantitatively and qualitatively comparing it with state-of-the-art image super resolution methods and Monte Carlo denoising methods on the BCR dataset [10] and the Gharbi dataset [10]. We also conduct an ablation study to examine our method. Following [10], we adopt Relative Mean Square Error (RelMSE) and PSNR to evaluate our methods in the scene-linear color space and the sRGB space, respectively. Please refer to the supplementary material for an interactive demo that provides more results.

### Comparison with Super Resolution Methods

We compare our method with state-of-the-art super-resolution methods, including EDSR [10], RCAN [10], and SwinIR [10], a recent transformer-based approach, as well as the multiple sampling-based super resolution method MSSPL [10]. We obtained the results of the compared methods either from the authors [10] or from finetuning the official models [10].
\(spp_{\textit{LR}}\) and \(spp_{\textit{HR}}\) indicate the sampling rates for the low-resolution and high-resolution inputs, \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{4 spp} & \multicolumn{2}{c}{8 spp} & \multicolumn{2}{c}{16 spp} \\ \cline{2-7} & PSNR & RelMSE & PSNR & RelMSE & PSNR & RelMSE \\ \hline Input & 19.58 & 17.5358 & 21.91 & 7.5682 & 24.17 & 11.2189 \\ Sen & 28.23 & 1.0484 & 28.00 & 0.5744 & 27.64 & 0.3396 \\ Rousselle & 30.01 & 1.9407 & 32.32 & 1.9660 & 34.36 & 1.9446 \\ Kalantari & 31.33 & 1.5573 & 33.00 & 1.6635 & 34.43 & 1.8021 \\ Bitterli & 28.98 & 1.1024 & 30.92 & 0.9297 & 32.40 & 0.9640 \\ KPCN & 29.75 & 1.0616 & 30.56 & 7.0774 & 31.00 & 20.2309 \\ KPCN-ft & 29.86 & 0.5004 & 31.66 & 0.8616 & 33.39 & 0.2981 \\ Gharbi & 33.11 & **0.0486** & 34.45 & **0.0385** & 35.36 & **0.0318** \\ MSSPL\(\times 2\) & 34.02 & 1.5025 & 35.30 & 1.4902 & **36.43** & 1.4748 \\ MSSPL\(\times 4\) & 33.94 & 5.5586 & 35.22 & 5.6781 & 35.97 & 5.7436 \\ MSSPL\(\times 8\) & 31.56 & 3.7228 & 32.60 & 4.2030 & 33.22 & 4.5045 \\ \hline Ours\(\times 1\) & \multicolumn{2}{c}{(2 - 2)} & \multicolumn{2}{c}{(4 - 4)} & \multicolumn{2}{c}{(8 - 8)} \\ & 27.41 & 0.3438 & 30.39 & 0.3092 & 32.88 & 0.3062 \\ Ours\(\times 2\) & \multicolumn{2}{c}{(8 - 2)} & \multicolumn{2}{c}{(16 - 4)} & \multicolumn{2}{c}{(32 - 8)} \\ & **34.29** & 2.2587 & **35.47** & 1.5480 & 36.37 & 1.5417 \\ Ours\(\times 4\) & \multicolumn{2}{c}{(32 - 2)} & \multicolumn{2}{c}{(64 - 4)} & \multicolumn{2}{c}{(128 - 8)} \\ & 34.26 & 20.7861 & 35.12 & 29.0364 & 35.52 & 28.1264 \\ Ours\(\times 8\) & \multicolumn{2}{c}{(128 - 2)} & \multicolumn{2}{c}{(16 - 8)} & \multicolumn{2}{c}{(32 - 16)} \\ & 31.57 & 1.3474 & 31.26 & 1.1718 & 31.51 & 1.0940 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison on the Gharbi dataset [12]. We directly test our models pretrained on the BCR dataset without finetuning. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & Scale & EDSR & RCAN & SwinIR & MSSPL & Ours \\ \hline Runtime(ms) & \(\times 4\) & 503.96 & 280.51 & 1149.25 & 125.24 & 1009.08 \\ Peak memory(MB) & \(\times 4\) & 2493.9 & 672.1 & 806.0 & 739.70 & 941.3 \\ Peak memory(MB) & \(\times 8\) & 2375.6 & 621.3 & 659.0 & 1010.1 & 803.8 \\ Peak memory(MB) & \(\times 16\) & 2359.7 & 615.4 & 608.4 & 1008.0 & 783.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of runtime cost and peak memory with super resolution methods to produce a \(1024\times 1024\) image on an Nvidia Titan XP. 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{2 spp} & \multicolumn{2}{c}{4 spp} & \multicolumn{2}{c}{8 spp} \\ \cline{2-7} & PSNR & RelMSE & PSNR & RelMSE & PSNR & RelMSE \\ \hline Input & 18.12 & 0.2953 & 21.51 & 0.1400 & 24.75 & 0.0646 \\ KPCN & 25.87 & 0.0390 & 27.31 & 0.0292 & 28.11 & 0.0276 \\ KPCN-ft & 31.03 & 0.0078 & 33.69 & 0.0043 & 35.83 & 0.0026 \\ Bitterli & 26.67 & 0.0293 & 27.22 & 0.0252 & 27.45 & 0.0226 \\ Gharbi & 30.73 & 0.0068 & 31.61 & 0.0057 & 32.29 & 0.0050 \\ MSSPL\(\times 2\) & 33.27 & 0.0044 & 35.15 & 0.0027 & 36.74 & 0.0019 \\ MSSPL\(\times 4\) & 33.94 & 0.0039 & 35.21 & 0.0028 & 36.31 & 0.0022 \\ MSSPL\(\times 8\) & 31.37 & 0.0075 & 32.35 & 0.0057 & 33.14 & 0.0049 \\ AdvMC-ft & 30.33 & - & 32.30 & - & 33.69 & - \\ MCSA-ft & 32.68 & 0.0049 & 34.81 & 0.0031 & 36.61 & 0.0021 \\ \hline \hline \multirow{2}{*}{Ours\(\times 1\)} & \multicolumn{2}{c}{(1 - 1)} & \multicolumn{2}{c}{(2 - 2)} & \multicolumn{2}{c}{(4 - 4)} \\ & 31.04 & 0.0078 & 34.67 & 0.0030 & 36.62 & 0.0020 \\ \multirow{2}{*}{Ours\(\times 2\)} & \multicolumn{2}{c}{(4 - 1)} & \multicolumn{2}{c}{(8 - 2)} & \multicolumn{2}{c}{(16 - 4)} \\ & **34.12** & **0.0035** & **35.49** & **0.0026** & **37.09** & **0.0018** \\ \multirow{2}{*}{Ours\(\times 4\)} & \multicolumn{2}{c}{(16 - 1)} & \multicolumn{2}{c}{(32 - 2)} & \multicolumn{2}{c}{(64 - 4)} \\ & 34.08 & 0.0046 & 35.06 & 0.0034 & 35.77 & 0.0029 \\ \multirow{2}{*}{Ours\(\times 8\)} & \multicolumn{2}{c}{(64 - 1)} & \multicolumn{2}{c}{(128 - 2)} & \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison on the BCR dataset [12].

Figure 6: Visual comparison with super-resolution methods on the BCR dataset [HLM*21].

Figure 7: Visual comparison with denoising methods on the BCR dataset [HLM*21].

We would like to note that this measurement of spp is **unfair** to our method, as our method only uses high-resolution albedo and normal features, which take much less time than rendering all the shading layers to obtain the high-resolution rendering as done in MSSPL. As shown in Table 3, our method generates better results than the state-of-the-art methods on the BCR dataset [11]. Our \(\times\)2 model wins 0.18 dB, 0.28 dB, and 0.35 dB in terms of PSNR on 2 spp, 4 spp, and 8 spp, respectively. We also conduct our experiments on the \(\times\)16 scale. On the one hand, with \(\times\)16, our method produces worse results than MSSPL because MSSPL uses the high-resolution RGB image as input, which is not available to our method. While the high-resolution RGB input to MSSPL is rendered at a low sampling rate, it still provides useful information. As shown in the existing literature on Monte Carlo denoising, even the rendering result at 1 spp can be denoised to a reasonable quality. At such a high upsampling rate of \(\times\)16, super-resolution is very difficult. On the other hand, in practice, given a target overall spp rate, our method can select an optimal combination of (spp rate, super-resolution scale) that outperforms MSSPL and other methods, as shown in Table 3. In practice, \(\times\)16 will not be used for rendering by either MSSPL or our method to achieve an overall target spp, as it produces the worst results among alternative combinations of spp rate and super-resolution scale. Figure 7 shows the visual comparisons. Our results are more visually plausible.
Briefly, instead of working in the pixel color space, which can potentially cause color fidelity problems, our method fuses the low-resolution RGB and high-resolution feature maps in the feature space and learns to fuse them into correct colors, thus alleviating the color ambiguities/artifacts at fine details. For example, in Figure 7, the wall in our results is less noisy and more accurate than the results from other methods, which are either blurred or inconsistent with the ground truth. In the second example, our method produces high-frequency geometric details in the wine basket area that well differentiate the mesh color from the background color. Table 4 reports the comparison on the Gharbi dataset [18]. Following MSSPL [18], we directly test our models pre-trained on the BCR dataset without fine-tuning, as the training set of the Gharbi dataset is not available. Our \(\times\)2 model wins 0.27 dB and 0.17 dB in terms of PSNR on 4 spp and 8 spp, respectively. When the spp is 16, our PSNR is slightly lower than MSSPL [11]. We would like to point out that our method takes less high-resolution information than MSSPL. Our input high-resolution auxiliary features only include the albedo and normal, while MSSPL also takes all the shading layers as inputs. When the high-resolution input is rendered at a high spp, the shading layers can contribute a lot of high-frequency information. Similar to the findings in MSSPL [HLM*21], our results on RelMSE are heavily affected by a small number of pixels with abnormally large errors. Excluding these abnormal pixels can greatly improve our scores on RelMSE. As shown in Figure 8, our method produces high-quality results with much fewer artifacts when compared to the ground truth.

\begin{table} \begin{tabular}{c c c c c} \hline Auxiliary Layer & None & Normal & Albedo & Normal + Albedo \\ \hline PSNR & 30.49 & 34.85 & 36.42 & **37.45** \\ RelMSE & 0.0141 & 0.0042 & 0.0030 & **0.0021** \\ \hline \end{tabular} \end{table} Table 6: The effects of input fast-to-compute auxiliary feature layers on the BCR dataset [11].

\begin{table} \begin{tabular}{c c c c} \hline Method & AdvMC-ft & MCSA & Ours \\ \hline PSNR & 27.96 & 30.01 & **34.12** \\ LPIPS & 0.320 & 0.202 & **0.090** \\ \hline \end{tabular} \end{table} Table 7: The effects of network architectures on the BCR dataset [11]. AdvMC-ft [12] and MCSA [11] take 1-spp RGB and 1-spp auxiliary buffers as inputs. Our method takes 4-spp low-resolution RGB (\(\times\)2, effectively the same sampling rate as 1 spp at the high resolution) and 1-spp high-resolution auxiliary buffers.

Figure 8: Visual comparison with denoising methods on the Gharbi dataset [18].

### Discussions **Auxiliary features sampling rates**. As discussed above and shown in Figure 5, using more samples to generate the auxiliary features helps our method generate better super resolution results. However, even using one sample per pixel to generate the auxiliary features can already enable our method to significantly outperform standard super resolution methods. Moreover, when we use 16 samples to generate these features, our results are already very close to the results that use the features generated using 4000 samples per pixel, denoted as \(A_{gt}\) in the figure. **Input layers of auxiliary features**. We examine how our method works with different auxiliary feature layers. The upsampling scale is set to \(4\times\). We use 4000 spp for \(I_{LR}\) and \(A\).
As shown in Table 6, both albedo and normal can improve the results significantly, as they can provide the essential high-frequency visual details for super resolution. The performance of our network can be further improved if we take both of them as inputs. These findings are consistent with previous denoising methods [BVM*17; GLA*19], where intermediate layers can improve the final results. **Ablation study w.r.t. MSSPL [HLM*21]**. We evaluated the performance of both our method and MSSPL [HLM*21] using fast-to-compute auxiliary features as well as full auxiliary features. In the experiments, the upsampling scale is set to \(\times 4\). As shown in Table 5, both our network and MSSPL benefit from using the full auxiliary features due to the richer high-resolution information they provide. However, our method with fast-to-compute layers still outperforms MSSPL with full auxiliary layers, which demonstrates the effectiveness of our network architecture. **Network Effectiveness.** We examine how our network architecture works by comparing it to AdvMC [XZW*19] and MCSA [YNL*21]. Specifically, we feed high-resolution 1-spp RGB and 1-spp auxiliary buffers to AdvMC and MCSA and fine-tune them on the BCR dataset. In this experiment, our method takes 4-spp low-resolution RGB (\(\times 2\), effectively the same sampling rate as 1 spp at the high resolution) and 1-spp high-resolution auxiliary buffers. Table 7 shows our method outperforms these methods, which demonstrates the effectiveness of our transformer-based network architecture. **Network architecture components**. We examine the effect of the network architecture. The upsampling scale is set to \(4\times\). In this test, we remove XM modules and replace our RDST with state-of-the-art blocks, including RDB from RDN [ZTK*18] and RSTB from SwinIR [LCS*21]. As shown in Table 8, our RDST can greatly improve the results. These improvements can be attributed to the strong generalization capability of RDST. Besides, XM modules can further improve the results. **Number of RDST blocks.** We examine how our network architecture works with different numbers of RDST blocks in each XDG block on the BCR dataset [HLM*21]. In this test, the upsampling scale is set to \(\times 4\). To check the impact of RDST, we set the XDG number to 3, and we investigated our results across different numbers of RDST blocks in each XDG block, including 1, 3, and 5. Besides, we also measure the FLOPs, MACs, and parameters for a single \(1024\times 1024\) image [RRRH20]. As shown in Table 10, decreasing the number of RDST blocks accelerates the network but compromises performance. **Number of XDG blocks.** Similar to RDST, we investigate our results across different XDG numbers, including 1, 2, and 3. The upsampling scale is set to \(\times 4\) and the number of RDST blocks in each XDG block is set to 5. As the results reported in Table 11 show, reducing the number of XDG blocks accelerates the network but also compromises performance. **Our robust loss vs SMAPE loss [Mea86].** Our robust loss is used based on our observation that there are a very small number of pixels with abnormally large intensity values in our dataset, mostly due to firefly artifacts.
These pixels will often incur very large errors during training and thus compromise the performance of our model. We use the robust loss to reduce the undesirable impacts of these pixels, as this robust loss limits the maximal loss value to 1 no matter how large the pixel error is. We compared these two loss functions. In our experiments, the upsampling factor is set to 4, and we set the sampling rate to (16 - 1). Models trained with the SMAPE loss showed slightly worse results: 33.96 vs. 34.12 in PSNR, and 0.0046 vs. 0.0035 in RelMSE.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline RDST Num & PSNR & RelMSE & Flops(T) & Macs(G) & Params(M) \\ \hline 5 & 34.08 & 0.0046 & 1.45 & 723.68 & 9.36 \\ 3 & 33.47 & 0.0056 & 1.04 & 519.13 & 6.21 \\ 1 & 32.60 & 0.0091 & 0.63 & 314.59 & 3.06 \\ \hline \hline \end{tabular} \end{table} Table 10: The effects of the number of RDST blocks on the BCR dataset [HLM*21]. We measured the FLOPs and MACs for a single \(1024\times 1024\) image [RRRH20].

\begin{table} \begin{tabular}{c c c c c} \hline \hline Network & RDB & RSTB & RDST & RDST + XM \\ \hline PSNR & 35.56 & 36.63 & 37.27 & **37.45** \\ RelMSE & 0.0034 & 0.0098 & 0.0022 & **0.0021** \\ \hline \hline \end{tabular} \end{table} Table 8: The effects of network architecture components on the BCR dataset [HLM*21]. We compare the proposed RDST with RDB [ZTK*18] and RSTB [LCS*21].

\begin{table} \begin{tabular}{c c c c c c} \hline \hline XDG Num & PSNR & RelMSE & Flops(T) & Macs(G) & Params(M) \\ \hline 3 & 34.08 & 0.0046 & 1.45 & 723.68 & 9.36 \\ 2 & 33.19 & 0.0066 & 1.02 & 507.86 & 6.42 \\ 1 & 32.30 & 0.1193 & 0.59 & 292.04 & 3.49 \\ \hline \hline \end{tabular} \end{table} Table 11: The effects of the number of XDG blocks on the BCR dataset [HLM*21]. We measured the FLOPs and MACs for a single \(1024\times 1024\) image [RRRH20].

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{2spp} & \multicolumn{2}{c}{4spp} & \multicolumn{2}{c}{8spp} \\ \cline{2-7} & PSNR & LPIPS & PSNR & LPIPS & PSNR & LPIPS \\ \hline AdvMC-ft & 30.33 & 0.209 & 32.30 & 0.155 & 33.69 & 0.126 \\ MCSA-ft & 32.68 & 0.108 & 34.81 & 0.080 & 36.61 & 0.068 \\ \hline Ours & **34.12** & **0.090** & **35.49** & **0.070** & **37.09** & **0.057** \\ \hline \hline \end{tabular} \end{table} Table 9: Comparison on the perceptual quality on the BCR dataset [HLM*21]. We utilize the LPIPS [ZIE*18] metric as a measure of perceptual quality.

**Super resolution scale.** We investigate our results across multiple scales, including \(\times 1\), \(\times 2\), \(\times 4\), and \(\times 8\). Among them, scales \(\times 1\) and \(\times 8\) exhibit weaker performance compared to \(\times 2\) and \(\times 4\). When comparing scales \(\times 4\) and \(\times 2\), \(\times 4\) takes less peak memory and is faster than \(\times 2\), but \(\times 2\) leads to better quality. To make a fair comparison, we maintain a consistent average sampling rate across different scales. Consequently, the low-resolution input of our \(\times 1\) model is rendered at a much lower average sampling rate than that of our \(\times 2\) model. This makes the resulting input RGB image to our model very noisy for \(\times 1\) and thus compromises the final quality of Ours \(\times 1\), as reported in the 2-spp column of Table 3.
In the 4-spp column of the same table, the difference between Ours \(\times 1\) and Ours \(\times 2\) is less significant, as in this setting the average sampling rate of Ours \(\times 1\) is reasonably higher and provides more information for our model to synthesize higher-quality results. In addition, we used the same training pipeline for our \(\times 1\) model as we did for other scales, keeping the number of epochs consistent across all scales. However, due to the high memory requirement to train the \(\times 1\) model, we have to set a smaller mini-batch size. This would also potentially impact the performance, but we believe that this is not as significant as the first reason we discussed above. **Perceptual quality.** We examine the perceptual quality of our results using the LPIPS metric [2]. Table 7 and Table 9 present the results for AdvMC [2], MCSA [2], and our method. Our approach outperforms the others in terms of both PSNR and LPIPS, thereby demonstrating its ability to generate images with high perceptual quality. ## 5 Limitations and Future Work The fusion for the high-reflection parts is challenging. Our method produces high-frequency visual details by two means: 1) training a neural network to learn to recover high-frequency information from the low-resolution input, and 2) using high-frequency information from the high-resolution albedo and normal maps. Our neural network can learn to produce visual details for many examples. However, super resolution from a low-resolution input alone is necessarily an ill-posed problem. In the high-reflection parts of the scene, such as the example shown in Figure 9, when the high-resolution normal and albedo maps cannot, by their nature, provide high-frequency details in those regions, our method may fail. Compared to CNN-based methods, our method is slow. However, compared to another Transformer-based method [2], our method uses less peak memory (0.89 GB vs 30.56 GB) and is faster (1.0 s vs 2.5 s) when producing a \(1024\times 1024\) image using an Nvidia A40. Research on fast transformers has been advancing quickly recently. Patro et al. [1] offer an extensive review of efficient vision transformers. Through the advancement of effective token mixing strategies and efficient MLP layers, vision transformers can be significantly accelerated [2, 2, 3, 4]. For example, both CMT [1] and WaveViT [2] outperform EfficientNet [3] while maintaining a lower computational complexity. Moreover, several transformer hardware accelerators have been introduced to expedite Transformer networks, such as SwiftTron [1]. We believe that our method can benefit from the rapid advances in Transformer research. In this paper, we specifically explored albedo and normal as quick-to-compute auxiliary features. However, we acknowledge that other auxiliary features, such as a Whitted ray-traced layer, could offer valuable high-frequency information and be generated quickly. Incorporating such a layer can potentially improve the performance of our method. Unfortunately, the BCR dataset does not contain such layers. We plan to explore this in our future research. ## 6 Conclusion This paper explored high-resolution fast-to-compute auxiliary features to guide super resolution of Monte Carlo renderings. We developed a dedicated cross-modality Transformer network to fuse high-resolution fast-to-compute auxiliary features with the corresponding low-resolution rendering. We designed a Transformer-based cross-modality module to fuse the features from two modalities.
We also developed a Residual Densely-connected Swin Transformer block to learn more representative features. Experimental results indicate that our proposed method surpasses existing state-of-the-art super-resolution and denoising techniques in producing high-quality images.
2308.09027
JWST observations of the Ring Nebula (NGC 6720): I. Imaging of the rings, globules, and arcs
We present JWST images of the well-known planetary nebula NGC 6720 (the Ring Nebula), covering wavelengths from 1.6$\mu$m to 25 $\mu$m. The bright shell is strongly fragmented with some 20 000 dense globules, bright in H$_2$, with a characteristic diameter of 0.2 arcsec and density $n_{\rm H} \sim 10^5$-$10^6$ cm$^{-3}$. The shell contains a thin ring of polycyclic aromatic hydrocarbon (PAH) emission. H$_2$ is found throughout the shell and in the halo. H$_2$ in the halo may be located on the swept-up walls of a biconal polar flow. The central cavity is shown to be filled with high ionization gas and shows two linear structures. The central star is located 2 arcsec from the emission centroid of the cavity and shell. Linear features (`spikes') extend outward from the ring, pointing away from the central star. Hydrodynamical simulations are shown which reproduce the clumping and possibly the spikes. Around ten low-contrast, regularly spaced concentric arc-like features are present; they suggest orbital modulation by a low-mass companion with a period of about 280 yr. A previously known much wider companion is located at a projected separation of about 15 000 au; we show that it is an M2-M4 dwarf. The system is therefore a triple star. These features, including the multiplicity, are similar to those seen in the Southern Ring Nebula (NGC 3132) and may be a common aspect of such nebulae.
R. Wesson, Mikako Matsuura, Albert A. Zijlstra, Kevin Volk, Patrick J. Kavanagh, Guillermo García-Segura, I. McDonald, Raghvendra Sahai, M. J. Barlow, Nick L. J. Cox, Jeronimo Bernard-Salas, Isabel Aleman, Jan Cami, Nicholas Clark, Harriet L. Dinerstein, K. Justtanont, Kyle F. Kaplan, A. Manchado, Els Peeters, Griet C. Van de Steene, Peter A. M. van Hoof
2023-08-17T15:01:55Z
http://arxiv.org/abs/2308.09027v2
# _JWST_ observations of the Ring Nebula (NGC 6720): I. Imaging of the rings, globules, and arcs ###### Abstract We present _JWST_ images of the well-known planetary nebula NGC 6720 (the Ring Nebula), covering wavelengths from 1.6 \(\mu\)m to 25 \(\mu\)m. The bright shell is strongly fragmented with some 20 000 dense globules, bright in H\({}_{2}\), with a characteristic diameter of 0.2 arcsec and density \(n_{\rm H}\sim 10^{5}\)-\(10^{6}\) cm\({}^{-3}\). The shell contains a thin ring of polycyclic aromatic hydrocarbon (PAH) emission. H\({}_{2}\) is found throughout the shell and in the halo. H\({}_{2}\) in the halo may be located on the swept-up walls of a biconal polar flow. The central cavity is shown to be filled with high ionization gas and shows two linear structures. The central star is located 2 arcsec from the emission centroid of the cavity and shell. Linear features ('spikes') extend outward from the ring, pointing away from the central star. Hydrodynamical simulations are shown which reproduce the clumping and possibly the spikes. Around ten low-contrast, regularly spaced concentric arc-like features are present; they suggest orbital modulation by a low-mass companion with a period of about 280 yr. A previously known much wider companion is located at a projected separation of about 15 000 au; we show that it is an M2-M4 dwarf. The system is therefore a triple star. These features, including the multiplicity, are similar to those seen in the Southern Ring Nebula (NGC 3132) and may be a common aspect of such nebulae. keywords: planetary nebulae: general - planetary nebulae: individual: NGC6720 - circumstellar matter ## 1 Introduction Planetary nebulae (PNe) are composed of the ionized ejecta from low- and intermediate-mass stars (\(<8\)\(M_{\odot}\)) at the ends of their lives. PNe can be used as astrophysical laboratories, having a single exciting central star, and hosting a range of ionized, neutral, and molecular lines as well as dust emission. They are ideal objects to study the physics and chemistry of gaseous media under well-defined conditions. The structures shown by PNe range from their large scale shape to small scale condensations, including filaments and globules. They can be used to study the hydrodynamical origin of such features. Specific questions that can be studied using PNe include the formation and destruction of molecules including H\({}_{2}\) and polycyclic aromatic hydrocarbons (PAHs), and the role of stellar interactions in the shaping of the nebulae. We report here on _JWST_ imaging of the Ring Nebula, NGC 6720, using 13 filters from 1.6 \(\mu\)m to 25 \(\mu\)m. The different filters trace a range of emission lines and dust features. _JWST_'s high angular resolution enables us to trace the ionized, molecular, and dust components of the nebula. Due to the nebula's proximity, the high angular resolution and high sensitivity of the _JWST_ images can reveal details of the physics and chemistry in small structures such as globules and filaments in this PN. These features are shared with the Southern Ring Nebula (NGC 3132), which was also imaged by _JWST_ (De Marco et al., 2022). This paper is one of four describing new _JWST_ observations of the Ring Nebula, together with a study of the central star and its close environs (Sahai et al. in prep), a study of the rich H\({}_{2}\) emission line spectra in two subregions of the nebula (van Hoof et al. in prep), and a study of the PAH emission in two subregions of the nebula (Clark et al., in prep.).
## 2 Basic Properties of the Ring Nebula The Ring Nebula (NGC 6720 or M 57) is located at a distance of 790\(\pm\)30 pc, as derived from the _Gaia_ parallax of the central star of \(1.2696\pm 0.0438\) mas (Lindegren et al., 2021). It has a visual diameter of about 4 arcmin (equivalent to \(\sim 2\times 10^{5}\) au or \(\sim\)1 pc) and shows a complex morphology (O'Dell et al., 2013). The distance puts the central star 190 pc above the Galactic plane, consistent with membership of the thin disk. _HST_ narrow-band images at a variety of optical wavelengths (O'Dell et al., 2004, 2007, 2013) have been used to derive plasma parameters and extinction maps for the nebula (Ueta & Otsuka, 2021). They find an extinction \(c_{\rm H\beta}=0.2\), increasing to 0.4 in the shell.3 The interstellar foreground extinction contributes \(c_{\rm H\beta}=0.1\); the remainder of the extinction is internal to the circumstellar shell. The extinction is highest in micro-structures (clumps) in the shell (Ueta & Otsuka, 2021). Footnote 3: We use the relation \(c_{\rm H\beta}=1.46\,E(B-V)\). The star currently has a temperature of \(T_{\rm eff}=1.35\times 10^{5}\) K and a luminosity of \(L\approx 310\) L\({}_{\odot}\) (Sahai et al., in prep.), which places it on the white dwarf cooling track. The current mass and progenitor mass of the central star are difficult to determine for objects on the cooling track, as the tracks tend to converge in that region of the HR diagram. Gonzalez-Santamaria et al. (2021) quote 0.58 \(M_{\odot}\) and 1.5 \(M_{\odot}\), respectively, while the models of Miller Bertolami (2016) for the current temperature and luminosity are also consistent with a progenitor mass close to \(2\,M_{\odot}\). A post-AGB star of these masses reaches peak temperature at a luminosity \(L\sim 3000\,L_{\odot}\). From this point, the luminosity declines to \(\sim 200\,L_{\odot}\) within a few hundred years, before the decline slows down (Miller Bertolami, 2016). The stellar luminosity of NGC 6720 (\(L\approx 310\)\(L_{\odot}\)) indicates that the star is currently in this phase of rapid fading. Part of the ionized nebula is likely recombining. ## 3 Observations and Data Reduction ### JWST Observations The Ring Nebula was observed with _JWST_ (Gardner et al., 2023) in Cycle 1 General Observers (GO) program 1558. The observations were carried out in July and August 2022, using both NIRCam (Rieke et al., 2023) and MIRI (Wright et al., 2023). The NIRCam observations were obtained on 2022 August 4th. Four filters were used, with two filters for each of the Short Wavelength Camera (F162M and F212N) and the Long Wavelength Camera (F300M and F335M). The nebula was covered by a single field of view (FOV) with a 4-point dither pattern yielding a FOV of 2.45 arcmin with some gaps near the edges of the fields which depend on the camera. The field centre is at 18:53:35.079, +33:01:45.03 (J2000). The observing log is summarized in Table 1. NIRCam images of individual filter bands are shown in Figure 1. Imaging observations with MIRI were carried out on 2022 August 20th, using nine filters. The exposure time was 444 sec in each filter. The nebula was covered by a 1\(\times\)2 mosaic with a 4-point dither pattern. The field size was 2.35\(\times\)1.9 arcmin, again centred at 18:53:35.079, +33:01:45.03 (J2000). Additionally, MIRI images in three filters (F770W, F1130W and F1000W) were obtained in simultaneous imaging mode during separate MIRI MRS observations on 2022 July 4th.
These images are centred on 18:53:34.510, +33:02:09.11, which allows for some overlap with the direct MIRI images. The simultaneous observations capture part of the outer regions of the Ring Nebula. Their exposure time was 951 sec per filter, hence these simultaneous observations have better sensitivities than those for the main field. The pixel scales of the reduced data are 0.031 arcsec per pixel for the NIRCam short wavelength camera images, 0.063 arcsec for the NIRCam long wavelength camera images, and 0.111 arcsec for the MIRI images. ### Data reduction The NIRCam imaging exposures were reduced using the _JWST_ Calibration Pipeline4 (Bushouse et al., 2022) version 1.9.6 with CRDS version 11.16.19 and CRDS context 'jwst_1075.pmap'. Each of the NIRCam short wavelength and long wavelength exposures was processed through the Detector1Pipeline with the default parameters. At this point the ramp slope _rate.fits_ images had some of the \(1/f\) noise removed using a stand-alone Python code provided by Chris Willott outside of the pipeline5 before going on to the subsequent data reduction steps. The files were then processed through the Image2Pipeline and Image3Pipeline stages with the default parameters for all steps, except that the tweakreg step was called with the option to align the images to _Gaia_ Data Release 2 stars in the field. The _JWST_ pipeline did not have the capability to align to _Gaia_ Data Release 3 at the time of the reduction of the images. The difference in the astrometry between using _Gaia_ Data Release 2 or _Gaia_ Data Release 3 as the reference is expected to be 10 mas or less, based on comparison of the two catalogues and on a test of this option in pipeline version 1.11.1 on a different data set. For each of the four filters this resulted in a combined, resampled image that was used for the subsequent analysis. Footnote 5: [https://github.com/chriswillott/jwst](https://github.com/chriswillott/jwst) Footnote 6: [https://www.cosmos.esa.int/web/gaia/dr2](https://www.cosmos.esa.int/web/gaia/dr2) It was observed that the \(1/f\) noise removal interacted with the Image3Pipeline skymatch step to produce a low-level artefact in the short wavelength images, in the regions to the left and right of the main nebula that are observed in 3 of the 9 mosaic positions. It is not clear why this happens in the skymatch step, since direct examination of the images before and after the \(1/f\) noise removal does not show any obvious sign of an offset in the sky background level. This artefact is seen in the ratio image (see Fig. 11). We processed all MIRI imaging exposures using _JWST_ Calibration Pipeline version 1.8.3 with CRDS version 11.16.16 and context 'jwst_1062.pmap'. Each of the raw MIRI files was processed through Detector1Pipeline with default parameters. The world coordinate system (WCS) in pipeline-processed _JWST_ data can often be incorrect, resulting from uncertainties in the pointing information that are introduced by guide star catalogue errors and roll uncertainty (see Pontoppidan et al., 2022). We corrected the WCS reference keywords in the Detector1Pipeline output by determining and applying the median offset between point sources in preliminary, fully calibrated exposures to their _Gaia_ Data Release 3 counterparts. We then processed the resulting files through Image2Pipeline to create calibrated dither images across all filters.
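The median-offset correction described above can be sketched as follows; this is a minimal illustration with astropy rather than the actual reduction script, and the function name, variable names, and the 0.5 arcsec match radius are our assumptions.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def median_wcs_offset(detected: SkyCoord, gaia: SkyCoord,
                      max_sep: u.Quantity = 0.5 * u.arcsec):
    """Median (RA, Dec) offset of detected point sources from their
    nearest Gaia counterparts, suitable for shifting the WCS
    reference keywords of an exposure."""
    idx, sep2d, _ = detected.match_to_catalog_sky(gaia)
    good = sep2d < max_sep  # keep only plausible cross-matches
    dra = ((gaia.ra[idx] - detected.ra) * np.cos(detected.dec))[good]
    ddec = (gaia.dec[idx] - detected.dec)[good]
    return np.median(dra.to(u.arcsec)), np.median(ddec.to(u.arcsec))
```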
We created combined mosaics for each MIRI filter, which include the simultaneous imaging field for F770W, F1130W and F1000W, using Image3Pipeline. Footnote 7: [https://www.cosmos.esa.int/web/gaia/dr3](https://www.cosmos.esa.int/web/gaia/dr3) MIRI colour images were created by combining multiple filters, where the angular resolutions were matched. For each pair of images, we convolved the images using simulated PSFs, calculated using WebbPSF (Perrin et al., 2014). The convolution was processed using the Python code PSF Matching (Gordon et al., 2008; Aniano et al., 2011). ### Photon-weighted mean flux densities Table 1 lists the photon-weighted mean flux densities (\(F_{\rm tot}\) in Jy) of the main body (the bright shell) of the Ring Nebula. An elliptical aperture was placed centred on the central star. The major and minor radii of the elliptical aperture are 48 arcsec and 38 arcsec respectively, and the rotation angle is 32\({}^{\circ}\) clockwise from north, aligned with the major axis of the nebula. The background flux level was estimated from apertures at two areas having the lowest backgrounds within the image (RA=18:53:30.225, Dec=+33:01:55.80, and RA=18:53:40.480, Dec=+33:02:03.91, with a width of 8 arcsec and height of 4 arcsec and 140.52\({}^{\circ}\) as the NIRCam tilt angle). These apertures are shown in Figure 14. \begin{table} \begin{tabular}{l l c c c c c c c c c} \hline Instrument & Filter & \(\lambda_{p}\) & BW & PSF & \(t_{\rm exp}\) & \(F_{\rm tot}\) & \multicolumn{3}{c}{North Spec contributions} & Notes \\ & (\(\mu\)m) & (\(\mu\)m) & (\(\arcsec\)) & (sec) & (Jy) & Cont & H i & H\({}_{2}\) & Others & \\ \hline NIRCam & F162M & 1.626 & 0.168 & 0.053 & 1933 & 0.113\(\pm\)0.017 & 81\% & 10\% & 4\% & He i (1\%) & \\ & F212N & 2.121 & 0.027 & 0.069 & 1933 & 0.9\(\pm\)0.08 & 13\% & & 85\% & \\ & F300M & 2.996 & 0.318 & 0.097 & 483 & 0.1511\(\pm\)0.0002 & 58\% & 10\% & 27\% & He ii (1\%) & \\ & F335M & 3.365 & 0.347 & 0.109 & 483 & 0.2124\(\pm\)0.0003 & 53\% & 6\% & 37\% & PAHs \\ MIRI & F560W & 5.6 & 1.2 & 0.207 & 444 & 0.44\(\pm\)0.02 & 62\% & 32\% & \\ & F770W & 7.7 & 2.2 & 0.269 & 444 & 0.816\(\pm\)0.01 & 68\% & 5\% & 14\% & [Ar ii] (9\%) & PAHs \\ & F1000W & 10.0 & 2.0 & 0.328 & 444 & 1.86\(\pm\)0.02 & 46\% & & 11\% & [S iv] (26\%); [Ar iii] (14\%) & \\ & F1130W & 11.3 & 0.7 & 0.375 & 444 & 0.7253\(\pm\)0.003 & 94\% & & [Ni i] (3\%) & PAHs \\ & F1280W & 12.8 & 2.4 & 0.420 & 444 & 1.20\(\pm\)0.02 & 45\% & 2\% & 1\% & [Ne ii] (49\%) & \\ & F1500W & 15.0 & 3.0 & 0.488 & 444 & 7.43\(\pm\)0.02 & 4\% & & [Ne iii] (94\%) & \\ & F1800W & 18.0 & 3.0 & 0.591 & 444 & 4.01\(\pm\)0.02 & 42\% & & [S iii] (54\%) & \\ & F2100W & 21.0 & 5.0 & 0.674 & 444 & 5.57\(\pm\)0.03 & 71\% & & [S iii] (25\%); [Ar iii] (1\%) & \\ & F2550W & 25.5 & 4.0 & 0.803 & 444 & 20\(\pm\)2 & 88\% & & [O iv] (8\%) & \\ MIRI Simul & F770W & & & & 951 & \(\pm\) & & & \\ & F1130W & & & & & & & & \\ & F1000W & & & & 951 & \(\pm\) & & & \\ \hline \end{tabular} \(\lambda_{p}\): pivot (Tokunaga and Vacca, 2005) wavelength in \(\mu\)m 12, BW: band width in \(\mu\)m. PSF: FWHMs of PSFs in arcsec. For NIRCam, simulated values are taken. \(t_{\rm exp}\): exposure time on source in sec. \(F_{\rm tot}\): photon-weighted mean flux density (Bohlin et al., 2014) of the nebula in Jy (Sect. 3.3). North Spec contributions: Estimated contributions to the images in each filter, calculated using NIRSpec-IFU and MIRI-MRS observations of a region in the northern part of the bright ring (Fig. 14; van Hoof et al. in prep).
Note that some filters are designated to detect specific PAH features, though these PAH bands are not the strongest contributors to the in-band flux in the North spectrum. The PAH spectra observed in the MIRI MRS spectra are discussed by Clark et al. (in prep.) \end{table} Table 1: Observing log

The difference between these two background apertures is taken as an estimate of the uncertainty in the total flux. These uncertainties represent a systematic error (background subtraction) and not Poisson noise. The flux in each filter includes emission lines and bands, bound-free and free-free continuum, and dust emission. The important contributions to each filter are listed in Table 1, based on NIRSpec and MRS observations of a region in the northern part of the bright ring (Fig. C3; van Hoof et al. in prep). The relative contributions will vary across the nebula. ## 4 Description and analysis of the images ### Stratification The nebula can be divided into three clearly distinct regions: the central cavity, the bright shell and the halo. (The latter is sometimes divided into an inner and an outer halo; Balick et al. 1992). This structure can be recognized in the multi-colour images of Figure 1 (NIRCam) and Figure 2 (MIRI). The components are labelled in Figure 3. In these images, the regions show different colours. This indicates that they differ in their line emission: the nebula is stratified.

Figure 1: NIRCam three colour image: F212N (blue), F300M (green) and F335M (red). This is produced from the pipeline level3 products, which does not include the \(1/f\) noise removal step. Directions of north and east are indicated.

Figure 2: MIRI three-colour image: F2550W (blue), F560W (green) and F1130W (red). Directions of north and east are indicated.

Figure 3: The names of the nebula components superposed on the NIRCam and MIRI three-colour image: F300M (blue), F560W (green) and F770W (red). CS (the white circle) is the central star, FS (black spot) is the first-moment centroid of the flux in the F300M image, and Comp is the companion star candidate. North is at the top in this image. The locations of some low-contrast concentric features are indicated, but they are much more easily seen in Fig. 9.

The low-density central cavity appears as an approximately circular structure, with a radius of about 25 arcsec, which emits mainly in the F1000W and F2550W filters (Figs. 4 and 5). These two filters contain the high excitation [S iv] (ionization potential of 35 eV) and [O iv] (55 eV) lines (Table 1). All other filters show the cavity as a low-emission region (Figs. 1, 2, 4 and A1). The cavity thus has a higher excitation than the surrounding nebula. The cavity contains a linear structure approximately along the long axis of the nebula, which is visible in most filters and consists of two brighter stripes on either side of the central star. The region between these stripes shows little emission in all filters except for F2550W. The long wavelength F1800W and F2100W filters do not show the stripes. A somewhat similar structure is present in the central region of NGC 3132 (De Marco et al. 2022). O'Dell et al. (2013a) interpret the stripes as features in the inner halo, seen in projection against the cavity. The shell surrounding the cavity is a broad region with a well-defined inner and outer edge. It is bright in all filters. Unlike the central region, it has an elliptical shape. The outer radius is 44 arcsec along the major axis and 35 arcsec along the minor axis.
The position angle of the major axis is 132\({}^{\circ}\) (from north to west). The shell is significantly brighter along the minor axis than the major axis. The outer edge of the shell is somewhat distorted towards the northeast. This is approximately the direction of the _Gaia_ DR3 proper motion of the central star (NNE, 10 km s\({}^{-1}\)), and it is possible that this distortion is related to interaction with the interstellar medium. The emission in the shell is clumpy, especially in filters dominated by H\({}_{2}\) (Table 1). The H\({}_{2}\) emission from the shell has long been known (Greenhouse et al., 1988), and was known to be clumpy (Speck et al., 2003), but is seen at much higher angular resolution and sensitivity with _JWST_. Close inspection shows evidence for a very large number of clumps or globules, seen throughout the shell. The almost complete lack of clumps seen projected on the central cavity supports the interpretation of the shell as an equatorial or toroidal structure, seen approximately pole-on (Bryce et al., 1994; O'Dell et al., 2013a). The inner halo is seen in all NIRCam images, apart from F162M (Fig. A1). The inner halo is also seen in the MIRI F560W and F770W images; the filters that clearly show the halo are dominated by H\({}_{2}\) lines. H\({}_{2}\) emission from the halo was detected by van Hoof et al. (2010). The faint halo shows a wealth of structure, including concentric arcs, radial stripes and distorted bright edges. These are each discussed separately in the following sections.

Figure 4: MIRI images of the Ring Nebula.

Figure 5: MIRI images combining the direct mapping images and the simultaneous images of the Ring Nebula, which extend the imaged field further to the north and west.

The simultaneous fields show a part of the outer halo, terminating at a radius of 1.9 arcmin from the centre. The (incomplete) coverage is consistent with a generally circular outer halo. The central star is clearly detected in the NIRCam images, and in the MIRI F560W and F770W images. The central star and its vicinity will be discussed in a separate paper (Sahai et al., in preparation). ### Offset of the nebular centre from the stellar position Although the outer edge of the inner cavity is close to circular, its centre does not exactly correspond to the position of the central star, but instead appears to be offset to the north-west (roughly in the direction of the _Gaia_ companion; see Sect. 5.2) by about 2 arcsec (Fig. 6). The flux-weighted centres of the F300M and F335M images (calculated as the first moment of the images) are also offset from the central star, and nearly coincide with the centre of the inner cavity. The coordinates of the flux-weighted centre for the F300M image are RA=18:53:35.04 and Dec=+33:01:46.8, while the coordinates of the centre of the circular inner edge of the cavity are RA=18:53:35.01, Dec=+33:01:46.01. This offset of the star from the shell and cavity must be considered tentative, as the complex small-scale structures introduce uncertainties. The interpretation is also open to discussion. The structures here are affected by the original mass loss, the ionization and the hot stellar wind. If the offset is interpreted as caused by the original mass loss, for an age of 4000 yr (O'Dell et al., 2013), the offset would correspond to a velocity difference between star and nebula of around 2 km s\({}^{-1}\). Off-centre central stars are known in some other planetary nebulae.
The PN A39 has an offset of 2 arcsec in a symmetric nebula (Jacoby et al., 2001). Another example is Hu2-1 (Miranda et al., 2001). ### Globules Figure 7 shows a sequence of images of regions containing globules, covering optical (_HST_) and near- and mid-infrared (_JWST_) wavelengths. The _HST_ images show the globules as extinction peaks (e.g., O'Dell et al., 2013), while the _JWST_ images do not. Even the shortest wavelength NIRCam image at 1.62 \(\mu\)m shows no absorption (Fig. A1). This is in contrast to NGC 3132, in which De Marco et al. (2022) found some globules showing absorption at 1.87 \(\mu\)m. We note that the sensitivity to absorption is less in our data because of the choice of filters: the F187N filter used for NGC 3132 contains a strong H i Pa\(\alpha\) line, while the F162M and F212N filters do not contain strong atomic emission lines (Table 1). There is less absorbable nebular background in our images. To estimate the mass of the globules in NGC 3132, De Marco et al. (2022) assumed \(A_{V}\)=1.7 mag and 3.9 mag for two specific knots. We follow their analysis. A direct measurement of the most heavily extinguished globule in Figure 7b gave an extinction \(A_{\rm 502nm}\)=1.1 mag. Using the wavelength dependence of the extinction curve from Cardelli et al. (1989), and \(N({\rm H})/A(V)\sim 2.3\times 10^{21}\) cm\({}^{-2}\) mag\({}^{-1}\), the column density becomes \(N_{\rm H}=2.2\times 10^{21}\) cm\({}^{-2}\). The diameter of this globule is about 0.4 arcsec. This yields a density \(n_{\rm H}\approx 5\times 10^{5}\) cm\({}^{-3}\) and a mass of \(m\approx 2\times 10^{-5}\,M_{\odot}\). The majority of clumps are of order 0.2 arcsec in diameter, or around 150 au. Assuming that the clumps are roughly spherical and have similar density, the mass of such a clump becomes \(m\approx 5\times 10^{-6}\) M\({}_{\odot}\). At these densities (noting that \(n_{\rm H_{2}}=0.5n_{\rm H}\)) the globules can be in pressure equilibrium with the ionized gas. The density in the ionized gas is approximately \(1.3\times 10^{3}\) cm\({}^{-3}\) and the temperature is around 9000 K (Ueta & Otsuka, 2021). Pressure equilibrium would be reached for a globule temperature and density \(T\approx 200\times 10^{6}/n_{\rm H}\) K. The globules could therefore be essentially stable and avoid collapse or dissipation, until the surrounding gas recombines. Whether they are indeed in pressure equilibrium or are still collapsing depends on the unknown temperature, turbulence and details of the formation process. Even the angular resolution of the _JWST_ images does not allow the system of globules to be cleanly resolved. However, to estimate their total number, we applied a peak-finding algorithm from the Python package photutils (Bradley, 2023) to the F212N image. A small section of the image is shown in Figure 8, with the locations of the peaks found by the algorithm indicated. The peak-finding algorithm identifies about 17 500 peaks, which, given the density of clumps and resulting overlap, is likely to be an underestimate of the total number. A manual count was done in small regions, which, extrapolated to the full area, gives an estimated population of \(\sim\)25 000 globules. Based on these numbers, the clumps have a combined mass of up to 0.1 M\({}_{\odot}\). In comparison, the CO emission also traces \(\sim 0.1\) M\({}_{\odot}\) (Bachiller et al., 1989). The molecular mass is similar to that found for NGC 3132 (De Marco et al., 2022).
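A minimal sketch of this clump counting with photutils is given below; the file name, detection threshold, and box size are illustrative choices of ours, not necessarily those used to obtain the numbers quoted above.

```python
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import find_peaks

# F212N mosaic, dominated by H2 emission (hypothetical file name).
data = np.nan_to_num(fits.getdata("ngc6720_f212n_i2d.fits"))

# Require peaks to stand well above a sigma-clipped background estimate.
mean, median, std = sigma_clipped_stats(data, sigma=3.0)
peaks = find_peaks(data, threshold=median + 5.0 * std, box_size=5)

# Each table row is one local maximum; overlapping globules can merge
# into a single peak, so this count is a lower limit.
print(0 if peaks is None else len(peaks), "peaks found")
```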
The typical filling factor of dense clumps in planetary nebulae has been estimated as \(7\times 10^{-5}\) (Zhang et al., 2004). In the Ring Nebula, this factor appears considerably higher: the clump filling factor for the shell is \(\sim 2\times 10^{-3}\). The density of the ionized gas in the shell is around \(1.3\times 10^{3}\) cm\({}^{-3}\) (Ueta & Otsuka, 2021). This gives an ionized mass in the shell of roughly \(0.15\,M_{\odot}\). The clumps may therefore account for up to half the mass of the shell. Ueta & Otsuka (2021) argue for a similar ratio. Sahai et al. (2012) used [C ii] observations made with the _Stratospheric Observatory for Infrared Astronomy_ (_SOFIA_) to show that 0.11 \(M_{\odot}\), half the mass of the nebula, lies in a photon-dominated region (PDR) zone. Liu et al. (2001) found densities of \(n_{\rm H}\sim 10^{5}\) cm\({}^{-3}\) for these PDR regions. The PDR emission zone is likely associated with the clumps. The PDR zone is mixed in with the ionized gas, in a region where clumps and ionized gas co-exist, rather than forming a separate outer shell. This zonal mixing is also seen in the ionized gas (e.g., Garnett & Dinerstein, 2001). The H\({}_{2}\) images reveal few cometary tails emanating from the globules. This is in contrast to the Helix Nebula, where the majority of the globules (at least in the inner region) have well-developed tails seen in H\({}_{2}\) (Matsuura et al., 2009). Some short extensions of around 1 arcsec in length can be seen in absorption in the _HST_ F502N and F658N images (Fig. 7a,b) but these do not have clear counterparts in the _JWST_ images. An exception is the largest globule in Fig. 7b, which shows a faint bow shape in the _HST_/F658N image that becomes a longer 'U' shape in the _JWST_/F300M and F335M images. The extensions seen in the _HST_ images tend not to be straight. They may trace an early stage of tail formation.

Figure 6: The boundary of the inner cavity of the Ring Nebula is close to circular, but its centre (marked by an X) is offset by 2 arcsec from the central star. The image is in the F335M filter. North is at the top, east is left.

Figure 7: Zoomed-in images of globules in the Ring Nebula. The locations of the three zoomed-in regions are indicated in Fig. C3. Globules are detected in H\({}_{2}\) emission in the _JWST_ NIRCam images and MIRI images, while some of them are seen in absorption against the diffuse ionized emission in the _HST_ F502N and F658N images.

### Radial spikes The halo shows multiple narrow, radial features pointing away from the central star, which following De Marco et al. (2022) we call 'spikes' (Figs. 9 and 10).8 They are seen only outside the bright shell, and mainly in H\({}_{2}\). The spikes are mainly a feature of the inner halo (O'Dell et al., 2013). Footnote 8: O'Dell et al. (2013) uses the word 'rays'. In a section of the F770W image covering 20 degrees of azimuth, we count 15-20 spikes. From this, we estimate that there are about 300-400 spikes in total. The exact number is imprecise due to the low contrast, small separation, and partial overlap of many spikes. The typical width appears to be around 0.4 arcsec. The typical length of the visible spikes is around 20 arcsec, but they may in a few cases extend twice as far, out to the outer edge of the nebula. The spikes are expected to arise from illumination effects where stellar light escapes through holes in the shell (De Marco et al., 2022).
They line up better with the central star than with the offset centre of emission, which indicates that the dominant cause lies in the current or recent radiation from the star. There are some cases of misalignment with the star, possibly where there is partial overlap between spikes. The number of spikes is of order 2 per cent of the number of globules. This suggests that there is no direct relation between the globules and spikes. O'Dell et al. (2013) argue that some of the spikes can be the shadows of large globules, but this is not evident from the data here. ### Arcs - concentric structures A series of faint, broken concentric arcs is apparent outside the bright ring in the H\({}_{2}\) halo. In places, up to 10 arcs can be identified (Figure 9). In some directions, the arcs appear to have been disrupted, possibly due to density variations in the nebula, but in general the curvature is regular (Fig. 10). These concentric structures are most apparent in the F770W image, which contains emission lines of H\({}_{2}\), H i and [Ar ii]. They are also clearly visible in the F1000W and F1130W images and can be distinguished at longer wavelengths in the F1800W and F2100W filters, but at lower contrast due to the poorer spatial resolution. At shorter wavelengths with better spatial resolution, the much clumpier appearance of the nebula in the lines isolated by these filters makes the features harder to trace. The arcs are seen in most directions but are obscured by the shell along the major axis. Using a portion of the F770W image in which the arcs are most clearly seen, we measure an average separation of about 1.5 arcsec. Assuming that the arcs are embedded in the outflow and are in the plane of the sky, this separation and the outflow velocity provide a time scale. The outflow velocity of the Ring Nebula decreases outward from \(\approx 40\) km s\({}^{-1}\) in the cavity to 20-25 km s\({}^{-1}\) in the shell and 15 km s\({}^{-1}\) in the halo (O'Dell et al., 2013; Martin et al., 2016; Sahai et al., 2012). We assume a value of \(20\pm 5\) km s\({}^{-1}\) at the location of the arcs, just outside the shell. At a distance of 790 pc, this separation corresponds to a time interval of \(280\pm 70\) yr. This is much too short an interval to be related to periodic thermal pulses that may enhance mass loss, which are expected to occur at intervals of \(10^{4}\)-\(10^{5}\) years. Instead, a common scenario invoked to explain arc systems like these is one in which a close binary companion modulates the outflow from the AGB star. The time interval of 280 years then corresponds to the orbital period of the companion. If the combined mass of the original AGB star and its companion is \(1.5\pm 0.5\) M\({}_{\odot}\), the orbital separation would be about \(50\pm 15\) au. _JWST_ images of the Southern Ring Nebula (NGC 3132) have revealed very similar concentric structures, with a separation of about 2 arcsec at a distance of 750 pc. These have also been attributed to the effects of a binary companion, with an orbital period of 290-480 yr (De Marco et al., 2022). To highlight the radial and concentric structures, we reprojected the MIRI F770W image into polar coordinates relative to the central star. We then applied the edge-detection algorithm of Canny (1986) to the reprojected image. The result is shown in Figure 10. Within the shell, the edges are primarily azimuthal, i.e. horizontal in the azimuth-radius plot. Further out, the edges abruptly change to radial (i.e. vertical).
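A compact sketch of this polar reprojection and edge detection, here using scikit-image rather than whatever implementation was actually employed, is given below; the file name, central-star pixel position, outer radius, and smoothing scale are all illustrative assumptions.

```python
import numpy as np
from astropy.io import fits
from skimage.feature import canny
from skimage.transform import warp_polar

# F770W mosaic and the central star's pixel position (illustrative values).
image = np.nan_to_num(fits.getdata("ngc6720_f770w_i2d.fits"))
cy, cx = 540.0, 620.0

# Reproject to polar coordinates about the star. warp_polar puts azimuth
# on axis 0 and radius on axis 1; transpose so that azimuth runs
# horizontally and radius vertically, as in Fig. 10.
polar = warp_polar(image, center=(cy, cx), radius=500).T

# Canny edge detection; sigma sets the smoothing scale. Concentric arcs
# then appear as horizontal edges and radial spikes as vertical edges.
edges = canny(polar, sigma=2.0)
```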
This confirms that the spikes start suddenly at the outer radius of the shell. There is a region of overlap between the horizontal arc lines and the vertical spikes. ## 5 Discussion ### PAHs The polycyclic aromatic hydrocarbon (PAH) feature at 11.3 \(\mu\)m has been detected in _Spitzer_ spectra of the Ring Nebula (Cox et al., 2016). _Spitzer_'s Infrared Spectrograph (IRS) covered a 99\(\times\)18 arcsec area along the major axis of the elliptically-shaped Ring Nebula, with a pixel scale of 1.8 arcsec.9 This spectral map showed that the PAH feature is emitted from the outer half of the shell, a more or less similar emitting region to the H\({}_{2}\) S(1) and S(2) lines at 17.04 and 12.28 \(\mu\)m. Footnote 9: Program 40536, P.I. H. Dinerstein The higher angular resolution _JWST_ images allow the PAH-emitting regions to be studied in more detail. The F335M filter encompasses the 3.3 \(\mu\)m PAH feature, while the F300M filter lacks this band but otherwise contains similar nebular emission (Table 1). The F1130W filter has the lowest contribution from emission lines (Table 1), and best reflects the contributions of the continuum and PAH bands. Both the F335M/F300M ratio image and the F1130W/F1000W ratio image (Fig. 11) show indications of a narrow ring of excess emission located at the outer edge of the shell.

Figure 8: The positions of clumps, shown in red, identified by a peak-finding algorithm in a section of the main ring in the F212N image.

From Table 1, the PAH contribution to F335M and F1000W is \(<14\) per cent and \(<7\) per cent (but note that this is estimated from MIRI IFU spectra which are not centred on the ring; Fig. C3). This ring shows up only in the two filters containing PAH bands (Fig. 11) and is not seen in other filter combinations (Figs. B1 and B2). We interpret these narrow ring excesses as possible PAH emission. Figure 12 presents the spectra of three different regions across the nebula, from the centre to the west, indicated as regions 1-3 in Figure 11. The spectra were extracted from the _Spitzer_ spectral map specified above. Region 1, the innermost region of the ring, lacks PAH emission at 11.3 \(\mu\)m but shows strong [S iv] emission in the _JWST_ F1000W filter. Further from the central star, the F1000W image is dominated by continuum emission, as found in the MRS North Region spectra (Table 1; van Hoof et al. in preparation). Regions 2 and 3 both show an excess of the F1130W over the F1000W flux, relative to a simply rising continuum, suggesting the presence of PAH emission. Also for these two fields, the F335M flux has a subtle excess over F300M. These filter bands need cautious interpretation, as both contain several H\({}_{2}\) lines. Nevertheless, these excesses appear at the same locations, so we interpret them as being likely due to PAH emission. The colour-excess ring seen in Figure 11 is centred at the current location of the central star, rather than at the offset centre found for other emission features. The ring also coincides with the thin region at the edge of the shell where the excitation drops off rapidly, as traced by the [N ii]/[O iii] ratio (O'Dell et al., 2013). This suggests that the PAH emission is excited by FUV radiation from the star that penetrates to regions where the nebula has become optically thick to the harder, H-ionizing UV photons, beyond the ionization front. The PAH emission distribution appears very different from that of the H\({}_{2}\) emission, which is far more widespread in the nebula.
However, we cannot conclude that weaker PAH emission is not present elsewhere in the nebula. Neither can we conclude whether the PAHs are created by the chemistry in the ring or have survived from the molecular AGB wind. ### Multiplicity A distant companion to the central star was identified by Gonzalez-Santamaria et al. (2021). This star, _Gaia_ DR3 2090486687506009472, lies 18.5 arcsec from the central star, corresponding to a projected distance of 0.07 pc. It shares both the proper motion and parallax of the central star, and is the only _Gaia_ star within 5 arcmin to do so. Gonzalez-Santamaria et al. (2021) proposed that this companion is a white dwarf, based on the _Gaia_ DR3 photometry. However, the \(B_{P}\) and \(R_{P}\) photometry is discrepant from the \(G\)-band photometry by 2.74 mag, and appears to be significantly affected by nebular emission lines. Similar problems appear to affect other optical and near-IR photometry of the star (Chambers et al., 2016; Skrutskie et al., 2006), which are discrepant from both the photometry published in the Hubble Source Catalogue (Whitmore et al., 2016) and from the _Gaia_ photometry. The available photometric data for this star are presented in Fig. 13. This figure includes photometry from our NIRCam and MIRI observations, extracted with an aperture of radius 1 arcsec. The 'background' for these _JWST_ data points was estimated in an annulus of inner/outer radii of 1.1/1.5 arcsec, with the aperture correction estimated using WebbPSF.

Figure 9: Regular concentric features in the outer regions of the F770W image of the Ring Nebula. Green arrows indicate the locations where these regularly-spaced features are most easily seen.

To establish the properties of the companion star, we use the Python Stellar Spectral Energy Distribution toolset (PySSED)10, an SED-fitting code based on the software presented in McDonald et al. (2009, 2012, 2017). We assume a distance to the star of 790 pc, and that the star has a composition of between [Fe/H] \(=-0.4\) and \(+0.0\) dex and a corresponding [\(\alpha\)/Fe] \(=+0.1\) to \(+0.0\) dex, based on its height of 190 pc from the Galactic plane. The Ring Nebula has a metallicity which is approximately solar (Liu et al., 2004; Guerrero et al., 1997). Footnote 10: PySSED is the code underlying the S-Phot software data application. It is currently under development and will be hosted at [https://explore-platform.eu/sdss](https://explore-platform.eu/sdss) The most significant unknown is the extinction by dust in the nebula at this location, which also depends on whether the star is in front of or behind the main shell. We assume a plausible range of \(c_{\rm H\beta}=0.1\) to \(0.4\), giving \(E(B-V)=c_{\rm H\beta}/1.46=0.068\) to \(0.274\) mag. Assuming that the extinction is the dominant source of uncertainty, we fit the parameters listed in Table 2.

Figure 10: Application of an edge-detection algorithm to a section of the polar-projected image, highlighting the predominantly radial and concentric nature of the nebular structures. Top: reprojected F770W image. Centre: edges detected using the algorithm of Canny (1986). Bottom: edges overlaid on the reprojected image. The horizontal direction is the azimuthal direction of the nebula, and the vertical direction is the radial direction, with the central star towards the bottom. The horizontal axis covers 90 degrees and the vertical axis extends 45 arcsec.
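As a quick numerical check of the extinction conversion adopted above, a minimal sketch follows; the function name is ours, and the \(R_{V}=3.1\) used to convert \(E(B-V)\) to \(A_{V}\) is a standard assumption rather than a value taken from this paper.

```python
def ebv_from_chbeta(c_hbeta: float) -> float:
    """E(B-V) from the logarithmic Hbeta extinction, using the relation
    c_Hbeta = 1.46 E(B-V) adopted in this paper."""
    return c_hbeta / 1.46

# Plausible range assumed for the companion's SED fit.
for c in (0.1, 0.4):
    ebv = ebv_from_chbeta(c)
    av = 3.1 * ebv  # standard R_V = 3.1 extinction law (our assumption)
    print(f"c_Hbeta={c:.1f}: E(B-V)={ebv:.3f} mag, A_V={av:.2f} mag")
```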
These correspond to a main sequence star of approximate spectral type M2-M4 with a mass of 0.3-0.5 M\({}_{\odot}\) (Cifuentes et al., 2020). At the projected distance of the companion from the central star, the orbital period would be of order \(10^{6}\) yr and the orbital velocity 250 m s\({}^{-1}\). If the star was bound before the AGB mass loss, it is likely unbound now. It is plausible that it was originally in a more compact orbit. However, the proper motion shows that it cannot have moved away by more than 3 arcsec during the lifetime of the PN. Although the companion star lies roughly along the direction of the minor axis, it is not exactly aligned: the offset is about 25\({}^{\circ}\). The concentric arcs indicate the additional presence of a closer companion, with an estimated period of 280 years, corresponding to a semi-major axis of 50 au for an assumed combined mass of 1.5 M\({}_{\odot}\). This is similar to NGC 3132 where there is also evidence for at least a triple stellar system. The puzzling 2 arcsec offset between the centre of certain nebular structures and the central star could be related to the multiplicity. A speculative model for this involves an elliptical orbit for the close pair. As most of the time in an eccentric orbit is spent near largest separation (apoastron), the nebula is ejected with a typical central velocity corresponding to this part of the orbit. The offset acquired over a dynamical time of 4,000 years corresponds to a velocity of 2 km s\({}^{-1}\). For the system above and assuming an ellipticity of 0.5, a velocity difference of the primary component between systemic and apoastron of 2 km s\({}^{-1}\) requires an equal mass binary. It should be possible to detect a main sequence star with such a mass, but a white dwarf companion would be difficult to see. The companion star has remained bound to the central star over its lifetime. Assuming that it formed at this distance, this suggests that no other star has come closer to it than half the current separation of 0.07 pc. For an assumed number density of stellar systems of 0.1 pc\({}^{-3}\), the mean free path is 2600 pc. Assuming a random velocity of 10 km s\({}^{-1}\), this gives a time between encounters of 1 Gyr. \begin{table} \begin{tabular}{c c c} \hline [Fe/H] & 0.0 & \(-\)0.4 \\ \hline \(T_{\rm eff}\) & 3281 – 3406 K & 3382 – 3635 K \\ \(L\) & 0.0136 – 0.0199 L\({}_{\odot}\) & 0.0328 – 0.0433 L\({}_{\odot}\) \\ \(R\) & 0.362 – 0.406 R\({}_{\odot}\) & 0.525 – 0.604 R\({}_{\odot}\) \\ \hline \end{tabular} \end{table} Table 2: PySSED parameters of the companion star, for two values of the metallicity. Figure 11: F335M/F300M ratio (left), F1130W/F1000W ratio (middle) and _Spitzer_ spectral map at 11.3 \(\mu\)m (10.95–11.65 \(\mu\)m, corresponding to _JWST_ F1130W filter) (Cox et al., 2016) on the same scale. The narrow ring at the edge of the shell, guided by a black ellipse, is interpreted as the location of PAH emission. The _Spitzer_ spectra were extracted at three different regions and plotted in Fig. 12. North is at the top. Figure 12: The extracted _Spitzer_ spectra of the Ring Nebula (Cox et al., 2016), from the top towards the inner ring from region 1 to 3 (Fig. 11). Region 3 has an excess of PAHs at 11.3 \(\mu\)m in Figure 11, and the _Spitzer_ spectra confirm the presence of 11.3 \(\mu\)m PAHs, while region 1, the innermost region of the ring, lacks observed PAHs.
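For reference, the two scales quoted above follow directly from the quantities given in the text; the following back-of-envelope check (our restatement of the article's own estimates, not an additional result) uses Kepler's third law in solar units and a simple mean-free-path estimate:

```latex
% Semi-major axis of the close companion from the 280-yr arc period
% (Kepler's third law with P in yr, a in au, M in solar masses):
a = \left( M_{\rm tot}\,P^{2} \right)^{1/3}
  = \left( 1.5 \times 280^{2} \right)^{1/3}\,\mathrm{au} \approx 49\,\mathrm{au}
  \quad (\text{quoted as } \sim 50\,\mathrm{au}).

% Mean free path for encounters closer than half the 0.07 pc separation:
\lambda = \frac{1}{n\,\sigma}
        = \left[\, 0.1\,\mathrm{pc^{-3}} \times \pi\,(0.035\,\mathrm{pc})^{2} \,\right]^{-1}
        \approx 2.6\times 10^{3}\,\mathrm{pc}.
```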
This indicative calculation only favours a higher mass progenitor for the central star of \(\gtrsim 2\,M_{\odot}\) because of the evolutionary timescales. A more massive system is also more robust against perturbations. Alternatively, the companion may have formed in a tighter orbit and escaped due to the AGB mass loss of the primary. This requires a mass loss of 50 per cent or more of the original combined mass, which also indicates a primary progenitor mass of 1.5 \(M_{\odot}\) or more. ### Comparison to similar planetary nebulae The two well-studied PNe most similar to the Ring Nebula are the so-called Southern Ring Nebula and the Helix Nebula. NGC 3132 (the Southern Ring Nebula) lies at a similar distance, at 750 pc. It was imaged with _JWST_ as part of its Early Release Observations (ERO) programme (Pontoppidan et al., 2022). A detailed analysis of these images was carried out by De Marco et al. (2022). NGC 3132 was imaged in fewer filters than NGC 6720 (the Ring Nebula), but the images show very similar structures to those we find here, including the numerous globules visible in H\({}_{2}\), the radial spikes in the halo, the system of concentric arcs and the structure inside the cavity. The Helix Nebula is closer to us, at 200 pc. It too has a system of globules, but the globules in the inner part of its shell have well-defined tails (O'Dell et al., 2004; Matsuura et al., 2009), which are rarely seen in NGC 6720 and NGC 3132. Spikes in the halo are detected in the _Spitzer_ 5.8 \(\mu\)m image of the Helix (Hora et al., 2006). It is interesting to note that the numbers of globules in each of these three nebulae are similar, of order 20 000 (Meixner et al., 2005; De Marco et al., 2022). Applying the same peak-finding algorithm that we used to count the clumps in the Ring Nebula (Section 4.3) to the Southern Ring Nebula we find 15 000 clumps. However, there are notable differences in the size and shape of globules in the Ring Nebula compared to the other two objects. The Helix globules typically have a diameter of 2 arcsec in H\({}_{2}\) images (Matsuura et al., 2009) at a distance of 200 pc, and one of the largest globules in the Southern Ring Nebula is 0.5 arcsec in diameter at a distance of 750 pc (De Marco et al., 2022). These angular diameters correspond to \(\sim\)400 au. In contrast, the globules in the Ring Nebula tend to be smaller. One of the largest globules in the Ring Nebula is indicated in Fig. 3. The diameter of the head is approximately 0.4 arcsec across in the F212N H\({}_{2}\) image, corresponding to 300 au. The typical diameter of the globules is 0.2 arcsec, or 150 au. Most of the Ring Nebula globules also lack significant tails. Another similar PN of relevance is NGC 2346, which has been imaged in H\({}_{2}\) from the ground (Manchado et al., 2015). It has a more flaring bipolar structure with a thinner torus than the Ring Nebula, but the torus shows similar H\({}_{2}\) clumps. The clumps range in size from 0.16 to 0.34 arcsec which, at a _Gaia_ DR3 distance of 1400 pc, corresponds to 200 to 500 au. This is larger than for the Ring Nebula but similar to those in the Southern Ring and the Helix Nebula. The formation mechanism of the globules is still under debate. Globules may form before the ejection of the PN, and survive in the medium (Zanstra, 1955), or alternatively, they may have formed during the PN phase (Capriotti, 1973; Garcia-Segura et al., 2006).
The central stars of these four PNe are all on the cooling track of the HR diagram. This supports a model in which the globules form after the rapid fading of the central star on the cooling track of the white dwarf phase, in the recombining dense gas of the shell (O'Dell et al., 2007; van Hoof et al., 2010). Further support for a picture where clump formation occurs predominantly at late times, rather than before the ejected envelope has been ionized, was found by Huggins & Mauron (2002), who demonstrated the absence of small-scale structures in objects in the early stages of transition from proto-planetary nebulae to full-fledged PNe. During recombination, the recombination time (\(t_{\rm rec}\)) is shorter in denser regions (\(t_{\rm rec}=10^{5}\,{\rm yr}/n_{e}\) for hydrogen). If density fluctuations are present, the higher-density regions will recombine first, lose electron pressure, and collapse under the pressure of the surrounding ionized gas, which will be \(\sim\)200 times higher than in the recombined regions. Molecules and dust can form, shielded and shielding against the UV radiation. The effect of this shielding can be seen via extinction by the neutral globules against the ionized gas background emission and shadowing in the tails. The tails in the globules of the Helix Nebula are best explained by shadowing (Canto et al., 1998; Andriantsaralaza et al., 2020) within the photoionized region, triggering recombination and allowing CO to form along the tail (Andriantsaralaza et al., 2020). Involvement of stellar wind ablation in tail formation (Dyson et al., 2006; Matsuura et al., 2007) cannot be excluded, but stars on the white dwarf cooling track do not have strong stellar winds. The lack of tails in the Ring Nebula may point to it being in an earlier stage of evolution. The shell of the Helix Nebula is larger than those of the other nebulae, at a radius of 0.33 pc (O'Dell et al., 2004) versus 0.1 pc for NGC 6720 and 0.07 pc for NGC 3132. The Helix Nebula has a larger dynamical age of \(\sim\)7,400 years (Gonzalez-Santamaria et al., 2021) and has likely been on the cooling track for longer. Figure 13: SED of a companion star candidate, _Gaia_ DR3 2090486687506009472. Orange points show archival _Hubble Space Telescope_ photometry and our new _JWST_ photometry. Blue points show other literature data from _Gaia_, the Two Micron All-Sky Survey (2MASS) and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) as indicated in the text. Many of these are strongly affected by the bright nebular background. Overlain are BT-Settl model atmospheres (Allard, 2014) for 3400 K and 3600 K stars with log(\(g\)) = 4.5 dex, [Fe/H] = –0.5 dex, [\(\alpha\)/Fe] = +0.2 dex and the appropriate reddening. On the cooling track, the star initially fades rapidly, leading to recombination. But the fading slows down dramatically after that (Miller Bertolami, 2016). During this phase, the nebula keeps expanding and the density drops. Once the star stops fading, the ionization front should move outward through the nebula again, due to the expansion of the nebula and hence lower density and longer recombination time scales. The ionization front can now pass the previously formed globules. Once that happens, the globules may develop tails in the shadowed region in the re-ionized gas. This could explain why the inner globules in the older Helix Nebula have tails, while those in the younger nebulae do not.
#### 5.3.1 Hydrodynamical model Figure 14 shows density snapshots from hydrodynamical models (Garcia-Segura et al., 2006, 2018) for the Ring Nebula and the Southern Ring Nebula. The models are extracted for an age of 4000 yr after envelope ejection, a time equal to the dynamical age of the Ring Nebula (O'Dell et al., 2013). The 2-d simulation has a resolution of 1000\(\times\)1000 zones in spherical polar coordinates (R, \(\Phi\)) on the orbital or equatorial plane. The model assumes ejection via a common envelope evolution event which provides an equatorial density enhancement. There is currently no evidence that the Ring Nebula underwent a common envelope event, since the posited binary companions are too distant to lead to a common envelope phase. However, the common envelope leads to formation of a high-density torus which is the structure of the shell of the Ring Nebula. The model follows the expansion of the ejected shell, and includes central star evolution (Vassiliadis and Wood, 1994) and ionization of the nebula. The model (Fig. 14) shows the formation of higher density regions in the form of clumps in a way similar to that described above. They are formed due to the thin-shell instability in the swept-up shell. The thin-shell instability acts in wind-blown bubbles (Vishniac, 1983) and is very effective at producing clumps (Garcia-Segura and Mac Low, 1995). It may involve photo-ionization. The number of clumps depends on the thickness of the swept-up shell, which is related to the temperature and pressure of the shell, and also depends on the expansion velocity. Another instability can occur in thicker shells sandwiched between an ionization front and a shock front, which can fragment through a so-called I-S (Ionization-shock) instability (Garcia-Segura and Franco, 1996). This does not involve a wind; in the I-S instability the clumps form at the ionization front. In the model, the first mechanism acts, but both mechanisms can lead to clumping. The clumps are in pressure equilibrium. This may provide a natural explanation for the clumps being larger in the Helix Nebula: it has expanded further with lower density in the ionized region, allowing the clumps to expand. The I-S instability can form tails directly behind the clumps, caused by direct shadowing. The thin-shell mechanism does not form tails connected to the clumps, because of the hot-shocked gas and internal gas motions in the bubble. However, once the wind stops, tails can form progressively while the hot-shocked gas disappears, in the same way as for the I-S instability. Fig. 14 compares the Ring Nebula with the Southern Ring Nebula, with models for each (De Marco et al., 2022). The model for the Southern Ring corresponds to a slightly later phase in the evolution. ### Spikes Narrow spikes are found in the halo of the Ring Nebula, the Southern Ring, and the Helix. Although rarely reported (e.g. NGC 6543; Guerrero et al., 2020), they may be a common phenomenon in PNe. They are seen in H\({}_{2}\) emission in the Ring Nebula, the Southern Ring (De Marco et al., 2022) and the Helix Nebula (Hora et al., 2006) but in optical emission in NGC 6543 (Guerrero et al., 2020). The latter paper presents a hydrodynamical model where the spikes appear to form behind holes in the dense shell. The fact that they are well aligned with the star shows the importance of illumination.
There does not appear to be a direct relation between the spikes and the clumps; the spikes number only around 2 per cent of the number of clumps, and the clumps in the Ring Nebula do not have tails. The spikes in the Ring Nebula are seen in H\({}_{2}\). They are likely excited by radiation passing through the shell. While the clumpy gas in the shell is optically thick for \(<\)912 Å radiation which can photoionize hydrogen, it can still be optically thin at \(>\)912 Å. Thus, longer wavelength UV radiation can escape through the swept-up shell, exciting H\({}_{2}\) molecules via fluorescence in the remnant AGB wind. The H\({}_{2}\) molecules in the halo are unlikely to result from recombination of the ionized gas, since recombination times in the halo are much longer than those in the main shell. The spikes therefore likely formed already during the high-luminosity PN phase on the horizontal track, and were shadowed during this phase, which allowed the original molecules to survive while the surrounding halo became ionized. The hydrodynamical model described above also produces spikes. They form in the halo where there is no hot shocked gas, again by shadowing. The lack of hot gas explains why spikes formed but tails did not. There are open issues: the model predicts a similar number of spikes and clumps, and because of the high density it has fast recombination. Comparison of this work with that for the Southern Ring Nebula (De Marco et al., 2022) shows that the spikes are less developed in the Ring Nebula. The mass of the progenitor star may play a role, as the central star of the Ring Nebula was likely of lower initial mass (\(\sim\) 1.5-2 \(M_{\odot}\)) than that of the Southern Ring (\(2.86\pm 0.06\)\(M_{\odot}\)). The total numbers of globules in these three PNe are similar, of order 20 000. This may be because the densities and expansion velocities of the nebulae were similar, as these two parameters are key to triggering the instabilities that initiated the formation of the globules. ### Halo The _JWST_ images show the presence of H\({}_{2}\) in the halo, for example in the F335M filter. The images show a combination of azimuthal structures further out and flocculent structure in the inner halo. The latter is mainly seen closer to the major axis, outside the shell. Bryce et al. (1994) proposed a biconical structure, with two lobes extending into the halo (their Fig. 11). This model was refined by Sahai et al. (2012) and O'Dell et al. (2013a). The bicone has a wide opening angle, and is confined by the torus which forms the shell. The polar axis is close to the line of sight, making the bicone not obvious in the images. The projected bicone is seen in long-slit spectra, however, due to its higher flow velocity. We created a morphological model consisting of a barrel-like torus, and a wide bipolar flow angled 30 degrees from the line of sight, towards the minor axis of the shell. The lobes start at the shell and move out into the inner halo. The lobe is assumed to be slightly flaring, with radius \(r\propto z^{1.5}\). The H\({}_{2}\) is assumed to be located on the wall of the cone, swept out from the shell. This model is speculative. Fig. 15 shows the F770W _JWST_ image, with the contours of the wall of the cone superposed. In this model, the two cones are slightly separated in projection. Where the contours run close together, there is a longer line of sight along the cone and more H\({}_{2}\) emission may be expected. The contours roughly agree with the location of the H\({}_{2}\).
Interestingly, it can also explain the two stripes seen in projection against the cavity which become the edge of the bicone. Finally, the arcs and stripes are both seen where the wall of the cones is seen under the most favourable angle, around the polar axis. The model is overplotted on the F1800W/F1000W ratio image in the bottom panel of Fig. 15. F1800W is dominated by [S iii] and F1000W by [S iv]. The latter is seen in the openings of the cones which provide a clear line of sight into the cavity. The lower excitation [S iii] comes from the shell. The region between the stripes also shows this emission from the shell, projected on the cavity, and therefore has a very different appearance in the ratio image. ## 6 Conclusion The _JWST_ images have revealed a wealth of structural detail in NGC 6720. The nebula has a highly ionized inner cavity, a shell in which some 20 000 dense clumps contain up to half the total mass, a thin ring of possible PAH emission, and a halo that contains around 10 concentric arcs and 400 spikes. The centre of the nebula is offset by 2 arcsec from the central star. Much of this detail is shown by the H\({}_{2}\) emission. The globules/clumps have densities of \(n_{\rm H}\sim 10^{5}\)-\(10^{6}\) cm\({}^{-3}\) and account for \(\sim 0.1\,M_{\odot}\), up to half the mass of the PN. They are modelled as arising from thin-shell or I-S instabilities. The PNe in which clumps are seen have central stars that are already on the cooling track, suggesting that the clumps form during the rapid fading of the star. The globules in the Ring Nebula have little or no tails, unlike those in the Helix Nebula. The globules in the Helix Nebula are also larger. These differences could be due to the Helix Nebula being more evolved and the Ring Nebula being in an earlier phase of evolution. Figure 14: (Right) Density snapshot of a hydrodynamic simulation for the Ring Nebula (right top) and the Southern Ring Nebula (right bottom). The hydrodynamic model for the Southern Ring Nebula is taken from De Marco et al. (2022). (Left) _JWST_ images of the Ring Nebula (top left; F770W) and the Southern Ring Nebula (bottom left; F212N), highlighting the spikes and clumps. The Southern Ring Nebula has evolved faster, and has more fully developed spikes and clumps. Figure 15: Top: The F770W images with the contours of the bicone superposed. Black lines show the torus, blue and red the cones. The dashed lines show the cone in the halo. Bottom: the same model superposed on the F1800W (red) / F1000W (blue) ratio image. The radial spikes in the halo are seen in H\({}_{2}\). They are regions partially shadowed by the shell, predicted by the hydrodynamical model. The central exciting star is inferred to be a member of a triple system. This consists of: the central star itself, with a progenitor mass of \(\sim\)1.5-2 \(M_{\odot}\); a binary companion at some 35 au, responsible for the evenly spaced concentric arc structures in the nebula; and a distant, common proper motion companion at 0.07 pc which is inferred to be a low-mass M2-M4 main sequence star. A schematic model is presented which consists of the central torus and two polar cones, in which the H\({}_{2}\) is swept up on the walls of the cones. The model can explain the location of the H\({}_{2}\) emission in the halo and the stripes seen in projection against the cavity. Many features we see in the _JWST_ images of the Ring Nebula, including the spikes, are shared by several other well-studied PNe of similar morphology.
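For scale, dividing the numbers quoted in the conclusions above (an illustrative estimate, not an additional measurement from the paper) gives a typical clump mass of

```latex
\bar{m}_{\rm clump} \approx \frac{0.1\,M_{\odot}}{2\times 10^{4}}
  = 5\times 10^{-6}\,M_{\odot} \approx 1.7\,M_{\oplus}.
```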
The time when planetary nebulae could be modelled as uniform density spheres is long gone. They contain a large variety of structures and phases, from highly ionized hot gas to dense molecular clumps. ## Acknowledgements This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for _JWST_. These observations are associated with program #1558. Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESAC/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This work is based in part on observations made with the _Spitzer Space Telescope_, which was operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This study is based on the international consortium of ESSENcE (Evolved Stars and their Nebulae in the JWST era). R.W. and M.M. acknowledge support from STFC Consolidated grant (2422911). M.J.B. and R.W. acknowledge support from European Research Council (ERC) Advanced Grant SNDUST 694520. This research has used data, tools or materials developed as part of the EXPLORE project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101004214. A.A.Z., I.M. and N.L.J.C. acknowledge support from this grant. A.A.Z. acknowledges funding through UKRI/STFC through grant ST/T000414/1. I.A. acknowledges support from the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES; Finance Code 001) and the Program of Academic Research Projects Management, PRPI-USP. H.L.D. acknowledges support from grants JWST-GO-01558.03 and NSF AAG-1715332. G. G.-S. thanks Michael L. Norman and the Laboratory for Computational Astrophysics for the use of ZEUS-3D. The computations were performed at the Instituto de Astronomia-UNAM at Ensenada. P.J.K. acknowledges support from the Science Foundation Ireland/Irish Research Council Pathway programme under Grant Number 21/PATH-S/9360. R.S.'s contribution to the research described here was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. J.C., N.C. and E.P. acknowledge support from the University of Western Ontario, the Institute for Earth and Space Exploration, the Canadian Space Agency (CSA)[22JWGO1-22], and the Natural Sciences and Engineering Research Council of Canada. This research made use of photutils, an astropy package for detection and photometry of astronomical sources (Bradley, 2023). ## Data Availability _JWST_ data are available from the Barbara A.
Mikulski Archive for Space Telescopes (MAST; [https://mast.stsci.edu](https://mast.stsci.edu)). Reduced images will be available by request to the authors.
2301.05316
Traffic Steering for 5G Multi-RAT Deployments using Deep Reinforcement Learning
In 5G non-standalone mode, traffic steering is a critical technique to take full advantage of 5G new radio while optimizing dual connectivity of 5G and LTE networks in multiple radio access technology (RAT). An intelligent traffic steering mechanism can play an important role to maintain seamless user experience by choosing appropriate RAT (5G or LTE) dynamically for a specific user traffic flow with certain QoS requirements. In this paper, we propose a novel traffic steering mechanism based on Deep Q-learning that can automate traffic steering decisions in a dynamic environment having multiple RATs, and maintain diverse QoS requirements for different traffic classes. The proposed method is compared with two baseline algorithms: a heuristic-based algorithm and Q-learning-based traffic steering. Compared to the Q-learning and heuristic baselines, our results show that the proposed algorithm achieves better performance in terms of 6% and 10% higher average system throughput, and 23% and 33% lower network delay, respectively.
Md Arafat Habib, Hao Zhou, Pedro Enrique Iturria Rivera, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Steve Furr, Melike Erol-Kantarci
2023-01-12T22:02:25Z
http://arxiv.org/abs/2301.05316v1
# Traffic Steering for 5G Multi-RAT Deployments using Deep Reinforcement Learning ###### Abstract In 5G non-standalone mode, traffic steering is a critical technique to take full advantage of 5G new radio while optimizing dual connectivity of 5G and LTE networks in multiple radio access technology (RAT). An intelligent traffic steering mechanism can play an important role to maintain seamless user experience by choosing appropriate RAT (5G or LTE) dynamically for a specific user traffic flow with certain QoS requirements. In this paper, we propose a novel traffic steering mechanism based on Deep Q-learning that can automate traffic steering decisions in a dynamic environment having multiple RATs, and maintain diverse QoS requirements for different traffic classes. The proposed method is compared with two baseline algorithms: a heuristic-based algorithm and Q-learning-based traffic steering. Compared to the Q-learning and heuristic baselines, our results show that the proposed algorithm achieves better performance in terms of 6% and 10% higher average system throughput, and 23% and 33% lower network delay, respectively. Multi-RAT, traffic steering, reinforcement learning ## I Introduction The dual connectivity between long term evolution (LTE) and fifth generation new radio (5G NR) results in multiple radio access technologies (multi-RAT) [1, 2]. On the other hand, each type of RAT is supposed to have distinctive capabilities to serve user equipment (UE) with diverse quality-of-service (QoS) requirements. This raises the need for steering a specific class of traffic to a certain RAT to fulfill the QoS demands. For instance, high throughput video traffic can be better served by 5G NR. On the contrary, steering voice traffic to an LTE base station (BS) with wider coverage can be a better decision since such traffic is not throughput hungry but requires more coverage to avoid frequent handovers. However, steering a specific class of traffic continuously to a certain RAT may cause several problems. The system may suffer from higher delay due to excessive load, and reduced throughput because of packet drops. These issues are quite challenging to address, especially when 5G NR facilitates dense network deployments and an increased number of users. To address the above-mentioned challenges, an AI-enabled traffic steering scheme emerges as a promising approach to manage densely deployed networks with dynamic requirements. In recent years, AI and machine learning have been applied to various other problems in 5G [3]. Even though the emergence of the 5G non-stand-alone (NSA) mode has drawn the attention of researchers recently, most existing works linked with traffic steering lack a comprehensive tool to overcome the complexity. For instance, in [4], the authors propose a traffic steering scheme based on a threshold calculated using parameters like the load at each type of RAT, channel condition, and service type, but the method lacks the intelligence to handle dynamic wireless environments. Compared with conventional model-based optimization methods, machine learning, especially reinforcement learning (RL) algorithms, can significantly reduce the complexity of defining a dedicated optimization model [5]. Advanced machine learning techniques like deep reinforcement learning (DRL) [6] can not only automate traffic steering in a dynamic 5G wireless environment, but can also handle a larger state-action space than traditional reinforcement learning.
Therefore, unlike previous works, we propose a DRL-based traffic steering scheme that performs RAT-specific traffic steering in a multi-RAT environment, maintaining the QoS requirements of different traffic classes in a dynamic 5G NSA mode to ensure seamless network activity and a smooth user experience. In this paper, we seek to balance the QoS demands of all the traffic classes simultaneously by proposing a Deep Q-network (DQN)-based traffic steering scheme. The reward and state functions of the proposed DQN-based traffic steering scheme are carefully designed to achieve satisfactory performance based on two crucial key performance indicators (KPIs): network delay and average system throughput. Performance of the proposed method is compared with two baseline algorithms: a Q-learning-based method [7] and a heuristic-based algorithm adopted from [4]. It gains a 6% and 10% increase in average system throughput compared to the Q-learning and heuristic-based baselines, respectively. Furthermore, it achieves a 23% and 33% decrease in network delay compared to the mentioned baselines. The rest of the paper is organized as follows: Section II presents the related works. We discuss the system model and the problem formulation in Section III. Section IV covers the proposed DQN-based traffic steering scheme along with the baselines. The performance evaluation of the proposed DQN-based traffic steering method is presented in Section V. Finally, the paper is concluded in Section VI. ## II Related works In this section, we summarize the state-of-the-art literature on traffic steering. Prasad et al. propose a dynamic traffic steering scheme for energy efficient radio access network moderation in ultra-dense 5G networks [8]. A unified traffic steering scheme by Dryjanski et al. is proposed for LTE-advanced pro, aiming at optimal radio resource allocation in multi-RAT networks [9]. Most recently, Khaled et al. have proposed a cell zooming technique to steer traffic in a software defined radio-enabled LTE network that uses renewable energy sources to lessen on-grid power consumption [10]. Gijon et al. propose a data-driven approach to perform traffic steering in multi-carrier LTE networks, in which traffic steering is conducted based on reference signal received quality-based handover margins [11]. Nevertheless, 5G deployments have made it more challenging to develop an elegant traffic steering scheme because of the increased number of users and dual connectivity. Passas et al. propose a pricing-oriented network selection process for distributed heterogeneous networks based on the imposed load pressure at a particular RAT [12]. A heuristic-based approach proposed in [4] performs traffic steering based on a threshold level calculated using parameters like channel condition, load level at each RAT, and service type. Priscoli et al. address the problem of traffic steering using a Q-learning-based solution that aims at maintaining QoS and performs load balancing in a 5G heterogeneous network [13]. Different from the previous works, this paper provides automation in the system via a DRL-based traffic steering scheme that can perform RAT-specific traffic steering in a multi-RAT environment. Furthermore, the proposed method can maintain the QoS requirements of different traffic classes in a dynamic 5G NSA mode, ensuring seamless network activity and a smooth user experience.
## III System Model and Problem Formulation ### _System Model_ In this work, a multi-RAT network is considered having \(Q\) classes of RATs, where each class of RAT \(q\) represents a particular access technology (LTE, 5G, etc.). Multiple users are associated with different types of RATs via dual connectivity. A UE can maintain \(K\) types of traffic classes. Fig. 1 presents the network model considered in this study. We represent the three different classes of traffic, voice, gaming, and video, as TC1, TC2, and TC3 respectively in the figure. We have designed our network environment in a way where small cells are within the range of a macro-cell. UEs have dual connectivity with LTE or 5G RAT and traffic can be steered to either one of these RATs based on our proposed method. The total downlink bandwidth, \(B\) in MHz, is divided into \(N_{RB}\) resource blocks. A resource block contains a set of 12 contiguous subcarriers. Consecutive resource blocks are grouped to constitute a resource block group (RBG) as defined in [3]. Each RBG \(h\) is allocated a certain transmission power \(p_{h,b}\) by a BS \(b\). Based on our system model, each BS holds a number of transmission buffers corresponding to the number of users connected to it. Every transmission time interval (TTI), the downlink scheduler assigns resources to the users having pending data transmissions. The link capacity between the UE \(u\) and BS \(b\) can be formulated as follows: \[C_{u,b}=\sum_{h=1}^{H}\omega_{h}\log_{2}\left(1+\frac{p_{h,b}x_{h,u,b}g_{h,u, b}}{\omega_{h}N_{0}+\sum_{m\in B}p_{h,m}x_{h,u,m}g_{h,u,m}}\right), \tag{1}\] where \(\omega_{h}\) is the bandwidth of RBG \(h\), \(p_{h,b}\) is the transmit power of the BS \(b\) on \(h\), \(g_{h,u,b}\) is the channel coefficient and \(x_{h,u,b}\) is the RBG's allocation indicator of the link \((h,u,b)\). \(N_{0}\) is the additive white Gaussian noise single-sided power spectral density. \(p_{h,m}\) is the transmit power of the interfering BS \(m\), \(g_{h,u,m}\) is the channel coefficient, and \(x_{h,u,m}\) is the allocation indicator of link \((h,u,m)\). Each link has a capacity limit. Traffic flows passing through a link should not exceed the capacity of the link in the system: \[\sum_{f\in F}d^{f}x_{u,b}^{f}\leqslant C_{u,b}\quad\forall(u,b)\in L, \tag{2}\] where \(F\) is the set of all the flows in the network and \(d^{f}\) is the capacity demand of the flow \(f\in F\) from UE \(u\) to BS \(b\). \(x_{u,b}^{f}\) is a binary \((0,1)\) indicator that is '1' if the link \((u,b)\) from UE \(u\) to BS \(b\) has been used, and '0' otherwise. \(L\) is the set of links and \(C_{u,b}\) is the capacity of link \((u,b)\) as presented in eq. (1). In our system model, the delay is considered as the summation of transmission and queuing delay, which is as follows: \[D_{k,b}=D_{k,b}^{Trx}+D_{k,b}^{q}, \tag{3}\] where \(D_{k,b}^{Trx}\) is the transmission delay experienced for a particular traffic type \(k\) and BS \(b\), and \(D_{k,b}^{q}\) is the queuing delay experienced for a particular traffic type \(k\) at BS \(b\) for a user \(u\). The transmission delay can be calculated as follows: \[D_{k,b}^{Trx}=\frac{L_{u,b}}{C_{u,b}}, \tag{4}\] where \(L_{u,b}\) is the packet length and \(C_{u,b}\) is the link capacity as stated in eq. (1). Fig. 1: Illustration of network environment with one LTE macro cell and several 5G small cells.
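As an illustration of the rate model above, the following Python sketch evaluates the link capacity of eq. (1) and the transmission delay of eq. (4). All numerical values here (RBG bandwidths, powers, channel gains, noise density, packet length) are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def link_capacity(omega, p_srv, x_srv, g_srv, p_int, x_int, g_int, n0):
    """Link capacity of eq. (1), summed over the H RBGs.

    omega  : (H,) bandwidth of each RBG [Hz]
    p_srv  : (H,) serving-BS transmit power per RBG [W]
    x_srv  : (H,) 0/1 allocation indicators for the serving link
    g_srv  : (H,) channel coefficients of the serving link
    p_int, x_int, g_int : (H, M) same quantities for M interfering BSs
    n0     : noise power spectral density [W/Hz]
    """
    interference = (p_int * x_int * g_int).sum(axis=1)          # sum over BSs m
    sinr = p_srv * x_srv * g_srv / (omega * n0 + interference)  # per-RBG SINR
    return (omega * np.log2(1.0 + sinr)).sum()                  # bits/s

# Illustrative numbers (assumed): 4 RBGs of 4x180 kHz each, 2 interfering BSs.
H, M = 4, 2
omega = np.full(H, 4 * 180e3)
cap = link_capacity(omega,
                    p_srv=np.full(H, 1.0), x_srv=np.ones(H),
                    g_srv=np.full(H, 1e-9),
                    p_int=np.full((H, M), 1.0), x_int=np.ones((H, M)),
                    g_int=np.full((H, M), 1e-11),
                    n0=4e-21)
packet_bits = 12e3
# Transmission delay of eq. (4): packet length over link capacity.
print(f"capacity = {cap/1e6:.2f} Mbps, Tx delay = {packet_bits/cap*1e3:.3f} ms")
```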
### _QoS Requirements and Problem Formulation_ To be able to perform traffic steering for different traffic classes with QoS requirements for delay and throughput, two parameters are first defined based on delay and throughput. The delay parameter associated with our traffic steering problem is considered as the ratio of the defined QoS requirement for delay and the actual delay experienced in the system for a particular traffic class being carried by a certain BS. It can be stated as follows: \[r_{k,b}^{D}=\frac{D_{QoS}}{D_{k,b}}, \tag{5}\] where \(D_{QoS}\) is the delay requirement defined in the simulation for a particular traffic type and \(D_{k,b}\) is the actual delay achieved. Similarly, the throughput parameter is defined as the ratio of the actual throughput achieved and the required throughput, as stated in eq. (6): \[r_{k,b}^{T}=\frac{T_{k,b}}{T_{QoS}}, \tag{6}\] where \(T_{QoS}\) is the throughput requirement defined in the simulation for a particular traffic class and \(T_{k,b}\) is the actual throughput achieved. Since our aim is to improve the system performance in terms of the delay and throughput, a new variable is formed to represent and meet such targets. It combines the delay and throughput parameters in eq. (5) and (6) along with some weight factors. The declared variable, combining the delay and throughput parameters with the weight factors (\(w_{1}\) and \(w_{2}\)), is as follows: \[M=w_{1}(r_{k,b}^{D})+w_{2}(r_{k,b}^{T}). \tag{7}\] The traffic steering problem proposed in this paper is formulated as the maximization of the variable \(M\) (presented in eq. (7)), which is as follows: \[\begin{split} max\sum_{u\in U}\sum_{k\in K}\sum_{b\in B}M_{u,f, b},\\ s.t.\sum_{(u,b)\in L}\beta^{f_{k}}\geqslant\beta^{f}\quad\forall f \in F,\\ \sum_{(u,b)\in L}D(u,b)x_{u,b}^{f}\leqslant D^{f}\quad\forall f \in F,\end{split} \tag{8}\] where \(\beta^{f_{k}}\) is the required bitrate for a particular type of traffic \(k\), and \(\beta^{f}\) is the available bitrate. Also, \(D^{f}\) represents the latency demand of flow \(f\in F\) and \(D(u,b)\) is the latency of link \((u,b)\). ## IV Proposed DQN-based Traffic Steering Scheme ### _DQN-based Traffic Steering Scheme_ For a relatively simple RL environment, Q-learning is a good solution for optimization. However, as the state space increases, the time needed to traverse all these states and iteratively update all the Q-values will increase, which is computationally inefficient and resource-consuming. To address this issue, DQN can be used to estimate the Q-values for each state-action pair in a given environment using a deep neural network (DNN) [6]. During the training stage of DQN, the agent's experiences at each time step are stored in a data set called the replay memory. At time \(\tau\), the agent's experience \(e_{\tau}\) is defined as the following tuple: \[e_{\tau}=(S_{\tau},A_{\tau},R_{\tau+1},S_{\tau+1}). \tag{9}\] The tuple contains the state of the environment, the action taken from the state, the reward given to the agent as a result of the previous state-action pair, and the next state of the environment. In short, the tuple gives us a summary of the agent's experience at time \(\tau\). All the agent's experiences at each time step over all the episodes played by the agent are stored in the replay memory. In practice, the replay memory is set to some finite size \(N\). Therefore, it will only store the last \(N\) experiences. The replay memory data set is the place from which random samples are drawn to train the network.
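The replay memory just described is straightforward to realize; below is a minimal Python sketch of the experience tuple of eq. (9) and a fixed-capacity buffer of the last \(N\) experiences. The class and variable names are our own illustrative choices, not identifiers from the paper.

```python
import random
from collections import deque, namedtuple

# Experience tuple of eq. (9): (state, action, reward, next state).
Experience = namedtuple("Experience", ["state", "action", "reward", "next_state"])

class ReplayMemory:
    """Fixed-capacity replay memory holding only the last N experiences."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are dropped automatically

    def push(self, state, action, reward, next_state):
        self.buffer.append(Experience(state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random minibatch used to train the DQN
        # (copy to a list so random.sample sees a plain sequence).
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

# Usage sketch with placeholder states (traffic type, SINR, queue length):
memory = ReplayMemory(capacity=10_000)
memory.push(state=(0, 12.5, 3), action=1, reward=0.8, next_state=(0, 11.9, 4))
if len(memory) >= 1:
    batch = memory.sample(1)
```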
The DNN in DQN takes states as inputs from the environment and outputs the Q-values for each action that can be taken from that state. Before the training starts, the replay memory data set \(D\) is initialized to capacity \(N\). Next, the DNN is initialized with random weights. For each episode, the starting state is initialized. For each time step within the episode, the agent either explores the environment and selects a random action, or exploits the environment and selects the greedy action for the given state that provides the highest Q-value. This epsilon-greedy policy is used to balance exploration and exploitation. \[A_{\tau}=\begin{cases}random&action,&\text{if }rand\leqslant\epsilon\\ argmax(q_{\tau}(S_{\tau},A_{\tau})),&\text{otherwise}\end{cases} \tag{10}\] where \(\epsilon\) is the exploration probability within \(0\leqslant\epsilon\leqslant 1\) and \(rand\) represents a random number between 0 and 1. After an action is taken, we observe the reward for the action along with the next state of the environment. Therefore, the state the agent started from, the action taken, the reward observed, and the next state are all put together in a tuple as described in eq. (9). For a single sample, the first pass through the network occurs for the state from the experience tuple that was sampled. The network then outputs the Q-values associated with each possible action that can be taken from that state, and the loss is then calculated between the Q-value for the action from the experience tuple and the target Q-value for this action. To calculate the target Q-value, a second pass through the target network with the next state is required. The target network is a clone of the policy network (which is also the main network). Its weights are frozen at the same values as the policy network's, and the weights of the target network are updated after every certain number of time steps. The loss for DQN is calculated using the following equation: \[L(w)=Er(R_{\tau}+\gamma\max_{A}q(S_{\tau+1},A,w^{\prime})-q(S_{\tau},A_{\tau},w)), \tag{11}\] where \(w\) and \(w^{\prime}\) are the weights of the main and the target network, and \(Er\) represents the error function. Having two NNs (main and target) ensures stability. Fig. 2 shows the schematic of the proposed DQN-based traffic steering, with a main network and a target network, where minibatches are fetched from the replay memory. The mathematical formulation of DQN depends on a Markov decision process (MDP) that is defined by agents, states, actions, and a reward function. The tuples associated with DQN are defined as follows: * **Agent:** We implement a centralized agent to control the macro base station (MBS) and the small cell base stations. It is deployed in the MBS and controls all the incoming traffic to each BS. * **State:** The state consists of three elements, \(\{T_{f},L_{Q(SINR)},q_{L}\}\). Here, \(T_{f}\) represents the traffic type. It is assumed that each traffic type has fixed QoS requirements and we can perform traffic steering to a particular RAT based on that. Users periodically report signal-to-interference and noise ratio (SINR) measurements to the 5G base station (gNB) and LTE base station (eNB). It indicates the quality of the link associated with a UE and a BS. Therefore, the second element of the state space is \(L_{Q(SINR)}\)=\(\{SINR_{eNB},SINR_{gNB}\}\). To represent the load level, the queue lengths of both types of RATs are used. So, the last element of the state space is the queue length, \(q_{L}\)=\(\{q_{L(gNB)},q_{L(eNB)}\}\).
* **Action:** The action space contains the action of flow admission to the RATs. It is defined as \(\{A_{LTE},A_{5G}\}\). Here, \((A_{LTE})\) stands for flow admission to the LTE RAT, and \((A_{5G})\) stands for flow admission to the 5G RAT. * **Reward:** The reward function is based on eq. (7). To keep it normalized, the sigmoid function is used. Therefore, the reward function is as follows: \[R=sigm(M),\] (12) where \(sigm(M)\) represents the sigmoid function. The proposed DQN-based traffic steering algorithm is summarized as Algorithm 1. ``` 1:for\(TTI=1\quad to\quad T\)do 2:for every \(u,b,k\)do 3:if\((rand\leq\epsilon)\)then 4: choose action randomly 5:else 6: select \(A_{\tau}\) using greedy policy 7:endif 8: BSs are selected for all the UEs for all \(k\in K\) 9: Traffic admission is performed 10: Reward calculation based on eq. (12) 11: Agent updates its own state \(S_{\tau}\) 12: Save \((S_{\tau},A_{\tau},R_{\tau+1},S_{\tau+1})\) 13:endfor 14: Random sample a minibatch from the experience pool 15: Generate target Q-values, \(q_{\tau}(S_{\tau},A_{\tau})\) 16: Update \(w\) using gradient descent to minimize the loss, \(L(w)=Er(q_{\tau}(S_{\tau},A_{\tau})-q(S_{\tau},A_{\tau},w))\) 17: Copy \(w\) to \(w^{\prime}\) after several training steps 18:endfor 19:Output: Optimal traffic steering decisions from \(TTI=1\quad to\quad T\) ``` **Algorithm 1** DQN-based traffic steering ### _Baseline Algorithms_ In this section, two baseline algorithms are introduced that have been used for the performance comparison. The first baseline algorithm for RAT selection is based on a predefined threshold [4]. This is called the heuristic baseline. Here, the threshold is calculated for each UE based on metrics like the load at the eNB \((l_{e})\) and gNB \((l_{g})\), the channel condition of a user under the LTE \((ch_{e,u})\) and 5G BS \((ch_{g,u})\), and the service type of a user \((S_{u})\). The channel condition is determined to be good or bad considering a threshold of received SINR values. Similarly, the load at each RAT is determined based on a threshold value. Based on the mentioned metrics, a value \(T_{u}\) is calculated that is used for selecting the RAT for a UE after comparing it with a predetermined threshold \((T_{th})\). The following equation is used to calculate the value of \(T_{u}\): \[T_{u}(l_{e},l_{g},ch_{e,u},S_{u})=\alpha l_{e}+\beta l_{g}+\gamma ch_{g,u}+ \delta S_{u}, \tag{13}\] where \(\alpha\), \(\beta\), \(\gamma\), and \(\delta\) are the weights associated with the considered parameters, which can be modulated based on the impact of a certain metric on system performance. \(T_{th}\) is set to be the mean of all the possible values of \(T_{u}\). The decision of steering traffic to a particular RAT is taken in the following way: \[R_{u}=\begin{cases}1,T_{u}>T_{th}&\text{(1 represents gNB)}\\ 0,T_{u}\leqslant T_{th}&\text{(0 represents eNB)}.\end{cases} \tag{14}\] The Q-learning algorithm has been used as another baseline in this work [7]. The goal is to investigate how DQN performs against the Q-learning algorithm. Fig. 2: Overall system architecture with DQN. ## V Performance Evaluation ### _Simulation setup_ We have conducted MATLAB-based simulations considering 1 eNB and 4 gNBs with 30 users in total. There are in total 1 macro-cell and 4 small cells, facilitated by the eNB and the gNBs. A macro-cell and a small-cell have carrier frequencies of 3.5 GHz and 0.8 GHz respectively. Specifications of the traffic classes used in this study have been summarized in TABLE I.
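To make steps 3-10 of Algorithm 1 concrete, the following Python sketch implements the \(\epsilon\)-greedy rule of eq. (10) and the sigmoid reward of eqs. (5)-(7) and (12). The Q-values and the weights \(w_{1}=w_{2}=0.5\) are illustrative assumptions, and any function mapping a state to per-action Q-values (such as the DQN) could stand in for the hard-coded list.

```python
import math
import random

def epsilon_greedy(q_values, epsilon):
    """Eq. (10): explore with probability epsilon, otherwise act greedily."""
    if random.random() <= epsilon:
        return random.randrange(len(q_values))                       # random action (explore)
    return max(range(len(q_values)), key=q_values.__getitem__)       # argmax (exploit)

def reward(delay_qos, delay, thr, thr_qos, w1=0.5, w2=0.5):
    """Eqs. (5)-(7) and (12): weighted delay/throughput ratios through a sigmoid."""
    m = w1 * (delay_qos / delay) + w2 * (thr / thr_qos)              # eq. (7)
    return 1.0 / (1.0 + math.exp(-m))                                # eq. (12)

# Example with illustrative numbers: Q-values for the actions {A_LTE, A_5G}.
action = epsilon_greedy(q_values=[0.42, 0.57], epsilon=0.1)          # usually picks A_5G
r = reward(delay_qos=10.0, delay=12.0, thr=80.0, thr_qos=100.0)      # ~0.69
print(action, round(r, 3))
```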
For the experimental results, the load has been varied between 5-10 Mbps. The proportions of the voice, video, and gaming traffic are 20%, 50%, and 30%, respectively. A higher proportion of the video traffic is deliberately considered to observe how the system performs with the higher throughput requirements. Also, gaming traffic has the most stringent delay requirement, and we wanted to see if the system performs well enough to meet such a precise requirement. Therefore, it has a higher percentage compared to the voice traffic. QoS requirements associated with delay and throughput for the three types of traffic classes are specified based on the existing literature [14] and 3GPP specifications (see TABLE I). We are using a multi-RAT dual connectivity architecture, an NSA mode where LTE and 5G NR BSs serve together. An architecture specified in [15] has been used, where the dual connectivity is ensured via the evolved packet core [16]. Transmission powers of the LTE BS and 5G NR BSs are set to 40 W and 20 W. Furthermore, the bandwidths for the LTE and 5G RATs are fixed to 10 MHz and 20 MHz. ### _Simulation results_ The performance of the proposed algorithm is evaluated in terms of two KPIs: average system throughput and network delay. In Fig. 3, we present a comparison in terms of system throughput under different user loads. The proposed DQN outperforms the Q-learning and heuristic baselines by gaining 6% and 10% increased throughput, respectively. Fig. 4 presents the performance comparison of the proposed DQN-based traffic steering method with the other baselines in terms of delay. The DQN-based method achieves a 23% and 33% decrease in network delay compared to the baselines. Note that the proposed method and Q-learning both have a reward function formulated based on throughput and delay. Whenever a high delay is experienced for steering traffic to a particular RAT, the system learns. That is why both of them have better performance compared to the heuristic baseline. In Fig. 4, delay is calculated considering all the traffic classes together at each load. It should be mentioned that the main reason for the improved performance of the proposed method is the use of DQN, which outperforms Q-learning in terms of exploration efficiency and achieves a higher average reward. Q-learning suffers due to a longer exploration period and gets a lower average reward, since it does not have a DNN as an approximator, which compels the agent to cover the larger state and action space itself. In this work, we also want to steer a particular type of traffic to a specific RAT. For example, steering the voice traffic constantly to a gNB is a waste of resources since the throughput requirement is not that high for such traffic. Fig. 5 shows what percentage of a traffic class is processed by a particular RAT and when the traffic gets steered due to higher load. In Fig. 5(a), it is observed that most of the voice traffic is processed by the eNB; however, a small portion of the traffic is processed by the gNB too whenever the system experiences higher load. For the video and gaming traffic, it is observed that most of the traffic is processed by the gNB. Fig. 4: System delay against traffic load. Fig. 3: System throughput against traffic load. Lastly, Fig. 6 demonstrates how traffic steering occurs whenever a high load is experienced in a BS with a particular RAT. We start with one UE at the 300th time slot and increase the number of UEs in a small cell up to six for different traffic classes.
The variable \(L\) in the respective figure represents load in terms of queue length. At the 1800th time slot, it can be seen that four among six UEs are steering different types of traffic to the 5G NR BS. This results in higher load, and we can see that the third and fourth UEs are experiencing high load (the value of \(L\) changed from 0 to 1). So, in the next observed time slot, these two UEs steer the traffic to the eNB. In the 2100th time slot, we can see four UEs steering voice, video, and gaming traffic to the only eNB in our system. This incurs high load at the eNB, and in the next observed slot we can see that the sixth UE has switched its traffic to the gNB. ## VI Conclusions In this study, we have proposed a novel method that can perform RAT-specific and QoS-aware traffic steering using DQN. It gains a 6% and 10% increase in average system throughput compared to the Q-learning and heuristic-based baselines, respectively. Moreover, it achieves a 23% and 33% decrease in network delay compared to the baselines. Apart from the better performance in terms of the KPIs, the proposed method can perform RAT-specific traffic steering, ensuring efficient use of network resources. Lastly, the proposed DQN-based traffic steering can successfully perform load balancing in an optimal way, as whenever a high load is induced on a particular RAT, traffic is dynamically steered to another RAT. ## Acknowledgement This work has been supported by MITACS and Ericsson Canada, and NSERC Collaborative Research and Training Experience Program (CREATE) under Grant 497981.
2302.04558
Quantum information processing with superconducting circuits: a perspective
The last five years have seen a dramatic evolution of platforms for quantum computing, taking the field from physics experiments to quantum hardware and software engineering. Nevertheless, despite this progress of quantum processors, the field is still in the noisy intermediate-scale quantum (NISQ) regime, seriously limiting the performance of software applications. Key issues involve how to achieve quantum advantage in useful applications for quantum optimization and materials science, connected to the concept of quantum supremacy first demonstrated by Google in 2019. In this article we will describe recent work to establish relevant benchmarks for quantum supremacy and quantum advantage, present recent work on applications of variational quantum algorithms for optimization and electronic structure determination, discuss how to achieve practical quantum advantage, and finally outline current work and ideas about how to scale up to competitive quantum systems.
G. Wendin
2023-02-09T10:49:56Z
http://arxiv.org/abs/2302.04558v1
# Quantum information processing with superconducting circuits: a perspective ###### Abstract The last five years have seen a dramatic evolution of platforms for quantum computing, taking the field from physics experiments to quantum hardware and software engineering. Nevertheless, despite this progress of quantum processors, the field is still in the noisy intermediate-scale quantum (NISQ) regime, seriously limiting the performance of software applications. Key issues involve how to achieve quantum advantage in useful applications for quantum optimization and materials science, connected to the concept of quantum supremacy first demonstrated by Google in 2019. In this article we will describe recent work to establish relevant benchmarks for quantum supremacy and quantum advantage, present recent work on applications of variational quantum algorithms for optimization and electronic structure determination, discuss how to achieve practical quantum advantage, and finally outline current work and ideas about how to scale up to competitive quantum systems. _Keywords--_ Quantum computing, superconducting qubits, quantum advantage, quantum algorithms, NISQ, VQE, QAOA. ###### Contents * 1 Introduction * 2 Overview * 2.1 Quantum processor systems: hardware and software * 2.2 Quantum algorithms * 2.3 Quantum supremacy * 2.4 Performance metrics * 2.4.1 Cross entropy benchmarking - XEB * 2.4.2 Quantum volume - QV * 2.4.3 Relevance of metrics for usefulness * 3 Applications * 3.1 Quantum approximate optimization algorithm - QAOA * 3.1.1 QAOA basics * 3.1.2 QAOA applied to air transportation - tail assignment * 3.2 Variational quantum eigensolver - VQE * 3.2.1 VQE basics * 3.2.2 VQE applied to chemistry: * 3.3 Simulating physical systems on engineered quantum platforms * 3.3.1 Quantum transport and localization: * 3.3.2 Quantum information scrambling: * 3.3.3 Many-body Hilbert space scarring: * 4 Key issues * 4.1 Noise and loss of information - a common experience. * 4.2 Fighting imperfections and noise in quantum processors * 4.2.1 Quantum error suppression: * 4.2.2 Quantum error mitigation: * 4.2.3 Quantum error correction: * 4.3 Scaling up for practical quantum advantage * 4.3.1 QPU-centric approach: * 4.3.2 HPC-centric approach: * 4.4 Useful NISQ digital quantum advantage - mission impossible? * 5 Future directions * 5.1 Improved and alternative superconducting qubits * 5.2 Hybrid distributed computing * 5.3 Continuous variables - computing with resonators * 5.4 Biochemistry and life science - drivers of quantum computing? * 5.5 Final perspective ## 1 Introduction Since around 1980, quantum computing (QC) and quantum simulation (QS) have gone from fantasy to possibility, from concept to application, from basic science to engineering [1, 2, 3, 4, 5, 6, 7, 8, 9]. In 2012, at a workshop at Benasque in the Spanish Pyrenees, at a memorable session we discussed in particular the future of QC. Myself, I predicted 20-30 years for useful applications, while Rainer Blatt emphasized that if we did not have any decisive results in 5 years, QC would soon be dead. It seems we were both right: QC did take off around 2017 at engineering levels, while really useful competitive applications showing practical quantum advantage are probably still 10-20 years ahead. The intense discussions in the European quantum community then led to the 2016 Quantum Manifesto [10] and to the EU Quantum Flagship [11] setting sail in 2018. 
From 2019, the field has seen a huge development of platforms for quantum computing and simulation with superconducting devices and systems [12, 13, 14, 15, 16], including demonstration of quantum supremacy [12, 13, 14, 17]. However, we are living in the era of noisy intermediate-scale quantum (NISQ) devices [18], and it is currently impossible to build superconducting quantum processing units (QPU) where one can entangle more than about 20 qubits with high probability during the coherence time. This means there is no time for useful computation with deep quantum circuits, only time to characterize the device and demonstrate physical entanglement - impressive but not necessarily useful. Nevertheless, IBM is now scaling superconducting QPUs to more than 1000 qubits in 2023 and over 4000 qubits in 2025 [16, 19], aiming for seamless integration of high-performance computers (HPC) and QPU accelerators. Google [20] seems to focus on modest-size quantum error corrected QPUs for large-scale quantum computational breakthroughs already by 2029, while at the same time there seems to be consensus [21, 22, 23] that practical quantum advantage may take much longer to achieve. To create useful applications showing quantum advantage, it is necessary to scale up QPUs and related classical-quantum hybrid (HPC+QC) infrastructure. This explains a number of current trends: (i) stay at "small" scales (\(\leq\) 100 qubits) and try to solve coherence problems and create useful applications before scaling up; (ii) go for large scales (\(\geq\) 1000 qubits) and try to implement quantum error correction for quantum advantage or superiority while scaling up; (iii) scale up and solve large-scale hardware (HW) and software (SW) integration at systems levels, waiting for practical quantum advantage for use cases to emerge. The question of the feasibility of powerful quantum computers beating classical super-HPC hinges on whether it will ultimately be possible to perform quantum error correction (QEC). When John Martinis' group was able to demonstrate that their superconducting quantum circuits were at the surface code threshold for fault tolerance [24], the field opened up and went from quantum physics toward quantum computing, scaling up HW and SW [25, 26, 27, 28, 29, 30, 31, 32], and implementing significant quantum algorithms and quantum physics experiments [25, 26, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38], including demonstrations of significant steps toward QEC [39, 40, 41, 42, 43]. Much of the current discussion concerns how to get from proofs of concept to useful applications. Consider the way quantum computing is being promoted by Google [20]: "Within the decade, Google aims to build a useful, error-corrected quantum computer. This will accelerate solutions for some of the world's most pressing problems, like sustainable energy and reduced emissions to feed the world's growing population, and unlocking new scientific discoveries, like more helpful AI." This tells us that the world's most pressing problems, like benchmarking climate models, are already subject to large-scale calculations, pushing super-HPCs to their limits set by NP-hard problems. The intended role of QPUs is to provide quantum superiority and to go far beyond those limits. However, in the short term, during this decade, this can only be achieved by experimental quantum co-processors running specific subroutines addressing classically hard problems, omnipresent in industrial use cases [44, 45].
In the short term, industry will effectively be co-developing quantum algorithms as subroutines, benchmarking them against competing classical algorithms. If this leads to quantum advantage already in the short term, that will be a great bonus. The important thing to understand is that lack of quantum advantage for now does not jeopardize powerful computing - exascale super-HPC platforms will continue addressing the world's most pressing problems, and eventually QPU accelerators may provide quantum leaps. This perspective article can be looked upon as a self-contained second part of a research and review paper [1] that stopped short at the beginning of the current engineering era of scaling up devices and building quantum computing ecosystems. The purpose is to focus on the quite dramatic development during the subsequent five years, trying to "predict the future" based on current visions, roadmaps, efforts and investments that aim for the next ten years [46, 47, 48], outlining a sustainable quantum evolution that hopefully survives the quantum hype [49, 50]. To be able to do so in this brief article, we will frequently refer to [1] and to recent reviews for basic background, technology, and methods. The review will focus on superconducting technology and systems based on circuit quantum electrodynamics (cQED) [2, 6], but will also provide glimpses of the broader development.

## 2 Overview

### Quantum processor systems: hardware and software

Given the limited coherence time of NISQ devices, the circuit depths needed for useful algorithms [51] are much too large to achieve reasonable accuracy. It is therefore necessary to break up the quantum circuits into short, low-depth pieces that can be run on quantum processing units (QPU) during the coherence time. These pieces often implement variational quantum algorithms (VQA), where one calculates the expectation value of a cost function, e.g. a Hamiltonian, using a parameterized trial function. A classical high-performance computer (HPC) controls and executes the classical optimization loop: computes averages, searches for improved energies, computes new parameters, and updates the quantum circuit. Figure 1 illustrates the basics of quantum computing from a user perspective. The program code is typically prepared on a small classical computer, then submitted to the application programming interface (API) of an HPC frontend. The HPC interprets the quantum part and prepares the code defining the quantum circuit. This is finally loaded into the stack of the QPU. In the case of an ideal QPU, the code is executed on the QPU until a solution is achieved, and the results are finally read out and sent back to the HPC for post-processing. In the NISQ world of QPUs, the quantum execution has to be limited to low-depth (shallow) quantum circuits that can be executed within the coherence time. This necessitates repeated quantum-classical processing loops for optimization of variational problems. Here the classical computer is a bottleneck: the HPC calculation takes much longer than the QPU execution time, even if the HPC responds without delay (no latency) and the QPU backend is available immediately on request. This is hybrid HPC+QC computation, and it is algorithm dependent. Quantum advantage in the NISQ era depends critically on efficient representation and coding of problems. Here there is a distinct difference between decision and optimization problems on the one hand and, e.g., computational problems like electron structure and energy level determination on the other.
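To make the hybrid HPC+QC loop concrete, the following is a minimal runnable sketch in Python, with the QPU replaced by a mock noisy cost-function evaluator (the function `qpu_estimate_energy` and its toy cost landscape are invented for illustration and do not correspond to any vendor API):

```python
# Minimal sketch of the hybrid HPC+QC loop (illustrative assumptions:
# the QPU is mocked by a classical function that returns a noisy energy
# estimate; the cost landscape below is invented).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=1)

def qpu_estimate_energy(theta, shots=1000):
    """Stand-in for a QPU call: evaluates a toy cost landscape and adds
    shot noise scaling as 1/sqrt(shots), mimicking finite sampling."""
    exact = np.cos(theta[0]) + 0.5 * np.cos(theta[0] - theta[1])
    return exact + rng.normal(scale=1.0 / np.sqrt(shots))

# The classical optimizer (running on the HPC) drives many short QPU calls.
result = minimize(qpu_estimate_energy, x0=np.array([0.1, 0.1]),
                  method="COBYLA", options={"maxiter": 200})
print("estimated minimum energy:", result.fun)
```

The structural point is that the optimizer issues hundreds of short calls to the (mock) QPU, so latency and classical processing time, not QPU execution time, dominate the time to solution.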
### Quantum algorithms

An ideal digital quantum computer executes perfect gates on ideal qubits with infinite coherence time. It implements the time-evolution operator \(e^{-iHt}\) corresponding to a given Hamiltonian describing the problem (see e.g. [1]). \(e^{-iHt}\) is then broken down into a product of factors for the different terms of the Hamiltonian, e.g. via Trotterization. These factors are finally represented in terms of quantum gates constituting a quantum circuit. In this case, the role of a classical computer is exclusively pre- and post-processing: preprocessing to construct the quantum circuit and post-processing to read out and treat the results. Both of these are in principle NP-hard. To get desired results, the QC must be able to run for long times to execute deep quantum circuits, which requires perfect qubits and gates. With NISQ devices, it is not possible to run e.g. phase-estimation algorithms to compute the energies of molecules - the needed quantum circuits are far too long with respect to the coherence time. This has led to alternative approaches, calculating the expectation value of the problem Hamiltonian with respect to parametrized trial functions and then optimizing the parameters for lowest energy. Variational quantum algorithms (VQA) are generally based on constructing parametrized trial functions to compute and minimize the expectation value of a cost function. In the quantum case, the specific quantum computation involves computing the expectation value of a Hamiltonian cost function, while the classical computer prepares the trial function, computes the energy, updates the trial function parameters and minimizes the energy in an optimization loop. Extensive discussions and reviews of quantum methods and algorithms are presented in [52, 53, 54, 55, 56, 57, 58].

Figure 1: HPC + QC. The user prepares a program and submits it to the classical frontend. The HPC prepares the quantum circuit and sends it to the QC. The HPC/QC registers have N bits/qubits, i.e. \(n=2^{N}\) possible configurations/states. The HPC register can only be in one of the \(2^{N}\) states \(|00...00\rangle,|00...01\rangle,|00...10\rangle,\ldots,|11...11\rangle\) at each instance of time \(t\), while the QC register can be in a _superposition of all states_: \(f_{1}(t)|00...00\rangle+f_{2}(t)|00...01\rangle+f_{3}(t)|00...10\rangle+\ldots+f_{n-1}(t)|11...10\rangle+f_{n}(t)|11...11\rangle\). This describes _time-dependent quantum superposition and entanglement_ and can, at best, lead to exponential quantum advantage.

### Quantum supremacy

John Preskill was the first one to explicitly introduce the concept of quantum supremacy, in a 2012 paper discussing quantum computing and the entanglement frontier [17]. In 2016, Boixo et al. then wrote a paper on how to characterize quantum supremacy in near-term devices [59], preparing for the 2019 Google experiment to demonstrate quantum supremacy [12]. The idea was to measure the output of a pseudo-random quantum circuit (Fig. 2) to produce a distribution of samples, and to compute the cross-entropy describing the "overlap" between the quantum and classical distributions (see Hangleiter and Eisert [60] for a review of quantum random sampling). Aaronson and Chen [61] put this on complexity-theoretic foundations. They noted that, for these sampling tasks, not only simulation but even verification might need classical exponential time. This made it advantageous to directly consider the probability of observed bitstrings, rather than the distribution of sampled bitstrings.
To this end, they showed [61] that there is a natural average-case hardness assumption (Heavy Output Generation, HOG), which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that is passed by the outputs of the quantum sampling procedure. The Quantum Volume benchmark of IBM (see Sect. 2.4.2) is based on HOG. As mentioned, Google based their quantum supremacy demonstration [12] on sampling quantum and classical distributions, calculating the cross-entropy as described in [59]. Cross-entropy benchmarking (XEB) has the advantage that it provides deeper insight than HOG, including measures of fidelity, and allows tracing of the development from small processors to devices that can only be simulated approximately. The Google paper [12] stated that the corresponding classical simulation on an HPC would take 10 000 years. That statement immediately met with a rebuttal on the IBM Research Blog by Pednault et al. [62], explaining that an ideal simulation of the same task could, in a conservative, worst-case estimate, be performed on a classical system in 2.5 days and with far greater fidelity. Therefore the quantum supremacy threshold had not been met by Google using 53 qubits. This was of course valid criticism, but effectively just delaying the inevitable.

Figure 2: Control operations for generating the pseudo-random quantum circuits for Google's quantum supremacy benchmarking protocol [12]. Adapted from [12].

An experiment that more decisively passed the quantum supremacy threshold was soon announced by Chinese researchers [13, 14] using the Zuchongzhi processor, closely following Google's recipes, to demonstrate distinct quantum computational advantage. In the most recent experiment [14], using 60-qubit 24-cycle random circuit sampling, the state-of-the-art classical HPC simulation would have taken tens of thousands of years, while Zuchongzhi 2.1 only took about 4.2 h, thereby significantly enhancing the quantum computational advantage. As emphasized by Pednault et al. [62], quantum supremacy is a threshold that does not automatically certify the quantum processor to be useful for running useful algorithms. However, it does benchmark the quality of the processor. The original Google experiment used a 53-qubit quantum processor that implements a large two-qubit gate quantum circuit of depth 20, with 430 two-qubit and 1,113 single-qubit gates, and with predicted total fidelity of \(F_{XEB}=0.2\%>0\). The condition for quantum supremacy, \(F_{XEB}>0\), is based on statistics from an ensemble of a million runs of quantum circuits. The problem is that to solve e.g. a quantum chemistry problem based on 53 qubits, the depth of the quantum circuit would have to be in the range of a million. What is needed for useful problems challenging HPCs is to be able to run perfect quantum circuits with a number of 2-qubit gates much larger than the circuit width (number of qubits). The name of the game is how to achieve practical quantum advantage.

### Performance metrics

#### 2.4.1 Cross entropy benchmarking - XEB

The task is to sample the \(2^{N}\) bitstring output of a pseudo-random quantum circuit (Fig. 2). Cross-entropy benchmarking (XEB) compares the probability for observing a bitstring experimentally with the corresponding ideal probability computed via simulation on a classical computer.
For a given circuit, one collects the measured bitstring sample \(\{x_{i}\}\) and computes the linear XEB fidelity [12]

\[F_{XEB}=2^{N}\langle P(x_{i})\rangle_{i}-1 \tag{1}\]

where \(N\) is the number of qubits, \(P(x_{i})\) is the probability of the _experimental_ bitstring \(x_{i}\) computed for the _ideal quantum circuit_, and the average is over the observed bitstrings. \(F_{XEB}\) is correlated with how often one samples high-probability bitstrings. If the distribution is uniform, then \(\langle P(x_{i})\rangle_{i}=1/2^{N}\) and \(F_{XEB}=0\). Values of \(F_{XEB}\) between 0 and 1 correspond to the probability that no error has occurred while running the circuit. In the Google case [12], the computed values are very small, \(F_{XEB}\sim 10^{-3}\). This may represent proof of principle, but hardly provides any useful result. One needs to have \(F_{XEB}\sim 1\) to be able to run algorithms, useful or not. To demonstrate quantum supremacy one must achieve a high enough \(F_{XEB}\) for a circuit with sufficient width and depth such that the classical computing cost of \(P(x_{i})\) for the full circuit is intractable. \(P(x_{i})\) must be calculated classically by simulating the ideal quantum circuit, which is formally intractable in the region of quantum supremacy. Since at least 2016 it has been understood that Random Circuit Sampling (RCS), the task of sampling the \(2^{N}\) bitstring output of a pseudo-random quantum circuit, will not scale to arbitrarily many qubits without error correction [61]. Bouland et al. [63] provided strong complexity-theoretic evidence of classical hardness of RCS, placing it on par with the best theoretical proposals for supremacy. However, very recently Aharonov et al. [64, 65] produced a polynomial-time classical algorithm for sampling from the output distribution of a noisy random quantum circuit. This gives strong evidence that, in the presence of a constant rate of noise per gate, random circuit sampling (RCS) cannot be the basis of a _scalable_ experimental violation of the extended Church-Turing thesis. Noise kills entanglement and makes RCS classically tractable (provided the HPC has enough memory to do the calculation). However, the algorithm does not directly address finite-size RCS-based quantum supremacy experiments [64], so the result is not directly applicable to current attempts to invalidate the quantum supremacy results [12, 13, 14] using classical HPC. Feng and Pan [67] solved the Google sampling problem classically in about 15 h on a computational cluster with 512 GPUs, with state fidelity 0.0037 (Google: 0.00224), and claimed that it would only take a few dozen seconds on an exascale machine, much faster than Google. Clearly it provides some satisfaction to demonstrate in practice that an HPC can beat the noisy 53q Sycamore QPU. However, a more challenging target for the HPC may now be to beat the 66q Zuchongzhi 2.1 with its 60-qubit 24-cycle RCS [14].
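As a toy numerical illustration of Eq. (1), the following sketch estimates \(F_{XEB}\) for a hypothetical 10-qubit device, with a Haar-random state standing in for the ideal circuit output (all names and numbers are invented for illustration):

```python
# Toy linear-XEB estimate (Eq. 1); a Haar-random state stands in for the
# ideal output of a random circuit (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(seed=2)
N = 10                                   # number of qubits
dim = 2 ** N

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(psi) ** 2
p_ideal /= p_ideal.sum()                 # ideal bitstring probabilities P(x)

def f_xeb(samples):
    return dim * p_ideal[samples].mean() - 1.0

ideal_samples = rng.choice(dim, size=100_000, p=p_ideal)  # perfect device
noisy_samples = rng.integers(dim, size=100_000)           # fully depolarized
print("F_XEB, ideal device :", f_xeb(ideal_samples))      # ~1 (Porter-Thomas)
print("F_XEB, uniform noise:", f_xeb(noisy_samples))      # ~0
```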
#### 2.4.2 Quantum volume - _QV_

The fundamental challenges in the NISQ era can be illustrated using the concept of Quantum Volume (QV) introduced by IBM [70]. QV is linked to system error rates, and quantifies the largest random circuit of equal width and depth that a specific computer can successfully implement given decoherence, gate fidelities, connectivity, and more [70, 71]. QV is a benchmarking protocol based on the execution of a pseudo-random quantum circuit with a fixed but generic form producing a bitstring \(\{x\}\) (Fig. 3). QV quantifies the largest random circuit \(U\) of equal width \(N\) (number of qubits) and depth \(d\) (number of layers) that the computer successfully implements:

\[U=U(d)\cdots U(2)U(1) \tag{2}\]

The ideal output distribution is

\[p_{U}(x)=|\langle x|U|0\rangle|^{2} \tag{3}\]

where \(\{x\}\) is an observable bit string. Benchmarking the QV, one runs circuits with an increasing number of cycles \(d=1,\ldots,d_{max}\) with \(d=N\), and measures the success rate for increasing the depth \(d\) until one reaches a prescribed success threshold.

Figure 3: IBM QV pseudo-random quantum circuit [70] consisting of \(d\) layers (depth) of random permutations \(\pi\) of the \(N\) qubit labels, followed by random SU(4) two-qubit gates. When the circuit width \(N\) is odd, one of the qubits is idle in each layer. From [70].

To define when a model circuit \(U\) has been successfully implemented in practice, Cross et al. [70] use the heavy output generation (HOG) problem formulated by Aaronson and Chen [61]: "Given as input a random quantum circuit C (drawn from some suitable ensemble), generate output strings \(x_{1},\ldots,x_{k}\), at least a \(2/3\) fraction of which have greater than the median probability in C's output distribution." This means that the set of output probabilities \(p_{U}(x)\) are sorted in ascending order of probability, and the heavy (high-probability) output generation problem is to produce a set of output strings \(\{x\}\) such that more than two-thirds are heavy, i.e. greater than the median probability. Aaronson and Chen [61] state that "HOG is easy to solve on a quantum computer, with overwhelming success probability, by the obvious strategy of just running C over and over and collecting k of its outputs", and demonstrate [61] that HOG is exponentially hard for a classical computer. The important thing is that the approach [61] makes no reference to sampling or relation problems. Thus, one can shift focus from sampling algorithms to algorithms that simply estimate amplitudes. Pelofske et al. [71] recently published a guide to the QV: "Quantum Volume in Practice: What Users Can Expect from NISQ Devices". QV provides a standard benchmark to quantify the capability of NISQ devices. Interestingly, the QV values achieved in the tests [71] typically lag behind officially reported results and also depend significantly on the classical compilation effort. This is important to have in mind when popular articles announce quantum computing breakthroughs in terms of higher QV values.
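The HOG criterion is equally easy to illustrate numerically; in the sketch below a Porter-Thomas-like (exponential) distribution stands in for the ideal output distribution of a random circuit, and ideal sampling passes the 2/3 threshold comfortably (the parameters are invented):

```python
# Toy heavy-output generation (HOG) check: fraction of sampled bitstrings
# whose ideal probability exceeds the median of the output distribution.
import numpy as np

rng = np.random.default_rng(seed=3)
dim = 2 ** 9                                  # a 9-qubit model circuit

p = rng.exponential(size=dim)                 # Porter-Thomas-like stand-in
p /= p.sum()
median = np.median(p)

samples = rng.choice(dim, size=50_000, p=p)   # ideal "quantum" sampling
heavy_fraction = (p[samples] > median).mean()
print("heavy-output fraction:", heavy_fraction)  # ~0.85 > 2/3
```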
#### 2.4.3 Relevance of metrics for usefulness

The definition of QV, \(d=N\), stops short of benchmarking what is needed for useful applications. Useful algorithms often require the quantum circuit depth \(d\) to be much larger than the width \(N\) (number of qubits): \(d\gg N\). This is typically the case when describing the ground-state energy of a molecule with reasonable accuracy. For example, a small molecule like HCN can be described (STO-6G basis) with \(N=14\) and \(d\approx 3000\approx 200N\) [72]. Similarly, HCN (6-31G basis) can be described using Qiskit with \(N=69\) and \(d=6\times 10^{6}\sim 87000N\) [73]. These huge circuit depths can most likely be reduced with improved compilation methods (see e.g. [72]), but nevertheless indicate the nature of the problem of performing useful calculations. For comparison, instead of using random circuits and XEB or QV/HOG as targets, one can generate specific quantum states showing genuine multipartite entanglement (GME) with sufficient fidelity. Mooney et al. [74] investigated multiple quantum coherences of Greenberger-Horne-Zeilinger (GHZ) states on 11 to 27 qubits prepared on the IBM Quantum Montreal (ibmq_montreal) device (27 qubits), applying quantum readout error mitigation and parity verification error detection to the states. In this way, a fidelity of \(0.546\pm 0.017>0.5\) was recorded for a 27-qubit GHZ state, demonstrating rare instances of GME across the full device. Although this experiment may feel more interesting and useful than testing with random circuits, it nevertheless demonstrates that there is a very low probability for creating a 27-qubit GHZ state. For it to be useful, the GHZ state must be created with 100% probability to serve as starting point for useful information processing.

## 3 Applications

### Quantum approximate optimization algorithm - QAOA

The Quantum Approximate Optimization Algorithm (QAOA) was proposed as a heuristic variational method for solving NP-hard combinatorial optimization problems on near-term quantum computers [75, 76], and constitutes one of the most widespread and active current methods for using NISQ computers [19, 77, 78, 79, 100].

#### 3.1.1 QAOA basics

The QPU prepares a variational quantum state \(|\psi(\gamma,\beta)\rangle\) with \(N\) qubits starting from an initial uniform superposition of all possible computational basis states, \(|+\rangle^{\otimes N}\), generated by Hadamard gates from \(|0\rangle^{\otimes N}\) (Fig. 4). The second step of QAOA is then to apply an alternating sequence \(U(p)\cdots U(2)U(1)|+\rangle^{\otimes N}\) of two parametrized non-commuting quantum gates, \(U(i)=U(i,\beta_{i})U(i,\gamma_{i})=e^{-i\beta_{i}\hat{B}}e^{-i\gamma_{i}\hat{C}}\), followed by measurement generating an \(N\)-qubit bitstring. Many repetitions (shots) of the same circuit generate a distribution of bitstrings used to evaluate the cost function \(\langle\psi(\gamma,\beta)|C|\psi(\gamma,\beta)\rangle\). The variational parameters are then updated in a closed loop using a classical optimizer to minimize the cost function.

Figure 4: Quantum circuit for the quantum approximate optimization algorithm (QAOA), for a problem specified by the Ising Hamiltonian \(\hat{C}\). An alternating sequence of the Ising Hamiltonian \(\hat{C}\) and the transverse mixing Hamiltonian \(\hat{B}\) is applied to an equal superposition of \(N\) qubits, producing a trial state function \(|\psi(\gamma,\beta)\rangle=\prod_{l=1}^{p}e^{-i\beta_{l}\hat{B}}e^{-i\gamma_{l}\hat{C}}|+\rangle^{\otimes N}\). Measurement of the qubit state produces a specific \(N\)-qubit bitstring, and many repetitions (shots) of the identical quantum circuit (loop not shown) create a distribution used for estimating the cost function \(\langle\psi(\gamma,\beta)|C|\psi(\gamma,\beta)\rangle\). A classical optimization algorithm minimizes the cost function by varying the angles \(\gamma,\beta\). The level \(p\) represents the depth of the circuit, determining the number of variational parameters and gates used in the trial function \(|\psi(\gamma,\beta)\rangle\). A large circuit width \(N\) requires a (very) large depth \(p\) for accuracy. Adapted from [88].

#### 3.1.2 QAOA applied to air transportation - tail assignment

Industrial optimization has a long history [101], one of the most famous applications being Toyota's Just-In-Time production system first implemented in 1973 [102]. There is a huge recent literature on optimization for industrial engineering and logistics (see e.g. [101, 102, 103, 104]), since around 2015 often referred to as Industry 4.0 [104, 105]. In the following we will discuss one specific example addressing airline scheduling [106]: the performance of the QAOA algorithm for optimizing small but realistic instances of logistic scheduling relevant to airlines. The problem addressed is called Tail Assignment (TAS) [107, 108, 109]: assigning individual aircraft (identified by the number on its tail fin) to particular routes, deciding which individual aircraft (tail) should operate each flight.
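Before going further into the use case, the QAOA recipe above can be made concrete with a minimal statevector sketch for an invented 3-qubit Ising cost Hamiltonian, with matrix exponentials standing in for the compiled gate sequence of Fig. 4:

```python
# Minimal statevector QAOA sketch for an invented 3-qubit Ising cost C;
# expm stands in for the compiled gate sequence.
import numpy as np
from scipy.linalg import expm

N = 3
dim = 2 ** N
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def op(single, site):                    # embed a 1-qubit operator at `site`
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, single if k == site else I2)
    return out

C = sum(op(Z, i) @ op(Z, i + 1) for i in range(N - 1))   # cost: sum Z_i Z_{i+1}
B = sum(op(X, i) for i in range(N))                      # mixer: sum X_i

def qaoa_state(gammas, betas):
    psi = np.ones(dim) / np.sqrt(dim)    # |+>^N prepared by Hadamards
    for g, b in zip(gammas, betas):      # p alternating layers
        psi = expm(-1j * b * B) @ (expm(-1j * g * C) @ psi)
    return psi

psi = qaoa_state([0.4], [0.3])           # a single p = 1 layer
print("cost expectation <C>:", np.real(psi.conj() @ C @ psi))
```

In a full QAOA run, the final line would sit inside a classical optimization loop over the angles, exactly as in the hybrid-loop sketch of Sect. 2.1.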
A full approach to TAS is discussed in detail by Svensson et al. [109], separating TAS into a generation problem and a selection problem. In this way, the complex rules only affect the generation problem, whereas the selection problem is often a pure Set Cover or Set Partitioning problem. The TAS generation problem is responsible for generating the complex aircraft routes. A flight is a connection between two airports. A set of flights operated in sequence by the same aircraft (tail) is called a route [108]. To formulate the TAS problem, let \(F\) denote the set of flights \(f\), \(T\) the set of tails \(t\), and \(R\) the set of all legal routes \(r\). In order for a route to be considered legal to operate, it needs to satisfy a number of constraints. In a full problem description one would include various costs, like the cost of flying a route and the cost of leaving a flight unassigned. In the decision version of TAS, the goal is to find any solution satisfying all the constraints, disregarding the costs. Essential aspects of the full TAS selection problem can then be reduced to an Exact Cover decision problem with the constraint

\[\sum_{r\in R}a_{fr}x_{r}=1\quad\forall f\in F;\qquad x_{r}\in\{0,1\} \tag{4}\]

The constraint matrix \(\{a_{fr}\}\) defines the relationship between \(F\) and \(R\) and tells whether a flight \(f\) is included in route \(r\): \(a_{fr}=1\) if flight \(f\) is covered by route \(r\) and \(0\) otherwise. Given the _generated constraint matrix_ \(\{a_{fr}\}\), the solutions for the decision variable \(x_{r}\) will follow from the solution of the Exact Cover decision problem: \(x_{r}=1\) if route \(r\) should be used in the solution, and \(0\) otherwise. The constraint can be turned into a cost function

\[C=\sum_{f\in F}\Big(\sum_{r\in R}a_{fr}x_{r}-1\Big)^{2} \tag{5}\]

that can be converted to the classical QUBO model - Quadratic Unconstrained Binary Optimization - which then maps onto the quantum Ising model [110, 111, 112]. In the final cost function, the Ising Hamiltonian, the constants (external field and spin interactions) are then determined by the constraint matrix \(\{a_{fr}\}\) (Eq. 4). Vikstal et al. [108] reduced real-world instances obtained from real flight scheduling to instances with 8, 15 and 25 decision variables, which could be solved using QAOA on a quantum-computer simulator running on a laptop, with 8, 15 and 25 qubits (routes), respectively. For these small instances, the problem was reduced to an exact cover problem with one solution in each instance. The same TAS problem was studied by Willsch et al. [113], who mapped it onto a 40-qubit problem with 472 flights. For each of the 40 routes, the constraint matrix defines all flights that are covered by this route.
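The mapping from the constraint (Eq. 4) to the cost function (Eq. 5) is straightforward to make concrete; the sketch below brute-forces a toy instance with an invented 3-flight, 4-route constraint matrix (real instances are of course far larger, and the quantum approach replaces the brute-force search):

```python
# Brute-force check of the Exact Cover cost (Eq. 5) for an invented toy
# instance: 3 flights, 4 candidate routes (a[f, r] = 1 if route r covers f).
import itertools
import numpy as np

a = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 0, 1]])

def cost(x):                      # x is a 0/1 vector selecting routes
    return int(np.sum((a @ x - 1) ** 2))

# Cost 0 <=> every flight covered exactly once (here: routes 1 and 2)
best = min((np.array(x) for x in itertools.product([0, 1], repeat=4)),
           key=cost)
print("best selection:", best, "cost:", cost(best))
```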
As explained, the exact cover problem is to find a selection of routes (i.e., a subset of rows of the constraint matrix) such that all 472 flights are covered exactly once. The exact cover problem was programmed on D-Wave Advantage and 2000Q. The problem instance has the unique ground state \(|00000000010100100110010000010000000000110\rangle\), where each qubit represents a flight route. The ground state contains nine 1's, meaning that for this particular instance, the solution consists of nine routes. Each route is assigned to an aircraft. All other states represent invalid solutions, in the sense that not all 472 flights are covered exactly once.

### Variational quantum eigensolver - VQE

#### 3.2.1 VQE basics

The Variational Quantum Eigensolver (VQE) implements the Rayleigh-Ritz variational principle (Fig. 5) [114, 115]:

\[E(\theta)=\langle\psi(\theta)|\hat{H}|\psi(\theta)\rangle\geq E_{0} \tag{6}\]

The VQE is a classical-quantum hybrid algorithm where the trial function \(|\psi(\theta)\rangle\) is created in the qubit register by gate operations. Calculating the expectation value on a QPU, the energy is estimated via quantum state tomography of each of the Pauli operator products of \(\hat{H}\). In quantum simulations on an HPC, the state vector is available classically, and the expectation value of \(\hat{H}\) can be evaluated directly. The VQE scales badly for large molecules, due to the repeated measurements/tomography needed to form the expectation value of the Hamiltonian, \(\left\langle\hat{H}\right\rangle\). Nevertheless, the VQE is the common approach for small molecules with present NISQ QPUs [116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128]. The phase-estimation algorithm (PEA) scales better, but involves much deeper circuits, puts much higher demands on the coherence time of the q-register, and needs advanced QEC.
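A minimal statevector illustration of the Rayleigh-Ritz principle (Eq. 6), for an invented two-qubit Hamiltonian and a simple two-parameter entangling ansatz (not any specific published ansatz):

```python
# Statevector VQE sketch of Eq. (6) for an invented 2-qubit Hamiltonian
# H = Z0 Z1 + 0.5 (X0 + X1) and a simple two-parameter entangling ansatz.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

def ry(t):                                     # 1-qubit rotation gate
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def energy(theta):
    psi = np.zeros(4, dtype=complex); psi[0] = 1.0           # |00>
    psi = CNOT @ (np.kron(ry(theta[0]), ry(theta[1])) @ psi) # trial state
    return float(np.real(psi.conj() @ H @ psi))

res = minimize(energy, x0=[0.1, 0.1], method="COBYLA")
print("VQE energy:", res.fun, "  exact ground state:", np.linalg.eigvalsh(H)[0])
```

On a real QPU the `energy` call would instead be estimated from repeated measurements of the Pauli terms of \(\hat{H}\), which is exactly the scaling bottleneck noted above.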
#### 3.2.2 VQE applied to chemistry:

For an overview of applications to chemistry, see reviews [117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131] and the specific applications cited therein. In VQE calculations for quantum chemistry [114, 115] one typically starts from an ansatz of the quantum state \(|\psi(\theta)\rangle=U(\theta)|\psi_{ref}\rangle\) with variational parameters \(\theta\), where \(U(\theta)\) is a unitary operator describing the quantum circuit, and \(|\psi_{ref}\rangle\) is the initial state.
\(U(\theta)\) could be a heuristic "hardware efficient" quantum circuit [28] or a more elaborate unitary coupled cluster (UCC) expansion, with Hartree-Fock [115, 1, 120] or multi-configuration [121, 122] initial reference states. The UCC ansatz of the quantum state \(|\psi(\theta)\rangle\):

\[|\psi(\theta)\rangle=\hat{U}(\theta)|\psi_{ref}\rangle=e^{T(\theta)-T(\theta)^{\dagger}}|\psi_{ref}\rangle \tag{7}\]

can be expanded:

\[T(\theta)=T_{1}+T_{2}+T_{3}+\cdots+T_{N} \tag{8}\]

producing \(1,2,3,\ldots,N\) electron-hole excitations from the N-electron reference state. The first two terms

\[T_{1}=\sum_{pq}t(\theta)_{pq}\ c_{p}^{\dagger}c_{q};\ \ \ T_{2}=\sum_{pqrs}t(\theta)_{pqrs}\ c_{p}^{\dagger}c_{q}^{\dagger}c_{r}c_{s} \tag{9}\]

with fermionic creation (\(c_{i}^{\dagger}\)) and annihilation (\(c_{i}\)) operators generate single (S) and double (D) excitations and produce the parametrized UCCSD trial-state approximation. In particular, \(t(\theta)_{pq}=\theta_{i}\) and \(t(\theta)_{pqrs}=\theta_{j}\) for all combinations of the indices \(pqrs\). The trial-state fermionic operator \(U(\theta)\) must now be mapped onto qubit spin operators. Common transformations (codings) are Jordan-Wigner (JW), Bravyi-Kitaev (BK) and Parity, all designed to impose the anticommutation rules. In the case of the UCC ansatz, the exponential is expanded into exponentials of large numbers of products of Pauli spin-operators acting on qubits, and the size of the quantum circuit can finally be reduced by qubit reduction schemes. The fermionic operators \(c_{i}^{\dagger}\) and \(c_{i}\) in the molecular Hamiltonian

\[\hat{H}=\sum_{pq}h_{pq}c_{p}^{\dagger}c_{q}+\frac{1}{2}\sum_{pqrs}h_{pqrs}c_{p}^{\dagger}c_{q}^{\dagger}c_{r}c_{s} \tag{10}\]

must also be expanded in products of Pauli spin-operators using codings like JW, BK or Parity, resulting in the generic interaction form:

\[\hat{H}=\sum_{i\alpha}h_{i\alpha}\ \sigma_{i\alpha}+\sum_{i\alpha,j\beta}h_{i\alpha,j\beta}\ \sigma_{i\alpha}\sigma_{j\beta}+\sum_{i\alpha,j\beta,k\gamma}h_{i\alpha,j\beta,k\gamma}\ \sigma_{i\alpha}\sigma_{j\beta}\sigma_{k\gamma}+\cdots \tag{11}\]

where \(\sigma_{i\alpha}\) corresponds to the Pauli matrix \(\sigma_{\alpha}\) for \(\alpha\in\{0,x,y,z\}\), acting on the \(i\)-th qubit. The expectation value \(\left\langle\hat{H}\right\rangle\) can then be calculated in two ways: (1) State-vector approach: direct calculation of \(\left\langle\hat{H}\right\rangle\) by matrix operations; (2) Measurement approach: generating an ensemble of identical trial states and measuring the Pauli operators of the Hamiltonian terms \(\hat{H}_{i}\). The original UCC exponential (Eq. 7) is expanded into exponentials of large numbers of products of Pauli spin-operators acting on qubits: \(e^{-i\theta\sigma_{1z}\sigma_{2z}}\), \(e^{-i\theta\sigma_{1z}\sigma_{2z}\sigma_{3z}}\), etc. The parametrized initial entangled quantum circuit \(U(\theta)\) for the UCCSD trial state is then finally constructed through combinations of parametrized one-qubit rotation gates and entangling two-qubit CNOT gates, resulting in a state vector \(|\psi(\theta)\rangle=U(\theta)|\psi_{HF}\rangle\) for the trial state.
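The JW coding is simple enough to verify numerically; the sketch below builds JW-mapped annihilation operators on a toy three-qubit register and checks the fermionic anticommutation relations (the helper functions are invented for illustration):

```python
# Numerical check of the Jordan-Wigner coding on a toy 3-qubit register:
# c_j = Z_0 ... Z_{j-1} (X_j + i Y_j)/2, verifying the anticommutation rules.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def kron_all(ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def annihilation(j, n):
    """JW-mapped fermionic annihilation operator for mode j of n."""
    return kron_all([Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1))

n = 3
c0, c1 = annihilation(0, n), annihilation(1, n)
anti = c0 @ c1.conj().T + c1.conj().T @ c0        # {c_0, c_1^dag} = 0
print("||{c0, c1^dag}|| =", np.linalg.norm(anti))
print("{c0, c0^dag} = 1 :", np.allclose(c0 @ c0.conj().T + c0.conj().T @ c0,
                                        np.eye(2 ** n)))
```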
Lolur et al. [74] have benchmarked the VQE as implemented in the Qiskit software package on laptops and HPCs [133], applying it to computing the ground-state energy of water, H\({}_{2}\)O, hydrogen cyanide, HCN, and a number of related molecules. The energies have been determined using the Qiskit statevector backend to directly calculate \(\langle\psi(\theta)|\hat{H}|\psi(\theta)\rangle\) through matrix multiplication rather than repeated measurement. Clearly, substantial classical computational resources are needed to compute these systems on classical HPC quantum simulators. It is evident that for problems with QChem-inspired ansätze, even small numbers of qubits lead to large numbers of gates, and this is then amplified by the variational procedure with many parameters and iterations. The large number of gates will severely limit the types of molecules that can be used for benchmarking real quantum HW, and it will also limit what can be simulated on HPC quantum simulators. QChem problems will provide serious challenges and benchmarks for testing HPC and quantum HW NISQ implementations. To utilize VQE and achieve near chemical accuracy will be extremely challenging for NISQ processors. It is problematic or impossible to achieve chemical accuracy with conventional HPC VQE-simulators already for small molecules such as HCN. But there is no way around it: one must benchmark and challenge existing quantum HW and SW with available resources. The water molecule is a kind of "gold standard" even for forefront HPC applications, and H\({}_{2}\)O is an excellent candidate for testing the VQE on quantum HW. Nevertheless, at the present stage of the NISQ era, one has to start with "easy" applications and simple approximations just to benchmark the quantum HW. The concept of "hardware-efficient" trial functions [27] is an attempt to short-circuit systematic UCCSD approaches and still introduce essential electron correlation. The recently developed adaptive VQE [128, 129] and related further developments [130, 131] provide a more systematic approach to including electron-correlation processes in order of monotonically decreasing weight. Nevertheless, the electron-correlation problem is computationally hard (NP-hard), so there is no easy way around it. State-of-the-art HPC computation of accurate molecular energies based on the Schrödinger equation defines the resources needed, and they are indeed huge [133, 134]. HPC quantum simulators cannot be more efficient than systematic HPC brute-force Full CI calculations. Quantum advantage will be possible by definition as soon as a quantum register exceeds what can be stored in the available RAM of an HPC. But to profit from that potential quantum advantage, the QPU must be able to run the q-algorithm to solution, and that will involve a very large number of gate operations even for the VQE. So, this is the ultimate challenge of the NISQ era.

### Simulating physical systems on engineered quantum platforms

Feynman's original idea was to simulate quantum systems with engineered quantum systems. Quantum simulation with analog quantum circuits tunes the interactions in a controllable quantum substrate to describe the Hamiltonian of the system to be simulated, and then anneals the systems toward their ground states by lowering the temperature.
A comprehensive review [9] describes how quantum simulation can be performed already today through special-purpose analog quantum simulators, arguing that a first practical quantum advantage already exists in the case of specialized applications of analog devices. A particular example is quantum simulation of 2D antiferromagnets with hundreds of Rydberg atoms trapped in an array by optical tweezers [135], and Lamata et al. [136] and Yu et al. [137] have developed digital-analog quantum simulation for superconducting circuits. The recent development of large-scale superconducting arrays makes it possible to design qubit circuits that simulate a specific physical device, and to perform experiments on it [33, 35, 36, 37, 38, 39, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142]. In this way Arute et al. [33] simulated separation of the dynamics of charge and spin in the Fermi-Hubbard model, Neill et al. [35] simulated the electronic properties of a quantum ring created in the Sycamore substrate, and Mi et al. [37] investigated discrete time crystals in an open-ended, linear chain of 20 superconducting transmon qubits that were isolated from the two-dimensional Sycamore grid. We will now discuss a few recent experiments addressing transport, information scrambling and scarring in quantum circuits.

#### 3.3.1 Quantum transport and localization:

Tight-binding lattice Hamiltonians are canonical models for particle transport and localization phenomena in condensed-matter systems. To study the propagation of entanglement and observe Anderson and Wannier-Stark localization, Karamlou et al. [138] experimentally investigate quantum transport in one- and two-dimensional tight-binding lattices, emulated by a fully controllable \(3\times 3\) array of superconducting qubits in the presence of site-tunable disorder strengths and gradients. The dynamics are hard to observe in natural solid-state materials, but they can be directly emulated and experimentally studied using engineered quantum systems. The close agreement between the experimental results, simulations, and theoretical predictions [138] results from high-fidelity, simultaneous qubit control and readout, accurate calibration, and taking into account the relevant decoherence mechanisms in the system. Karamlou et al. [138] emphasize that although the experiments are performed on a small lattice that can still be simulated on a classical computer, they demonstrate a platform for exploring larger, interacting systems where numerical simulations become intractable.

#### 3.3.2 Quantum information scrambling:

Quantum scrambling is the dispersal of local information into many-body quantum entanglements and correlations distributed throughout an entire system, leading to the loss of local recoverability of quantum information [139, 140, 141, 142]. Following [141], the approach is based on measuring the out-of-time-order correlator (OTOC) \(C=\langle\hat{O}^{\dagger}(t)\hat{M}^{\dagger}\hat{O}(t)\hat{M}\rangle\) between a unitary local perturbation operator \(\hat{O}(t)\) and a unitary operator \(\hat{M}\), a Pauli operator acting on a different qubit. Scrambling means that a local perturbation is rapidly amplified over time. During the time evolution, \(\hat{O}(t)\) becomes increasingly nonlocal, which leads to decay of the correlation function due to the spreading of the excitation over the entire system.
The perturbation operator can be modeled as \(\hat{O}(t)=\sum_{i}w_{i}\hat{B}_{i}\), where \(\hat{B}_{i}=b_{1}(i)\otimes b_{2}(i)\otimes b_{3}(i)\otimes\cdots\) is a string of single-qubit basis operators acting on different qubits, and \(w_{i}\) are the weights of the operator strings. Scrambling involves two different mechanisms: (i) operator spreading, and (ii) generation of operator entanglement. Operator spreading (i) means that the strings of single-qubit basis operators \(\hat{B}_{i}\) get expanded, spreading over more qubits, while generation of operator entanglement (ii) is reflected in the growth in time of the minimum number of terms needed to expand \(\hat{O}(t)=\sum_{i}w_{i}\hat{B}_{i}\) with a broad distribution of coefficients \(w_{i}\). By measuring the OTOC, Mi et al. [141] experimentally investigated the dynamics of quantum scrambling on a 53-qubit Sycamore quantum processor. Engineering quantum circuits that distinguished between operator spreading and operator entanglement, they showed that operator spreading is captured by an efficient classical model, while operator entanglement in idealized circuits requires exponentially scaled computational resources to simulate. However, the quantum-supremacy discussion of the influence of noise, making possible classical simulation of large noisy QPUs, suggests that the noise level needs to be reduced substantially before exponentially scaled computational resources are needed. Recently Braumuller et al. [142] also probed quantum information propagation with out-of-time-ordered correlators (OTOC). They implemented a \(3\times 3\) two-dimensional hard-core Bose-Hubbard lattice with a superconducting circuit, studied its time reversibility, and measured out-of-time-ordered correlators. The method [142] relies on the application of forward and backward time-evolution steps implemented by interleaving blocks of unitary time evolution and single-qubit gates. Extracting OTOCs made it possible to study quantum information propagation in lattices with various numbers of particles, enabling observation of a signature of many-body localization in the 2D hard-core Bose-Hubbard model. Braumuller et al. [142] propose that applying the technique to larger lattices may improve our understanding of quantum thermodynamics and black-hole dynamics, as well as of using many-body systems for quantum memories. In addition, experimentally accessing OTOCs in large quantum circuits may provide a powerful benchmarking tool to study future quantum processors. But again, here noise will likely become an issue.
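For small systems the OTOC itself is straightforward to evaluate exactly; the sketch below computes the infinite-temperature OTOC for an invented five-qubit mixed-field Ising chain, showing the decay from 1 as the perturbation spreads (the parameters are illustrative and not those of [141, 142]):

```python
# Exact infinite-temperature OTOC C(t) = Tr[O(t)^dag M^dag O(t) M]/2^N for an
# invented 5-qubit mixed-field Ising chain; O and M are Pauli-Z operators
# on opposite ends of the chain.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
N = 5
dim = 2 ** N

def op(single, site):
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, single if k == site else I2)
    return out

# Non-integrable (scrambling) choice of couplings and fields
H = sum(op(Z, i) @ op(Z, i + 1) for i in range(N - 1)) \
    + sum(0.9 * op(X, i) + 0.4 * op(Z, i) for i in range(N))

O, M = op(Z, 0), op(Z, N - 1)
for t in [0.0, 1.0, 2.0, 4.0]:
    U = expm(-1j * H * t)
    Ot = U.conj().T @ O @ U                       # Heisenberg-picture O(t)
    C = np.trace(Ot.conj().T @ M.conj().T @ Ot @ M) / dim
    print(f"t = {t}: OTOC = {C.real:+.3f}")       # decays from +1
```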
#### 3.3.3 Many-body Hilbert space scarring:

Zhang et al. [36] have studied many-body Hilbert space scarring (QMBS) on a superconducting processor. QMBS is a weak form of ergodicity breaking in strongly interacting quantum systems, meaning that the system does not visit all parts of phase space. This presents opportunities for mitigating thermalization-induced decoherence due to scrambling (eigenstate thermalization hypothesis, ETH) in quantum information processing applications. Utilizing a programmable superconducting processor with 30 qubits and tunable couplings, Zhang et al. [36] create Hilbert space scarring in a non-constrained model in different geometries, including a linear chain and a quasi-one-dimensional comb geometry, by approximately decoupling from the qubit substrate. By reconstructing the full quantum state through quantum state tomography on four-qubit subsystems, they provide strong evidence for QMBS states by measuring qubit population dynamics, quantum fidelity and entanglement entropy after a quench from initial unentangled states. The QMBS is found to be robust to various imperfections such as random cross-couplings between qubits, and it persists beyond 1D systems. The experimental findings also broaden the realm of scarring mechanisms and identify correlations in QMBS states for quantum technology applications. Comparing with other qubit platforms, Zhang et al. [36] state that the superconducting platform can process the same quantum information in a shorter time, implying advantages of QMBS in a superconducting platform for more practical quantum-sensing and metrology applications.

## 4 Key issues

### Noise and loss of information - a common experience.

A common classical experience might illuminate what quantum computing is facing in the present NISQ era. Onboard an airplane, listening to music using the cheap versions of headphones offered by airlines in economy class can be a less-than-satisfactory experience. To start with, the headphone sound emitters have narrow bandwidth and large distortion, certainly not improving the limited quality of the source itself. Then the high-frequency background noise in the cabin from the air conditioning and engines may swamp the music signal. And finally, the sensitivity and frequency response of the passenger's ears, and the processing in the brain [143, 144, 145], may be less than perfect, making it difficult to discriminate against the noise. For the traveller, high-quality headphones with noise suppression therefore make a big difference. Then the external noise from the environment is processed in real time: recorded, inverted, and subtracted. This is a useful analogy in the case of a single qubit in a noisy environment. However, in order to describe the influence of noise on a multi-qubit processor, one might better illustrate the situation in terms of the "Cocktail Party Syndrome" [145], referring to the difficulty of entertaining a meaningful conversation within a group of people in a very noisy environment. Here, also simultaneous "two-body interactions" between members of the group add "correlated" noise to the "random" noise from the background. In our classical example, one can informally define error suppression, error mitigation, and error correction. Error suppression means creating high-quality hardware: the signal input is perfect; the classical bits are perfect; the sound generators in the headphones are perfect. Error mitigation means eliminating background cocktail-party (channel) noise, e.g. via noise-inverting devices, as well as eliminating noise generated within the group (e.g. noise within the brain from alcohol consumption; tinnitus; etc.). Error mitigation would also include recording the session and providing a clean edited transcript afterwards (post-processing). Error correction means coding the information such that any errors can be traced and corrected, in real time or at the end. When it comes to real quantum computers, similar concepts and actions apply: one talks about quantum error suppression (QES), quantum error mitigation (QEM), and quantum error correction (QEC).

### Fighting imperfections and noise in quantum processors

Key issues concern the impact of imperfections and noise on computational capacity.
NISQ devices are noisy, which creates decoherence and computational errors due to qubit relaxation and dephasing. One talks about three main types of noise: incoherent, coherent, and correlated. Josephson junction (JJ) qubits are embedded in a sea of fluctuating defects creating stochastic charge fluctuations - incoherent noise - capable of driving unwanted qubit transitions causing relaxation. Moreover, qubit control via microwave waveguides and magnetic flux lines is subject to stochastic fluctuations influencing the precision of quantum operations. These fluctuating fields may lead to systematic miscalibrations, drift, and crosstalk - coherent noise - that in principle can be reversed [146, 3, 147]. Finally, recent work shows that the impact of cosmic rays can generate quasiparticles that create correlated charge noise and qubit relaxations on a length scale of hundreds of micrometers [148, 149]. In order to mitigate the effects of noise, one talks about quantum error suppression (QES), quantum error mitigation (QEM), and quantum error correction (QEC).

#### 4.2.1 Quantum error suppression:

QES refers to various efforts to maximize the Quantum Volume by improving the fabrication and operation of the quantum hardware (HW). (This excludes active feedback, treated as QEC.) On the fabrication side, the main issues concern minimizing decoherence [148, 149, 150, 151, 152, 153, 154] and cross talk [154, 155]. For a given QPU circuit, the issue is to maximize gate fidelities and speed via optimal control (e.g. pulse shaping) based on advanced characterization of the device [156].

#### 4.2.2 Quantum error mitigation:

QEM aims to produce accurate expectation values of observables. It refers to various software methods to alleviate the effects of noise on computational results during execution of an algorithm on a QPU [157-166]. Two prominent schemes are zero-noise extrapolation (ZNE) and probabilistic error cancellation (PEC) [157]. The first scheme (ZNE) does not make any assumption about the noise model other than it being weak and constant in time [157]. The second scheme (PEC) can tolerate stronger noise; however, it requires detailed knowledge of the noise model [157]. _Zero-noise extrapolation, ZNE_, works by physically increasing the impact of noise, determining a curve describing how the expectation value of some observable varies with noise. Variation of the noise strength can be done/simulated by varying the 1q- and 2q-gate times. Given enough points to determine the variation, the curve is then extrapolated back to zero noise, providing a best estimate of the expectation value. This has been implemented successfully in a number of experimental and theoretical investigations [163, 164, 29, 165]. Zero-noise extrapolation requires sufficient control of the time evolution to implement the rescaled dynamics, and hinges on the assumption of a large time-scale separation between the dominant noise and the controlled dynamics [157].
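A minimal illustration of the ZNE idea with synthetic data (the noise model and numbers are invented; on real hardware the noise would be amplified by, e.g., stretching gate times, as described above):

```python
# Zero-noise-extrapolation sketch with synthetic data (noise model invented).
import numpy as np

rng = np.random.default_rng(seed=4)
exact = -1.137                     # the (unknown) noiseless expectation value

def measured_expectation(scale):
    """Stand-in for running the circuit with noise amplified by `scale`
    (e.g. by stretching gate times); the bias grows with the noise level."""
    return exact + 0.21 * scale + 0.03 * scale**2 + rng.normal(scale=0.005)

scales = np.array([1.0, 1.5, 2.0, 3.0])      # noise amplification factors
values = np.array([measured_expectation(s) for s in scales])

coeffs = np.polyfit(scales, values, deg=2)   # fit vs. noise scale ...
zne = np.polyval(coeffs, 0.0)                # ... and extrapolate to zero
print("ZNE estimate:", zne, "  true value:", exact)
```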
_Probabilistic error cancellation (PEC)_ works by measuring the noise spectrum and applying an inverted quasi-probability distribution to the result of the computation via post-processing [157, 158, 159, 160, 161, 162]. PEC requires a full characterization of the noisy computational operations. To obtain this to sufficient precision is challenging in practice [157]. Nevertheless, Song et al. [160] experimentally demonstrated that PEC based on a combination of gate set tomography (GST) and quasi-probability decomposition can substantially reduce the error in quantum computation on a noisy quantum device. Moreover, Van den Berg et al. [161] have presented a practical protocol for learning and inverting a sparse noise model that is able to capture correlated noise and scales to large quantum devices, demonstrating PEC on a superconducting quantum processor with crosstalk errors. In contrast, Leyton-Ortega et al. [166] present a method to improve the convergence of variational algorithms by replacing the hardware implementation of certain Hermitian gates with their inverses, resulting in noise cancellation and a more resilient quantum circuit. This is demonstrated on superconducting quantum processors running the variational quantum eigensolver (VQE) algorithm to find the H\({}_{2}\) ground-state energy. Another QEM method has been developed by Lolur et al. [124] for quantum chemical computations on NISQ devices - Reference-State Error Mitigation (REM). The method relies on determining the exact error in energy due to hardware and environmental noise for a reference wavefunction that can be feasibly evaluated on a classical computer. REM is shown to drastically improve the computational accuracy at which total energies of molecules can be computed using current quantum hardware.

#### 4.2.3 Quantum error correction:

QEC refers to methods to code quantum information into logical qubits that can be measured, errors detected and identified, and logical qubits restored [167-176].
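The simplest concrete illustration of this idea is the three-qubit bit-flip repetition code; the sketch below is a classical Monte Carlo estimate of its logical error rate under independent bit flips (a toy stand-in for full QEC, which must of course also handle phase errors and use syndrome measurements rather than direct readout):

```python
# Classical Monte Carlo of the 3-qubit bit-flip repetition code: encode
# logical |0> as 000, apply independent flips, decode by majority vote.
import numpy as np

rng = np.random.default_rng(seed=5)
p = 0.05                                       # physical bit-flip probability
shots = 200_000

flips = rng.random((shots, 3)) < p             # independent errors per qubit
logical_errors = flips.sum(axis=1) >= 2        # majority vote fails
print("physical error rate:", p)
print("logical error rate :", logical_errors.mean(),
      " (theory ~ 3p^2 =", 3 * p**2, ")")
```

Encoding helps only when the physical error rate is low enough; this threshold behavior is exactly what the surface-code scaling experiments discussed next probe in the quantum setting.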
A major recent milestone is the experiment by Acharya et al. [43], suppressing logical errors by scaling a surface code logical qubit from code distance 3 to distance 5. In the surface code, if the physical error rate is high, the logical error probability increases with increasing system size, while sufficiently low physical error rates will lead to the desired exponential suppression of logical errors. Acharya et al. [43] show that their experiment lies in a crossover regime where increasing system size initially suppresses the logical error rate before, due to finite-size effects, later increasing it. They estimate that component performance must improve significantly to achieve practical scaling. In any case, the work demonstrates the first step in the process of suppressing logical errors by scaling a quantum error-correcting code.

### Scaling up for practical quantum advantage

The concept of Quantum Advantage (QA) has emerged as a reaction to the more dramatic notion of Quantum Superiority (QS) [17]. In principle, QS is what we need - exponential advantage over classical computers. However, this is only possible with functional QEC. QA emphasizes enhanced performance relative to specific classical algorithms for real-world use cases, typically addressing variational problems like QAOA and VQE. Currently there seem to be two opposite uses of practical QA (PQA): (i) effectively describing QS in huge QEC machines, and (ii) mainly providing some useful speedup relative to classical algorithms. In the present NISQ era there are essentially two ways to look at the scaling up of QPUs, what we will refer to as _QPU-centric_ and _HPC-centric_.

#### 4.3.1 QPU-centric approach:

Here QPUs are scaled up in tune with HW progress to push the limits in experiments testing quantum supremacy [12, 13, 14], QEC [39, 40, 41, 42, 43], and physics [25, 26, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]. The main role of the classical computer is to serve a single QPU with pre- and post-processing for running quantum circuits. The maximum number of qubits in the QPU so far is 72 [43]. Further scaling up the number of qubits will make it possible to systematically build larger logical qubits and larger code distances. In a blog post in May 2021 [20], Erik Lucero at Google began with a bold statement: "Within the decade, Google aims to build a useful, error-corrected quantum computer." The key issue in 2029 will most likely be: "useful to whom?" Researchers or industry?

#### 4.3.2 HPC-centric approach:

Here the HPC supercomputer seamlessly integrates collections of parallel CPUs, GPUs and QPUs.
The quantum big picture is to boost classical performance by including QPU subroutines approximately solving specific NP-hard problems that form classical bottlenecks. In this sense, the IBM roadmap and philosophy are _HPC-centric_, even though it is described by IBM as quantum-centric supercomputing [16, 19]. The maximum number of qubits is currently 433, in the Osprey QPU [177]. Osprey will be operated as a system of small parallel QPUs to achieve a computational advantage in the near term by combining multiple QPUs through circuit knitting techniques [19, 178], improving the quality of solutions through error suppression and mitigation, and focusing on heuristic versions of quantum algorithms with asymptotic speedups. The IBM Q Experience has created an ecosystem based on Qiskit, providing a versatile programming and testbed environment [19, 179], beginning to look like an industry standard. However, super-polynomial speedup does not belong to the NISQ era, and practical quantum advantage is elusive. Realistically, industrial users will not profit from quantum accelerators in the near term, so how can this large quantum effort be justified? The answer seems to be that IBM is going for useful quantum advantage via quantum parallel processing provided by QPU accelerators integrated in an efficient runtime HPC environment. Again the question is: useful to whom? And is useful quantum advantage possible without QEC? The Quantum Volume (QV) effectively represents the size of a qubit register for which one can entangle all of its qubits in a single shot. Currently, for IBM the best value is \(QV=2^{9}=512\), corresponding to a 9 qubit quantum circuit 9 levels deep. This means that there is no point running algorithms requiring more than that. Instead, one can configure the operating system to run a number of 9-qubit mini-QPUs in parallel to speed up the rate for creating measured distributions, thus reducing the time to solution for computing expectation values of physical variables, like energy. Scale, quality, and speed are three key attributes to measure the performance of near-term quantum computers [16, 19]. In the NISQ era, the QPU will spend very little time computing compared with the time spent by the CPU on pre- and post-processing before and after every call to the QPU, calls that will be very frequent when solving variational problems. Circuit Layer Operations per Second (CLOPS) [180] is a measure correlated with how many QV circuits (mini-QPUs) a QPU can execute per unit of time, and therefore with how much the time to solution can be shortened. At the IBM Summit 2022, Jay Gambetta [181] pledged that in 2024 IBM will offer a system that will generate reliable outcomes running 100 qubits with gate depth of 100: "So creating this \(100\times 100\) device will really allow us to set up a path to understand how can we get quantum advantage in these systems and lay a future going forward." It must be noted, however, that this does not mean executing a 100q quantum circuit with depth 100 coherently "in a single shot", achieving \(QV=2^{100}\) - that would be an earth-shaking demonstration of quantum superiority. Microsoft has developed a framework for quantum resource estimation [23], to estimate resources required across the stack layers for large-scale quantum applications, finding (as expected) that hundreds of thousands to many millions of physical qubits are needed to achieve practical quantum advantage. Beverland et al.
[23] maintain that the best solution is a monolithic QPU with, say, 10 million controllable, fast, and small qubits. The stack control system must be able to run millions of parallel high-fidelity gates at high speed, as well as reading out millions of qubits in parallel. Not surprisingly, no qubit technology currently implemented satisfies all of these requirements. However, Microsoft suggests that the recent proposals of electro-acoustic qubits [182], and the topological qubit approach based on Majorana Zero Modes [183] might do it in future. Practical quantum advantage is on the horizon but needs to be accelerated through a variety of breakthrough techniques. The Microsoft view is that these research directions can be best studied in the context of resource estimation to unlock quantum at scale.

### Useful NISQ digital quantum advantage - mission impossible?
The short answer is: yes, unfortunately probably mission impossible in the NISQ era. There are two fundamental questions: (1) Does the physical problem itself provide a quantum advantage? And (2), does a quantum algorithm have any advantage over a corresponding classical algorithm? Both questions were the original drivers of QC: the exponential advantage of Shor's algorithm for factorization into prime numbers. However, the need for QEC put that problem far in the future. Instead, Matthias Troyer and coworkers [184] promoted quantum chemistry for catalysts as the most useful killer application motivating the quest for scaling up QC. The paper argues that "quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources", but at the same time concludes that "The required space and time resources for simulating FeMoco are comparable to that of Shor's factoring algorithm." Berry et al. [185] improved on those results, obtaining circuits requiring less surface code resources, despite using a larger and more accurate active space. Nevertheless, also this needs extensive QEC and is far beyond NISQ computers. Liu et al. [186] further elaborate on the potential benefits of quantum computing in the molecular sciences, i.e., in molecular physics, chemistry, biochemistry, and materials science, emphasizing the competition with classical methods that will ultimately decide on the usefulness of quantum computation for molecular science. Lee et al. [187] have examined the case for the exponential quantum advantage (EQA) hypothesis for the central task of ground-state determination in quantum chemistry. Key for EQA is for the quantum state preparation to be exponentially easy compared to classical heuristics, which is far from clear and perhaps not even likely [188, 189]. Identifying relevant quantum chemical systems with strong evidence of EQA remains an open question [187]. The second question: "does a quantum algorithm have any advantage over a corresponding classical algorithm?" is currently a hot topic. Understanding whether e.g. quantum machine learning (QML) algorithms present a genuine computational advantage over classical approaches is extremely challenging. It seems that quantum-inspired classical algorithms "dequantizing" quantum algorithms [190, 191, 192, 193] can compete in polynomial time as long as one is not demanding exponentially accurate results. Tang and coworkers [190] developed a dequantization framework for analysing QML algorithms to produce formal evidence against exponential quantum advantage.
These are fully classical algorithms that, on classical data, perform only polynomially slower than their quantum counterparts. The existence of a dequantized algorithm means that its quantum counterpart cannot give exponential speedups on classical data, suggesting that the quantum exponential speedups are simply an artifact of state preparation assumptions. QML has the best chance of achieving large speedups whenever classical computation cannot get access to this data (which occurs when input states come from quantum circuits and other physical quantum systems). This does not yet rule out the possibility of large polynomial speedups on classical data, which could still lead to significant performance improvements in practice with sufficiently good quantum computers [190]. Lloyd et al. [194], however, proposed an algorithm for topological data analysis (TDA) that could not be directly dequantized using the same techniques, raising the question whether a greater speedup was possible with TDA algorithms. This question has now been analyzed in depth by Berry et al. [195], proposing a dequantization of the quantum TDA algorithm which shows that having exponentially large dimension and Betti number is necessary for super-polynomial advantage. The speedup is quartic, which will not be killed by QEC overhead [22]. Based on that, Berry et al. [195] estimate that tens of billions of Toffoli gates will be sufficient to estimate a Betti number that should be classically intractable. This number of Toffoli gates is considered to be reasonable for early generations of fully fault-tolerant quantum computers [195], falling somewhere in between quantum chemistry applications and Shor's algorithm in terms of the resources required for quantum advantage. How this goes together with the very recent result presented by Akhalwaya et al. [196] remains to be understood. Quoting the authors: "NISQ-TDA, the first fully implemented end-to-end quantum machine learning algorithm needing only a linear circuit-depth, that is applicable to non-handcrafted high-dimensional _classical data_, with potential speedup under stringent conditions. The algorithm neither suffers from the data-loading problem nor does it need to store the input data on the quantum computer explicitly. Our approach includes three key innovations: (a) an efficient realization of the full boundary operator as a sum of Pauli operators; (b) a quantum rejection sampling and projection approach to restrict a uniform superposition to the simplices of the desired order in the complex; and (c) a stochastic rank estimation method to estimate the topological features in the form of approximate Betti numbers. We present theoretical results that establish additive error guarantees for NISQTDA, and the circuit and computational time and depth complexities for exponentially scaled output estimates, up to the error tolerance. The algorithm was successfully executed on quantum computing devices, as well as on noisy quantum simulators, applied to small datasets. Preliminary empirical results suggest that the algorithm is robust to noise." Reconnecting here to quantum chemistry, Gharibian and Le Gall [193] have shown how to design classical algorithms that estimate, with constant precision, the singular values of a sparse matrix, implying that the ground state energy in quantum chemistry can be solved efficiently with constant precision on a classical computer.
However, Gharibian and Le Gall also prove that with inverse-polynomial precision, the same problem becomes BQP-complete, suggesting that the superiority of quantum algorithms for chemistry stems from the improved precision achievable in the quantum setting. Finally, Huang et al. [197] investigate quantum advantage in learning from experiments that process quantum data with a quantum computer. That could have substantial advantages over conventional experiments in which quantum states are measured and outcomes are processed with a classical computer. Huang et al. prove that quantum machines can learn from exponentially fewer experiments than the number required by conventional experiments. They do that by assuming access to data obtained from quantum-enhanced experiments like quantum sensing systems and stored in quantum memory (QRAM), allowing the QPU to process quantum input data. Exponential advantage is shown for predicting properties of physical systems, performing quantum principal component analysis, and learning about physical dynamics. Huang et al. [197]: "Although for now we lack suitably advanced sensors and transducers, we have conducted proof-of-concept experiments in which quantum data were directly planted in our quantum processor." In the absence of perfect physical qubits, or QEC, quantum memory is a great challenge. Quantum memory may be far away for quantum computing as needed by e.g. Huang et al. [197], but it is essential for the development of quantum repeaters for quantum communication networks. Sullivan et al. [198] investigate random-access quantum memory using chirped-pulse phase encoding. The protocol is implemented using donor spins in silicon coupled to a superconducting cavity, offering the potential for microwave random access quantum memories with lifetimes exceeding seconds.

## 5 Future directions
### Improved and alternative superconducting qubits
A recent comprehensive review by Calzona et al. [199] describes and analyzes the basic concepts and ideas behind the implementation of novel superconducting circuits with intrinsic protection against decoherence at the hardware level. The review explains the basics and performance of state-of-the-art transmons and other single-mode superconducting quantum circuits, and goes on to describe multi-mode superconducting qubits, toward the realization of fully protected qubits engineered in systems with more than one degree of freedom and/or characterized by the presence of specific symmetries. Regarding state-of-the-art tantalum-based transmons [200, 201], Tennant et al. [202] performed low-frequency charge-noise spectroscopy on Ta-based transmons and found distinctly different behaviour compared with Al- and Nb-based transmons. They conclude that the temperature-dependent behavior of the neighboring charge-configuration transitions is caused by jumps between local charge configurations in the immediate vicinity of the transmon. This is in contrast to Al- and Nb-based transmons which are dominated by a distribution of TLSs giving rise to 1/f noise, apparently ruling out a collection of TLSs as the basis of the quasi-stable charge offsets in Ta-based transmons. Very different types of superconducting devices are semiconductor-superconductor hybrid structures containing Andreev bound states (ABS) [203] and topological Majorana zero modes (MZM) [204, 205, 206]. Recently Pikulin et al.
[205] developed an experimental protocol (Topological Gap Protocol, TGP) to determine the presence and extent of a topological phase with Majorana zero modes in a hybrid semiconductor-superconductor three-terminal device with two normal leads and one superconducting lead. Now Aghaee et al. [206] have presented measurements and simulations of InAs-Al hybrid three-terminal devices that are consistent with the observation of topological superconductivity and Majorana zero modes, passing the TGP. Passing the protocol indicates a high probability of detection of a topological phase hosting Majorana zero modes as determined by large-scale disorder simulations and is a prerequisite for experiments involving fusion and braiding of Majorana zero modes.

### Hybrid distributed computing
In Sect. 4.3.2 we talked about distributed quantum processing, running many QPU modules in parallel. It is a matter of debate whether future large scale algorithms can be run on monolithic or modular QPUs fitting inside a single fridge, or whether the algorithms have to be distributed over several fridges and locations. Eventually one will be able to work with clusters of quantum computers connected via local or global quantum networks, but for now this represents a great challenge waiting for great breakthroughs and a quantum infrastructure. On a smaller scale, Andrew Cleland and collaborators have done some pioneering work to connect superconducting qubit circuits in different fridges connected by a microwave cable [207, 208], producing multi-qubit entanglement, purification and protection in a quantum network. The microwave connection is photonic, but is limited to local clusters. The needed interfaces for long-distance optical connections are emerging [209], but will probably remain research endeavors for quite some time [210, 211, 212]. On a larger scale, Ang et al. [48] have developed architectures for superconducting modular, distributed, or multinode quantum computers (MNQC), employing a 'co-design' inspired approach to quantify overall MNQC performance in terms of hardware models of internode links, entanglement distillation, and local architecture. In the particular case of superconducting MNQCs with microwave-to-optical interconnects, Ang et al. [48] describe how compilers and software should optimize the balance between local gates and internode gates, discuss when noisy quantum internode links have an advantage over purely classical links, and introduce a research roadmap for the realization of early MNQCs. This roadmap illustrates potential improvements to the hardware and software of MNQCs and outlines criteria for evaluating the improvement landscape, from progress in entanglement generation to the use of quantum memory in entanglement distillation and dedicated algorithms such as distributed quantum phase estimation. As a concrete example, DiAdamo et al. [213] consider an approach for distributing the variational quantum eigensolver (VQE) algorithm over distributed quantum computers with an arbitrary number of qubits, in a systematic approach to generating distributed quantum circuits for quantum computing. This includes a proposal for a software-based system for controlling quantum systems at the various classical and quantum hardware levels. DiAdamo et al. [213] emphasize that much effort has gone into distributed computing in the classical computing domain.
And since the overlap between the fields is high, one can use this knowledge to design robust and secure distributed quantum computers, and as quantum technologies improve, this may become a reality.

### Continuous variables - computing with resonators
In this field, the resonator modes are the logical qubits, and the transmon qubits provide ancillas for loading and readout. The Yale group is leading the development, and has recently demonstrated some decisive breakthroughs [214]. The name of the game is to construct logical qubits from linear combinations of (already long-lived) resonator states representing the Gottesman-Kitaev-Preskill (GKP) bosonic code, encoding a logical qubit into grid states of an oscillator. Sivak et al. [214] demonstrate a fully stabilized and error-corrected logical qubit whose quantum coherence is significantly longer than that of all the imperfect quantum components involved in the QEC process, beating the best of them with a coherence gain of \(G\approx 2.3\). This was achieved by combining innovations in several domains including the fabrication of superconducting quantum circuits and model-free reinforcement learning [215]. To correct for single-photon loss, Kudra et al. [216] have implemented two-photon transitions that excite the cavity and the qubit at the same time. The additional degree of freedom of the qubit makes it possible to implement a coherent, unidirectional mapping between spaces of opposite photon parity. The successful experimental implementation, when supplemented with qubit reset, is suitable for autonomous quantum error correction in bosonic systems, opening up the possibility to realize a range of non-unitary transformations on a bosonic mode. For full-scale QEC, various groups have recently investigated the concatenation of CV and DV codes, such as concatenating the single-mode GKP code with the surface code. Instead, Guillaud and Mirrahimi [217] present a 1D repetition code based on a cat code as the base qubit for which the noise structure is modified in such a way that quantum error correction becomes of similar complexity as classical error correction and can be performed using a simple repetition code. According to [217], the specific noise structure can be preserved for a set of fundamental operations which at the level of the repetition code lead to a universal set of protected logical gates. Regarding scaling up CV resonator technology, Axline et al. [218] have experimentally realized on-demand, high-fidelity state transfer and entanglement between two isolated superconducting cavity quantum memories. By transferring states in a multiphoton encoding, Axline et al. [218] show that the use of cavity memories and state-independent transfer creates the striking opportunity to deterministically mitigate transmission loss with quantum error correction. The results establish a basis for deterministic quantum communication across networks, and will enable modular scaling of CV superconducting quantum circuits. The size of superconducting 3D microwave resonators makes it challenging to scale up large 3D multi-qubit CV systems. An alternative may be provided by nanomechanical phononic nanostructures [209]. Chu et al. [219] experimentally demonstrated strong coupling between a superconducting transmon qubit and the long-lived longitudinal phonon modes of a high-overtone bulk acoustic wave disk resonator (HBAR) formed in thin-film aluminium nitride (AlN). Recently, von Lupke et al.
[220] demonstrated HBAR parity measurement in the strong dispersive regime of circuit quantum acoustodynamics, providing basic building blocks for constructing acoustic quantum memories and processors. Moreover, Schrinski et al. [221] measured long-lived HBAR Wigner states, monitoring the gradual decay of negativities over tens of microseconds. Wollack et al. [222] use a superconducting transmon qubit to control and read out the quantum state of a pair of nanomechanical resonators made from thin-film lithium niobate (LN). The device is capable of fast swap operations, used to deterministically manipulate the nonclassical and entangled mechanical quantum states. This creates potential for feedback-based operation of quantum acoustic processors. Finally, Chamberland et al. [182] have presented a comprehensive architectural analysis for a proposed fault-tolerant quantum computer based on cat codes concatenated with outer quantum error-correcting codes applied to a system of acoustic resonators coupled to superconducting circuits with a two-dimensional layout.

### Biochemistry and life science - drivers of quantum computing?
In computational science there is the well-established method of multiscale modeling [223] that gave the Nobel Prize in Chemistry in 2013 to Arieh Warshel for modeling biological functions, from enzymes to molecular machines [224]. Multiscale modelling describes methods that simulate continuum-scale behaviour using information derived from computational models of finer scales in the system, down to molecular quantum levels, bridging across multiple length and time scales. It is then natural to consider using a quantum computer to address the case of a quantum system embedded in a multiscale environment. Cheng et al. [225] review these methods and propose the embedding approach as a method for describing complex biochemical systems, with the parts treated at different levels of theory and computed with hybrid classical and quantum algorithms. Having come this far, we understand however that many commonly held views on the power of digital quantum computing, especially in NISQ times, are problematic. Cheng et al. [225] illustrate this problem: "Chemistry is considered as one of the more promising applications to science of near term quantum computing. Recent work in transitioning classical algorithms to a quantum computer has led to great strides in improving quantum algorithms and illustrating their quantum advantage." The bottom line is that if one wants to treat biochemical molecules that contain active regions that cannot be properly explained with traditional algorithms on classical computers, then one should not expect any quantum advantage from NISQ quantum computers. That said, Cheng et al. [225] provide a useful overview of how multiscale modeling involving quantum computers is going to enable biomolecular problems to be tackled in the future. To this must be added healthcare, life science and artificial intelligence in a broad sense. From the huge data bases of cell biology and human diseases one can design network models describing the networks of Life. In particular Barabasi, Loscalzo and collaborators [226, 227] developed the science of network medicine, and machine learning is essential for creating models for therapies that can design and control the action of drugs [228, 229].
Biochemistry and life science are already, as always, at the focus of high-performance computing, driving the development of exascale and post-exascale supercomputers, experiencing the limitations and bottlenecks. These are topics and areas that would profit immensely from quantum advantage. Maniscalco et al. [230] recently published a forward-looking white paper: "Quantum network medicine: rethinking medicine with network science and quantum algorithms", and posit that quantum computing may be a key ingredient in enabling the full potential of network medicine, laying the foundations of a new era of disease mechanism, prevention, and treatment. ### Final perspective This great vision reflects the mission of the entire field of quantum computing - to achieve the elusive Quantum Advantage. Fortunately there are very few problems of importance to mankind that rely on the imminent arrival of quantum computers with QA. Quantum computers will evolve in ways that seemed impossible not long ago. And they will provide platforms for fantastic experiments explaining deep fundamental physics and quantum information. And to make that available to the science community for experimentation should be the mission of the QC community. However, the usefulness beyond classical computing and algorithms is a very different matter. It depends on practical QA, and remains to be established in practical applications. Practical QA always has to be measured against the performance of classical algorithms and computers. HPC-people regard QPUs as a kind of GPUs, closely integrated and expected to accelerate the CPUs when handling NP-hard problems. The HPC+QC integration is happening right now, and during the next five years it will be taken to high levels - IBM is one example. For sure this will challenge and boost the development of competitive classical algorithms and dedicated hardware - but useful QA will remain problematic in the NISQ era. This is amply illustrated by Goings et al. [231] discussing how to "explore the quantum computation and classical computation resources required to assess the electronic structure of cytochrome P450 enzymes (CYPs) and thus define a classical-quantum advantage boundary". One conclusion [231] is that a large classically intractable CYP model simulation may need 5 million qubits and nearly 10 billion Toffoli gates, and may take 100 QPU hours. Another, less surprising, conclusion is that deep classical chemical insight is essential for guiding quantum algorithms and defining the computational frontier for chemistry. In that light, the most important near-term use of superconducting quantum processors may be to follow Feynman's original idea and create experiments in large superconducting controllable multi-qubit networks that are impossible for classical computers to simulate. The next assessment of the future of QC is planned for 2028 [232] and then we can perhaps compare notes. ## Acknowledgement This work was supported from the Knut and Alice Wallenberg Foundation through the Wallenberg Center for Quantum Technology (WACQT).
2306.06860
Extreme and statistical properties of eigenvalue indices of simple connected graphs
We analyze graphs attaining the extreme values of various spectral indices in the class of all simple connected graphs, as well as in the class of graphs which are not complete multipartite graphs. We also present results on density of spectral gap indices and its nonpersistency with respect to small perturbations of the underlying graph. We show that a small change in the set of edges may result in a significant change of a spectral index such as, e.g., the spectral gap or the spectral index. We also present a statistical and numerical analysis of spectral indices of graphs of the order $m\le 10$. We analyze the extreme values for spectral indices for graphs and their small perturbations. Finally, we present the statistical and extreme properties of graphs on $m\le 10$ vertices.
Sona Pavlikova, Daniel Sevcovic, Jozef Siran
2023-06-12T04:17:44Z
http://arxiv.org/abs/2306.06860v1
# Extreme and statistical properties of eigenvalue indices of simple connected graphs

###### Abstract
We analyze graphs attaining the extreme values of various spectral indices in the class of all simple connected graphs, as well as in the class of graphs which are not complete multipartite graphs. We also present results on density of spectral gap indices and its nonpersistency with respect to small perturbations of the underlying graph. We show that a small change in the set of edges may result in a significant change of a spectral index such as, e.g., the spectral gap or the spectral index. We also present a statistical and numerical analysis of spectral indices of graphs of the order \(m\leq 10\). We analyze the extreme values for spectral indices for graphs and their small perturbations. Finally, we present the statistical and extreme properties of graphs on \(m\leq 10\) vertices. Keywords: Graph spectrum; spectral index; extreme properties of eigenvalues; distribution of eigenvalues; complete multipartite graphs; 2000 MSC: 05C50 05B20 05C22 15A09 15A18 15B36

## 1 Introduction
In theoretical chemistry, biology, or statistics, spectral indices and properties of graphs representing the structure of chemical molecules or transition diagrams for finite Markov chains play an important role (cf. Cvetkovic [11, 12], Brouwer and Haemers [8] and references therein). In the past decades, various graph energies and indices have been proposed and analyzed. For example, the sum of absolute values of eigenvalues is called the matching energy index (cf. Chen and Liu [29]), the maximum of the absolute values of the least positive and largest negative eigenvalues is related to the HOMO-LUMO index (see Mohar [34, 35], Li et al. [30], Jaklic et al. [26], Fowler et al. [20]), and their difference is related to the HOMO-LUMO separation gap (cf. Gutman and Rouvray [22], Li et al. [30], Zhang and An [45], Fowler et al. [19]). The spectrum \(\sigma(G_{A})\equiv\sigma(A)\) of a simple nonoriented connected graph \(G_{A}\) on \(m\) vertices is given by the eigenvalues of its adjacency matrix \(A\): \[\lambda_{max}\equiv\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{m}\equiv \lambda_{min}.\] For a simple graph (without loops and multiple edges) we have \(A_{ii}=0\), and so \(\sum_{i=1}^{m}\lambda_{i}=trace(A)=0\). Hence \(\lambda_{1}>0,\lambda_{m}<0\). In what follows, we shall denote by \(\lambda_{+}(A)\) and \(\lambda_{-}(A)\) the least positive and largest negative eigenvalues of a symmetric matrix \(A\) having positive and negative real eigenvalues. Let us denote by \(\Lambda^{gap}(A)=\lambda_{+}(A)-\lambda_{-}(A)\) and \(\Lambda^{ind}(A)=\max(|\lambda_{+}(A)|,|\lambda_{-}(A)|)\) the spectral gap and the spectral index of a symmetric matrix \(A\). Furthermore, we define the spectral power \(\Lambda^{pow}(A)=\sum_{k=1}^{m}|\lambda_{k}|\). Clearly, all three indices \(\Lambda^{gap},\Lambda^{ind}\), and \(\Lambda^{pow}\) depend on the positive \(\sigma_{+}(A)=\{\lambda\in\sigma(A),\lambda>0\}\) and negative \(\sigma_{-}(A)=\{\lambda\in\sigma(A),\lambda<0\}\) parts of the spectrum of the matrix \(A\). In fact, \(\lambda_{+}(A)=\min\sigma_{+}(A),\ \lambda_{-}(A)=\max\sigma_{-}(A)\), and \(\Lambda^{pow}(A)=\sum_{\lambda\in\sigma_{+}(A)}\lambda-\sum_{\lambda\in \sigma_{-}(A)}\lambda\). In the past decades, various concepts of introducing inverses of graphs based on inversion of the adjacency matrix have been proposed.
In general, the inverse of the adjacency matrix need not define a graph again because it may contain negative elements (cf. [23]). Godsil [21] proposed a successful approach to overcome this difficulty, defining a graph to be (positively) invertible if the inverse of its nonsingular adjacency matrix is diagonally similar (cf. [44]) to a nonnegative integral matrix representing the adjacency matrix of the inverse graph in which positive labels determine edge multiplicities. In the papers [36, 37], Pavlikova and Sevcovic extended this notion to a wider class of graphs by introducing the concept of negative invertibility of a graph. In chemical applications, the spectral gap \(\Lambda^{gap}\) of a structural graph of a molecule is related to the so-called HOMO-LUMO energy separation gap between the energy of the highest occupied molecular orbital (HOMO) and that of the lowest unoccupied molecular orbital (LUMO). Following Huckel's molecular orbital method [25], eigenvalues of a graph that describes an organic molecule are related to the energies of molecular orbitals (see also Streitwieser [42, Chapter 5.1]). Finally, according to Aihara [1, 2], it is energetically unfavorable to add electrons to a high-lying LUMO orbital. Hence, a larger HOMO-LUMO gap implies a higher kinetic stability and low chemical reactivity of a molecule. Furthermore, the HOMO-LUMO energy gap generally decreases with the number of vertices in the structural graph (cf. [3]). In this paper, we analyze the extreme and statistical properties of the spectrum of all simple connected graphs. This includes the analysis of maximal and minimal eigenvalues, as well as indices such as, e.g., the spectral gap, the spectral index, and the power of the spectrum. We analyze graphs that attain extreme values of various indices in the class of all simple connected graphs, as well as in the class of graphs that are not complete multipartite graphs. We also present results on the density of spectral gap indices and its nonpersistency with respect to small perturbations of the underlying graph. We show that a small change in the set of edges may result in a significant change of the spectral gap or spectral index. We also present a statistical and numerical analysis of indices of graphs of order \(m\leq 10\). The paper is organized as follows. In Section 2 we first recall the known results on extreme values of maximal and minimal eigenvalues of adjacency matrices. We also report the number of all simple connected graphs due to McKay [33]. Next, we analyze the extreme values for indices for complete multipartite graphs and their small perturbations. In Section 3 we focus our attention on the statistical and extreme properties of graphs on \(m\leq 10\) vertices.

## 2 Extreme properties of indices
Denote by \(c_{m}\) the number of simple non-isomorphic connected graphs on \(m\) vertices. According to McKay's list of all simple connected graphs [33], the numbers \(c_{m},m\leq 10\), are summarized in Table 1. Although there exists an approximation formula for the number of labelled simple connected graphs of the given order \(m\) and number of edges (cf. Bender, Canfield, and McKay [6]), for small values of \(m\) the number \(c_{m}\) can be approximated by the following compact quadratic exponential function: \[c_{m}\approx\omega_{0}10^{\omega_{1}(m-9)+\omega_{2}(m-9)^{2}},\ \ \ \mbox{where}\ \omega_{0}=261080,\ \ \omega_{1}=1.4,\ \ \omega_{2}=0.09.
\tag{1}\] This formula is exact for \(m=9\) and gives good approximation results for the other orders \(m\leq 10\) (see Fig. 1). Recall the following well-known facts: the maximal value of \(\lambda_{max}=\lambda_{1}\) over all simple connected graphs on \(m\) vertices is equal to \(m-1\), and it is attained by the complete graph \(K_{m}\). The minimal value of \(\lambda_{max}\) is equal to \(2\cos(\pi/(m+1))\), and it is attained for the path graph \(P_{m}\). Furthermore, the lower bound for the minimal eigenvalue \begin{table} \begin{tabular}{l||l|l|l|l|l|l|l|l|l} \hline \(m\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline total \# & 1 & 2 & 6 & 21 & 112 & 853 & 11117 & 261080 & 11716571 \\ \hline \end{tabular} \end{table} Table 1: Numbers of all simple connected graphs on \(m\leq 10\) vertices. Figure 1: The numbers \(c_{m}\) of all simple connected graphs as a function of the number of vertices (blue solid line), and their approximation by means of the approximation formula (1) (red dashed line). \(\lambda_{m}\geq-\sqrt{\lfloor m/2\rfloor\lceil m/2\rceil}\) was independently proved in [10, 24, 38]. The lower bound is attained for the complete bipartite graph \(K_{m_{1},m_{2}}\) where \(m_{1}=\lceil m/2\rceil,m_{2}=\lfloor m/2\rfloor\). The maximum value of \(\lambda_{min}\) over all simple connected graphs on \(m\) vertices is equal to \(-1\), and it is attained for the complete graph \(K_{m}\).

### Indices for complete multipartite graphs and their perturbations
The aim of this section is to analyze indices and their extreme values for simple connected graphs on \(m\) vertices.

**Proposition 1**.: _Let us denote by \(K_{m_{1},\ldots,m_{k}}\) the complete multipartite graph where \(1\leq m_{1}\leq\cdots\leq m_{k}\) denote the sizes of parts, \(m_{1}+\cdots+m_{k}=m\), and \(k\geq 2\) is the number of parts. Then the spectrum of the adjacency matrix \(A\) of \(K_{m_{1},\ldots,m_{k}}\) satisfies \(\sigma(A)\subseteq[-m_{k},m-m/k]\). If \(m_{i}<m_{i+1}\) then there exists a single eigenvalue \(\lambda\in(-m_{i+1},-m_{i})\). If \(m_{i}=m_{i+1}=\cdots=m_{i+j}\) then \(\lambda=-m_{i}\) is an eigenvalue of \(A\) with multiplicity \(j\)._ _Finally, \(0<\lambda_{+}(A)\leq m-m/k\) and \(-m/k\leq\lambda_{-}(A)<0\). As a consequence, \(\Lambda^{gap}(A)\leq m,\Lambda^{ind}(A)\leq m-1\), \(\Lambda^{pow}(A)\leq 2(m-m/k)\). The equalities for the indices \(\Lambda^{gap}(A),\Lambda^{ind}(A)\) are reached by the complete graph \(G_{A}=K_{m}\)._

Proof.: The adjacency matrix \(A\) of \(K_{m_{1},\ldots,m_{k}}\) has the block form: \[A=\mathbf{1}\mathbf{1}^{T}-diag(D_{1},\ldots,D_{k}),\] where \(\mathbf{1}=(1,\ldots,1)^{T}\in\mathbb{R}^{m}\), and \(D_{i}\) is the \(m_{i}\times m_{i}\) matrix consisting of ones. Now, if \(\lambda\) is a nonzero eigenvalue of \(A\) with an eigenvector \(x=(x_{1},\ldots,x_{m})^{T}\) then \[\alpha-\alpha_{p}=\lambda x_{l},\quad\text{for each}\ \ l=\mu_{p-1}+1,\ldots,\mu_{p}, \quad\ \alpha_{p}=\sum_{l=1+\mu_{p-1}}^{\mu_{p}}x_{l},\quad\mu_{p}=\sum_{r=1}^{p}m_{ r}, \tag{2}\] for \(p=1,\ldots,k\). Here \(\alpha=\sum_{p=1}^{k}\alpha_{p}=\sum_{j=1}^{m}x_{j}\). For example, if \(p=1\) then \(\sum_{j=1}^{m_{1}}x_{j}=\alpha m_{1}/(\lambda+m_{1})\) provided that \(\lambda\neq-m_{1}\). Similarly, we can proceed with the remaining parts \(m_{2},\ldots,m_{k}\). In the case \(\alpha=0\) we have \(\lambda\in\{-m_{1},\ldots,-m_{k}\}\).
In the case \(\alpha\neq 0\) we conclude \(\lambda\not\in\{-m_{1},\ldots,-m_{k}\}\), and the eigenvalue \(\lambda\) satisfies the rational equation: \[\psi(\lambda)=1,\quad\text{where }\psi(\lambda)=\sum_{i=1}^{k}\frac{m_{i}}{ \lambda+m_{i}}. \tag{3}\] Conversely, if \(\lambda\not\in\{-m_{1},\ldots,-m_{k}\}\) satisfies \(\psi(\lambda)=1\) then it is easy to verify that the nontrivial vector \(x\in\mathbb{R}^{m}\), \[x=(\underbrace{y_{1},\ldots,y_{1}}_{m_{1}\text{ times}},\underbrace{y_{2}, \ldots,y_{2}}_{m_{2}\text{ times}},\ldots,\underbrace{y_{k},\ldots,y_{k}}_{m_{k} \text{ times}})^{T},\quad\text{where }y_{i}=\frac{m_{i}}{\lambda+m_{i}},\] is an eigenvector of \(A\), i.e. \(Ax=\lambda x\). In what follows, we shall derive the necessary bounds on eigenvalues of \(A\). Suppose to the contrary that \(\lambda<-m_{k}\) is an eigenvalue of \(A\). Then \(\lambda+m_{i}\leq\lambda+m_{k}<0\) for any \(i=1,\ldots,k\), and so \(\psi(\lambda)<0<1\). Therefore, \(\lambda\geq-m_{k}\) for any eigenvalue \(\lambda\in\sigma(A)\). To derive an upper bound for the positive eigenvalue of \(A\) we introduce an auxiliary function \(\phi(\xi_{1},\ldots,\xi_{k})=\sum_{i=1}^{k}\frac{\xi_{i}}{\lambda+\xi_{i}}\) where \(\lambda>0\) is fixed. The function \(\phi:\mathbb{R}^{k}\rightarrow\mathbb{R}\) is concave. Using the Lagrange function \(\mathscr{L}(\xi,\mu)=\phi(\xi_{1},\ldots,\xi_{k})-\mu\sum_{i=1}^{k}\xi_{i}\) it is easy to verify that \(\phi\) achieves the unique constrained maximum in the set \(\{\xi\in\mathbb{R}^{k},\sum_{i=1}^{k}\xi_{i}=m\}\) at the point \(\hat{\xi}=(m/k,\ldots,m/k)^{T}\). Therefore, for any \(\lambda>0\) we have \[\psi(\lambda)=\sum_{i=1}^{k}\frac{m_{i}}{\lambda+m_{i}}=\phi(m_{1},\ldots,m_{k })\leq\phi(m/k,\ldots,m/k)=\frac{m}{\lambda+m/k}.\] If \(\lambda>0\) is a positive eigenvalue of \(A\) then \(\psi(\lambda)=1\) and so \(\lambda+m/k\leq m\), that is, \(0<\lambda\leq m-m/k\). Therefore, \(\sigma(A)\subset[-m_{k},m-m/k]\). In the trivial case of an equipartite graph \(K_{m_{1},\ldots,m_{k}}\) with \(m_{1}=\cdots=m_{k}=m/k\) we obtain \(\lambda_{-}(A)\geq-m_{k}=-m/k\) and \(\lambda_{+}(A)\leq m-m/k\). Thus, \(\Lambda^{gap}\leq m\), and \(\Lambda^{ind}\leq m-m/k\leq m-1\). This estimate also follows from the results of [17] and [15]. Therefore, for any \(1\leq l<k\) we conclude that \(\Lambda^{gap}(A)=\lambda_{+}(A)-\lambda_{-}(A)\leq m-m/k-(-m/k)=m\). Similarly, \(\Lambda^{ind}(A)\leq m-1\). Now, consider a non-equipartite graph \(K_{m_{1},\ldots,m_{k}}\) with \(m_{1}=\cdots=m_{l}<m_{l+1}\leq\cdots\leq m_{k}\) where \(1\leq l<k\). Suppose that \(l=1\), that is, \(1\leq m_{1}<m_{2}\leq\cdots\leq m_{k}\). The function \(\psi\) is strictly decreasing in the interval \((-m_{2},-m_{1})\) with infinite limits \(\pm\infty\) when \(\lambda\rightarrow-m_{2}\) and \(\lambda\rightarrow-m_{1}\), respectively. Therefore, there exists a unique eigenvalue \(\lambda\in(-m_{2},-m_{1})\) of the matrix \(A\). We have \(m_{1}+(k-1)m_{2}\leq\sum_{i=1}^{k}m_{i}=m\). Define \(\tilde{\lambda}=-m_{1}/k-m_{2}(k-1)/k\). Then \(\tilde{\lambda}\geq-m/k\). In what follows we shall prove that \(\psi(\tilde{\lambda})\geq 1\). The function \(\xi\mapsto\xi/(\tilde{\lambda}+\xi)\) decreases for \(\xi>-\tilde{\lambda}\). Therefore \[\psi(\tilde{\lambda}) \geq \frac{m_{1}}{\tilde{\lambda}+m_{1}}+(k-1)\frac{m_{2}}{\tilde{ \lambda}+m_{2}}=-\frac{k}{k-1}\frac{m_{1}}{m_{2}-m_{1}}+k(k-1)\frac{m_{2}}{m_{ 2}-m_{1}}\] \[= \frac{k}{k-1}\frac{(k-1)^{2}m_{2}-m_{1}}{m_{2}-m_{1}}\geq\frac{k }{k-1}>1,\] because \(k\geq 2\).
Since \(\psi\) is strictly decreasing in the interval \((-m_{2},-m_{1})\) we have \(-m/k\leq\tilde{\lambda}<\lambda\) because \(\psi(\lambda)=1\). In the case \(l\geq 2\) we can apply a simple perturbation argument. Indeed, let us perturb the adjacency matrix \(A\) by a small parameter \(0<\varepsilon\ll 1\) as follows: \[A^{\varepsilon}=\mathbf{1}\mathbf{1}^{T}-diag((1-\varepsilon)D_{1},D_{2}, \ldots,D_{l-1},(1+\varepsilon)D_{l},D_{l+1},\ldots,D_{k}).\] It corresponds to the perturbation \(m_{1}^{\varepsilon}=(1-\varepsilon)m_{1},m_{l}^{\varepsilon}=(1+\varepsilon)m _{l}\). All remaining \(m_{i}\) remain unchanged for \(i\neq 1\) and \(i\neq l\). Then for the corresponding perturbed function \(\psi^{\varepsilon}\) there exists a solution \(\lambda^{\varepsilon}\in(-m_{1},-(1-\varepsilon)m_{1})\) of the equation \(\psi^{\varepsilon}(\lambda^{\varepsilon})=1\). Since the spectrum of \(A^{\varepsilon}\) depends continuously on the parameter \(\varepsilon\), we see that \(\lambda^{\varepsilon}\rightarrow-m_{1}\) as \(\varepsilon\to 0\), and so \(\lambda=-m_{1}=\cdots=-m_{l}\) is an eigenvalue of the graph \(G_{A}\) provided that \(l\geq 2\). In this case \(\lambda=-m_{1}\geq-m/k\). A complete multipartite graph \(G_{A}=K_{m_{1},m_{2},\ldots,m_{k}}\) has exactly one positive eigenvalue \(\lambda_{1}>0\) (cf. Smith [14]). Since \(\sum_{i=1}^{m}\lambda_{i}=0\) we have \(\Lambda^{pow}(A)=\sum_{i=1}^{m}|\lambda_{i}|=2\lambda_{1}\leq 2(m-m/k)\). The spectrum of the complete graph \(K_{m}\) consists of the eigenvalue \(m-1\), and \(-1\) with multiplicity \(m-1\). Therefore, \(\Lambda^{gap}=m,\Lambda^{ind}=m-1\), as claimed.

**Remark 1**.: _The main idea of the proof of Proposition 1 is a non-trivial generalization of the interlacing theorem [17, Theorem 1] due to Esser and Harary. It is based on a solution \(\lambda\) to the dispersion equation (3), that is \(\psi(\lambda)=1\) (see [17, Eq. (9)]). In [17, Corollary 1] they showed that \(\sigma(A)\subseteq[-m_{k},m-m_{1}]\). Because \(km_{1}\leq\sum_{i=1}^{k}m_{i}=m\), we obtain \(m-m/k\leq m-m_{1}\). Using the concavity of the function \(\phi:\mathbb{R}^{k}\to\mathbb{R}\) and the constrained optimization argument, we were able to improve this estimate. We derived the estimate \(\sigma(A)\subseteq[-m_{k},m-m/k]\) which yields the optimal bounds \(\Lambda^{\text{gap}}\leq m,\Lambda^{\text{ind}}\leq m-1\) derived in Proposition 1. Furthermore, we introduced a novel analytic perturbation technique to handle the case when the sizes \(m_{1}=\dots=m_{l}\) of parts coincide._

**Remark 2**.: _It follows from the proof of Proposition 1 that \(\lambda\) is an eigenvalue of \(A\) if and only if the vector \(z=(\alpha_{1},\dots,\alpha_{k})^{T}\in\mathbb{R}^{k}\) (see (2)) is an eigenvector of the \(k\times k\) matrix \(\mathscr{A}\), i.e. \(\mathscr{A}z=\lambda z\), where \(\mathscr{A}_{ij}=m_{i}\) for \(i\neq j\), \(\mathscr{A}_{ii}=0\)._ _As a consequence, the spectrum of the complete bipartite graph \(K_{m_{1},m_{2}}\) consists of \(m_{1}+m_{2}-2\) zeros and \(\pm\sqrt{m_{1}m_{2}}\). Therefore, \(\Lambda^{\text{gap}}(K_{m_{1},m_{2}})=\Lambda^{pow}(K_{m_{1},m_{2}})=2\sqrt{m _{1}m_{2}}\), and \(\Lambda^{\text{ind}}(K_{m_{1},m_{2}})=\sqrt{m_{1}m_{2}}\). Furthermore, if \(m\) is even, then \(\Lambda^{gap}(K_{m/2,m/2})=m=\Lambda^{\text{gap}}(K_{m})\), i.e., the complete bipartite graph \(K_{m/2,m/2}\) as well as the complete graph \(K_{m}\) maximize the spectral gap \(\Lambda^{\text{gap}}\).
The smallest example is the complete graph \(K_{4}\) with eigenvalues \(\{3,-1,-1,-1\}\) and the cycle \(C_{4}\equiv K_{2,2}\) with eigenvalues \(\{2,0,0,-2\}\), which yields the same maximum value of \(\Lambda^{gap}=4\)._ _Similarly, one can derive the equation for the spectrum of the complete tripartite graph \(K_{m_{1},m_{2},m_{3}}\). It leads to the following depressed cubic equation \(\lambda^{3}+r\lambda+s=0\) with \(r=-(m_{1}m_{2}+m_{2}m_{3}+m_{1}m_{3}),s=-2m_{1}m_{2}m_{3}\). However, the discriminant \(\Delta=-(4r^{3}+27s^{2})\) is positive for a non-equipartite graph, and there are three real roots of the depressed cubic. In view of Galois theory, the roots cannot in general be expressed in terms of real radicals, and Cardano's formula leads to the "casus irreducibilis"._

**Proposition 2**.: _Let us consider the class of all simple connected graphs on \(m\) vertices. The following statements regarding the indices \(\Lambda^{gap},\Lambda^{ind}\) and \(\Lambda^{pow}\) hold._
* _If_ \(G_{A}\) _is not a complete multipartite graph of order_ \(m\)_, then_ \(\Lambda^{gap}(A)\leq m-1,\Lambda^{ind}(A)\leq m/2\) _for_ \(m\) _even, and_ \(\Lambda^{\text{gap}}(A)\leq m-3/2,\Lambda^{ind}(A)\leq\sqrt{m^{2}-1}/2\) _for_ \(m\) _odd._
* _The maximum value of_ \(\Lambda^{pow}\) _on_ \(m\leq 7\) _vertices is equal to_ \(2m-2\)_, and it is attained for the complete graph_ \(K_{m}\)_. For_ \(m=7\) _there are two maximizing graphs with_ \(\Lambda^{pow}=12\)_: the complete graph_ \(K_{7}\) _and the noncomplete graph shown in Fig._ 4_. Starting from_ \(m=8\) _the maximal_ \(\Lambda^{pow}\) _is attained by noncomplete graphs depicted in Fig._ 5 _for_ \(8\leq m\leq 10\)_._

Proof.: According to Smith [14], a simple connected graph has exactly one positive eigenvalue (i.e. \(\lambda_{2}(A)\leq 0\)) if and only if it is a complete multipartite graph \(K_{m_{1},\dots,m_{k}}\) where \(1\leq m_{1}\leq\dots\leq m_{k}\) denote the sizes of parts, \(m_{1}+\dots+m_{k}=m\), and \(k\geq 2\) is the number of parts (see [14, Theorem 6.7]). To prove a), let us consider a graph \(G_{A}\) different from any complete multipartite graph \(K_{m_{1},\dots,m_{k}}\). Therefore, \(\lambda_{2}(A)>0\). We combine this information with the result due to D. Powers regarding the second largest eigenvalue \(\lambda_{2}(A)\). According to [38] (see also [39], [40]), for a simple connected graph \(G_{A}\) on \(m\) vertices we have the following estimate for the second largest eigenvalue \(\lambda_{2}(A)\): \[-1\leq\lambda_{2}(A)\leq\lfloor m/2\rfloor-1\] (see also Cvetkovic and Simic [13]). Since \(\lambda_{2}(A)>0\) we have \(0<\lambda_{+}(A)\leq\lambda_{2}(A)\leq\lfloor m/2\rfloor-1\), and \(-\sqrt{\lfloor m/2\rfloor\lceil m/2\rceil}\leq\lambda_{min}(A)\leq\lambda_{-} (A)<0\). Hence the spectral gap \(\Lambda^{gap}=\lambda_{+}(A)-\lambda_{-}(A)\leq\sqrt{\lfloor m/2\rfloor\lceil m /2\rceil}+\lfloor m/2\rfloor-1\). If \(m\) is even, it leads to the estimate \(\Lambda^{gap}\leq m-1\). If \(m\) is odd, then it is easy to verify \(\Lambda^{gap}\leq m-3/2\). Analogously, \(\Lambda^{ind}\leq m/2\) if \(m\) is even, and \(\Lambda^{ind}\leq\sqrt{m^{2}-1}/2\) if \(m\) is odd. Part b) is contained in Section 3 dealing with statistical properties of eigenvalue indices. Recall that for the complete bipartite graph \(K_{m,m}\) the spectrum consists of zeros and \(\pm m\). As a consequence \(\lim_{m\to\infty}\Lambda^{gap}(K_{m,m})=\infty\).
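Before turning to perturbations of complete bipartite graphs, a small numerical sanity check of Proposition 1 and Remark 2 may be useful. The sketch below (an illustrative script assuming numpy; the function name is ours) builds the adjacency matrix of \(K_{m_{1},\ldots,m_{k}}\) and verifies that \(\sigma(A)\subseteq[-m_{k},m-m/k]\):

```python
import numpy as np

def complete_multipartite_adjacency(parts):
    """Adjacency matrix of K_{m1,...,mk}: all-ones matrix minus block-diagonal ones."""
    m = sum(parts)
    A = np.ones((m, m)) - np.eye(m)
    idx = np.cumsum((0,) + tuple(parts))
    for lo, hi in zip(idx[:-1], idx[1:]):
        A[lo:hi, lo:hi] = 0.0          # no edges inside a part
    return A

for parts in [(2, 2), (1, 2, 3), (3, 3, 3), (2, 5)]:
    m, k = sum(parts), len(parts)
    lam = np.linalg.eigvalsh(complete_multipartite_adjacency(parts))
    # check the bounds of Proposition 1 (up to numerical tolerance)
    assert -max(parts) - 1e-9 <= lam.min() and lam.max() <= m - m / k + 1e-9
    print(parts, np.round(lam, 4))
# e.g. (2, 5) prints the bipartite spectrum {-sqrt(10), 0, 0, 0, 0, 0, +sqrt(10)}
```

The equipartite case \((3,3,3)\) also illustrates the equality pattern: its largest eigenvalue equals \(m-m/k=6\) and its smallest equals \(-m/k=-3\).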
The next result shows that a small change in a large graph \(K_{m,m}\) caused by the removal of a single edge may result in a huge change in the spectral gap.

**Proposition 3**.: _Let us denote by \(K_{m,m}^{-e}\) the bipartite noncomplete graph constructed from the complete bipartite graph \(K_{m,m}\) by deleting exactly one edge. Then its spectrum consists of \(2m-4\) zeros and four real eigenvalues_ \[\lambda^{\pm,\pm}=\pm\left(1-m\pm\sqrt{m^{2}+2m-3}\right)/2. \tag{4}\] _For the spectral gap we have \(\Lambda^{gap}(K_{m,m}^{-e})=1-m+\sqrt{m^{2}+2m-3}\), and_ \[2\sqrt{1-2/(m+1)}<\Lambda^{gap}(K_{m,m}^{-e})<2\sqrt{1-1/m}.\] _As a consequence, \(\lim_{m\to\infty}\Lambda^{gap}(K_{m,m}^{-e})=2\)._

Proof.: Without loss of generality, we may assume that the adjacency matrix \(A\) of the graph \(K_{m,m}^{-e}\) has the form \[A=\left(\begin{array}{cc}0&\mathbf{1}\mathbf{1}^{T}\\ \mathbf{1}\mathbf{1}^{T}&0\end{array}\right)-\left(\begin{array}{c}0\\ e_{1}\end{array}\right)(e_{1},0)-\left(\begin{array}{c}e_{1}\\ 0\end{array}\right)(0,e_{1}),\] where \(\mathbf{1}=(1,\ldots,1)^{T},e_{1}=(1,0,\ldots,0)^{T}\in\mathbb{R}^{m}\). Assume that \(\lambda\) is an eigenvalue of \(A\), and \((0,0)\neq(x,y)\in\mathbb{R}^{m}\times\mathbb{R}^{m}\) is an eigenvector. Denote \(\alpha=\sum_{i=1}^{m}x_{i},\ \beta=\sum_{i=1}^{m}y_{i}\). Then \[\beta-y_{1}=\lambda x_{1},\quad\alpha-x_{1}=\lambda y_{1},\quad\beta=\lambda x _{i},\quad\alpha=\lambda y_{i},\quad i=2,\ldots,m.\] Assuming \(\lambda=\pm 1\) leads to an obvious contradiction, as it implies \(\alpha=\beta=0\), and \(x=0,y=0\). The matrix \(A\) has the zero eigenvalue \(\lambda=0\), with the \(2(m-2)\)-dimensional eigenspace \(\{(x,y)\in\mathbb{R}^{m}\times\mathbb{R}^{m},\ x_{1}=y_{1}=0,\ \alpha=\beta=0\}\). Therefore, for \(\lambda\neq\pm 1,0\) we have \(x_{1}=(\alpha-\beta\lambda)/(1-\lambda^{2}),\ y_{1}=(\beta-\alpha\lambda)/(1- \lambda^{2}),\) and \(x_{i}=\beta/\lambda,\ y_{i}=\alpha/\lambda\) for \(i=2,\ldots,m\). It results in a system of two linear equations for \(\alpha,\beta\): \[\alpha=\frac{m-1}{\lambda}\beta+\frac{\alpha-\beta\lambda}{1-\lambda^{2}}, \quad\beta=\frac{m-1}{\lambda}\alpha+\frac{\beta-\alpha\lambda}{1-\lambda^{2}},\] which has a nontrivial solution \((\alpha,\beta)\neq(0,0)\) provided that \(\lambda\) (with \(\lambda\neq\pm 1,0\)) is a solution of the following dispersion equation: \[\left(\frac{1}{1-\lambda^{2}}-1\right)^{2}-\left(\frac{m-1}{\lambda}-\frac{ \lambda}{1-\lambda^{2}}\right)^{2}=0.\] After rearranging terms, \(\lambda\) is a solution of the cubic equation \[\pm\lambda^{3}+m\lambda^{2}-m+1=0,\] having roots \(\mp 1\) (which are not eigenvalues of \(A\)), and four other roots \(\lambda^{\pm,\pm}\) given as in (4), as claimed. The rest of the proof easily follows. A similar property to the result of Proposition 3 regarding indices can be observed when adding one edge to a complete bipartite graph, that is, destroying the bipartiteness of the original complete bipartite graph by a small perturbation.

**Proposition 4**.: _Let us denote by \(G_{A}=K_{m,m}^{+e}\) a graph of the order \(2m\) constructed from the complete bipartite graph \(K_{m,m}\) by adding exactly one edge to the first part. Then its spectrum consists of \(2m-4\) zeros and four real eigenvalues \(\lambda^{(1),(2),(3),(4)}\) where \(\lambda^{(4)}=\lambda_{-}(A)=-1\), and three other roots \(\lambda^{(3)}<-1<0<\lambda^{(2)}<\lambda^{(1)}\) solve the cubic equation \(\lambda^{2}(1-\lambda)-m(m-2-m\lambda)=0\).
The smallest positive eigenvalue has the form \(\lambda_{+}(A)\equiv\lambda^{(2)}=1-2/m-2/m^{3}+O(m^{-4})\) as \(m\to\infty\). As a consequence, \(\lim_{m\to\infty}\Lambda^{gap}(K_{m,m}^{+e})=2\), and \(\lim_{m\to\infty}\Lambda^{ind}(K_{m,m}^{+e})=1\)._

Proof.: It is similar to the proof of the previous Proposition 3. Arguing similarly as before, one can show that \(\lambda^{(4)}=-1\) is an eigenvalue with multiplicity one. The other nonzero eigenvalues are roots of the cubic equation \(\lambda^{2}(1-\lambda)-m(m-2-m\lambda)=0\) which can be transformed into a depressed cubic equation with a positive discriminant \(\Delta\). Thus, it has three distinct real eigenvalues \(\lambda^{(1),(2),(3)}\). Performing the standard asymptotic analysis, we conclude \(\lambda_{+}(A)=\lambda^{(2)}=1-2/m-2/m^{3}+O(m^{-4})\) as \(m\to\infty\), as claimed.

**Remark 3**.: _In [18] it is shown that for a bipartite graph \(K_{m_{1},m_{2}}\) of the order \(m=m_{1}+m_{2}\) and the average valency \(d\) of vertices, one has \(\lambda_{m/2}-\lambda_{1+m/2}\leq\sqrt{d}\)._

We end this section with the following statement regarding the density of values of the spectral index \(\Lambda^{gap}\) in the class of complete bipartite graphs.

**Proposition 5**.: _For every pair of real numbers \(0\leq\delta<\gamma<1\), there exists an order \(m\) and a complete bipartite graph \(K_{m_{1},m_{2}}\) of the order \(m=m_{1}+m_{2}\) such that \(m-\gamma\leq\Lambda^{gap}(K_{m_{1},m_{2}})\leq m-\delta\)._

Proof.: Recall the known fact (see, e.g. [16]) that the set of fractional parts \(\sqrt{m}-[\sqrt{m}]\) of square roots of all positive integers \(m\) is dense in the interval \([0,1)\). Hence, there exists an integer \(m_{2}\), such that \(\sqrt{\delta}\leq\sqrt{m_{2}}-[\sqrt{m_{2}}]\leq\sqrt{\gamma}\). Take \(m_{1}:=[\sqrt{m_{2}}]^{2}\leq m_{2}\). Then \(\sqrt{\delta}\leq\sqrt{m_{2}}-\sqrt{m_{1}}\leq\sqrt{\gamma}\). By squaring and rearranging terms, we obtain \((m_{1}+m_{2})-\gamma\leq 2\sqrt{m_{1}m_{2}}\leq(m_{1}+m_{2})-\delta\). Now we take the bipartite graph \(K_{m_{1},m_{2}}\), of order \(m=m_{1}+m_{2}\). Since \(\Lambda^{gap}(K_{m_{1},m_{2}})=2\sqrt{m_{1}m_{2}}\) the claim follows.

### Indices for noncomplete graphs
The purpose of this section is to analyze indices for noncomplete multipartite graphs.

**Proposition 6**.: _If \(G_{A}\) is a bipartite but not complete bipartite graph, with the average vertex degree \(d\), and the multiplicity of the zero eigenvalue of the order \(k\), then_ \[\Lambda^{gap}(G_{A})\leq 2\sqrt{\frac{d(m-2d)}{m-k-2}}. \tag{5}\]

Proof.: Let \(G_{A}\) be a bipartite but not complete bipartite graph with adjacency matrix \(A\) having null space of dimension \(k\). Since \(G_{A}\) is not complete bipartite, we have \(k\leq m-4\). It follows that \(m\) and \(k\) have the same parity, so that \(m-k=2r\) for some positive integer \(r\geq 2\). By bipartiteness of \(G_{A}\) we may assume that its eigenvalues have the form \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{r}>0=\lambda_{r+1}=\ldots= \lambda_{r+k}>-\lambda_{r}\geq\ldots\geq-\lambda_{2}\geq-\lambda_{1}\), so that \(\lambda_{+}=\lambda_{r}\) and \(\lambda_{-}=-\lambda_{r}\). The well-known fact that \(\lambda_{1}\geq d\) trivially implies that \[\sum_{i=1}^{r}\lambda_{i}^{2}\geq d^{2}+(r-1)\lambda_{+}^{2}\ . \tag{6}\] It is well known that the sum of squares \(\sum_{i=1}^{m}\lambda_{i}^{2}=trace(A^{2})=md\), where \(d\) is the average valency of vertices of \(G_{A}\), that is, \(md/2\) is the number of edges in the graph \(G_{A}\) (cf. Bapat [5]).
Combined with the inequality \(\lambda_{1}\geq d\) used in (6), we obtain \[md=2\sum_{i=1}^{r}\lambda_{i}^{2}\geq 2d^{2}+2(r-1)\lambda_{+}^{2}=2d^{2}+(m-k -2)\lambda_{+}^{2} \tag{7}\] and evaluation of \(\lambda_{+}(A)\) from (7) gives \(\lambda_{+}(A)=-\lambda_{-}(A)\leq\sqrt{\frac{d(m-2d)}{m-k-2}}\) which implies the inequality (5) in our statement.

**Remark 4**.: _The estimate (5) is nearly optimal. For example, for the graph \(K_{m_{1},m_{1}}^{-e}\) we have \(m=2m_{1}\), \(d=m_{1}-\frac{1}{m_{1}}\) and \(k=m-4\), and (5) for these values gives \(\Lambda^{gap}(K_{m_{1},m_{1}}^{-e})\leq 2\sqrt{1-4/m^{2}}\), which is a slightly worse estimate than the one derived in the analysis of the spectrum of \(K_{m_{1},m_{1}}^{-e}\)._

Finally, we show that the maximal (minimal) eigenvalue can increase (decrease) by adding one vertex to the original graph.

**Proposition 7**.: _Assume \(G_{A}\) is a simple connected graph on \(m\) vertices with the maximal and minimal eigenvalues \(\lambda_{max}(A)\), and \(\lambda_{min}(A)\). Then the graph \(G_{\mathscr{A}}\) on \(m+1\) vertices constructed from \(G_{A}\) by adding one vertex connected to each of the vertices of \(G_{A}\) has the maximal eigenvalue satisfying_ \[\lambda_{max}(\mathscr{A})\geq\frac{\lambda_{max}(A)+\sqrt{(\lambda_{max}(A))^ {2}+4}}{2}.\] _Similarly, there exists a vertex \(i_{0}\) of \(G_{A}\) such that the graph \(G_{\mathscr{A}}\) on \(m+1\) vertices constructed from \(G_{A}\) by adding a pendant vertex to the vertex \(i_{0}\) has the minimal eigenvalue satisfying the estimate_ \[\lambda_{min}(\mathscr{A})\leq\frac{\lambda_{min}(A)-\sqrt{(\lambda_{min}(A))^{ 2}+4/m}}{2}.\]

Proof.: The sum of all eigenvalues of the symmetric matrix \(A\) is zero because the trace of \(A\) is zero. Hence \(\lambda_{min}(A)<0<\lambda_{max}(A)\). Let \(\mathscr{A}\) be the \((m+1)\times(m+1)\) adjacency matrix of the graph \(G_{\mathscr{A}}\) obtained from \(G_{A}\) by adding a vertex connected to a subset of vertices of \(G_{A}\). Its adjacency matrix \(\mathscr{A}\) has the block form \[\mathscr{A}=\left(\begin{array}{cc}A&e\\ e^{T}&0\end{array}\right), \tag{8}\] where \(e=(e_{1},\ldots,e_{m})^{T}\), \(e_{i}\in\{0,1\}\). The maximal eigenvalue \(\lambda_{max}(\mathscr{A})\) can be computed by means of the Rayleigh ratio, i.e. \[\lambda_{max}(\mathscr{A})=\max_{x\in\mathbb{R}^{m},\xi\in\mathbb{R}}\frac{(x ^{T},\xi)\left(\begin{array}{cc}A&e\\ e^{T}&0\end{array}\right)\left(\begin{array}{c}x\\ \xi\end{array}\right)}{|x|^{2}+\xi^{2}}=\max_{x\in\mathbb{R}^{m},\xi\in \mathbb{R}}\frac{x^{T}Ax+2(e^{T}x)\xi}{|x|^{2}+\xi^{2}},\] where \(|x|\) is the Euclidean norm of the vector \(x\). Let \(\hat{x}\) be a unit eigenvector corresponding to the maximal eigenvalue \(\lambda_{max}(A)\), that is, \(A\hat{x}=\lambda_{max}(A)\hat{x}\). Then \[\lambda_{max}(\mathscr{A})\geq\max_{\xi\in\mathbb{R}}\frac{\lambda_{max}(A)+2 (e^{T}\hat{x})\xi}{1+\xi^{2}}=\lambda_{max}(A)\max_{\xi\in\mathbb{R}}\frac{1+ \alpha\xi}{1+\xi^{2}},\] where \(\alpha=2(e^{T}\hat{x})/\lambda_{max}(A)\). Let us introduce the auxiliary function \(\psi:\mathbb{R}\rightarrow\mathbb{R}\), \(\psi(\xi)=(1+\alpha\xi)/(1+\xi^{2})\), where \(\alpha\in\mathbb{R}\) is a parameter. Using the first-order necessary condition it is easy to verify that the maximum of the function \(\psi\) is attained at \(\xi=(-1+\sqrt{1+\alpha^{2}})/\alpha\).
As a consequence, we have \[\max_{\xi}\frac{1+\alpha\xi}{1+\xi^{2}}=\frac{1+\sqrt{1+\alpha^{2}}}{2}>0.\] Notice that the adjacency matrix contains only nonnegative elements. By the Perron-Frobenius theorem, an eigenvector corresponding to the maximal eigenvalue \(\lambda_{max}(A)\) can be chosen nonnegative, i.e. \(\hat{x}\geq 0\). Consider the vector \(e=(1,\ldots,1)^{T}\) consisting of ones. It corresponds to the new vertex connected to all the vertices of \(G_{A}\). Then \((e^{T}\hat{x})^{2}=(\hat{x}_{1}+\cdots+\hat{x}_{m})^{2}\geq|\hat{x}|^{2}=1\) because all \(\hat{x}_{i}\) are nonnegative. Inserting the parameter \(\alpha^{2}=4(e^{T}\hat{x})^{2}/(\lambda_{max}(A))^{2}\geq 4/(\lambda_{max}(A))^{2}\) we obtain \(\lambda_{max}(\mathscr{A})\geq\frac{1}{2}(\lambda_{max}(A)+\sqrt{(\lambda_{max}(A))^{2}+4})\), as claimed.

Similarly, let \(\bar{x}\) be the unit eigenvector corresponding to the minimal eigenvalue \(\lambda_{min}(A)\), that is, \(A\bar{x}=\lambda_{min}(A)\bar{x},|\bar{x}|=1\). Let \(i_{0}\) be the index such that \(|\bar{x}_{i_{0}}|=\max_{i}|\bar{x}_{i}|\). Since \(|\bar{x}|=1\) we have \(|\bar{x}_{i_{0}}|\geq 1/\sqrt{m}\). Assume that the graph \(G_{\mathscr{A}}\) is constructed from \(G_{A}\) by adding one vertex connected to the vertex \(i_{0}\), that is, \(e=(e_{1},\ldots,e_{m})^{T}\) with \(e_{i_{0}}=1\) and \(e_{i}=0\) for \(i\neq i_{0}\). Then \((e^{T}\bar{x})^{2}=(\bar{x}_{i_{0}})^{2}\geq 1/m\). Hence \[\lambda_{min}(\mathscr{A})=\min_{x\in\mathbb{R}^{m},\xi\in\mathbb{R}}\frac{x^{T}Ax+2(e^{T}x)\xi}{|x|^{2}+\xi^{2}}\leq\min_{\xi\in\mathbb{R}}\frac{\lambda_{min}(A)+2(e^{T}\bar{x})\xi}{1+\xi^{2}}=\lambda_{min}(A)\max_{\xi\in\mathbb{R}}\frac{1+\alpha\xi}{1+\xi^{2}}\] because \(\lambda_{min}(A)<0\). Here \(\alpha=2(e^{T}\bar{x})/\lambda_{min}(A)\). Since \((\bar{x}_{i_{0}})^{2}\geq 1/m\), we obtain \[\lambda_{min}(\mathscr{A})\leq\lambda_{min}(A)\frac{1+\sqrt{1+\alpha^{2}}}{2}\leq\frac{\lambda_{min}(A)-\sqrt{(\lambda_{min}(A))^{2}+4/m}}{2},\] and the proof of the proposition follows.

## 3 Statistical properties of indices

The purpose of this section is to report statistical results on the maximal (minimal) eigenvalues and the indices for the class of all simple connected graphs on \(m\leq 10\) vertices. In Table 2 the operators \(E,\sigma,\mathcal{S}\), and \(\mathcal{K}\) represent the mean value, standard deviation, skewness, and kurtosis of the corresponding sets of eigenvalues \(\lambda_{max}\) and \(\lambda_{min}\), respectively. For larger \(m\) the skewness \(\mathcal{S}(\lambda_{max})\) approaches zero and the kurtosis \(\mathcal{K}(\lambda_{max})\) tends to \(3\), meaning that the distribution of maximal eigenvalues of all simple connected graphs on \(m\) vertices approaches a normal distribution as \(m\) increases. The skewness \(\mathcal{S}(\lambda_{min})<0\) is negative and the kurtosis \(\mathcal{K}(\lambda_{min})>3\), meaning that the distribution of minimal eigenvalues of connected graphs on \(m\) vertices is skewed to the left. It has fat tails (a leptokurtic distribution) because its excess kurtosis \(\mathcal{K}(\lambda_{min})-3>0\) remains positive as \(m\) increases.

We employed the list of all simple connected graphs due to B. McKay, which is available at the repository [33]. We calculated the spectra of all graphs and the corresponding indices. Calculating the indices for \(m=10\) is a computationally demanding task, since the number of all simple connected graphs of this order, \(11716571\), is very large.
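For small orders, the computation behind Table 2 is easy to reproduce; the sketch below does so for \(m=7\) using the NetworkX graph atlas. The formal index definitions appear earlier in the paper; here we assume \(\Lambda^{gap}=\lambda_{+}-\lambda_{-}\), \(\Lambda^{ind}=\max(\lambda_{+},-\lambda_{-})\), and \(\Lambda^{pow}=\sum_{i}|\lambda_{i}|\), forms consistent with the values reported in Table 2 (and the kurtosis is taken as the ordinary, non-excess one):

```python
import numpy as np
import networkx as nx
from scipy.stats import skew, kurtosis

def spectral_indices(G):
    lam = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    lam_plus = lam[lam > 1e-9].min()    # smallest positive eigenvalue
    lam_minus = lam[lam < -1e-9].max()  # largest negative eigenvalue
    gap = lam_plus - lam_minus          # assumed form of Lambda^gap
    ind = max(lam_plus, -lam_minus)     # assumed form of Lambda^ind
    power = np.abs(lam).sum()           # assumed form of Lambda^pow (graph energy)
    return lam[-1], lam[0], gap, ind, power

m = 7
# nx.graph_atlas_g() lists all graphs on up to 7 vertices
graphs = [G for G in nx.graph_atlas_g()
          if G.number_of_nodes() == m and nx.is_connected(G)]
print(len(graphs))                      # 853 connected graphs, as in Table 2

data = np.array([spectral_indices(G) for G in graphs])
lmax = data[:, 0]
print(lmax.mean(), lmax.std(), skew(lmax), kurtosis(lmax, fisher=False))
# expected, per the m = 7 column of Table 2: ~3.4856, ~0.6562, ~0.2855, ~3.0804
print(data[:, 2].min(), data[:, 4].max())  # min Lambda^gap and max Lambda^pow
```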
To our knowledge, a consolidated list of connected nonisomorphic graphs is not available for orders \(m\geq 11\). Interestingly enough, for the values of \(m\leq 7\) the maximum value of \(\Lambda^{pow}\) is achieved for the complete graph \(K_{m}\) with the eigenvalues \(\{m-1,-1,\ldots,-1\}\) and the maximal value \(\Lambda^{pow}=2m-2\). For \(m=7\) there are exactly two graphs with the same maximal value \(\Lambda^{pow}=12\). The noncomplete maximizing graph with eigenvalues \(\{5,1,-1,-1,-1,-1,-2\}\) is shown in Fig. 4. Starting from the order \(m=8\), the maximal value of \(\Lambda^{pow}\) is attained for the noncomplete graphs shown in Fig. 5. In Fig. 6 we show graphs on \(5\leq m\leq 10\) minimizing \(\Lambda^{gap}\). Path graphs \(P_{m}\) minimize \(\Lambda^{gap}\) and \(\Lambda^{ind}\) for \(m=2,3,4\) (see Table 2). In Fig. 7 we show graphs on \(m=6,7,9,10\) minimizing \(\Lambda^{ind}\). For \(m=5,8\) the minimizing graphs are the same as those for \(\Lambda^{gap}\) shown in Fig. 6 (see Table 2).

\begin{table} \begin{tabular}{l||l|l|l|l|l|l|l|l|l} \hline \(m\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ total \# & 1 & 2 & 6 & 21 & 112 & 853 & 11117 & 261080 & 11716571 \\ \hline \hline \(E(\lambda_{max})\) & 1 & 1.7071 & 2.1802 & 2.6417 & 3.0582 & 3.4856 & 3.9288 & 4.4001 & 4.8895 \\ \(\sigma(\lambda_{max})\) & 0 & 0.4142 & 0.5228 & 0.5968 & 0.6368 & 0.6562 & 0.6595 & 0.6529 & 0.6471 \\ \(\mathcal{S}(\lambda_{max})\) & - & 0 & 0.5096 & 0.5171 & 0.4142 & 0.2855 & 0.1536 & 0.0608 & 0.0132 \\ \(\mathcal{K}(\lambda_{max})\) & - & 1 & 1.9715 & 2.6351 & 2.9901 & 3.0804 & 3.0578 & 3.0313 & 3.0096 \\ max\((\lambda_{max})\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ min\((\lambda_{max})\) & 1 & 1.4142 & 1.6180 & 1.7321 & 1.8019 & 1.8478 & 1.8794 & 1.9021 & 1.9190 \\ \hline \(E(\lambda_{min})\) & -1 & -1.2071 & -1.5655 & -1.7911 & -2.0302 & -2.2264 & -2.4191 & -2.6018 & -2.7756 \\ \(\sigma(\lambda_{min})\) & 0 & 0.2929 & 0.3305 & 0.2981 & 0.3012 & 0.2995 & 0.2994 & 0.2915 & 0.2832 \\ \(\mathcal{S}(\lambda_{min})\) & - & 0 & 0.5740 & 0.2506 & -0.4079 & -0.5438 & -0.4937 & -0.4121 & -0.3927 \\ \(\mathcal{K}(\lambda_{min})\) & - & 1 & 2.7899 & 4.2278 & 4.1917 & 3.5318 & 3.3933 & 3.3626 & 3.3289 \\ max\((\lambda_{min})\) & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ min\((\lambda_{min})\) & -1 & -1.4142 & -2 & -2.4495 & -3 & -3.4641 & -4 & -4.4721 & -5 \\ \hline max\((\Lambda^{gap})\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ min\((\Lambda^{gap})\) & 2 & 2.8284 & 1.2360 & 1.0806 & 0.7423 & 0.6390 & 0.3468 & 0.2834 & 0.1565 \\ max\((\Lambda^{ind})\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ min\((\Lambda^{ind})\) & 1 & 1.4142 & 0.6180 & 0.6180 & 0.4142 & 0.3573 & 0.1826 & 0.1502 & 0.0841 \\ max\((\Lambda^{pow})\) & 2 & 4 & 6 & 8 & 10 & 12 & 14.3253 & 17.0600 & 20 \\ min\((\Lambda^{pow})\) & 2 & 2.8284 & 3.4642 & 4.0000 & 4.4722 & 4.8990 & 5.2916 & 5.6568 & 6.0000 \\ \hline \end{tabular} \end{table} Table 2: Descriptive statistics of the maximal (minimal) eigenvalues \(\lambda_{max}\) (\(\lambda_{min}\)), spectral gap \(\Lambda^{gap}\), spectral index \(\Lambda^{ind}\), and spectral power \(\Lambda^{pow}\) for all simple connected graphs on \(m\leq 10\) vertices.

Figure 2: Histograms of the distribution of maximal (top row) and minimal (bottom row) eigenvalues for all simple connected graphs on \(7\leq m\leq 9\) vertices. For their statistical properties, see Table 2.
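The two \(m=7\) maximizers of \(\Lambda^{pow}\) mentioned above can be located with the same atlas enumeration (a self-contained sketch; \(\Lambda^{pow}=\sum_{i}|\lambda_{i}|\) is again assumed):

```python
import numpy as np
import networkx as nx

def lambda_pow(G):
    """Assumed Lambda^pow: sum of absolute values of the adjacency eigenvalues."""
    return float(np.abs(np.linalg.eigvalsh(nx.to_numpy_array(G))).sum())

graphs = [G for G in nx.graph_atlas_g()
          if G.number_of_nodes() == 7 and nx.is_connected(G)]
best = max(lambda_pow(G) for G in graphs)
maximizers = [G for G in graphs if abs(lambda_pow(G) - best) < 1e-6]
print(best, len(maximizers))  # 12.0 and 2: K_7 and the noncomplete graph of Fig. 4
for G in maximizers:
    # one spectrum is {6, -1, ..., -1}, the other {5, 1, -1, -1, -1, -1, -2}
    print(np.round(np.linalg.eigvalsh(nx.to_numpy_array(G)), 4))
```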
Figure 4: The noncomplete graph on \(m=7\) vertices with eigenvalues \(\{5,1,-1,-1,-1,-1,-2\}\) maximizing the value \(\Lambda^{pow}=12\) in the class of all simple connected graphs of order \(m=7\).

Figure 3: Histograms of the distribution of \(\Lambda^{gap}\) (top row), \(\Lambda^{ind}\) (middle row), and \(\Lambda^{pow}\) (bottom row) for all simple connected graphs on \(7\leq m\leq 9\) vertices. For their statistical properties, see Table 2.

Figure 5: Noncomplete graphs on \(8\leq m\leq 10\) vertices maximizing \(\Lambda^{pow}\), for which \(\Lambda^{pow}\) exceeds the value \(2m-2\) attained by the complete graph \(K_{m}\). For the values of \(\Lambda^{pow}\) see Table 2.

Figure 6: Graphs on \(5\leq m\leq 10\) minimizing \(\Lambda^{gap}\). For the values of \(\Lambda^{gap}\) see Table 2.

Figure 7: Graphs on \(5\leq m\leq 10\) minimizing \(\Lambda^{ind}\). For the values of \(\Lambda^{ind}\) see Table 2.

## 4 Conclusions

In this paper we analyzed the spectral properties of simple connected graphs, focusing our attention on the class of complete multipartite graphs. We presented results on the density of values of the spectral gap index and on its nonpersistence with respect to small perturbations of the underlying graph. We also analyzed the spectral properties of graphs different from complete multipartite graphs, and we presented a statistical and numerical analysis of the indices \(\Lambda^{gap},\Lambda^{ind}\), and \(\Lambda^{pow}\) for graphs of order \(m\leq 10\).

## Acknowledgments

Support of the Slovak Research and Development Agency under the projects APVV-19-0308 (SP, JS) and APVV-20-0311 (DS) is kindly acknowledged.
2302.05646
Low-energy scatterings and pseudopotential of polarized quadrupoles
We investigate the low-energy scattering properties of two identical particles interacting via the polarized quadrupolar interaction. It is shown that a series of $s$- and $p$-wave resonances appear for identical bosons and fermions, respectively, as the strength of the quadrupolar interaction increases. Interestingly, scattering resonances also appear on the generalized scattering length corresponding to the coupling between the $s$ and $d$ waves. This observation inspires us to propose a new pseudopotential for the quadrupolar interaction. We also explore the bound-state properties of two particles in both free space and harmonic traps.
Fulin Deng, Wenxian Zhang, Su Yi
2023-02-11T10:19:15Z
http://arxiv.org/abs/2302.05646v2
# Low-energy scatterings and pseudopotential of polarized quadrupoles

###### Abstract

We investigate the low-energy scattering properties of two identical particles interacting via the polarized quadrupolar interaction. It is shown that a series of \(s\)- and \(p\)-wave resonances appear for identical bosons and fermions, respectively, as the strength of the quadrupolar interaction increases. Interestingly, scattering resonances also appear on the generalized scattering length corresponding to the coupling between the \(s\) and \(d\) waves. This observation inspires us to propose a new pseudopotential for the quadrupolar interaction. We also explore the bound-state properties of two particles in both free space and harmonic traps.

## I Introduction

Interatomic interactions in ultracold atomic gases are of fundamental importance in determining the properties of the systems. At ultralow temperatures, the van der Waals force between two neutral atoms can be described by a contact potential characterized by a single \(s\)-wave scattering length. Such a simplification has gained great success in cold atomic physics (see, e.g., Refs. [1; 2]). In addition to the isotropic contact interaction, the long-range and anisotropic dipole-dipole interaction may become dominant for particles with large magnetic moments, giving rise to dipolar quantum gases [3; 4; 5; 6; 7; 8]. For the mean-field treatment of dipolar gases, an important step is to identify that the interaction between two identical bosonic dipoles can be modeled by a pseudopotential consisting of an \(s\)-wave contact potential and a bare dipole-dipole interaction potential whose scattering amplitude reproduces that of the real potential away from scattering resonances [9; 10; 11].

Another interesting platform for exploring the novel properties of interactions is the ultracold gases consisting of particles possessing a large quadrupole moment, such as alkaline-earth and rare-earth atoms in the metastable \({}^{3}P_{2}\) states [12; 13; 14; 15; 16; 17; 18; 19; 20] and homonuclear diatomic molecules [21; 22; 23; 24]. Following the approach for the dipolar interaction, a widely used pseudopotential for the quadrupole-quadrupole interaction (QQI) between two polarized (along the \(z\) axis) bosonic quadrupoles is [25; 26; 27; 28] \[\mathcal{V}_{\rm qq}(\mathbf{r})=\frac{4\pi\hbar^{2}a_{00}}{M}\delta(\mathbf{r})+g_{Q}\frac{Y_{40}(\hat{\mathbf{r}})}{r^{5}}, \tag{1}\] where \(M\) is the mass of the bosons, \(a_{00}\) is the \(s\)-wave scattering length, \(g_{Q}=\Theta^{2}/(\sqrt{\pi}\varepsilon_{0})\) is the QQI strength with \(\Theta\) being the electric quadrupole moment and \(\varepsilon_{0}\) the vacuum permittivity, \(r=|\mathbf{r}|\), and \(\hat{\mathbf{r}}=\mathbf{r}/r\). In a relevant study, Pikovski considered the general form of the anisotropic interaction [29].

Armed with the pseudopotential Eq. (1), various properties of ultracold quadrupolar Bose gases were investigated. In particular, Li _et al._ studied the shapes, stability, mobility, and collisions of solitons of quadrupolar gases in two-dimensional lattices [25]. Lahrz _et al._ explored the exotic roton excitations in a two-dimensional quadrupolar Bose-Einstein condensate (BEC) [26]. Andreev studied the Bogoliubov spectrum of a BEC in the presence of both dipolar and quadrupolar interactions [27]. Wang and Yi investigated the ground-state properties and stability of quadrupolar condensates by numerically solving the Gross-Pitaevskii equation [28].
Furthermore, Bhongale _et al._ showed that the QQI might lead to unconventional Bardeen-Cooper-Schrieffer and charge-density-wave phases for quadrupolar Fermi gases trapped in a 2D square optical lattice [30]. Huang _et al._ found that quadrupolar Fermi gases in coupled one-dimensional tubes support the triplet superfluid and spin-density-wave phases [31]. More recently, using the diffusion Monte Carlo technique, Astrakharchik _et al._ predicted a quantum phase transition from a gas to a solid phase in a two-dimensional Bose system with quadrupolar interactions [32]. For the experimental detection of quadrupolar effects, Lahrz _et al._ proposed to measure the mean-field induced frequency shift in a two-dimensional optical square lattice of Yb or Sr atoms in the \({}^{3}P_{2}\) state [33]. Interestingly, Han _et al._ experimentally observed the quadrupolar blockade in a gas of Rb atoms [34].

Although the pseudopotential Eq. (1) is widely used, its validity has not been strictly checked through scattering calculations [9; 10; 11]. In this work, we study the low-energy scattering of two identical particles interacting via a simple model potential consisting of a hard core and a bare QQI. We show that, similarly to dipolar scattering [35; 36; 11], a series of broad resonances appear on the \(s\)-wave scattering length as the quadrupolar interaction strength increases. Interestingly, sequences of broad resonances also appear on the generalized scattering lengths of the \(p\) wave for fermions and of the \(s\)-\(d\) wave coupling for bosons, which is in striking contrast to dipolar scattering. We further show that the scattering amplitudes for higher partial waves with incoming (\(l\)) and outgoing (\(l^{\prime}\)) channels satisfying \(l+l^{\prime}\geq 4\) are determined by the first Born approximation. These observations inspire us to propose new pseudopotentials for the quadrupolar interaction which incorporate the anisotropic short-range contributions due to the \(p\)-wave scattering for fermions and the \(s\)-\(d\) wave coupling for bosons. Finally, to further understand the scattering resonances, we also study the bound-state properties of two particles in both free space and harmonic traps.

The rest of this paper is organized as follows. In Sec. II, we introduce the model potential and give a brief analysis of the threshold behavior of the quadrupolar scattering. In Sec. III, we present results on the generalized scattering lengths and the bound-state properties in the vicinity of collision resonances. We also propose a new pseudopotential for the quadrupolar interaction between identical bosons or fermions. Finally, we conclude in Sec. IV.

## II Formulation

We consider the collisions between two identical polarized quadrupoles. The interaction potential is modeled as \[V_{\rm model}({\bf r})=\left\{\begin{array}{ll}g_{Q}Y_{40}(\hat{\bf r})/r^{5},&\mbox{for $r>r_{c}$,}\\ \infty,&\mbox{for $r\leq r_{c}$,}\end{array}\right. \tag{2}\] where we have introduced a short-distance truncation \(r_{c}\) such that the interaction potential is a hard sphere for \(r\leq r_{c}\) and a pure quadrupolar interaction for \(r>r_{c}\). Apparently, the QQI is anisotropic as it depends on the polar angle \(\theta\) of \(\hat{\bf r}\). In particular, the QQI is repulsive along \(\theta=0^{\circ}\) and \(90^{\circ}\) and is most attractive along \(\theta=49.1^{\circ}\).
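This angular structure follows directly from the explicit form \(Y_{40}(\hat{\bf r})=\frac{3}{16\sqrt{\pi}}(35\cos^{4}\theta-30\cos^{2}\theta+3)\), whose minimum lies at \(\cos^{2}\theta=3/7\). A minimal numerical check (a sketch in Python; the function and variable names are ours and involve no scattering machinery):

```python
import numpy as np

def y40(theta):
    """Y_40(theta) = (3/(16*sqrt(pi))) * (35*cos(theta)**4 - 30*cos(theta)**2 + 3)."""
    c = np.cos(theta)
    return 3.0 / (16.0 * np.sqrt(np.pi)) * (35 * c**4 - 30 * c**2 + 3)

theta = np.linspace(0.0, np.pi / 2, 200001)
print(y40(0.0) > 0, y40(np.pi / 2) > 0)          # True, True: repulsive directions
print(np.degrees(theta[np.argmin(y40(theta))]))  # ~49.107 deg: most attractive
print(np.degrees(np.arccos(np.sqrt(3 / 7))))     # analytic minimum, cos^2(theta) = 3/7
```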
We point out that quadrupolar interactions were also involved in the studies of cold collisions between metastable alkaline-earth atoms [37; 38; 39]. The quadrupolar scatterings are described by the Schrödinger equation \[\left[-\frac{\hbar^{2}\nabla^{2}}{2\mu}+V_{\rm model}({\bf r})\right]\psi({\bf r})=E\psi({\bf r}), \tag{3}\] where \(\mu=M/2\) is the reduced mass of the colliding atoms and \(E=\hbar^{2}k^{2}/(2\mu)\) is the incident energy with \(k=|{\bf k}|\) being the incident momentum. To proceed, we expand the scattering wave function in terms of the partial waves, i.e., \(\psi({\bf r})=\sum_{lm}r^{-1}\phi_{lm}(r)Y_{lm}(\hat{\bf r})\), where \(l\) is the orbital angular momentum quantum number and \(m\) is the projection quantum number. After substituting the partial-wave expansion into Eq. (3), we obtain, for \(r>r_{c}\), the coupled equations \[\left[\frac{d^{2}}{dr^{2}}-\frac{l(l+1)}{r^{2}}+k^{2}\right]\phi_{lm}-\frac{2\bar{g}_{Q}}{r^{5}}\sum_{l^{\prime}m^{\prime}}\zeta_{lm}^{l^{\prime}m^{\prime}}\phi_{l^{\prime}m^{\prime}}=0, \tag{4}\] where \(\bar{g}_{Q}=g_{Q}\mu/\hbar^{2}\) characterizes the quadrupolar interaction strength and has the dimension of volume, and \[\begin{split}\zeta_{lm}^{l^{\prime}m^{\prime}}&=\int d\hat{r}Y_{lm}^{*}(\hat{\bf r})Y_{40}(\hat{\bf r})Y_{l^{\prime}m^{\prime}}(\hat{\bf r})=(-1)^{m}\\ &\times\sqrt{\frac{9(2l+1)(2l^{\prime}+1)}{4\pi}}\begin{pmatrix}l^{\prime}&4&l\\ -m^{\prime}&0&m\end{pmatrix}\begin{pmatrix}l^{\prime}&4&l\\ 0&0&0\end{pmatrix}.\end{split} \tag{5}\]

Because the model potential \(V_{\rm model}\) conserves the \(z\) component of the orbital angular momentum, \(\zeta_{lm}^{l^{\prime}m^{\prime}}=0\) if \(m\neq m^{\prime}\). Without loss of generality, we shall restrict ourselves to the \(m=0\) case, as scattering channels with different \(m\) quantum numbers can be treated separately. Moreover, the \(3j\) symbols in Eq. (5) imply that \(\zeta_{lm}^{l^{\prime}m^{\prime}}\) is nonzero only when \(|l-l^{\prime}|\leq 4\leq l+l^{\prime}\) and \(l+l^{\prime}\) is even. As a result, the QQI does not directly contribute to the lowest partial waves, since \(\zeta_{00}^{00}=\zeta_{00}^{20}=0\) for bosons and \(\zeta_{10}^{10}=0\) for fermions.

At sufficiently large \(r\), the scattering wave functions satisfy the asymptotic boundary conditions \[\phi_{l0}(r)\xrightarrow{r\to\infty}Y_{l0}^{*}(\hat{\bf k})rj_{l}(kr)-\sum_{l^{\prime}}K_{l0}^{l^{\prime}0}(k)Y_{l^{\prime}0}^{*}(\hat{\bf k})rn_{l^{\prime}}(kr), \tag{6}\] where \(j_{l}(x)\) and \(n_{l}(x)\) are the spherical Bessel and spherical Neumann functions, respectively, and \(K_{l0}^{l^{\prime}0}\) is the element of the \(K\) matrix corresponding to the incoming channel (\(l0\)) and the outgoing channel (\(l^{\prime}0\)). Physically, the first term on the right-hand side of Eq. (6) is the free spherical-wave solution, and the second one accounts for the contributions of the interaction. For a spherical potential, the \(K\) matrix elements reduce to the familiar form \(K_{l0}^{l^{\prime}0}=\delta_{ll^{\prime}}\tan\delta_{l}(k)\) with \(\delta_{l}(k)\) being the phase shift. For a \(1/r^{n}\)-type anisotropic interaction, however, \(K_{l0}^{l^{\prime}0}\) behaves in the low-energy limit (\(k\to 0\)) as \(k^{l+l^{\prime}+1}\) if \(l+l^{\prime}<n-3\) and as \(k^{n-2}\) otherwise [40].
Consequently, in the low-energy limit, the generalized scattering lengths are defined as \[a_{ll^{\prime}}\equiv a_{l0}^{l^{\prime}0}=\left\{\begin{array}{ll}-\lim_{k\to 0}k^{-1}K_{00}^{00},&\mbox{for $l=l^{\prime}=0$,}\\ -\lim_{k\to 0}k^{-3}K_{l0}^{l^{\prime}0},&\mbox{otherwise,}\end{array}\right. \tag{7}\] where \(a_{00}\) has the dimension of length and all other \(a_{ll^{\prime}}\)'s have the dimension of volume. To find the generalized scattering lengths, we numerically integrate Eq. (4) from \(r=r_{c}\) up to \(10^{4}r_{c}\) using Johnson's log-derivative propagator method [41]. The \(K\) matrix elements can then be obtained by matching the scattering wave function with the asymptotic boundary conditions Eq. (6), which subsequently leads to the generalized scattering lengths.

## III Results

Before we present the results on the generalized scattering lengths and the bound states, let us first specify the interaction parameters covered in this work. The typical quadrupole moment of alkaline-earth atoms and homonuclear molecules is about \(10-40\) a.u. [42; 43; 44; 45; 46; 47] and, without loss of generality, we choose \(r_{c}=100\) a.u. Then the dimensionless quadrupolar interaction strength \(\bar{g}_{Q}/r_{c}^{3}\) can be as large as 1000 for the Yb atom in the \({}^{3}P_{2}\) state (\(\Theta=30\) a.u. [46]), which, as shall be shown, is sufficiently large for the experimental observation of the quadrupolar scattering effects. It can be estimated that the magnetic dipole-dipole interaction between Yb atoms is much smaller than the QQI within the interatomic distance of a few hundred Bohr radii, the range in which the atomic collision takes place [38]. We therefore neglect the magnetic dipole-dipole interaction in our calculations.

Numerically, we solve Eq. (3) with the incident energies \(E/E_{r_{c}}=4\times 10^{-3}\), \(4\times 10^{-4}\), and \(4\times 10^{-6}\), where \(E_{r_{c}}=\hbar^{2}/(\mu r_{c}^{2})\) is a characteristic energy associated with \(r_{c}\). For Yb atoms, these incident energies correspond to temperatures \(8\times 10^{-7}\), \(8\times 10^{-8}\), and \(8\times 10^{-10}\,\mathrm{K}\), respectively. It is found numerically that the generalized scattering lengths quickly converge as the collision energy \(E\) is lowered. Finally, for practical purposes, we introduce a truncation \(l_{\mathrm{cut}}\) for \(l\) in the numerical calculations. It turns out that, for all results presented in this work, \(l_{\mathrm{cut}}=34\) for bosons and \(35\) for fermions are sufficient to ensure the convergence of the scattering wave functions.

### Generalized scattering lengths

Let us first consider the scatterings between two identical bosons. Figure 1(a) plots \(a_{00}\) for two identical bosons as a function of the quadrupolar interaction strength \(\bar{g}_{Q}\). As can be seen, \(a_{00}\) exhibits a series of resonances as \(\bar{g}_{Q}\) increases, which means that there is an effective attractive potential despite \(\zeta_{00}^{00}=0\), in analogy to the situation in dipolar scattering. The appearance of these resonances implies that zero-energy bound states continue to emerge as the attractive interaction is deepened.
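To make the matching procedure of Eqs. (6) and (7) concrete, the toy sketch below treats a single channel with the QQI switched off, where the hard sphere gives \(a_{00}=r_{c}\) exactly (the units with \(r_{c}=1\) and all variable names are our own; the paper's actual multichannel solver uses the log-derivative propagator):

```python
import numpy as np
from scipy.integrate import solve_ivp

r_c, k = 1.0, 1e-3                 # hard-core radius and a small incident momentum

def radial(r, y):
    u, du = y
    return [du, -k**2 * u]         # u'' = -k^2 u for r > r_c (the potential vanishes there)

R = 1.0e4 * r_c                    # propagate out to 10^4 r_c, as in the text
sol = solve_ivp(radial, [r_c, R], [0.0, 1.0], rtol=1e-10, atol=1e-12)
u, du = sol.y[0, -1], sol.y[1, -1]

# Match u(R) = A sin(kR)/k + B cos(kR)/k, i.e. the r j_0(kr) and -r n_0(kr)
# components of Eq. (6), so that K_00 = B/A
s, c = np.sin(k * R), np.cos(k * R)
A = k * u * s + du * c
B = k * u * c - du * s
a00 = -(B / A) / k                 # Eq. (7) evaluated at a small but finite k
print(a00)                         # ~1.0000003, i.e. a_00 -> r_c in the k -> 0 limit
```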
The positions of the resonances can then be estimated using the Wentzel-Kramers-Brillouin (WKB) phase of the adiabatic potential curves [48; 49; 50], i.e., \[\phi_{\mathrm{WKB}}^{(i)}(\bar{g}_{Q})=\int_{r_{c}}^{R}dr\sqrt{-2\mu V_{\mathrm{ad}}^{(i)}(r)/\hbar^{2}}, \tag{8}\] where \(V_{\mathrm{ad}}^{(i)}\) is the \(i\)th adiabatic potential curve obtained by diagonalizing \(V_{\mathrm{model}}+\mathbf{L}^{2}/(2\mu r^{2})\) in the partial-wave basis. Here \(\mathbf{L}\) is the orbital angular momentum operator. In Eq. (8), the upper bound of the integral is \(R=\infty\) for the adiabatic curves without a barrier at zero collision energy; otherwise, \(R\) represents the classical turning point of the barrier. In Fig. 1(e) and (f), we plot the lowest six adiabatic potential curves for the bosons with \(\bar{g}_{Q}/r_{c}^{3}=100\) and \(800\), respectively. As can be seen, the lowest adiabatic curve is indeed attractive and becomes deepened as \(\bar{g}_{Q}\) is increased. Except for the lowest adiabatic curve, all higher-lying adiabatic curves possess an energy barrier due to the centrifugal potential. A zero-energy bound state emerges whenever \(\phi_{\mathrm{WKB}}^{(i)}+\pi/4\) passes through an integer multiple of \(\pi\). The markers at the top of Fig. 1(a) denote the positions estimated using the WKB phases. More specifically, the circles (\(\Circle\)), squares (\(\Box\)), and triangles (\(\triangle\)) are obtained using the lowest, first-excited, and second-excited adiabatic potential curves, respectively. Following the convention, the resonances associated with the adiabatic curves without a centrifugal barrier are termed potential resonances; otherwise, they are called shape resonances. Although the resonance positions are not predicted accurately due to the anisotropy and long-range nature of the QQI, the WKB phase estimation captures all the resonances for the range of interaction strengths covered in the figure. In addition, it helps to understand the origin of the broad and narrow resonances. Namely, the adiabatic curves without a centrifugal barrier lead to broad resonances, while the higher-lying adiabatic curves induce narrow shape resonances, as the atoms must tunnel through the centrifugal barrier.

Figure 1: Generalized scattering lengths \(a_{00}/r_{c}\) (a), \(a_{02}/r_{c}^{3}\) (b), \(a_{04}/r_{c}^{3}\) (c), and \(a_{22}/r_{c}^{3}\) (d) of identical bosons as functions of \(\bar{g}_{Q}/r_{c}^{3}\). The scattering energies are \(E/E_{r_{c}}=4\times 10^{-3}\) (blue solid line), \(4\times 10^{-4}\) (red dashed line), and \(4\times 10^{-6}\) (black dotted line). The resonance positions shown in (a) are predicted by the WKB method using the lowest (\(\Circle\)), first-excited (\(\square\)), and second-excited (\(\triangle\)) adiabatic curves. The dash-dotted lines in (c) and (d) are from the Born approximation Eq. (10), which is in excellent agreement with the multichannel numerical calculations away from resonances. (e) and (f) show the lowest six adiabatic curves for \(\bar{g}_{Q}/r_{c}^{3}=100\) and \(800\), respectively.

We then turn to the generalized scattering lengths for the higher partial waves. In general, because, away from the shape resonances, the slowly moving particles can hardly tunnel through the centrifugal barrier, the scattering wave functions for higher partial waves are essentially undisturbed by the interaction potential. As a result, the generalized scattering lengths are mainly determined by the first Born approximation, just as has been seen in dipolar scattering.
According to the Born approximation, the \(K\) matrix elements can be expressed as \[\widetilde{K}_{l0}^{l^{\prime}0}=-2\bar{g}_{Q}\zeta_{l0}^{l^{\prime}0}k^{3}\int_{kr_{c}}^{\infty}\frac{d(kr)}{(kr)^{3}}j_{l^{\prime}}(kr)j_{l}(kr). \tag{9}\] Then, in the low-energy limit (\(k\to 0\)), the generalized scattering lengths in the first Born approximation become \[\tilde{a}_{ll^{\prime}}=-\lim_{k\to 0}k^{-3}\widetilde{K}_{l0}^{l^{\prime}0}=2\zeta_{l0}^{l^{\prime}0}\chi_{ll^{\prime}}\bar{g}_{Q}, \tag{10}\] where \[\chi_{ll^{\prime}}=\frac{48(-1)^{(l-l^{\prime})/2}(l+l^{\prime}-4)!!(l-l^{\prime}-5)!!}{(l+l^{\prime}+4)!!(l-l^{\prime}+3)!!}. \tag{11}\] Apparently, the first Born approximation gives rise to a linear dependence of the generalized scattering lengths on the interaction strength.

In Fig. 1(b)-(d), we present the \(\bar{g}_{Q}\) dependence of the generalized scattering lengths \(a_{ll^{\prime}}\) for higher partial waves. Let us first examine \(a_{04}\) and \(a_{22}\), for which we also plot the corresponding Born approximation results, i.e., Eq. (10), in Fig. 1(c) and (d), respectively. As can be seen, away from scattering resonances, both \(a_{04}\) and \(a_{22}\) are well described by the first Born approximation. In fact, this observation holds true for all partial waves satisfying \(l+l^{\prime}\geq 4\). We note that, since the particles are hardly scattered by the short-range potential, the first Born approximation result is contributed by the long-range part of the interaction. On the other hand, \(a_{02}\), as shown in Fig. 1(b), exhibits many resonances as \(\bar{g}_{Q}\) grows, in analogy to \(a_{00}\). Moreover, the resonance positions of \(a_{02}\) are identical to those of \(a_{00}\). The underlying reason is that \(\zeta_{00}^{20}=0\), such that the first Born approximation, or, equivalently, the long-range part of the interaction, does not contribute to \(a_{02}\). This behavior is in striking contrast to dipolar scattering.

The generalized scattering lengths for quadrupolar scatterings of two identical fermions are summarized in Fig. 2. Here \(a_{11}\) shows a sequence of resonances as \(\bar{g}_{Q}\) increases, in contrast to dipolar scattering, where \(a_{11}\) is mainly determined by the Born approximation. This result is again due to the vanishing coupling \(\zeta_{10}^{10}=0\), such that \(a_{11}\) is mostly determined by the short-range part of the interaction. It is worthwhile to mention that, as shown in Fig. 2(d) and (e), although the lowest adiabatic curve always has an energy barrier, the induced resonances are still very broad compared to those induced by the higher-lying adiabatic curves. Therefore, we shall refer to the resonances induced by the lowest adiabatic curve as broad resonances and to those induced by higher-lying adiabatic curves as narrow resonances. Finally, as shown in Fig. 2(b) and (c), the generalized scattering lengths \(a_{ll^{\prime}}\) for higher waves (\(l+l^{\prime}\geq 4\)) are mainly determined by the Born approximation, similar to the case for bosons.

Figure 2: Generalized scattering lengths \(a_{11}/r_{c}^{3}\) (a), \(a_{13}/r_{c}^{3}\) (b), and \(a_{33}/r_{c}^{3}\) (c) of identical fermions as functions of \(\bar{g}_{Q}/r_{c}^{3}\). The scattering energies are \(E/E_{r_{c}}=4\times 10^{-3}\) (blue solid line), \(4\times 10^{-4}\) (red dashed line), and \(4\times 10^{-6}\) (black dotted line).
The resonance positions shown in (a) are predicted by the WKB method using the lowest (\(\bigcirc\)), first-excited (\(\square\)), and second-excited (\(\triangle\)) adiabatic curves. The dash-dotted lines in (b) and (c) are from the Born approximation Eq. (10), which is in excellent agreement with the multichannel numerical calculations away from resonances. (d) and (e) show the lowest six adiabatic curves for \(\bar{g}_{Q}/r_{c}^{3}=100\) and 800, respectively. The insets are zoom-in plots of the lowest adiabatic curves, on which the centrifugal barriers appear.

### Pseudopotentials for quadrupolar interactions

From the scattering calculations, it is clear that the simple pseudopotential, Eq. (1), for the quadrupolar interaction is inappropriate, as it misses the contribution of \(a_{02}\). Here we construct a new quadrupolar pseudopotential by following the approach of Huang and Yang [51] and Derevianko [52]. To this end, we first note that the regularized zero-range pseudopotential can be generally expressed as \[\hat{\mathcal{V}}^{(\text{reg})}=\sum_{ll^{\prime}m}\hat{v}_{lm}^{l^{\prime}m}, \tag{12}\] where \(l\) and \(l^{\prime}\) are even (odd) for bosons (fermions) and \(\hat{v}_{lm}^{l^{\prime}m}\) are operators defined by their action on an arbitrary \(\mathbf{r}\)-dependent wave function \(\psi(\mathbf{r})\) [52], i.e., \[\hat{v}_{lm}^{l^{\prime}m}\psi(\mathbf{r})=g_{lm}^{l^{\prime}m}\frac{4\pi\delta(\mathbf{r})}{r^{l^{\prime}}}Y_{l^{\prime}m}(\hat{\mathbf{r}})\times\left[\frac{\partial^{2l+1}}{\partial r^{2l+1}}r^{l+1}\int Y_{lm}^{*}(\hat{\mathbf{r}})\psi(\mathbf{r})d\hat{\mathbf{r}}\right]_{r\to 0} \tag{13}\] with the coupling coefficients \(g_{lm}^{l^{\prime}m}\) being defined as \[g_{lm}^{l^{\prime}m}=-\frac{\hbar^{2}}{M}\frac{K_{lm}^{l^{\prime}m}}{k^{l+l^{\prime}+1}}\frac{(2l+1)!!(2l^{\prime}+1)!!}{(2l+1)!}.\] Since, away from resonances, the \(K\) matrix elements \(K_{lm}^{l^{\prime}m^{\prime}}\) with \(l+l^{\prime}\geq 4\) originate from the Born approximation, their contributions are completely covered by the bare quadrupolar interaction \(g_{Q}Y_{40}(\hat{\mathbf{r}})/r^{5}\). Then, based on the scattering calculations, we only need to take care of the \(K_{00}^{00}\), \(K_{00}^{20}\), and \(K_{20}^{00}\) terms for bosons and the \(K_{1m}^{1m}\) terms for fermions in the pseudopotential Eq. (12). Consequently, in the low-energy limit, the pseudopotentials for identical bosons and fermions can be straightforwardly written out as \[\hat{\mathcal{V}}_{\text{qq}}^{(B)}\psi(\mathbf{r})=\frac{4\pi\hbar^{2}a_{00}}{M}\delta(\mathbf{r})\frac{\partial}{\partial r}(r\psi)+\frac{\sqrt{4\pi}\hbar^{2}a_{02}}{M}\delta(\mathbf{r})\times\left[\frac{60\pi}{r^{2}}Y_{20}(\hat{\mathbf{r}})\frac{\partial}{\partial r}(r\psi)+\frac{1}{8}\frac{\partial^{5}}{\partial r^{5}}r^{3}\!\!\int\!\!d\hat{\mathbf{r}}Y_{20}(\hat{\mathbf{r}})\psi\right]+g_{Q}\frac{Y_{40}(\hat{\mathbf{r}})}{r^{5}}\psi \tag{14}\] and \[\hat{\mathcal{V}}_{\text{qq}}^{(F)}\psi(\mathbf{r})=\sum_{m}\frac{6\pi\hbar^{2}a_{1m}^{1m}}{M}\frac{\delta(\mathbf{r})}{r}Y_{1m}(\hat{\mathbf{r}})\times\left[\frac{\partial^{3}}{\partial r^{3}}r^{2}\int d\hat{\mathbf{r}}Y_{1m}^{*}(\hat{\mathbf{r}})\psi\right]+g_{Q}\frac{Y_{40}(\hat{\mathbf{r}})}{r^{5}}\psi, \tag{15}\] respectively. Here the long-range feature of the QQI is captured by the bare quadrupolar interaction, and the short-range characteristics are accounted for by \(a_{00}\) and \(a_{02}\) for bosons and by \(a_{1m}\) for fermions.
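As a numerical cross-check of the Born-level input entering these pseudopotentials, the sketch below evaluates \(\tilde{a}_{ll^{\prime}}/\bar{g}_{Q}\) directly from Eqs. (5), (9), and (10) by quadrature (the quadrature cutoffs and function names are ours; for \((l,l^{\prime})=(2,2)\) the radial integral equals \(1/72\), consistent with \(\chi_{22}\) from Eq. (11)):

```python
from scipy.integrate import quad
from scipy.special import spherical_jn
from sympy import sqrt, pi, N
from sympy.physics.wigner import wigner_3j

def zeta(l, lp):
    """Coupling coefficient zeta_{l0}^{l'0} of Eq. (5) for m = m' = 0."""
    pref = sqrt(9 * (2*l + 1) * (2*lp + 1) / (4 * pi))
    return float(N(pref * wigner_3j(lp, 4, l, 0, 0, 0)**2))

def born_length(l, lp):
    """a~_{ll'} / g~_Q from Eqs. (9)-(10): 2 zeta int_0^inf dx j_l(x) j_l'(x) / x^3."""
    integral, _ = quad(lambda x: spherical_jn(l, x) * spherical_jn(lp, x) / x**3,
                       1e-9, 200.0, limit=1000)  # integrand ~ x^{l+l'-3} at 0, ~ x^{-5} tail
    return 2.0 * zeta(l, lp) * integral

for l, lp in [(0, 4), (2, 2), (1, 3), (3, 3)]:
    print((l, lp), born_length(l, lp))
# born_length(2, 2) = zeta(2, 2)/36, matching 2*zeta*chi_22 with chi_22 = 1/72
```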
### Two-body bound states

To gain more insight into the collision resonances, we explore the bound-state properties of two quadrupoles in both free space and trapping potentials. In addition, the energy spectrum of two quadrupolar fermionic atoms in a harmonic trap can be used to calculate the virial coefficient, a quantity that determines the high-temperature thermodynamics of strongly interacting gases [53]. The free-space bound-state problem is described by Eq. (3) with a negative energy \(E\), while the trapped case is governed by the equation \[\left[-\frac{\hbar^{2}\nabla^{2}}{2\mu}+\frac{1}{2}\mu\omega^{2}r^{2}+V_{\text{model}}(\mathbf{r})\right]\psi=E\psi, \tag{16}\] where \(\omega\) is the trap frequency. In both cases, we numerically solve the Schrödinger equations using \(B\)-splines (see, e.g., Ref. [54]). In particular, we shall focus on the bound states in the vicinity of collision resonances.

Figure 3 summarizes the main results for the bound-state properties of two identical bosons. The dashed lines in Fig. 3(a) and (b) plot the eigenenergies of trapped bosons as functions of \(\bar{g}_{Q}\) around a broad and a narrow resonance, respectively. In the vicinity of a resonance, the lowest positive energy level quickly drops and becomes negative as \(\bar{g}_{Q}\) increases, signaling that the two atoms, initially bound by the harmonic potential, form a deeply bound molecular state held together by the QQI. Also in Fig. 3(a) and (b), the solid lines below zero energy represent the corresponding eigenenergies of the bound states in free space, which are in very good agreement with the eigenenergies of the trapped systems as long as the binding energy is sufficiently large. This observation suggests that the effects of the confining potential are negligible for the molecule-like states.

Figure 3: Bound-state structures of two identical bosons around the resonances at \(\bar{g}_{Q}/r_{c}^{3}=247.0\) (left panels) and \(359.84\) (right panels). (a) and (b) plot the QQI strength dependence of the bound-state energies in free space (solid lines) and in a trap (dashed lines). (c) and (d) show the bound-state wave functions of trapped bosons with energy \(E=-0.5\hbar\omega\) and \(\theta=0^{\circ}\) (dotted lines), \(30^{\circ}\) (dash-dotted lines), \(49.1^{\circ}\) (dashed lines), and \(60^{\circ}\) (solid lines). Here \(a_{\text{ho}}=\sqrt{\hbar/(\mu\omega)}\) is the oscillator length and the size of the hard core is \(r_{c}=0.01a_{\text{ho}}\).

As to the bound-state wave functions, Fig. 3(c) shows the rescaled radial wave function, \(r\psi\), of the bound state corresponding to the eigenenergy \(E=-0.5\hbar\omega\) in the vicinity of the broad resonance at \(\bar{g}_{Q}/r_{c}^{3}=247.0\). The amplitude of the wave function is largest along the most attractive direction \(\theta=49.1^{\circ}\) and is considerably lower along the repulsive directions, in particular along \(\theta=0^{\circ}\). Still, this wave function is dominated by the isotropic \(l=0\) wave (\(\sim 99\%\)). This observation remains true for the wave functions of bound states around other broad resonances. On the other hand, the bound-state wave function around a narrow resonance is dramatically different from that around a broad one. As shown in Fig. 3(d), this wave function mainly consists of three partial waves: \(l=2\) (34%), \(6\) (47%), and \(10\) (\(9.6\%\)).
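As an aside on the numerics behind Fig. 3(a) and (b): for the trapped problem, Eq. (16), a minimal finite-difference sketch of the \(s\)-wave channel with the QQI switched off already reproduces the near-harmonic levels from which the dashed curves depart (oscillator units \(\hbar=\mu=\omega=1\) are an assumption of this sketch; the paper's actual solver uses \(B\)-splines):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Radial equation for u = r*psi in the l = 0 channel of Eq. (16) with the QQI off:
#   -(1/2) u'' + (1/2) r^2 u = E u,   u(r_c) = u(R) = 0  (hard core at r_c)
r_c, R, n = 0.01, 10.0, 4000        # r_c = 0.01 a_ho, as in Fig. 3
r = np.linspace(r_c, R, n + 2)[1:-1]
h = r[1] - r[0]

diag = 1.0 / h**2 + 0.5 * r**2      # finite-difference kinetic term plus trap
off = -0.5 / h**2 * np.ones(n - 1)
E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 3))
print(E)  # ~ [1.5, 3.5, 5.5, 7.5] up to a tiny hard-core shift: E_n = 2n + 3/2
```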
It should be noted that, in the vicinity of a narrow resonance, the partial-wave composition may vary rapidly as the interaction strength changes. In addition, the bound-state wave function around a narrow resonance is more localized at the center than that around a broad one, which is related to the large centrifugal barrier associated with higher partial waves.

Finally, for the bound states of two identical fermions, Fig. 4(a) and (b) show the QQI dependence of the eigenenergies around a broad (\(\bar{g}_{Q}/r_{c}^{3}=264.84\)) and a narrow (\(\bar{g}_{Q}/r_{c}^{3}=360.42\)) resonance, respectively. It can be seen that the eigenenergy structure is very similar to that of the bosonic bound states. Moreover, as shown in Fig. 4(c), the wave function around a broad resonance is dominated by the \(p\) wave (over 88%). Around a narrow resonance [see Fig. 4(d)], by contrast, the main contributions to the wave function come from a wide range of partial waves, including \(l=3\) (31%), \(5\) (33%), \(7\) (10%), and \(9\) (17%), which is again similar to its bosonic counterpart.

## IV Conclusion

In summary, we have studied the low-energy scattering and the bound-state properties of two identical particles interacting via the QQI. We numerically computed the generalized scattering lengths \(a_{ll^{\prime}}\) as functions of the quadrupolar interaction strength. It has been shown that the short-range part of the interaction potential gives rise to \(a_{00}\) and \(a_{02}\) for bosons and to \(a_{1m}\) for fermions. These generalized scattering lengths exhibit a series of scattering resonances as the quadrupolar interaction strength grows. On the other hand, the long-range part of the interaction, i.e., the bare QQI, contributes to the generalized scattering lengths through the first Born approximation. Consequently, we propose new pseudopotentials that correctly take into account the contributions of \(a_{02}\) and \(a_{1m}\) for bosonic and fermionic quadrupoles, respectively. These pseudopotentials should pave the way for studying the many-body physics of quadrupolar quantum gases. Finally, for a better understanding of the scattering resonances, we have also presented a detailed analysis of the bound-state properties in the vicinity of the resonances.

###### Acknowledgements.

We thank Tao Shi for fruitful discussions. This work was supported by the NSFC (Grants No. 12135018 and No. 12047503), by NKRDPC (Grant No. 2021YFA0718304), by NSAF (Grant No. U1930201), and by the Strategic Priority Research Program of CAS (Grant No. XDB28000000).